Dataset fields:
id: string (length 11)
channel: string (2 distinct values)
channel_id: string (2 distinct values)
title: string (12 to 100 characters)
categories: sequence
tags: sequence
description: string (66 to 5k characters)
text: string (577 to 90.4k characters)
segments: list
eYgPJ_7BkEw
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "google", "semi-supervised", "unlabeled", "augmentation", "research", "randaugment" ]
FixMatch is a simple, yet surprisingly effective approach to semi-supervised learning. It combines two previous methods in a clever way and achieves state-of-the-art in regimes with few and very few labeled examples.
Paper: https://arxiv.org/abs/2001.07685
Code: https://github.com/google-research/fixmatch
Abstract: Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance. In this paper, we demonstrate the power of a simple combination of two common SSL methods: consistency regularization and pseudo-labeling. Our algorithm, FixMatch, first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with 40 -- just 4 labels per class. Since FixMatch bears many similarities to existing SSL methods that achieve worse performance, we carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch's success. We make our code available at this https URL.
Authors: Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, Colin Raffel
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Hi, today we're looking at FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence by Kihyuk Sohn, David Berthelot and others at Google Research. This paper concerns semi-supervised learning. So what does semi-supervised learning mean? In semi-supervised learning you have a data set of labeled samples, so a data set of X's and corresponding Y labels, but this data set is sometimes very small. You also have a much bigger data set of unlabeled examples, just X's with no labels. You don't know what the labels of the unlabeled examples are, but you would like to use this really large data set to help you learn the association between the data points and the labels.

For example, you might have an image classification data set, and I'm going to take the example of medical data. You have pictures of lungs, let's draw a lung here, that is an ugly lung, and labels for whether or not they contain a tumor. Medical data is very hard to get, especially labeled medical data, because first of all you need the data itself, but then you also need at least one, ideally three, radiologists to look at each image and label it. So it's usually very expensive to collect that data. But you might have plenty of unlabeled data: you might just be able to go through some database and find anonymized, undiagnosed lung scans lying around. The same goes for other images: labeling images is pretty human-intensive, but the internet contains a whole bunch of unlabeled images. So the task of semi-supervised learning is: how do you use this unlabeled data set to make your classification on the labeled data set easier? FixMatch combines two approaches to this in a smart way, namely a consistency approach and a confidence approach.

Let's jump right into the method. Basically, you say: the loss that I optimize consists of two parts, namely a supervised loss, which is your classic classification loss, plus an unsupervised loss, with some sort of trade-off parameter in front. The supervised loss is just the cross entropy, let's call it H, between your predicted labels and the actual true labels, where the predicted labels can be a distribution over labels. The magic, of course, is in the unsupervised loss, and that is what's described in this part here: it is going to be this H between P and Q, and we'll see what P and Q are. For the unsupervised loss you of course start with an unlabeled example, and you send the same sample into two different pipelines. In the first pipeline, up here, you apply what they call weak augmentation. We're dealing with images here, so we have to talk about image augmentation. Image augmentation has long been used in supervised learning; it's kind of a cheat to give you more training data. If you have an image, say our famous cat, you can obtain more training data, for example, by random cropping. Let's say we just take this bottom right corner and enlarge it to the original size. Then it is still sort of a cat, it's just a part of a cat.
But usually that helps, because you say: my image data set is just pictures of animals, and it's entirely conceivable that someone held the camera like this or like this. So in terms of generalizing to a test set, both data points should be valid, and I'm just going to add both to my training data. You can see how, from one training data point, you can get many training data points just by cropping. You can also flip the image left to right, you just mirror the pixels, and usually that's fine: a cat that has a little dark spot here is still a cat when it has the little dark spot over there, but to your classifier those are two different samples. So you can do many of those things, and they use two kinds of augmentations: what they call weakly augmented and strongly augmented.

In the weakly augmented pipeline, I think they just crop, shift, and flip, or something like this. You can see this horsey here: it is cropped about here and shifted a bit, and that's it. So they crop, they shift, and they also flip horizontally at random, like 50% of the time. That's what's called weakly augmented, and the goal is just to obtain a bit more training data. You run this through your model, through your classification model, as you would a regular sample, and you get a prediction. From that prediction you can take the highest-scoring class, and that is going to be your pseudo-label. So this is P of Y, the distribution you estimate, and if you just take the max, that is your Y hat. This is what they call a pseudo-label; you'll see why in a moment.

The other pipeline is the strong augmentation pipeline. In weak augmentation we just wanted to get some more training data; in strong augmentation the goal is to really screw up the picture, to the point where you could still recognize it as the same class, but the augmentations go wild. You play around with the color, with the hue, with the light intensity, with the contrast, you can do many, many things. This image looks basically nothing like the original image, but you can still kind of recognize it as a horse. The strongly augmented data is much more distorted than the weakly augmented data, and that's the point. You also send the strongly augmented data through the model, and again you get a prediction. And now the trick: you take the pseudo-label from up here and treat it as if it were the true label. You form a loss between the prediction on the strongly augmented image and this thing that also comes from the model, as if it were the true label. That's why it's called a pseudo-label: it is a label that you produce from the model itself. Of course, if these were the exact same picture, it would be kind of pointless; that's why there needs to be a weakly and a strongly augmented pipeline. If you want a more basic version of this, make this one just the clean image, so no augmentation, and make this one augmented; that's how you can think of it. The fact that there is a weak and a strong augmentation is, I think, just your classic trick to get a bit more training data.
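Just to make the two pipelines concrete, here is a rough sketch of what they could look like with torchvision. The exact operations and magnitudes are my own assumptions: the paper's weak augmentation is flip-and-shift, and its strong augmentation uses RandAugment or CTAugment followed by Cutout, which RandomErasing only approximates here.

```python
from torchvision import transforms

# Weak augmentation: mild, label-preserving changes (flip + small shift via padded crop).
weak_augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomCrop(32, padding=4),
    transforms.ToTensor(),
])

# Strong augmentation: heavy distortions of color, contrast, geometry, etc.
# RandAugment is one of the strong policies used in the paper; Cutout is
# approximated here with RandomErasing on the tensor.
strong_augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomCrop(32, padding=4),
    transforms.RandAugment(num_ops=2, magnitude=10),
    transforms.ToTensor(),
    transforms.RandomErasing(p=1.0, scale=(0.02, 0.25)),
])
```

The important design point is simply that the weak pipeline barely changes the image, so its prediction is trustworthy enough to serve as a pseudo-label, while the strong pipeline changes it a lot.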
But in essence you can think of it like this: here is the clean thing, you just want to produce a label, and then you want an augmented version of the image to get the same label. Now think for a moment about what this model learns. I think the important thing is always to remember that there are two components here: first the supervised loss, which is ultimately the important one, because we have the true labels, and second the unsupervised loss, which is just an auxiliary loss that is supposed to tune our model to the nature of the data. So don't forget that this part down here only concerns the unsupervised part of the loss.

If you think about what the model actually learns when you train it like this, it basically learns to undo the strong augmentation. It basically says: hey model, whenever I give you an image and I distort it heavily, I want the label to be the same as for the weakly augmented version. So at the end of training, the model will be able to map any strongly augmented picture to the same class as the weakly augmented picture, provided both come from the same source image. The model basically learns to ignore these kinds of augmentations. That's what this loss over here does: it says, these sorts of distortions of images, please ignore them, because I always want you to output the same label as if I had not distorted, or only weakly distorted, the image. So keep in mind that this loss is designed to make the model not distinguish between differently augmented versions of the same image. And interestingly, that really seems to help with the supervised loss. My hypothesis is that all these methods are basically trying to tune the neural network to, let's say, the orders of magnitude of the input data, and also to the kinds of augmentations that humans come up with. And that's a very important point: the kind of augmentation really seems to play a role.

This paper finds that on CIFAR-10, where the fully supervised state of the art is, I believe, something like 96 to 97 percent accuracy, FixMatch gets to 94.9 percent with just 250 labeled examples, where the usual data set size is about 50,000. Almost 95 percent accuracy, with the state of the art being around 97, from just 250 labeled examples. Crazy, right? And with only four labels per class, so just 40 labeled images, it gets 88.6 percent accuracy, compared to the roughly 97 percent you get with all 50,000 labels. That is pretty cool, simply by having all other images not labeled but pseudo-labeled and consistency-regularized. So the two things FixMatch combines are, first, consistency regularization, which basically means that the model should output similar predictions when fed perturbed versions of the same image. The authors are very forthcoming that they are not the ones who invented this; they just combine consistency regularization with pseudo-labeling. Pseudo-labeling, which they also did not invent, leverages the idea that we should use the model itself to obtain artificial labels for unlabeled data.
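Putting the two ingredients together, here is a minimal sketch of the combined objective. This is my own illustrative code, not the reference implementation: `weak_augment` and `strong_augment` are assumed to be batch-level versions of the pipelines sketched earlier, `model` is any classifier producing logits, and the threshold and weighting values are just typical choices. The confidence cutoff in the sketch is the detail discussed next.

```python
import torch
import torch.nn.functional as F

def fixmatch_loss(model, x_labeled, y_labeled, x_unlabeled,
                  weak_augment, strong_augment,
                  threshold=0.95, lambda_u=1.0):
    # Supervised part: plain cross entropy on the small labeled batch.
    sup_loss = F.cross_entropy(model(weak_augment(x_labeled)), y_labeled)

    # Unsupervised part: pseudo-label from the weakly augmented view...
    with torch.no_grad():
        probs = model(weak_augment(x_unlabeled)).softmax(dim=-1)
        confidence, pseudo_label = probs.max(dim=-1)
        mask = (confidence >= threshold).float()  # keep only confident pseudo-labels

    # ...used as the target for the strongly augmented view.
    logits_strong = model(strong_augment(x_unlabeled))
    unsup_loss = (F.cross_entropy(logits_strong, pseudo_label, reduction="none") * mask).mean()

    # Total loss: supervised term plus weighted unsupervised term.
    return sup_loss + lambda_u * unsup_loss
```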
This pseudo-labeling idea is everywhere: we've seen a lot of papers in the last few months or years where the teacher teaches the student and then the student teaches the teacher again, and so on. So FixMatch simply combines these two methods in a clever way. There is one last thing that is not in this drawing: they only use the pseudo-label if the confidence, so this P of Y here, is above a certain threshold. They don't take all the pseudo-labels, only the ones the model is fairly sure about, and they have an ablation study showing that this is reasonably important.

And if you go down here to the ablations, something I also find cool: if you give just one image per class, so ten labeled images in total, it still gets something like 78% accuracy. I think those images are chosen as good representatives of their class, but still, one image per class is pretty cool. An important part of the paper is the ablation study, where they try to tease apart why this semi-supervised learning technique works so well, and they find several important factors. They find, for example, that their augmentation strategy is extremely important. You can see here the error of 4.8% on the 250-label split; if you change up the augmentation strategies, the error gets higher. They say they use Cutout and measure its effect: they find that both Cutout and CTAugment are required to obtain the best performance, and removing either results in a comparable increase in error rate. Now, you've seen before that they went from 93-point-something percent to 94-point-something percent over the previous state of the art in semi-supervised learning, and here they find that simply changing the augmentation strategy changes the error by more than a percent. So you can put that in context of what's actually important here. They also say the ratio of unlabeled data seems pretty important: they observe a significant decrease in error rates from using large amounts of unlabeled data. Then the optimizer and learning rate schedule seem to matter a lot as well: they say SGD with momentum works much better than Adam, and they use a decreasing, cosine learning rate schedule. So there seem to be a lot of hyperparameters that are fairly important here, and while the gains are substantial sometimes, they aren't through-the-roof substantial, so you can make a good argument that it is unclear how much really comes from the clever combination that FixMatch proposes, and how much just comes from whether you set the hyperparameters correctly and how much computation you can throw at selecting them. That seems like a bit of a pain point to me. They also say they find that tuning the weight decay is exceptionally important for low-label regimes: choosing a value that is just one order of magnitude larger or smaller than optimal can cost ten percentage points or more.
So all of that makes me feel that this kind of research, where you're nibbling for half or single percentage points in accuracy while a single misstep in the choice of a hyperparameter might cost you ten times that gain, is a bit sketchy. Now, I recognize they get numbers like no one else has gotten before, but where exactly the gains come from, and whether they really come from this method or just from throwing compute at it, I don't know. Alright, with that I hope you enjoyed this, and I invite you to check out the paper. Bye bye.
[ { "end": 5.5600000000000005, "start": 0, "text": " Hi, today we're looking at FixMatch simplifying semi-supervised learning" }, { "end": 13.280000000000001, "start": 5.5600000000000005, "text": " with consistency and confidence by Kyuk Son, David Berthelot and others of" }, { "end": 19.76, "start": 13.280000000000001, "text": " Google research. So this paper concerns semi-supervised learning. So what does" }, { "end": 24.2, "start": 19.76, "text": " semi-supervised learning mean? In semi-supervised learning you have a" }, { "end": 30.64, "start": 24.2, "text": " data set of labeled samples. So right, you have this data set of X's and" }, { "end": 38.2, "start": 30.64, "text": " corresponding Y labels. But this data set sometimes is very small. Now you have a" }, { "end": 47.28, "start": 38.2, "text": " much bigger data set of unlabeled examples, just X's with no labels, right?" }, { "end": 53.32, "start": 47.28, "text": " So you don't know what the labels of the unlabeled examples are, but" }, { "end": 58.24, "start": 53.32, "text": " what you would like to do is you would like to use this really large data set" }, { "end": 65.28, "start": 58.24, "text": " in order to help you with learning the association between the data points and" }, { "end": 72.64, "start": 65.28, "text": " the labels. So for example, in this case you would have something like an" }, { "end": 75.76, "start": 72.64, "text": " image classification data set. And I'm going to take the example here of" }, { "end": 82.6, "start": 75.76, "text": " medical data. So you have pictures of lungs. Let's draw a lung here. That is an" }, { "end": 89.64, "start": 82.6, "text": " ugly lung. You have pictures of lungs and whether or not they have" }, { "end": 94.72, "start": 89.64, "text": " a tumor in them. So medical data is very hard to get, especially" }, { "end": 100.11999999999999, "start": 94.72, "text": " labeled medical data. Because first of all you need the data itself, but" }, { "end": 106.44, "start": 100.11999999999999, "text": " then you also need at least one, but ideally three radiologists" }, { "end": 113.84, "start": 106.44, "text": " to look at whether or not this is a good or a bad image and label it. So it's" }, { "end": 118.03999999999999, "start": 113.84, "text": " usually very expensive to collect that data. But you might have plenty of" }, { "end": 122.03999999999999, "start": 118.03999999999999, "text": " unlabeled data, right? You might just be able to go through some" }, { "end": 128.52, "start": 122.03999999999999, "text": " database and find like anonymized, undiagnosed lung scans somewhere lying" }, { "end": 135.56, "start": 128.52, "text": " around. The same with image, like other images. So labeling images is pretty" }, { "end": 139.96, "start": 135.56, "text": " human intensive, but the internet contains like a whole bunch of unlabeled" }, { "end": 145, "start": 139.96, "text": " images. So the task of semi-supervised learning is how do you use this" }, { "end": 150.76, "start": 145, "text": " unlabeled data set in order to make your classification on the labeled data set" }, { "end": 156.56, "start": 150.76, "text": " easier. And FixMatch combines two approaches to this in a smart way, namely" }, { "end": 166.24, "start": 156.56, "text": " consistency and confidence approach. So what does... we'll jump right" }, { "end": 171.44, "start": 166.24, "text": " into the method. 
So basically what you want to do is you want to say my loss" }, { "end": 178.44, "start": 171.44, "text": " that I optimize, this is my loss, consists of two parts, namely a" }, { "end": 184.88, "start": 178.44, "text": " supervised loss, which is your classic classification loss, plus an" }, { "end": 189.44, "start": 184.88, "text": " unsupervised loss, right? And then you have like some sort of a trade-off" }, { "end": 194.48, "start": 189.44, "text": " parameter in front. Now your supervised loss here, this is just the" }, { "end": 200.76, "start": 194.48, "text": " cross entropy, let's call it H, between your predicted labels and the" }, { "end": 206.44, "start": 200.76, "text": " actual true labels, right? And the predicted labels, they can be, you know," }, { "end": 212.76, "start": 206.44, "text": " kind of a distribution over labels. Now the magic of course is here in the" }, { "end": 217.84, "start": 212.76, "text": " unsupervised loss. And this unsupervised loss, this is what's described here in" }, { "end": 224.84, "start": 217.84, "text": " this part, right? So the unsupervised loss is going to be this H between P and Q," }, { "end": 232.79999999999998, "start": 224.84, "text": " and we'll see what P and Q is. So if for the unsupervised loss you of course" }, { "end": 239.76, "start": 232.79999999999998, "text": " want to start with an unlabeled example, then you have the same sample go into" }, { "end": 244.92, "start": 239.76, "text": " two different pipelines. In the first pipeline up here, what you do is you so" }, { "end": 252.2, "start": 244.92, "text": " called weakly augmented. And here we're dealing with images, so we have to talk" }, { "end": 255.76, "start": 252.2, "text": " about image augmentation. So image augmentation has long been used in" }, { "end": 260.88, "start": 255.76, "text": " supervised learning to kind of give you more, it's kind of a cheat to give you" }, { "end": 269.24, "start": 260.88, "text": " more training data. So if you have an image, right, of let's say our famous cat," }, { "end": 279.8, "start": 269.24, "text": " you can obtain more training data if you, for example, by random cropping. So you" }, { "end": 285.32, "start": 279.8, "text": " can random crop, let's say we just take this bottom right corner here, and then" }, { "end": 293.12, "start": 285.32, "text": " we enlarge it to the original size, right? Then it is still sort of a cat, but it's" }, { "end": 298.52, "start": 293.12, "text": " just a part of a cat, right? But usually that helps because you say, okay," }, { "end": 303.96, "start": 298.52, "text": " my image data set is just pictures of animals, right? It's entirely conceivable" }, { "end": 309.32, "start": 303.96, "text": " that someone held the camera like this or like this, right? So technically in" }, { "end": 313.91999999999996, "start": 309.32, "text": " terms of generalizing to a test set, these both data points should be valid." }, { "end": 317.59999999999997, "start": 313.91999999999996, "text": " So I'm just going to add both to my training data. So you can see how from" }, { "end": 322.4, "start": 317.59999999999997, "text": " one training data point you can get many training data points just by doing this" }, { "end": 326.52, "start": 322.4, "text": " cropping. What you can also do is you can flip it left right, right? You just" }, { "end": 334.76, "start": 326.52, "text": " swap the pixels left right, and usually these kind of... 
So a cat that has a" }, { "end": 339.44, "start": 334.76, "text": " little dark spot here is still a cat when it has the little dark spot over" }, { "end": 344.28, "start": 339.44, "text": " there, right? But to your classifier, those are two different samples. So you can do" }, { "end": 350.35999999999996, "start": 344.28, "text": " many of those things, and they have two kind of augmentations. They have what" }, { "end": 355.84, "start": 350.35999999999996, "text": " they call weakly augmented and strongly augmented, right? So in the weakly" }, { "end": 361.79999999999995, "start": 355.84, "text": " augmented pipeline, I think they just they crop and they shift and they" }, { "end": 367.15999999999997, "start": 361.79999999999995, "text": " rotate or something like this. So you can see here this horsey here, it is" }, { "end": 374.35999999999996, "start": 367.15999999999997, "text": " something like it's cropped here about, then it is turned slightly to the left," }, { "end": 383.88, "start": 374.35999999999996, "text": " and then... Yeah, I think that's it. So they crop, they rotate, and then they also flip" }, { "end": 389.44, "start": 383.88, "text": " horizontally at random in like 50% of the time. So these are what's called" }, { "end": 394.24, "start": 389.44, "text": " weakly augmented. The goal here is just to kind of obtain a bit more training" }, { "end": 399.44, "start": 394.24, "text": " data, alright? So you run this through your model, through your classification" }, { "end": 405.44, "start": 399.44, "text": " model as you would a regular sample, and you get a prediction. Now from your" }, { "end": 409.76, "start": 405.44, "text": " prediction, you can take the highest prediction here, and that is going to be" }, { "end": 416.59999999999997, "start": 409.76, "text": " your pseudo-label. So this is P of Y, this is your distribution that you" }, { "end": 423.68, "start": 416.59999999999997, "text": " estimate, right? So and this, if you just take the max, this is going to be" }, { "end": 431.36, "start": 423.68, "text": " your Y hat, right? And this is what they call a pseudo-label, sorry. You'll see why" }, { "end": 436.12, "start": 431.36, "text": " it is called a pseudo-label. So the other pipeline here is the strong" }, { "end": 440.28000000000003, "start": 436.12, "text": " augmentation pipeline. Now in weak augmentation, we just wanted to get some" }, { "end": 444.96, "start": 440.28000000000003, "text": " more training data in strong augmentation. Now the goal is to really" }, { "end": 450.16, "start": 444.96, "text": " screw up that picture to the point where it's still, you know, you could recognize" }, { "end": 455.24, "start": 450.16, "text": " in the same class, but you can see here the augmentations, they go wild. So you" }, { "end": 460.24, "start": 455.24, "text": " play around with the color, with the hue, you play around with the light intensity," }, { "end": 469.44, "start": 460.24, "text": " right? With the contrast, you can do many, many things. You can see this image" }, { "end": 475.16, "start": 469.44, "text": " looks basically nothing like this image, but you can still kind of recognize it" }, { "end": 482.12, "start": 475.16, "text": " as a horse. But the strongly augmented data is much more distorted than the" }, { "end": 486.92, "start": 482.12, "text": " weakly augmented data. And that's the point. So also you send the strongly" }, { "end": 493.04, "start": 486.92, "text": " augmented data through the model, and again you get a prediction, right? 
And now" }, { "end": 502.20000000000005, "start": 493.04, "text": " the trick is you take the label from here, and you take that as if it" }, { "end": 508.12, "start": 502.20000000000005, "text": " were the true label, right? You take that as if it were the true label, and you" }, { "end": 515.9200000000001, "start": 508.12, "text": " form a loss from this prediction being the model prediction, as if this thing" }, { "end": 521.1999999999999, "start": 515.92, "text": " here that also comes from the model, as if that was the true label, right? That's" }, { "end": 526.7199999999999, "start": 521.1999999999999, "text": " why it's called a pseudo label, because it is a label that you produce from the" }, { "end": 531.88, "start": 526.7199999999999, "text": " model itself. Now of course if these were to be the same picture, it would be kind" }, { "end": 535.7199999999999, "start": 531.88, "text": " of pointless, right? That's why you see there needs to be a weakly and a" }, { "end": 543.3199999999999, "start": 535.7199999999999, "text": " strongly augmented pipeline. I'm pretty sure if you want a more basic version" }, { "end": 551.5200000000001, "start": 543.32, "text": " of this, make this just clean, so no augmentation, and make this augmented," }, { "end": 556.12, "start": 551.5200000000001, "text": " right? That's how you can think of it. The fact that there is weak and" }, { "end": 560.8000000000001, "start": 556.12, "text": " here strong augmentation I think is just your classic trick to get more" }, { "end": 564.84, "start": 560.8000000000001, "text": " training data. But in essence you can think of it as this is here, the clean" }, { "end": 570.5600000000001, "start": 564.84, "text": " thing, you just want to produce a label, and then you want that an augmented" }, { "end": 576.28, "start": 570.56, "text": " version of the image has the same label. Now you can think of it shortly, what" }, { "end": 581.28, "start": 576.28, "text": " does this model learn? If you just have this, you remember. I think the important" }, { "end": 585.0999999999999, "start": 581.28, "text": " thing is always to remember that there are two components here, right? There is" }, { "end": 590.7199999999999, "start": 585.0999999999999, "text": " first the supervised loss, this is the important one ultimately, because we have" }, { "end": 596, "start": 590.7199999999999, "text": " the true labels, right? And then second there is the unsupervised loss, which is" }, { "end": 602.88, "start": 596, "text": " just an auxiliary loss that is supposed to just kind of tune our model to the" }, { "end": 607.16, "start": 602.88, "text": " nature of the data, right? So don't forget that this down here just" }, { "end": 614.08, "start": 607.16, "text": " concerns the unsupervised part of that loss. So if you think what does the model" }, { "end": 621.08, "start": 614.08, "text": " actually learn whenever you train it like this, it basically learns to" }, { "end": 629.88, "start": 621.08, "text": " revert this strong augmentation, right? So it basically says, hey model, whenever I" }, { "end": 636, "start": 629.88, "text": " give you a weak augmented image and I distort it heavily, right? Whenever I" }, { "end": 640.08, "start": 636, "text": " give you an image and I distort it heavily, I want the label to be the same." 
}, { "end": 650.1600000000001, "start": 640.08, "text": " So the model basically learns that whatever the image, the whatever the" }, { "end": 657.68, "start": 650.16, "text": " image, the model at the end of the training will be able to basically map" }, { "end": 663.92, "start": 657.68, "text": " any strongly augmented picture to the same class as a weakly augmented" }, { "end": 670.64, "start": 663.92, "text": " picture if it comes from the same source, right? So the model basically learns to" }, { "end": 677.28, "start": 670.64, "text": " ignore these kinds of augmentations. That's what this loss over here does. It" }, { "end": 681.68, "start": 677.28, "text": " basically says these sorts of augmentations, these sorts of distortions" }, { "end": 688.92, "start": 681.68, "text": " of images, please ignore those because I always want you to output the same label" }, { "end": 695.56, "start": 688.92, "text": " here in the prediction here as if I had not distorted or just weakly distorted" }, { "end": 701.56, "start": 695.56, "text": " the image. So that's what you have to keep in mind that this" }, { "end": 707.8, "start": 701.56, "text": " loss is designed to make the model not distinguish between differently" }, { "end": 714, "start": 707.8, "text": " augmented versions of the same image. And interestingly, that really seems to help" }, { "end": 720.3199999999999, "start": 714, "text": " with the supervised loss, right? My kind of hypothesis is that all" }, { "end": 724.56, "start": 720.3199999999999, "text": " these methods, what they're kind of trying to do is to just tune the neural" }, { "end": 731.3599999999999, "start": 724.56, "text": " network to the, let's say the orders of magnitude of the input data and also" }, { "end": 736.08, "start": 731.36, "text": " to the kinds of augmentations that the humans come up with. And that's a very" }, { "end": 743.48, "start": 736.08, "text": " important point. So the augmentations, and here we said, you know, it's kind of a" }, { "end": 748.88, "start": 743.48, "text": " rotation and the crop, the kind of augmentation really seemed to play a" }, { "end": 756.08, "start": 748.88, "text": " role. So this paper finds that on CIFAR-10, where the state of the art I believe is" }, { "end": 763.6800000000001, "start": 756.08, "text": " something like 96, 97 percent accuracy, on CIFAR-10 with just 250 labeled" }, { "end": 774.32, "start": 763.6800000000001, "text": " examples, right? Now the usual data set size is about 50,000. It goes to 94.9%." }, { "end": 779.36, "start": 774.32, "text": " So almost 95 percent accuracy with the state of the art being like 97." }, { "end": 790.28, "start": 779.36, "text": " This is incredible with just 250 labeled examples. Crazy, right? And with" }, { "end": 798.96, "start": 790.28, "text": " only four labels per class, it gets 88.6 percent. So that's just 40 images with" }, { "end": 809.88, "start": 798.96, "text": " labels. They get 88.6 percent of accuracy compared to the 97 percent that" }, { "end": 815.84, "start": 809.88, "text": " you get with like 50,000 images. That is pretty pretty cool, right? Simply by" }, { "end": 821.48, "start": 815.84, "text": " having all other images not labeled but pseudo labeled and consistency" }, { "end": 830, "start": 821.48, "text": " regularized, right? 
So the two things that are combined by FixMatch again" }, { "end": 836.6, "start": 830, "text": " are consistency regularization, which basically it means that the model" }, { "end": 841.16, "start": 836.6, "text": " should output similar predictions when fed perturbed versions of the same image," }, { "end": 847.24, "start": 841.16, "text": " right? They're really forthcoming that they are not the ones who" }, { "end": 851.48, "start": 847.24, "text": " invented this. They just combine the consistency regularization with the" }, { "end": 857.16, "start": 851.48, "text": " pseudo labeling. Now the pseudo labeling they have also not invented. The pseudo" }, { "end": 862.6800000000001, "start": 857.16, "text": " labeling leverages the idea that we should use the model itself to obtain" }, { "end": 866.88, "start": 862.6800000000001, "text": " artificial labels for unlabeled data. We've seen a lot of papers in the last" }, { "end": 872.12, "start": 866.88, "text": " few months or years where it's like the teacher teaches the student and then the" }, { "end": 879.12, "start": 872.12, "text": " student teaches the teacher model again and so on. So they simply combine" }, { "end": 884.5600000000001, "start": 879.12, "text": " the two methods in a clever way. They have one last thing that is not in this" }, { "end": 890.64, "start": 884.5600000000001, "text": " drawing, namely they only use the pseudo label. They have a break right here and" }, { "end": 898, "start": 890.64, "text": " they only use the pseudo label if the confidence, so if this P of Y here is" }, { "end": 904.76, "start": 898, "text": " above a certain threshold. So they don't take all the pseudo labels but they only" }, { "end": 910.28, "start": 904.76, "text": " take the labels where the model is fairly sure about, right? So they have" }, { "end": 914.48, "start": 910.28, "text": " actually an ablation study where they show that this is reasonably" }, { "end": 923.38, "start": 914.48, "text": " important. And if you go down here where they say ablation, where is it?" }, { "end": 929.04, "start": 923.38, "text": " Ablation study, oh yeah something I also find cool. If you just give one" }, { "end": 935.36, "start": 929.04, "text": " image per class, one image per class, ten images that are labeled, it still gets" }, { "end": 943.96, "start": 935.36, "text": " like 78% accuracy. I think the images are chosen as good representations of their" }, { "end": 951.28, "start": 943.96, "text": " class but still one image per class. Pretty pretty cool. An important part of" }, { "end": 958, "start": 951.28, "text": " this is the ablation study where they say okay we want to tease apart why this" }, { "end": 963.4, "start": 958, "text": " algorithm, why this semi-supervised learning technique works so well. And" }, { "end": 967.8399999999999, "start": 963.4, "text": " they find several important factors. They find for example that their" }, { "end": 973.0799999999999, "start": 967.8399999999999, "text": " augmentation strategy is extremely important. So how they augment the" }, { "end": 983.2, "start": 973.08, "text": " images is very important. You see here the error of this 4.8% on the" }, { "end": 993.48, "start": 983.2, "text": " 250 label split. If you change up the augmentation" }, { "end": 999.5600000000001, "start": 993.48, "text": " strategies your error gets higher, right?" }, { "end": 1011.1199999999999, "start": 999.56, "text": " So they say we use this cutout and we measure the effect of cutout. 
We find" }, { "end": 1015.28, "start": 1011.1199999999999, "text": " that both cutout and CCT augment are required to obtain the best performance." }, { "end": 1023.0799999999999, "start": 1015.28, "text": " Removing either results in a comparable increase in error rate. Now you've" }, { "end": 1030.1200000000001, "start": 1023.08, "text": " seen before for example they went from this 93, sorry, 93 point something" }, { "end": 1035.52, "start": 1030.1200000000001, "text": " percent to 94 point something percent from the previous state-of-the-art" }, { "end": 1041.08, "start": 1035.52, "text": " semi-supervised learning. And here they find that simply changing the" }, { "end": 1046.52, "start": 1041.08, "text": " augmentation strategy changes the error by more than a percent. So you can just" }, { "end": 1056.28, "start": 1046.52, "text": " see this in context of what's important here. They say again the ratio" }, { "end": 1062.04, "start": 1056.28, "text": " of unlabeled data seems pretty important. We observe a significant decrease in" }, { "end": 1066.68, "start": 1062.04, "text": " error rates by using large amounts of unlabeled data. Then the" }, { "end": 1071.8, "start": 1066.68, "text": " optimizer and learning rate schedule seems to be very important as well in" }, { "end": 1079.04, "start": 1071.8, "text": " that they use this, they say SGD with momentum works much better than Adam and" }, { "end": 1084.84, "start": 1079.04, "text": " then they use this decreasing learning rate schedule, this cosine learning rate" }, { "end": 1092.76, "start": 1084.84, "text": " schedule. So there seem to be a lot of things, a lot of hyperparameters that are" }, { "end": 1101.56, "start": 1092.76, "text": " fairly important here. And you can see that the gains are substantial sometimes" }, { "end": 1109.72, "start": 1101.56, "text": " but they aren't like through the roof substantial, where you can make a good" }, { "end": 1115.84, "start": 1109.72, "text": " argument that it is unclear how much really comes from this clever" }, { "end": 1121.8799999999999, "start": 1115.84, "text": " combination that FixMatch proposes and how much also just comes from" }, { "end": 1127.6, "start": 1121.8799999999999, "text": " whether or not you set the hyperparameters correctly and exactly how" }, { "end": 1134.76, "start": 1127.6, "text": " much computation are you able to throw at selecting your hyper" }, { "end": 1143.7199999999998, "start": 1134.76, "text": " parameters. So that seems to be a bit of a pain point for me. They also" }, { "end": 1150.8799999999999, "start": 1143.7199999999998, "text": " say we find that tuning the weight decay is exceptionally important for low label" }, { "end": 1157.08, "start": 1150.8799999999999, "text": " regimes. Choosing a value that is just one order of magnitude larger or" }, { "end": 1164.8, "start": 1157.08, "text": " smaller than optimal can cost 10 percentage points or more. And so that" }, { "end": 1170.6, "start": 1164.8, "text": " all of that seems to me that this kind of research where you're" }, { "end": 1179, "start": 1170.6, "text": " nibbling for half or single percentage points in accuracy while a single" }, { "end": 1186, "start": 1179, "text": " misstep in a choice of hyper parameter might cost you 10 times that gain is" }, { "end": 1192.48, "start": 1186, "text": " a bit sketchy. 
Now I recognize they get numbers like no one else has gotten" }, { "end": 1197.72, "start": 1192.48, "text": " before but where exactly the gains come from and if the gains really come from" }, { "end": 1203.6, "start": 1197.72, "text": " this architecture or actually just more from throwing computers at it I don't" }, { "end": 1209.72, "start": 1203.6, "text": " know. Alright with that I hope you enjoyed this and I invite you to check" }, { "end": 1216.28, "start": 1209.72, "text": " out the paper. Bye bye." } ]
AU30czb4iQA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Imputer: Sequence Modelling via Imputation and Dynamic Programming
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "machine translation", "arxiv", "google", "attention mechanism", "attention", "transformer", "seq2seq", "autoregressive", "independence", "decoding" ]
The imputer is a sequence-to-sequence model that strikes a balance between fully autoregressive models with long inference times and fully non-autoregressive models with fast inference. The imputer achieves constant decoding time independent of sequence length by exploiting dynamic programming.
Paper: https://arxiv.org/abs/2002.08926
Abstract: This paper presents the Imputer, a neural sequence model that generates output sequences iteratively via imputations. The Imputer is an iterative generative model, requiring only a constant number of generation steps independent of the number of input or output tokens. The Imputer can be trained to approximately marginalize over all possible alignments between the input and output sequences, and all possible generation orders. We present a tractable dynamic programming training algorithm, which yields a lower bound on the log marginal likelihood. When applied to end-to-end speech recognition, the Imputer outperforms prior non-autoregressive models and achieves competitive results to autoregressive models. On LibriSpeech test-other, the Imputer achieves 11.1 WER, outperforming CTC at 13.0 WER and seq2seq at 12.5 WER.
Authors: William Chan, Chitwan Saharia, Geoffrey Hinton, Mohammad Norouzi, Navdeep Jaitly
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Imputer: Sequence Modelling via Imputation and Dynamic Programming by William Chan, Chitwan Saharia, Geoffrey Hinton, Mohammad Norouzi and Navdeep Jaitly. This is a model for sequence-to-sequence tasks. Sequence-to-sequence tasks are very common in NLP, but in this case we're dealing with a particular subset of them. A classic sequence-to-sequence task is machine translation. Take for example the sentence "I like you". If you want to translate it to German, it becomes "Ich mag dich". You see that the input is a sequence and the output is a sequence.

Now the Imputer deals with a very special kind of sequence-to-sequence task, namely tasks where there is a monotonic alignment. You see that this is given here: the first word corresponds to the first word, the second to the second, and the third to the third. This is not always the case in machine translation; different languages have different sentence structures. For example, in French this would be "je t'aime", and you can see that the first word is still the first word, but the "you" has moved forward and the verb has gone to the end. So the Imputer would not be able to deal with that task very well. A task where the Imputer would be useful is something like speech recognition. If someone were to speak the words "I like you" and you measured the waveform of that, it would look something like this. If you have this waveform, let's chunk it into samples: say this is a sample right here, there's a break here and here, so we have five chunks on the bottom. You can see pretty easily that this chunk here is the "I", then this is silence, this is the "like", this is silence, and this is the "you". So the Imputer deals with sequence-to-sequence tasks where, first of all, there is a monotonic alignment, and second of all, as an engineering constraint, the length of the input sequence X is larger than or equal to the length of the output sequence Y. You'll see why: mainly because we rely on being able to compute this alignment of input samples to output samples. The monotonic alignment is given fairly naturally in speech recognition, because if something comes later in the audio, it also comes later in the transcript, and usually we have more waveform samples than words in the output sequence. So that would be a task for the Imputer.

Now let's think about how we would do something like this. Let's put X at the top; we said X has five tokens in it. And let's put Y at the bottom; Y has three tokens, so this is "I like you". The top is the waveform, and we want "I like you" at the bottom. What could we do? First of all, the Imputer does not represent "I like you" in this three-token form, but in a form that has the same length as X, divided into the same number of chunks. So it says: I have as many chunks on the bottom as on the top; this chunk corresponds to this token, this one to this, and this one to this, and the ones in between are silences. You can see these correspond to those, and these here are silence tokens.
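Just as a tiny illustration of this representation (the names here are made up, and "_" stands in for the silence/blank token): collapsing an alignment back to the output sequence is deterministic, you just drop the blanks, while several different alignments can be compatible with the same output.

```python
BLANK = "_"  # stands in for the silence / empty token

def collapse(alignment):
    """Alignment (same length as the input) -> output sequence.
    This direction is deterministic: just drop the blanks."""
    return [tok for tok in alignment if tok != BLANK]

# Two different alignments of length 5 (= number of input chunks) that are
# both compatible with the same 3-token output "I like you":
a1 = ["I", BLANK, "like", BLANK, "you"]
a2 = ["I", "like", BLANK, BLANK, "you"]
assert collapse(a1) == collapse(a2) == ["I", "like", "you"]
```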
Now it doesn't always have to be that there is one token, then a silence, then a token, then a silence. The task of the Imputer is actually to figure out whether this alignment is more likely than, for example, "I", "like", then silence, silence, and then "you". So the Imputer has to distinguish these alignments from each other and, of course, also produce the actual tokens. Now think about how you would go about taking X and producing something like this Y, let's call it Y tilde. Note that going from the alignment to the actual Y is a deterministic function in one direction, but not in the other direction, and that becomes interesting when you have to compute a loss for this.

But how would we go about producing it? One thing we could do is just take a big transformer like BERT. In BERT, if you construct it correctly, you have as many output positions as input tokens. So we could simply say: for each of the outputs, we put a softmax classifier over our vocabulary, with the silence being one special token, and we classify each output position into this vocabulary. This would be one step: one pass of BERT, bang, input to output. There are more sophisticated approaches to doing this in one step, like CTC, but ultimately we could just do one step. The problem is that you then have the same issue that, for example, XLNet tackles (if you haven't seen my XLNet video, I recommend it): at the moment you decode the word "like", you have no idea whether an "I" was decoded over here. All you know is the distribution that the "I" is sampled from, and that could be a distribution where "I" is pretty likely but some other word is also pretty likely. So the process that samples the word "like" has no idea which of the two was actually sampled; it cannot condition on it. The assumption is that sampling the word "like" is independent of sampling the word "I", and of course that's not the case: you need to know what word is there if you want to sample the word "like", otherwise you can end up with some very confusing sentences. So this one-step process is pretty quick, but it has the drawback of these conditional independence assumptions; again, I invite you to watch the XLNet video if you want to dive deeper into this problem.

The second thing we could do is decode one position after another. We could say: alright, I'll make my five slots and leave them all empty for now, and I'm just going to decode the one position that I am most sure about. Let's say the speech at the end is very clear, so I say: I know this is a "you", so I fill in the "you" right here and fix that part of the alignment. I still don't know what the others are, but now I do a second step, and in the second step I get as input not only the original input, but also the fact that I already decoded the word "you" in this position. So now I ask: given that I already decoded the word "you", which position am I now most sure about? And I might now be most sure that this one here is an "I", because there's a "you" at the end and this chunk kind of sounds like an "I". So I fill in an "I" here, and we go to the next step.
In the next step, the model already has the information that it decoded "I" and "you", and it might say: okay, given these, the thing right here is probably silence, that makes the most sense, I hear some noise but there's already a word right after, so I'm now pretty sure this is a silence token. And you go on like this until the end, until everything is filled in. This would be n-step decoding: n steps of decoding, which no longer suffers from the conditional independence assumptions, but of course now you have the problem that you need n steps.

The Imputer does something in the middle of this. As you can see here, it partitions the sequence into blocks of size B (and this is the empty symbol here), and in each step it will, for each block, conditioned on the previous alignment and conditioned on the input, decode whatever it is most certain about within that block. It does this for as long as there are still empty tokens: in the first step it decodes one position per block, and in the second step it decodes the remaining positions. So the Imputer can trade off between the conditional independence assumption of the one-step BERT and the full sequential conditioning of n-step decoding, and it computes the alignment and the actual tokens at the same time in this process. How many steps does this take? It takes B steps, and this is pretty cool because B is the block size, so the number of decoding steps is independent of the sequence length. The Imputer is able to compute this alignment and output in a constant number of steps, and by modulating B you can trade off speed versus, let's say, performance.

I think the bigger point to understand here is how to actually exploit the assumption that there is a monotonic alignment: if there is a monotonic alignment, and the input is at least as long as the output, then you can use this representation with the silence tokens, which lets you represent the output in a form that has the same length as the input and do this kind of slot-by-slot decoding, while still allowing variable-length outputs as long as they are shorter than the input. That's pretty cool, and the next pretty cool thing is the fact that they do this in blocks.
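A rough sketch of this block decoding loop could look as follows. This is my own illustrative code, not the authors' implementation: `model` is assumed to be a network that maps the input plus the current partial alignment to per-position distributions over the vocabulary (which includes the blank token), and `x` is assumed to be a tensor whose first dimension is the input length.

```python
import torch

BLANK_ID = 0   # id of the silence / blank token in the vocabulary
EMPTY_ID = -1  # marker for "not decoded yet"

def block_decode(model, x, block_size):
    """Decode an alignment in `block_size` steps: in every step, fill in the
    single most confident empty slot inside each block, conditioned on the
    input and on everything decoded so far."""
    T = x.shape[0]                                   # input length = alignment length
    alignment = torch.full((T,), EMPTY_ID, dtype=torch.long)
    for _ in range(block_size):                      # constant number of steps
        logits = model(x, alignment)                 # (T, vocab), conditions on partial alignment
        confidence, tokens = logits.softmax(dim=-1).max(dim=-1)
        for start in range(0, T, block_size):        # one decision per block and step
            block = slice(start, min(start + block_size, T))
            empty = (alignment[block] == EMPTY_ID).nonzero(as_tuple=True)[0]
            if len(empty) == 0:
                continue                             # block already fully decoded
            best = empty[confidence[block][empty].argmax()]
            alignment[start + best] = tokens[block][best]
    return alignment                                 # tokens and BLANK_ID positions
```

With block_size equal to 1 this collapses to the one-step parallel decoding described above, and with block_size equal to the sequence length it becomes the fully sequential n-step decoding, which is exactly the trade-off the Imputer exposes.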
Now, of course, my issue with this is how the system is trained. If you think about training, the loss function has to reflect this procedure, and the way they do it is by marginalizing: you want to marginalize over all the possible alignments. Training works like this: you sample an alignment from an alignment policy, and I think they have some heuristics for constructing these alignments during training, or an expert gives you the alignment; for speech recognition I believe they use something like CTC to provide the alignments. Then you have a masking policy, and I think they just do random masking, and they use that to construct the training examples, and then they marginalize over the compatible alignments. This, I'm pretty sure, is not the same distribution as the decoding procedure I just described. Because if you decode in B steps, each step depends on the step before, so the distribution of partial alignments the Imputer sees at decoding time depends on the model itself, while the proposed training framework uses a heuristic to come up with the training alignments and a random masking policy to decide where the empty tokens are. So this is not the same distribution, and it also marginalizes over all compatible alignments, which I'm pretty sure is not the correct loss distribution either. They have some math to show that in expectation it works out, but this whole business with the roll-in policy, the expert, and the marginalization: I don't want to go too deep into it. I've given it some thought, but it would make this video too long and boring if I went into the details. Suffice to say, I invite you to look at the loss computation and ask yourself whether you think that is the correct way to construct the training data, given how you do inference later.

The architecture of the Imputer is actually pretty similar to BERT. Since you're dealing with audio at the input, you first have some convolutional network, and you also need to take the prior alignment that you've already produced as an input, which you embed, and then you simply run a transformer, an attention network, which is pretty close to the BERT example we made. They do stress that their loss is actually a lower bound on the log marginal likelihood, so I shouldn't be too harsh when I say it's not the correct distribution; they do minimize a loss that makes sense. But I mainly wanted to go over how the Imputer works and how it is structured, and I think it's pretty cool and lends itself very well to these tasks. Most of all, I like the fact that it exploits these assumptions: not all tasks fit them, but if a task does fit the assumptions, then it should be fairly obvious that one should exploit that in order to perform better. Alright, that was it for me. Thanks!
[ { "end": 6.12, "start": 0, "text": " Hi there! Today we're looking at the imputer sequence modeling via imputation" }, { "end": 12.72, "start": 6.12, "text": " and dynamic programming by William Chan, Chitwan Sariah, Jeffrey Hinton, Mohamed" }, { "end": 18.96, "start": 12.72, "text": " Nourouzi and Navdeep Jaitley. So this is a model to perform sequence-to-sequence" }, { "end": 28.2, "start": 18.96, "text": " tasks. Now sequence-to-sequence tasks are very very common in NLP, but in this" }, { "end": 33.44, "start": 28.2, "text": " case it's kind of a subset of sequence-to-sequence tasks. So a classic" }, { "end": 38.04, "start": 33.44, "text": " sequence-to-sequence task is a machine translation. Here for example the" }, { "end": 45.32, "start": 38.04, "text": " sentence I like you. If you want to translate it to German, sorry you, if you" }, { "end": 55.92, "start": 45.32, "text": " want to translate it to German that would become Ich mag dich. And you see that the" }, { "end": 62.56, "start": 55.92, "text": " input is a sequence right and the output is a sequence. Now the imputer deals with" }, { "end": 66.76, "start": 62.56, "text": " very special kind of sequence-to-sequence tasks. Namely it deals with" }, { "end": 71.88, "start": 66.76, "text": " sequence-to-sequence tasks where there is a monotonic alignment. So you see" }, { "end": 75.88, "start": 71.88, "text": " that this is given here. The first word is corresponding to the first word here," }, { "end": 82.64, "start": 75.88, "text": " the second to the second and the third to the third. This is not always the case" }, { "end": 86.24, "start": 82.64, "text": " in machine translation. You know different languages have different sentence" }, { "end": 93.86, "start": 86.24, "text": " structures. So for example in French this would be je d'aime. And you can see that" }, { "end": 99.6, "start": 93.86, "text": " the first word is still the first word, however the third word has become the" }, { "end": 104.96000000000001, "start": 99.6, "text": " second, the you and the verb goes to the end. So the imputer would not be able to" }, { "end": 110.92, "start": 104.96000000000001, "text": " deal with this task very well. A different task where the imputer would be" }, { "end": 117.2, "start": 110.92, "text": " useful for would be something like speech recognition. So if someone were to speak" }, { "end": 121.4, "start": 117.2, "text": " the words I like you and you would measure the waveform of that it would" }, { "end": 129.64, "start": 121.4, "text": " look something like I like you. So if you have this waveform let's actually" }, { "end": 136.12, "start": 129.64, "text": " make some chunk samples out of this. Let's say this is a sample right here and" }, { "end": 143.6, "start": 136.12, "text": " here is a break here and here. So we have five samples on the bottom." }, { "end": 150.72, "start": 143.6, "text": " You can see pretty easily that this sample here, this is the I and then this" }, { "end": 157.28, "start": 150.72, "text": " is silence, this is the like, this is silence and this is the you. 
So the" }, { "end": 161.04000000000002, "start": 157.28, "text": " imputer deals with these kind of sequence to sequence tasks where first" }, { "end": 167.64, "start": 161.04, "text": " of all there is a monotonic alignment, sorry monotonic alignment and second of" }, { "end": 173.68, "start": 167.64, "text": " all this is an engineering constraint where the length of the input sequence X" }, { "end": 179.07999999999998, "start": 173.68, "text": " is larger or equal to the length of the input sequence Y and you'll see" }, { "end": 185.95999999999998, "start": 179.07999999999998, "text": " why mainly because we rely on being able to compute this alignment here. The" }, { "end": 193.92000000000002, "start": 185.96, "text": " alignment of input samples to output samples. You can see that the" }, { "end": 197.96, "start": 193.92000000000002, "text": " monotonic alignment is actually given fairly well in speech recognition" }, { "end": 202.88, "start": 197.96, "text": " because if something is later down here it is also later in the sequence up here." }, { "end": 210.68, "start": 202.88, "text": " That is a monotonic alignment and also usually we have more wave samples" }, { "end": 217.84, "start": 210.68, "text": " then we have words in the output sequence. So that would be a task for the" }, { "end": 225.36, "start": 217.84, "text": " imputer. Now let's think about how we would do something like this. So let's" }, { "end": 233.68, "start": 225.36, "text": " put X at the top here and we said X has five tokens in it and let's put Y at the" }, { "end": 246.28, "start": 233.68, "text": " bottom. Y actually has three tokens. So this here is I like you." }, { "end": 252.44, "start": 246.28, "text": " This is the waveform and we want the I like you at the bottom. So what could we" }, { "end": 259.36, "start": 252.44, "text": " do? First of all what the imputer does is it represents I like you not as this" }, { "end": 267.88, "start": 259.36, "text": " form right here but as a form where you have the same length as X divided into" }, { "end": 276.16, "start": 267.88, "text": " the same amount of things and then it does the following. So for this this is" }, { "end": 278.68, "start": 276.16, "text": " an example." }, { "end": 291, "start": 278.68, "text": " This is how it would represent Y. It would say I have as many chunks on" }, { "end": 296.24, "start": 291, "text": " the top as on the bottom. I know this chunk here corresponds to this token" }, { "end": 302.48, "start": 296.24, "text": " then this here to this and this here to this and then these are these" }, { "end": 308.6, "start": 302.48, "text": " intermediate ones. So you can see these correspond to those. These are" }, { "end": 314, "start": 308.6, "text": " silents right here. Now it doesn't always need to be that there is always one" }, { "end": 318.32000000000005, "start": 314, "text": " token and a silence then a token and a silence. The task of the imputer is" }, { "end": 329.20000000000005, "start": 318.32000000000005, "text": " actually to see whether this is more likely than for example I like and then" }, { "end": 334.52000000000004, "start": 329.20000000000005, "text": " silence silence and then you. So the task of the imputer is to distinguish" }, { "end": 339.24, "start": 334.52, "text": " these two from each other and then of course also produce the actual tokens." 
}, { "end": 346.32, "start": 339.24, "text": " Now if you think about how would you go about taking X and producing something" }, { "end": 351.84, "start": 346.32, "text": " like Y. So this is Y let's call it tilde. This is the actual Y right but you can" }, { "end": 356.2, "start": 351.84, "text": " see that this here is a deterministic function in one way. It's actually not a" }, { "end": 360.79999999999995, "start": 356.2, "text": " deterministic function in the other way and that becomes interesting when you" }, { "end": 365, "start": 360.8, "text": " have to compute a loss for this. But how would we go about doing this? What" }, { "end": 370.08, "start": 365, "text": " we could do is we could just take a big transformer BERT. That's actually" }, { "end": 379.12, "start": 370.08, "text": " drawn arrow. We could just take BERT and we could simply so in BERT you have" }, { "end": 385.64, "start": 379.12, "text": " in if you if you construct it correctly you have as many input tokens as output" }, { "end": 390.36, "start": 385.64, "text": " tokens. So what we could simply say is for each of the outputs that we get we" }, { "end": 397.24, "start": 390.36, "text": " simply make this as a softmax classifier over our vocabulary with the silence" }, { "end": 404.08000000000004, "start": 397.24, "text": " being one special token and we simply classify each of the outputs into this" }, { "end": 412.16, "start": 404.08000000000004, "text": " vocabulary. This would be one step right? So we could do one step BERT bang bang" }, { "end": 418.16, "start": 412.16, "text": " input to output and there is more there are more sophisticated approaches to" }, { "end": 423.20000000000005, "start": 418.16, "text": " doing this in one step like CTC but ultimately we could just do one step but" }, { "end": 428.16, "start": 423.20000000000005, "text": " then you'd have the same problem like for example XL net if you haven't seen" }, { "end": 434.20000000000005, "start": 428.16, "text": " my XL net video I recommend it that they exactly take the problem if you do this" }, { "end": 441.04, "start": 434.20000000000005, "text": " right then at the moment where you decode the word like you have no idea" }, { "end": 446.32000000000005, "start": 441.04, "text": " that there is an I over here all you know is the the vector you have here" }, { "end": 453.52, "start": 446.32, "text": " that you sample the I from right but this could be a distribution where I is" }, { "end": 458.2, "start": 453.52, "text": " pretty high but some other word is also pretty high so this process over here" }, { "end": 464.56, "start": 458.2, "text": " that samples the word like has no idea which of the two here you actually would" }, { "end": 470.08, "start": 464.56, "text": " sample so it cannot condition on it so it is the the assumption here is that" }, { "end": 473.68, "start": 470.08, "text": " the sampling of the word like is independent of the sampling of the word" }, { "end": 480, "start": 473.68, "text": " I and of course that's not the case the you need to know what word is there if" }, { "end": 486.12, "start": 480, "text": " you want to sample the word like otherwise you can end up with some very" }, { "end": 492.6, "start": 486.12, "text": " confusing sentences so this one step process is pretty quick but it has the" }, { "end": 495.8, "start": 492.6, "text": " drawback that there are these conditional independence assumptions and" }, { "end": 500.88, "start": 495.8, "text": " again I invite you to watch the XL net video if you 
want to dive more into this" }, { "end": 507.04, "start": 500.88, "text": " problem the second thing we could do is we could just decode one after another" }, { "end": 516.64, "start": 507.04, "text": " right so we could say all right I'll make sorry I'll make my five slots here" }, { "end": 521.52, "start": 516.64, "text": " and I just leave them empty for now and I'm just going to decode the one that I" }, { "end": 527.72, "start": 521.52, "text": " am most sure about and let's say the the speech at the back here is very clear" }, { "end": 533.48, "start": 527.72, "text": " and you say other I'm I know this is a you right so I'm gonna fill in you right" }, { "end": 540.36, "start": 533.48, "text": " here right and make this alignment that this goes here this is the you right I" }, { "end": 546.6800000000001, "start": 540.36, "text": " still don't know what the others are but now what I did they do a second step and" }, { "end": 556.6, "start": 546.6800000000001, "text": " in the second step I get as an input not only the original input like this this" }, { "end": 562.8000000000001, "start": 556.6, "text": " thing here but I also get the fact that I already decoded the word you to here" }, { "end": 568.48, "start": 562.8000000000001, "text": " right in this step so now I say given that I already decoded the word you which" }, { "end": 575.36, "start": 568.48, "text": " one am I now most sure about and I might be most sure about to say I'm most sure" }, { "end": 578.52, "start": 575.36, "text": " about this now being an eye because there's a you at the end and this kind" }, { "end": 584.2, "start": 578.52, "text": " of sounds like an eye so an eye here right it goes to the next step and then" }, { "end": 589.12, "start": 584.2, "text": " the next step it already has the information that it decoded I and you" }, { "end": 597.4000000000001, "start": 589.12, "text": " and now it's a might say ah okay given these that's so probably this thing so I" }, { "end": 604.2800000000001, "start": 597.4000000000001, "text": " here probably the thing here the thing right here is silence right makes the" }, { "end": 608.2, "start": 604.2800000000001, "text": " most sense I kind of hear some noise but there's already a word after so now I'm" }, { "end": 613.76, "start": 608.2, "text": " pretty sure that this here is a silent token right and you go this until the" }, { "end": 621.96, "start": 613.76, "text": " end until you're actually at this so this here would be n step decoding this" }, { "end": 628.24, "start": 621.96, "text": " here would be n steps of decoding which now no longer has the problem of these" }, { "end": 632.72, "start": 628.24, "text": " conditional independence assumptions but of course now you have the problem that" }, { "end": 641.16, "start": 632.72, "text": " you need n steps right the imputer does something in the middle of this the" }, { "end": 648.12, "start": 641.16, "text": " imputer will as you can see here it will form this into blocks right blocks of" }, { "end": 655.3199999999999, "start": 648.12, "text": " size B and this is the empty symbol here right and what it will do is it will" }, { "end": 661.36, "start": 655.3199999999999, "text": " make a step where in each block for each block it will conditioned on the" }, { "end": 665.36, "start": 661.36, "text": " previous alignment and conditioned on the input it will decode whatever it" }, { "end": 673.24, "start": 665.36, "text": " feels it is most certain about in each block and then it does this for as long" }, { "end": 
678.64, "start": 673.24, "text": " as there are still empty tokens right you can see here the first block and then" }, { "end": 686.4, "start": 678.64, "text": " in the second step it will decode this this this and this so the imputer can" }, { "end": 692, "start": 686.4, "text": " trade off between the conditional independence assumption of the one step" }, { "end": 697.48, "start": 692, "text": " BERT and the full conditional independence assumption of the n step" }, { "end": 705.44, "start": 697.48, "text": " decoding right so it will compute this alignment and the actual tokens at the" }, { "end": 712.4, "start": 705.44, "text": " same time in this process so how many steps does this take this takes now B" }, { "end": 721.36, "start": 712.4, "text": " steps and this is pretty cool because B is the block size so this is independent" }, { "end": 727.4, "start": 721.36, "text": " of the sequence length so it is able to compute this alignment and output in a" }, { "end": 734.2, "start": 727.4, "text": " constant number of steps right so you're by modulating this B you're now able to" }, { "end": 741.84, "start": 734.2, "text": " trade off speed versus let's say performance in the imputer and this is" }, { "end": 747.12, "start": 741.84, "text": " pretty cool so I think actually I think the the bigger point to understand here" }, { "end": 753.28, "start": 747.12, "text": " is how to actually use the assumption that there is a monotonic alignment" }, { "end": 757.28, "start": 753.28, "text": " right because if there is a monotonic alignment and if this thing is given" }, { "end": 765.6, "start": 757.28, "text": " here then you can do this you can do this representation right here with the" }, { "end": 773.72, "start": 765.6, "text": " silence tokens and that allows you to basically represent the output in a" }, { "end": 778.48, "start": 773.72, "text": " form that is of the same length as the input and do this kind of token by token" }, { "end": 784.84, "start": 778.48, "text": " decoding while still allowing you to have variable lengths output as long as" }, { "end": 792.08, "start": 784.84, "text": " they're smaller in length than the input so that's pretty cool and then the the" }, { "end": 799.88, "start": 792.08, "text": " next pretty cool thing is the fact that they do this in blocks now of course my" }, { "end": 805.92, "start": 799.88, "text": " issue with this so this is how the system works my issue with this is how" }, { "end": 812.76, "start": 805.92, "text": " the system is trained so if you think about how you train this you must train" }, { "end": 820.56, "start": 812.76, "text": " this first of all the loss function right has to revert this and how they" }, { "end": 829.48, "start": 820.56, "text": " do it as they marginalize you see this down here you want to marginalize over" }, { "end": 838.96, "start": 829.48, "text": " all the possible alignments right here so this is how you train you sample an" }, { "end": 848.16, "start": 838.96, "text": " alignment from the alignment policy and this alignment policy is I think they" }, { "end": 853.2, "start": 848.16, "text": " have some heuristics of how they construct the alignments during during" }, { "end": 858.32, "start": 853.2, "text": " training or you have experts actually giving you this alignment I think they" }, { "end": 864.84, "start": 858.32, "text": " use in the speech recognition they use something like CTC to give you the" }, { "end": 872.1600000000001, "start": 864.84, "text": " alignments from the alignment policy 
and then you have a masking policy and I" }, { "end": 877.5200000000001, "start": 872.1600000000001, "text": " think they also they just do random masking and then they use that for" }, { "end": 884.7600000000001, "start": 877.5200000000001, "text": " training and then they marginalize over the alignments this I'm pretty sure is" }, { "end": 892.72, "start": 884.76, "text": " not the same distribution as the decoding procedure I just described" }, { "end": 901.16, "start": 892.72, "text": " right so the decoding procedure if you do this in B steps right that means each" }, { "end": 908.04, "start": 901.16, "text": " of the step is dependent on the step before so that means the distribution of" }, { "end": 914.4, "start": 908.04, "text": " whatever you whatever the imputer sees is actually dependent on itself while" }, { "end": 921.68, "start": 914.4, "text": " these people are proposing a training framework where you have here you have a" }, { "end": 928.16, "start": 921.68, "text": " heuristic in order to come up with the training sample alignments and here you" }, { "end": 936, "start": 928.16, "text": " have a random I think a random masking policy that comes up with the with where" }, { "end": 941.3199999999999, "start": 936, "text": " the empty tokens are so this is not the same distribution and then also it" }, { "end": 947.44, "start": 941.32, "text": " marginalizes over all compatible alignments which I'm I'm pretty sure" }, { "end": 952.2800000000001, "start": 947.44, "text": " this is not the same distribution this is not the correct loss distribution" }, { "end": 959.5200000000001, "start": 952.2800000000001, "text": " they have some math to show that in expectation it's the same but yeah this" }, { "end": 967.7600000000001, "start": 959.5200000000001, "text": " is this is over there over their role in policy and role and expert and and" }, { "end": 974.2, "start": 967.76, "text": " marginalization this I don't want to go too deep into this I've given it some" }, { "end": 979.28, "start": 974.2, "text": " thought but it will make this video too long and boring if I actually go into" }, { "end": 984.96, "start": 979.28, "text": " the details here suffice to say I invite you to look at the loss computation and" }, { "end": 992.68, "start": 984.96, "text": " ask yourself if you think that is the correct way to produce the data set for" }, { "end": 999.7199999999999, "start": 992.68, "text": " training given how you do the inference later the architecture of the imputer is" }, { "end": 1006.52, "start": 999.7199999999999, "text": " actually pretty similar to BERT in that first of all well okay you're dealing" }, { "end": 1011.3599999999999, "start": 1006.52, "text": " with audio in the input so you're going to have some convolutional network here" }, { "end": 1015.8, "start": 1011.3599999999999, "text": " and you also need to take as an input the prior alignment that you've already" }, { "end": 1021.9599999999999, "start": 1015.8, "text": " produced right so this you embed and but then you simply do an attention" }, { "end": 1029.1200000000001, "start": 1021.96, "text": " network a transformer which will which is pretty close to to the bird example" }, { "end": 1039.44, "start": 1029.1200000000001, "text": " we've made and so I mean they stress that that their that their loss is" }, { "end": 1044.44, "start": 1039.44, "text": " actually a lower bound on the loss so I shouldn't be I shouldn't be too hard when" }, { "end": 1050.64, "start": 1044.44, "text": " I say it's not the 
correct distribution they do minimize something some loss" }, { "end": 1058.4, "start": 1050.64, "text": " that actually makes sense but yeah I mainly wanted to go over the over the" }, { "end": 1064.3200000000002, "start": 1058.4, "text": " how the imputer works and how the it is structured and I think it's pretty cool" }, { "end": 1072.48, "start": 1064.3200000000002, "text": " and it lends itself very well to these tasks and most of all I like the fact" }, { "end": 1079.96, "start": 1072.48, "text": " that it exploits the these assumptions here so not all tasks fit these" }, { "end": 1085.8400000000001, "start": 1079.96, "text": " assumptions but if a task does fit the assumption then I think it should be you" }, { "end": 1090.32, "start": 1085.8400000000001, "text": " know it it should be fairly obvious that one should exploit that in order to" }, { "end": 1110, "start": 1090.32, "text": " perform better all right that was it for me thanks" } ]
ZVVnvZdUMUk
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
[ "Science & Technology" ]
[ "deep learning", "machine learning", "neural networks", "pruning", "distillation", "quantization", "size", "weights", "optimization", "training", "generalization", "overparameterization", "winning ticket", "winning lottery ticket", "arxiv" ]
Stunning evidence for the hypothesis that neural networks work so well because their random initialization almost certainly contains a nearly optimal sub-network that is responsible for most of the final performance. https://arxiv.org/abs/1803.03635 Abstract: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy. Authors: Jonathan Frankle, Michael Carbin Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there. Today we're looking at The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks by Jonathan Frankle and Michael Carbin. So this paper is sort of an empirical paper into what makes neural networks train successfully. And it comes out of the literature of pruning. So they say neural network pruning techniques have been around for a while. They can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. So what does this mean? If you have a neural network, let's say you just have three nodes in each layer and you have two layers here. You have a neural network here. If you have a fully connected neural network, every node is going to be connected with every node in the next layer, right? And these connections are your weights, your thetas. And you're going to train them, which means you have a number of steps in this direction. And let's say you have a test set accuracy right here. So here is steps. You're going to train them. And if you train them, your accuracy will reach a certain point, right? I'm just going to draw the end point here. Let's say you reach a 90% test accuracy. So your network generalizes pretty well. That's pretty good. So people have been wondering: these networks require quite a lot of storage. You know, this is nine connections right here. So three times three. And this is also nine connections. Can we make it smaller but still retain the accuracy? And this is where pruning comes in. So with pruning, people would go in after you train them. So the first step is to train the full network, right? And then the second step is to prune. Now when you prune, you basically select among the weights that you have trained, you select the best ones in some form or another. In this case, people just select the ones with the largest magnitudes. But there are multiple techniques to do this. And this is very related to things like quantization or distillation. So with pruning, you just leave away some of the weights or most of the weights. And you hope that you still retain a pretty good accuracy right here, right? Sorry, actually, we don't need this steps thing. So you leave away weights and you retain a good accuracy. So pruning methods have been deployed successfully to make networks use less space or be faster to evaluate. Because, of course, with fewer numbers, you need to do fewer calculations. So this paper builds on top of this and it basically says, all right, what if we do the following: we now take this network that we identified after training, and we train just this sub network from the beginning. Right. So step three is retrain. Then it will also perform pretty well or even better, under one condition. Right. So if you only train this thing, it will perform well under one condition. And the condition is that you transfer over the initial weights. So the question is, can we train just the small network from the beginning so that we don't have to train the big network? And the paper identifies that this works if your initial weights, theta zero, of the small network are equal to the initial weights of the large network. Right. Just the ones where you ported them over. But basically, the short answer is no. And the reason is, if you only want to train the small network, you need to know the good initialization of these weights here.
And the good initialization you only know after you've trained the large network and actually identified which of these connections make sense. You can't just take a smaller network from the beginning. You have to train the larger one. Then you know which weights and which initializations make sense. So this is the winning lottery ticket hypothesis. Basically, it states, and we can read it out in full: the lottery ticket hypothesis is that a randomly initialized dense neural network contains a sub network that is initialized such that, when trained in isolation, it can match the test accuracy of the original network after training for at most the same number of iterations. Right. Now, the important part here is that it contains a sub network that is initialized such that when trained in isolation. So two things are important: the structure of the sub network is important, but the initialization of its connections is also important. So the paper kind of hints at why neural networks work at all. And the reason why neural networks work is because we've often thought: neural networks have so many parameters, how can they even generalize? The reason is the following. If we have a neural network, we throw so many parameters at it. Some of the parameters, one subset of the parameters, namely the red ones here, are going to be initialized in such a beneficial way that training will make the network perform well. So it's initialization plus SGD on that sub network. So it is actually only a very small sub network that is responsible for the performance of the neural network. But that sub network needs to be initialized at the correct position. And by over parameterizing these neural networks so much, we actually give it combinatorially many sub networks to choose from where the initialization could be good. So because of this combinatorics, it means that if we over parameterize by some margin, then there's almost guaranteed to be a good sub network in there that can then perform well. So I hope this makes sense. It is not a magic thing where we now can train the smaller networks from the start. It is an explanation of why the over parameterization in neural networks makes sense. Because by over parameterizing, we allow the neural networks to exploit the combinatorics to find a good, well initialized sub network that will perform well. And the evidence for this is exactly the fact that if we transfer over the sub network, it by itself will reach the same performance or actually exceed the performance, but only if we initialize it at the same point as it was initialized in the original network. So here is how these sub networks are identified. We've already hinted at that, but here is how the paper does it. So it says identifying winning tickets: first, randomly initialize a neural network. This is the full neural network. Then train the network for j iterations, arriving at some parameters. These are the trained parameters. Prune p% of the parameters. So of these parameters, prune some. And in order to know which ones you prune, you need to have first trained the full neural network. So this is the catch here. You need to train the full neural network to know which ones you must prune. And thereby you create a mask m. And then they say reset the remaining parameters to their value in theta 0. Actually you don't need to say remaining; you can just say reset the parameters to their values in theta 0. Now this is also important.
This is the same theta 0 as it was at the beginning of the training. So you need to actually set them back to those exact values. And thereby you create the winning ticket. If you just want to end up with a trained network, then this remaining thing here is important. But if you then want to retrain, you can set everything back and only train the masked version of the network. And they say this will identify these winning tickets. And it will actually work better if you don't do this in what they call one shot. But if you do this iterative pruning, that means it repeatedly trains, prunes and resets the network over n rounds. Each round prunes p to the 1 over n percent of the weights that survived the previous round. Now why might that be? It might be. And this is I think a somewhat valid hypothesis that I myself put forth here. It might be that if you prune some of the weights, let's say you prune this one and this one, what you'll do is you'll put the responsibility of these weights onto other weights. So maybe on this one and this one. So as we said, they prune by looking at which weights are large. So let's say here we have the weights of the layer and these are the magnitudes of the weights. So you would prune, let's say you only want to keep two of those around. So you would prune this one and this one because these are pretty small. Here's the magnitude. And you would also prune that one. If you just do this one shot and then you would retrain and maybe these weights would end up somewhat different. But if you do this in multiple rounds, let's say you first prune one of them. So you would actually prune the smallest one, this one here. And then you retrain and then your weights actually change. And all of the responsibility that this weight carried before is now transferred onto this. So your new weights look like this. And you prune another one like this. And again, all the responsibility of this would, in my hypothetical example, fall on this one. And now if you prune a third one, you would actually prune this one because you realize this weight here, in absence of these two other weights, is actually important. So you would prune this one as well. So I think that is why this iterative pruning method might work a bit better than the one shot pruning method that they say here. So they do a lot of empirical investigation. And I just want to highlight very few of them. So that you get the gist and then the paper goes into a lot of detail and a lot of different architectures that you can check out yourself. So here we have a plot that deals with percent of weights remaining. So as you go to the right here, they drop more and more weights and realize this is a log plot. So if the dashed lines here are random pruning, which means you just drop out a certain number of weights and then you retrain. And you can see that the dashed line here, it starts dropping and just becomes worse as you have less and less weights remaining, which is exactly what's expected. You prune the network, you make it smaller, you make it less performant. And the more weights you take away, the less performing it is. But interestingly enough, if you do this pruning that they suggest and then retrain with the correct initialization, not only do you retain the same level of accuracy for very long, you see here this is 2.9 or 1.2 percent of weights remaining, but you actually go higher. 
So you can see here when you have 16 percent of weights remaining, there's actually a significant difference between the full network and the prune network. And that's only by simply training this winning hypothesis. So this I find very, very fascinating. And again, this is not a magic bullet that you can do from the beginning, but it does give a clue that if you could train these from the beginning, then you might actually end up at a better point. So it does actually give a practical application. Also, you see they train faster. So the blue line here is the full network over the course of training. Sorry, this should be blue. So here is training iterations and this is test accuracy. So you see the full network does something like this. Now, if you prune to 20 percent of the weights, actually train faster and you go higher. And even if you have 7 percent of the weights, you go almost as high. So this is very interesting. Only when you go to like 1.9 percent of the weights does your performance degrade again and eventually actually go lower than the original network. So that is pretty, pretty, pretty cool, I think. Now, as I said, they do a lot of investigation. And I think one of the main takeaways is that it is not only the structure of the winning hypothesis. So it's not only the structure of the sub network that makes it to be a winning hypothesis. It is actually the initialization. Here I want to show one of these plots. They have lots of plots. You can see here, for example, sorry, this is from my own annotations. Again, this is percent of weights remaining and this is test accuracy at the final iteration. And if we initialize the sub network at its original position, like this method suggests, you see, we first increase the accuracy and then decrease it after a long time. If we take the same sub network, right, but we randomly reinitialize it, then it drops much faster and actually immediately drops. So it really is about not only the structure of the sub network, but about its initialization. I think that is that is the core of the hypothesis here. A very interesting related finding that I just want to mention, I find, to be that they actually discover that the weights, so if you have a weight of the, so if you have two kinds of weights, let's actually go up to my original drawing here. If you compare how fast or how far do the weights travel in optimization space, right, so you can basically look at how far weights travel during optimization. So you take the full neural network here and you look at a parameter that ends up being in the winning hypothesis, theta, theta zero, and it goes to theta end, which let's say theta final. And you also look at parameters that don't end up in the winning hypothesis. Let's call these theta one, two, theta, also final, prime. I'm not too good at labeling. And you look at how far they travel, you'll find that the weights that end up in the winning hypothesis, they, during optimization, they travel much further in optimization space than weights that are not in the winning hypothesis, right? They just stay around much more. So it's not that the kind of good network is already contained in initialization. It's much more than the good network lends itself very favorably to be initialized by SGD, right? Because it travels farther. It means SGD has a bigger pull on it, right? I think there is a lot of things that are yet to be explored in this space, and I think this paper is a very cool contribution to our understanding of how neural networks work. 
All right, I invite you to check out all the experiments. They do a very thorough job. And with that, I say bye bye.
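As a rough illustration of the identification procedure described in this transcript (train, prune the smallest-magnitude weights, reset the survivors to theta 0, and repeat over several rounds), here is a short Python sketch. It is not the authors' code: `train_fn` is a hypothetical placeholder for whatever training loop you use, and layer-wise magnitude pruning is simply the criterion mentioned in the video.

```python
import numpy as np

def find_winning_ticket(init_weights, train_fn, rounds=5, final_sparsity=0.9):
    """Minimal sketch of iterative magnitude pruning with a reset to theta_0.

    init_weights: dict of layer name -> np.ndarray, the random initialization theta_0.
    train_fn(weights, masks): assumed to train the masked network for j iterations
    and return the trained weights; it stands in for any SGD training loop.
    """
    theta0 = {k: v.copy() for k, v in init_weights.items()}
    masks = {k: np.ones_like(v) for k, v in theta0.items()}
    # keep this fraction of the surviving weights in each round, so that after
    # `rounds` rounds roughly (1 - final_sparsity) of all weights remain
    keep_per_round = (1.0 - final_sparsity) ** (1.0 / rounds)

    weights = {k: v.copy() for k, v in theta0.items()}
    for _ in range(rounds):
        trained = train_fn(weights, masks)             # 1) train the (masked) network
        for name, w in trained.items():                # 2) prune smallest-magnitude survivors
            alive = np.abs(w[masks[name] == 1])
            threshold = np.quantile(alive, 1.0 - keep_per_round)
            masks[name] = np.where((np.abs(w) >= threshold) & (masks[name] == 1), 1.0, 0.0)
        weights = {k: theta0[k] * masks[k] for k in theta0}  # 3) reset survivors to theta_0
    return masks, weights                               # the candidate winning ticket
```

Retraining only the masked weights starting from theta 0 then corresponds to the winning-ticket curves discussed above, while replacing `theta0` with a fresh random initialization gives the reinitialization control that performs markedly worse.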
[ { "end": 11, "start": 0, "text": " Hi there. Today we're looking at the lottery ticket hypothesis finding sparse trainable neural networks by Jonathan Frankel and Michael Carbon." }, { "end": 21, "start": 11, "text": " So this paper is sort of an empirical paper into what makes neural networks train successfully." }, { "end": 29, "start": 21, "text": " And it comes out of the literature of pruning. So they say neural network pruning techniques, right, they have been around for a while." }, { "end": 44, "start": 29, "text": " They can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance or inference without compromising accuracy." }, { "end": 57, "start": 44, "text": " So what does this mean? If you have a neural network, let's say you just have three nodes, each layer, you have two layers here." }, { "end": 69, "start": 57, "text": " You have a neural network here. If you have a fully connected neural network, every node is going to be connected with every node in the next layer, right?" }, { "end": 84, "start": 69, "text": " And these connections are your weights, your thetas. And you're going to train them, which means you have a number of steps in this direction." }, { "end": 94, "start": 84, "text": " And let's say you have a test set accuracy right here. So here is steps. You're going to train them." }, { "end": 103, "start": 94, "text": " And if you train them, your accuracy will reach a certain point, right? I'm just going to draw the end point here." }, { "end": 109, "start": 103, "text": " Let's say you reach a 90% test accuracy. So your network generalizes pretty well. That's pretty good." }, { "end": 118, "start": 109, "text": " So people have been wondering, these networks, they require quite a lot of storage. You know, this is nine connections right here." }, { "end": 126, "start": 118, "text": " So three times three. And this is also nine connections. Can we make it smaller but still retain the accuracy?" }, { "end": 131, "start": 126, "text": " And this is where pruning comes in. So with pruning, people would go and after you train them." }, { "end": 140, "start": 131, "text": " So the first step is train the full network, right? And then the second step is prune." }, { "end": 155, "start": 140, "text": " Now when you prune, you basically select among the weights that you have that you have trained, you select the best ones in some form or another." }, { "end": 162, "start": 155, "text": " In this case, people just select the ones with the largest magnitudes. But there are multiple techniques to do this." }, { "end": 172, "start": 162, "text": " And this is very related to things like quantization or distillation. So with pruning, you just leave away some of the weights or most of the weights." }, { "end": 184, "start": 172, "text": " And you hope that you still retain a pretty good accuracy right here, right? Sorry, actually, we don't need these steps thing." }, { "end": 195, "start": 184, "text": " So you leave away weights and you retain a good accuracy. So pruning methods have been deployed successfully to make networks use less space or be faster to evaluate." }, { "end": 200, "start": 195, "text": " Because, of course, with less numbers, you need to do less calculations." 
}, { "end": 228, "start": 200, "text": " So this paper builds on top of this and it basically says, all right, if we do the following, if we now take this network that we identified after training and we just take this network and we train it from the beginning, only this sub network." }, { "end": 234, "start": 228, "text": " Right. So three is retrain." }, { "end": 247, "start": 234, "text": " Then it will also perform pretty well or even better under one condition. Right. So if you only train this thing, it will perform well under one condition." }, { "end": 262, "start": 247, "text": " And the condition is that you transfer over the initial weights. So right. The question is, can we train just the small network from the beginning so that we don't have to train the big network?" }, { "end": 278, "start": 262, "text": " Right. And the paper identifies that this works if if your initial weights, theta zero of the small network are equal to the initial weights of the large network." }, { "end": 285, "start": 278, "text": " Right. Just so just the ones where you ported them over. But basically, the short answer is no." }, { "end": 297, "start": 285, "text": " And the reason is, if you only want to train the small network, you need to know the good initialization of these of these weights all here." }, { "end": 307, "start": 297, "text": " And the good initialization, you only know after you've trained the large network and actually identified which of these connections make sense." }, { "end": 316, "start": 307, "text": " You can't just take a smaller network from the beginning. You have to train the larger one. Then you know which weights and which initializations make sense." }, { "end": 324, "start": 316, "text": " So this is the winning lottery ticket hypothesis. Basically, it states and we can read it out in full." }, { "end": 337, "start": 324, "text": " The lottery ticket hypothesis is a randomly initialized dense neural network contains a sub network that is initialized such that when trained in isolation," }, { "end": 345, "start": 337, "text": " it can match the test accuracy of the original network after trading for at most the same number of iterations." }, { "end": 356, "start": 345, "text": " Right. Now, the important part here is that it contains a sub network that is initialized such that when trained in isolation." }, { "end": 369, "start": 356, "text": " So two things are important. It is important. The structure of the network of the sub network, but it is also important." }, { "end": 378, "start": 369, "text": " What are the initialization of the connections? So the paper kind of hints at why neural networks work at all." }, { "end": 387, "start": 378, "text": " And the reason why neural networks work is because we've often thought of neural networks have so many parameters, how can they even generalize?" }, { "end": 394, "start": 387, "text": " The reason is the following. If we have a neural network, we throw so many parameters at it." }, { "end": 403, "start": 394, "text": " Some of the parameters, one subset of the parameters, namely the red ones here, are going to be initialized in such a way," }, { "end": 412, "start": 403, "text": " in such a beneficial way that training will perform, will make the network perform well." }, { "end": 421, "start": 412, "text": " So it's initialization plus SGD on that sub network." }, { "end": 428, "start": 421, "text": " So it is actually only a very small sub network that is responsible for the performance of the neural network." 
}, { "end": 435, "start": 428, "text": " But that sub network needs to be initialized at the correct position." }, { "end": 448, "start": 435, "text": " And by over parameterizing these neural networks so much, we actually give it combinatorically many sub networks to choose from where the initialization could be well." }, { "end": 455, "start": 448, "text": " So because of this combinatorics, it means that if we over parameterize by some margin," }, { "end": 462, "start": 455, "text": " then there's almost guaranteed to be a good sub network in there that can then perform well." }, { "end": 472, "start": 462, "text": " So I hope this makes sense. It is basically not a way, it is not a magic thing where we now can train the smaller networks." }, { "end": 479, "start": 472, "text": " It is an explanation of why the over parameterization in neural networks makes sense." }, { "end": 493, "start": 479, "text": " Because by over parameterizing, we allow the neural networks to exploit the combinatorics to find a good, well initialized sub network that will perform well." }, { "end": 506, "start": 493, "text": " And the evidence for this is exactly the fact that if we transfer over the sub network, it by itself will reach the same performance or actually exceed the performance." }, { "end": 515, "start": 506, "text": " But only if we initialize it at the same point as it was initialized in the original network." }, { "end": 520, "start": 515, "text": " So here is how these sub networks are identified." }, { "end": 524, "start": 520, "text": " We've already hinted at that, but here is how the paper does it." }, { "end": 529, "start": 524, "text": " So it says identifying winning tickets. First randomly initialize a neural network." }, { "end": 531, "start": 529, "text": " This is the full neural network." }, { "end": 537, "start": 531, "text": " Then train the network for j iterations arriving at some parameters." }, { "end": 540, "start": 537, "text": " These are the trained parameters." }, { "end": 545, "start": 540, "text": " Prune p% of the parameters." }, { "end": 548, "start": 545, "text": " So of these parameters, prune some." }, { "end": 558, "start": 548, "text": " And this is in order to know which ones you prune, you need to have first trained the full neural network." }, { "end": 564, "start": 558, "text": " So this is the catch here. You need to train the full neural network to know which ones you must prune." }, { "end": 568, "start": 564, "text": " And thereby you create a mask m." }, { "end": 574, "start": 568, "text": " And then they say reset the remaining parameters to their value in theta 0." }, { "end": 580, "start": 574, "text": " Actually you don't need to say remaining. You can just say reset the parameters to their values in theta 0." }, { "end": 587, "start": 580, "text": " Now this is also important. This is the same theta 0 as it was at the beginning of the training." }, { "end": 592, "start": 587, "text": " So you need to actually set them back to those exact values." }, { "end": 596, "start": 592, "text": " And thereby you create the winning ticket." }, { "end": 606, "start": 596, "text": " If you just want to end up with a trained network, then this remaining thing here is important." }, { "end": 616, "start": 606, "text": " But if you then want to retrain, you can set everything back and only train the masked version of the network." }, { "end": 620, "start": 616, "text": " And they say this will identify these winning tickets." 
}, { "end": 626, "start": 620, "text": " And it will actually work better if you don't do this in what they call one shot." }, { "end": 634, "start": 626, "text": " But if you do this iterative pruning, that means it repeatedly trains, prunes and resets the network over n rounds." }, { "end": 641, "start": 634, "text": " Each round prunes p to the 1 over n percent of the weights that survived the previous round." }, { "end": 645, "start": 641, "text": " Now why might that be? It might be." }, { "end": 655, "start": 645, "text": " And this is I think a somewhat valid hypothesis that I myself put forth here." }, { "end": 665, "start": 655, "text": " It might be that if you prune some of the weights, let's say you prune this one and this one," }, { "end": 671, "start": 665, "text": " what you'll do is you'll put the responsibility of these weights onto other weights." }, { "end": 679, "start": 671, "text": " So maybe on this one and this one. So as we said, they prune by looking at which weights are large." }, { "end": 689, "start": 679, "text": " So let's say here we have the weights of the layer and these are the magnitudes of the weights." }, { "end": 699, "start": 689, "text": " So you would prune, let's say you only want to keep two of those around." }, { "end": 703, "start": 699, "text": " So you would prune this one and this one because these are pretty small." }, { "end": 709, "start": 703, "text": " Here's the magnitude. And you would also prune that one." }, { "end": 717, "start": 709, "text": " If you just do this one shot and then you would retrain and maybe these weights would end up somewhat different." }, { "end": 723, "start": 717, "text": " But if you do this in multiple rounds, let's say you first prune one of them." }, { "end": 729, "start": 723, "text": " So you would actually prune the smallest one, this one here." }, { "end": 733, "start": 729, "text": " And then you retrain and then your weights actually change." }, { "end": 741, "start": 733, "text": " And all of the responsibility that this weight carried before is now transferred onto this." }, { "end": 745, "start": 741, "text": " So your new weights look like this." }, { "end": 747, "start": 745, "text": " And you prune another one like this." }, { "end": 753, "start": 747, "text": " And again, all the responsibility of this would, in my hypothetical example, fall on this one." }, { "end": 759, "start": 753, "text": " And now if you prune a third one, you would actually prune this one because you realize this weight here," }, { "end": 763, "start": 759, "text": " in absence of these two other weights, is actually important." }, { "end": 765, "start": 763, "text": " So you would prune this one as well." }, { "end": 775, "start": 765, "text": " So I think that is why this iterative pruning method might work a bit better than the one shot pruning method that they say here." }, { "end": 779, "start": 775, "text": " So they do a lot of empirical investigation." }, { "end": 783, "start": 779, "text": " And I just want to highlight very few of them." }, { "end": 793, "start": 783, "text": " So that you get the gist and then the paper goes into a lot of detail and a lot of different architectures that you can check out yourself." }, { "end": 799, "start": 793, "text": " So here we have a plot that deals with percent of weights remaining." }, { "end": 807, "start": 799, "text": " So as you go to the right here, they drop more and more weights and realize this is a log plot." 
}, { "end": 817, "start": 807, "text": " So if the dashed lines here are random pruning, which means you just drop out a certain number of weights and then you retrain." }, { "end": 831, "start": 817, "text": " And you can see that the dashed line here, it starts dropping and just becomes worse as you have less and less weights remaining," }, { "end": 833, "start": 831, "text": " which is exactly what's expected." }, { "end": 837, "start": 833, "text": " You prune the network, you make it smaller, you make it less performant." }, { "end": 843, "start": 837, "text": " And the more weights you take away, the less performing it is." }, { "end": 854, "start": 843, "text": " But interestingly enough, if you do this pruning that they suggest and then retrain with the correct initialization," }, { "end": 863, "start": 854, "text": " not only do you retain the same level of accuracy for very long, you see here this is 2.9 or 1.2 percent of weights remaining," }, { "end": 867, "start": 863, "text": " but you actually go higher." }, { "end": 879, "start": 867, "text": " So you can see here when you have 16 percent of weights remaining, there's actually a significant difference between the full network and the prune network." }, { "end": 884, "start": 879, "text": " And that's only by simply training this winning hypothesis." }, { "end": 887, "start": 884, "text": " So this I find very, very fascinating." }, { "end": 892, "start": 887, "text": " And again, this is not a magic bullet that you can do from the beginning," }, { "end": 904, "start": 892, "text": " but it does give a clue that if you could train these from the beginning, then you might actually end up at a better point." }, { "end": 906, "start": 904, "text": " So it does actually give a practical application." }, { "end": 908, "start": 906, "text": " Also, you see they train faster." }, { "end": 913, "start": 908, "text": " So the blue line here is the full network over the course of training." }, { "end": 915, "start": 913, "text": " Sorry, this should be blue." }, { "end": 919, "start": 915, "text": " So here is training iterations and this is test accuracy." }, { "end": 922, "start": 919, "text": " So you see the full network does something like this." }, { "end": 929, "start": 922, "text": " Now, if you prune to 20 percent of the weights, actually train faster and you go higher." }, { "end": 934, "start": 929, "text": " And even if you have 7 percent of the weights, you go almost as high." }, { "end": 937, "start": 934, "text": " So this is very interesting." }, { "end": 948, "start": 937, "text": " Only when you go to like 1.9 percent of the weights does your performance degrade again and eventually actually go lower than the original network." }, { "end": 954, "start": 948, "text": " So that is pretty, pretty, pretty cool, I think." }, { "end": 958, "start": 954, "text": " Now, as I said, they do a lot of investigation." }, { "end": 965, "start": 958, "text": " And I think one of the main takeaways is that it is not only the structure of the winning hypothesis." }, { "end": 971, "start": 965, "text": " So it's not only the structure of the sub network that makes it to be a winning hypothesis." }, { "end": 974, "start": 971, "text": " It is actually the initialization." }, { "end": 978, "start": 974, "text": " Here I want to show one of these plots." }, { "end": 980, "start": 978, "text": " They have lots of plots." }, { "end": 987, "start": 980, "text": " You can see here, for example, sorry, this is from my own annotations." 
}, { "end": 994, "start": 987, "text": " Again, this is percent of weights remaining and this is test accuracy at the final iteration." }, { "end": 1001, "start": 994, "text": " And if we initialize the sub network at its original position, like this method suggests, you see," }, { "end": 1007, "start": 1001, "text": " we first increase the accuracy and then decrease it after a long time." }, { "end": 1018, "start": 1007, "text": " If we take the same sub network, right, but we randomly reinitialize it, then it drops much faster and actually immediately drops." }, { "end": 1025, "start": 1018, "text": " So it really is about not only the structure of the sub network, but about its initialization." }, { "end": 1029, "start": 1025, "text": " I think that is that is the core of the hypothesis here." }, { "end": 1039, "start": 1029, "text": " A very interesting related finding that I just want to mention, I find, to be that they actually discover that the weights," }, { "end": 1048, "start": 1039, "text": " so if you have a weight of the, so if you have two kinds of weights, let's actually go up to my original drawing here." }, { "end": 1056, "start": 1048, "text": " If you compare how fast or how far do the weights travel in optimization space, right," }, { "end": 1062, "start": 1056, "text": " so you can basically look at how far weights travel during optimization." }, { "end": 1074, "start": 1062, "text": " So you take the full neural network here and you look at a parameter that ends up being in the winning hypothesis, theta," }, { "end": 1080, "start": 1074, "text": " theta zero, and it goes to theta end, which let's say theta final." }, { "end": 1086, "start": 1080, "text": " And you also look at parameters that don't end up in the winning hypothesis." }, { "end": 1091, "start": 1086, "text": " Let's call these theta one, two, theta, also final, prime." }, { "end": 1093, "start": 1091, "text": " I'm not too good at labeling." }, { "end": 1101, "start": 1093, "text": " And you look at how far they travel, you'll find that the weights that end up in the winning hypothesis," }, { "end": 1110, "start": 1101, "text": " they, during optimization, they travel much further in optimization space than weights that are not in the winning hypothesis, right?" }, { "end": 1112, "start": 1110, "text": " They just stay around much more." }, { "end": 1117, "start": 1112, "text": " So it's not that the kind of good network is already contained in initialization." }, { "end": 1129, "start": 1117, "text": " It's much more than the good network lends itself very favorably to be initialized by SGD, right?" }, { "end": 1132, "start": 1129, "text": " Because it travels farther." }, { "end": 1137, "start": 1132, "text": " It means SGD has a bigger pull on it, right?" }, { "end": 1142, "start": 1137, "text": " I think there is a lot of things that are yet to be explored in this space," }, { "end": 1148, "start": 1142, "text": " and I think this paper is a very cool contribution to our understanding of how neural networks work." }, { "end": 1150, "start": 1148, "text": " All right, I invite you to check out all the experiments." }, { "end": 1152, "start": 1150, "text": " They do a very thorough job." }, { "end": 1162, "start": 1152, "text": " And with that, I say bye bye." } ]
-0aM99dMu_4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery
[ "Science & Technology" ]
[ "deep learning", "machine learning", "reinforcement learning", "deep rl", "auxiliary", "reward", "distance", "value function", "shortest path", "neural networks", "maze", "unsupervised", "discovery", "exploration" ]
DDL is an auxiliary task for an agent to learn distances between states in episodes. This can then be used further to improve the agent's policy learning procedure. Paper: https://arxiv.org/abs/1907.08225 Blog: https://sites.google.com/view/dynamical-distance-learning/home Abstract: Reinforcement learning requires manual specification of a reward function to learn a task. While in principle this reward function only needs to specify the task goal, in practice reinforcement learning can be very time-consuming or even infeasible unless the reward function is shaped so as to provide a smooth gradient towards a successful outcome. This shaping is difficult to specify by hand, particularly when the task is learned from raw observations, such as images. In this paper, we study how we can automatically learn dynamical distances: a measure of the expected number of time steps to reach a given goal state from any other state. These dynamical distances can be used to provide well-shaped reward functions for reaching new goals, making it possible to learn complex tasks efficiently. We show that dynamical distances can be used in a semi-supervised regime, where unsupervised interaction with the environment is used to learn the dynamical distances, while a small amount of preference supervision is used to determine the task goal, without any manually engineered reward function or goal examples. We evaluate our method both on a real-world robot and in simulation. We show that our method can learn to turn a valve with a real-world 9-DoF hand, using raw image observations and just ten preference labels, without any other supervision. Videos of the learned skills can be found on the project website: this https URL. Authors: Kristian Hartikainen, Xinyang Geng, Tuomas Haarnoja, Sergey Levine Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! If you look at this robot, this robot has learned to turn this valve by itself. Now, by itself isn't really correct, but it has learned it in a semi-supervised way with only 10 human inputs along the entire learning trajectory. So only 10 times was there a true reward for this reinforcement learning procedure, and the rest is unsupervised discovery of this skill. And the paper we're going to look at today, and the technique by which this was achieved, is Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery by Kristian Hartikainen, Xinyang Geng, Tuomas Haarnoja and Sergey Levine. So this is a technique for reinforcement learning. They claim reinforcement learning requires manual specification of a reward function to learn a task. And they say while in principle this reward function only needs to specify the task goal, in practice reinforcement learning can be very time-consuming or even infeasible unless the reward function is shaped so as to provide a smooth gradient towards a successful outcome. So what does this mean? Let's look at it. If you want the robot here to turn the valve to the right, ideally you simply want to say: the robot is here, this is the start state, and I just want this, I want the thing to be at the right, so this is good, and all of this other stuff, I don't want any of that. And for reinforcement learning, I mean, this is enough, this is a reward function: all of this is zero and this is one. This is a reward function, and in theory, if you apply any sort of reinforcement learning algorithm with any sort of guarantee, this should get you there. But of course we all know that it's not that easy, right? There is basically an exploration bottleneck, where your robot has these three digits and lots of joints to move around, and the probability that by itself it discovers that it needs to do this here and get this reward is very, very slim. So what you want to do is, in the reward function that you're providing to the robot, you would want to say: okay, so here I see the blue thing is a bit more turned, so I'm maybe going to give this a 0.1, and then here it's a bit more turned, so maybe this is 0.2, and this I really like, 0.3, here is 0.6 maybe because it's even more to the right, and then one at the end. So this is what they would call a smooth gradient in the reward function, where the reward function kind of ramps up until the goal is reached. But oftentimes this isn't really possible, because you can only shape the reward function truly if you already know how to perform the task in the first place, and then why exactly do you do reinforcement learning, except as an academic exercise? So the issue this paper takes on is clear, right? What they want to say is: let's assume that your reward function is actually pretty bad. Can we artificially provide a way that the discovery of what they call these new skills is facilitated, as if the reward function had some sort of a gradient? So that's the outset. Let's actually go back to this for a second: they have these mazes as a kind of an example. So if you look at these mazes, what we want to keep in mind is, let's actually draw this over here.
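As a rough illustration of the sparse versus shaped reward just described, here is a minimal sketch; it assumes the valve angle is available as a single scalar, and the tolerance and the clipping are made-up illustration details, not anything from the paper:

```python
def sparse_reward(valve_angle, target_angle, tol=0.05):
    # Reward is 1 only once the valve has (nearly) reached the target, 0 everywhere else.
    return 1.0 if abs(valve_angle - target_angle) < tol else 0.0

def shaped_reward(valve_angle, start_angle, target_angle):
    # Hand-shaped reward: ramps smoothly from 0 at the start angle to 1 at the target,
    # giving the learner a gradient long before the task is actually solved.
    progress = (valve_angle - start_angle) / (target_angle - start_angle)
    return min(max(progress, 0.0), 1.0)
```

The point of the paper is precisely to avoid having to hand-write something like shaped_reward for every new task.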
So let's say you have one of these mazes, right, and always there is a start state, so you're here, and then there is a goal state, let's say over here, and the task is, you can move up, down, left, right, and the task is to reach the goal. But if the reward function is simply that if you reach the goal you get a reward of one and otherwise you get a reward of zero, then all the agent can do is kind of explore around until it reaches the goal. Now, a lot of reinforcement learning algorithms, for example Q-learning or policy gradient, do random exploration: they'll have some sort of random exploration element where, absent knowing what to do, they just kind of bumble around, like up, up, up, right, left, right, left, down, right, up, that doesn't work, okay, down, down, left, down, and then up again, and they just kind of wander around. So this method takes issue with that, and it says: okay, while the agent is doing its thing, trying to reach the goal, what we can do is we can learn a distance function between states. Now, we'll reduce the problem for now and just say the task is always that the goal state is reached in the shortest amount of steps. So let's say the agent does something: it goes here, here, here, here, and then here; that's one rollout of the policy, and then it crashes into a wall. Okay, that's bad, so it gets a negative reward. But in addition to that, we can learn something: it has visited all of these states here in between, these are intermediate states, and this paper wants us now to learn a distance function between the states. So this distance function, let's call it D, learns how far two states are away. So you can tell it: okay, this state here, let's call that state A, and this state here, state B, how far are those away? Now, this is not super well defined yet, but you want to say how far they are away for this agent here. So the agent has maybe policy pi, that's what it used to create this trajectory; under policy pi, how far away are states A and B? And how far away is simply going to be the amount of steps that it takes the agent to go from A to B, so in this case that would be two. And you can do this between any two states: this and this, this and this, all of these states; you can also start from this state, let's do it in a different color. So this distance function D actually has a pretty tight reward signal, a pretty big wealth of information that it can learn these things from. The policy pi in this case can't learn much, because it just got a reward of zero or something, because it didn't reach the goal, but the distance function has a very dense reward signal where it can learn distances between two states. Now let's say we've explored a bunch, we've had many trajectories, some here, like to here and then here, and sometimes we even reach the goal, so sometimes we actually reach the goal, so we learn the distances between all of the states. Now, if we had a perfect distance function, let's assume we have a perfect distance function, our task becomes very, very simple. So let's assume I am here, where the green agent is, and I can either go up or down; let's say up is X and down is Y. Which one should I choose? Now, without even
asking my policy per se, what I can do is I can ask the distance function. So I can ask the distance function two different things. First, let's do it like this: distance function, what do you think of the distance from X to the goal, and what do you think of the distance from Y to the goal? And the distance function, if it's learned correctly, will tell you: the distance of X to the goal is, whatever, maybe you need eight steps; the distance of Y to the goal is ten steps. So definitely you would go with X. So if you had a good distance function, then you could solve the task fairly easily. Now, this by itself isn't super interesting. You will quickly notice that if you are able to learn such a good distance function, especially with the goal state here, then you might as well learn a good policy, because that means you've reached the goal a fair number of times. So the kind of information-theoretic signal of D versus the signal on pi, if you just want to reach the same goal, to me it seems the same. This paper tries to talk this up, I feel, but to me, if you are in the situation where you have a fixed goal and that's it, then this doesn't seem too interesting or too beneficial compared to, let's say, just learning a value function, like you would do in A3C or something. The difference between this and a value function: if the number of steps is actually your reward, or rather your negative reward, so you want to reach the goal in the shortest amount of time, then learning a value function is the same. The difference is, a value function simply takes a state s under the policy pi, while the distance function takes a state s and a goal state. For the value function, the goal state is implicit: it implicitly has the goal state because you assume that the goal is always the same. With the distance function, you can technically change your goal, and this is where it becomes interesting. So let's say you've explored, but you haven't quite reached the goal yet. We said okay, most of these algorithms have some sort of notion of random exploration in order to reach the goal. What if you went from here to here and to here and to here, and you learn the distances fairly well for the trajectories that you can do, but you just haven't been able to go any further? What you can do is you can go to your replay buffer, right, your memory of everything you've done, and you can ask: which of these states has the furthest distance from my starting state? And the answer will be: okay, this state here has the furthest distance. So now what you can do is you can make this your goal, you can just try to reach that state, and once you reach the state, you can explore from that state. Because this is the farthest away from your original starting state, that probably means this is kind of the frontier of what you know, so if you explore from here, you can noticeably go even further, because it is the farthest that you know. It might turn out that from here you can only go back, that's a possibility, but probably you could go even further. So then you go further, and you might reach this state here, and again you ask your replay buffer, and it tells you this state here is the farthest so far, so you take this as your new goal, and now you're just trying to reach that and explore from here. This is extremely similar to an algorithm like Go-Explore, which I already made a video about,
where it remembers what it did and then it will always travel to the furthest states it has seen so far, and then from there it tries to go farther. So if you can learn a good distance function here, that will help you in exploring the space, and eventually, of course, you might actually reach this goal state: you might go far enough in this maze, you might explore it enough, such that you stumble over the goal state by itself. Alright, so this is sort of the goal. This can be used in a number of different ways. Now, instead of always going for the furthest state, what they did with the robot is they just let the algorithm explore: you explore, explore, explore, as if this is like a state tree, and then at some point it asks the human which one is closest to what you want, and the human says, this one, and then they say, okay, cool, so this is now the new goal, so we'll try to reach this as much as possible and then explore from here. So in the case of the robot, the robot simply does some things, it explores in the unsupervised manner, and then at some point you ask the human which of the things that the robot has done you like the most, and then that becomes the new intermediate goal state, and the algorithm explores from there. So that's the main gist and how you can use this. Now, the entire learning procedure is actually pretty simple. What they propose is simply to learn the distance function. They put it pretty formally here: they say, okay, if you have two states that were visited after one another in an episode, at time steps i and j respectively, then you can define the distance function as the sum from i to j of a discounted cost function. But ultimately they consider shortest path problems, so the cost function simply becomes: how many steps does it take you to reach the goal? So this becomes the identity, I guess, you can set it to one, and this you can also set to one, so this simply becomes j minus i: how many steps does it take you to reach the state at time step j from the state you visited at time step i. And then they simply train a neural network, or I'm not even sure if it's a neural network, but you train a parameterized function that learns to map two states to how many steps it took you to get from one to the other. And you do this simply by regressing, with a mean squared loss, simple as that. That's how you learn the distance function, and then you can use the distance function in the ways we discussed: either to improve your shortest path policy by providing it, so what you want to do is provide the distance function as the negative reward, and they say they provide the distance function as a negative reward for this; or you can do this in an unsupervised fashion where you always propose the furthest-away goals; or you can do this in the semi-supervised fashion. So they have a bunch of things that they did here, and they have a bunch of videos of things that they trained. This is from the semi-supervised setting, where the humans were simply selecting the hoppers that went furthest to the right, and you can see over time this hops to the right with very, very sparse input only, so this is semi-supervised, and then it goes to the right. And it also has an unsupervised
video where you simply let it perform, and in unsupervised fashion it tries to discover states that are as far away as possible from its initial states, and you can see it actually learns to move to the right and to the left, because these are states that are very far from its original state. So it's pretty cool that it turns out that the unsupervised method will discover such states. Alright, so what to make of this? If this seems familiar to you, that's very plausible, because I had seen some sort of this idea in many, many papers before, and they make some connections in their related work. So if you know, for example, universal value functions, or universal value function approximators, and so on, it's also basically an unsupervised way where you just select two states and say: agent, now try to go from here to here, just try that; and then you select two new states. So you basically teach your agent to go between two states that you choose at random, and it's supposed to, in an unsupervised fashion, learn something about the environment, very similar to what we have here. Also a bunch of other things, like just pure value functions, are pretty similar, I think, to this. And there is a big connection to Go-Explore. So this has been around in one way or the other, but possibly not in this specific formulation, and, what I think is cool, applied to this specific semi-supervised task. So if I had to formulate a criticism of this method, I would guess that it probably doesn't work when, let's say, the branching factor of the task is super high. You see, here you can only really turn the valve in one way or another; of course, the digits and the joints have degrees of freedom, but think of the case where the branching factor is super high, so from a given state here you can go in many, many different ways, and then from each of those you can go in many, many different ways. Then the notion of something being far away, where you go to this thing and ask what is the farthest away, is almost meaningless, because there is so much you have not explored. So if you are three steps deep here, it will always tell you, well, this state here is the farthest away, but you haven't explored these, you know, 15 directions here. So it might be that you actually miss something: here's the goal and here's the start, and you go a long way, but you miss this obvious shortcut here, because you always want to go along the longest path around. So it seems like there are probably environments where this works well, but it appears that if either the branching factor is super high, or if there are maybe these kinds of loops in the game, loops between states, non-obvious combinatorial things, it might sometimes even be somewhat counterproductive. Not sure about that, but it seems to be very specific environments where this would work. Alright, so this was my commentary. I invite you to read the paper, check it out, and bye bye.
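To make the mechanics concrete, here is a minimal sketch of the two pieces described above: fitting the dynamical distance by regressing the step gap between states visited in the same episode, and using it to pick the farthest visited state as the next exploration goal. This is an illustrative reconstruction under my own assumptions (network size, state representation, no discounting), not the authors' implementation:

```python
import torch
import torch.nn as nn

class DistanceNet(nn.Module):
    """d(s, g): predicted number of steps to get from state s to state g."""
    def __init__(self, state_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, g):
        return self.net(torch.cat([s, g], dim=-1)).squeeze(-1)

def distance_regression_loss(dist_net, episode):
    # episode: tensor of shape (T, state_dim), the states visited in order.
    T = episode.shape[0]
    i, j = torch.triu_indices(T, T, offset=1)   # all index pairs with i < j
    target = (j - i).float()                    # true step gap j - i
    pred = dist_net(episode[i], episode[j])
    return ((pred - target) ** 2).mean()        # mean squared regression

def propose_goal(dist_net, start_state, replay_states):
    # start_state: (state_dim,), replay_states: (N, state_dim).
    # Unsupervised goal proposal: pick the visited state predicted to be
    # farthest (in steps) from the start state.
    with torch.no_grad():
        d = dist_net(start_state.expand(len(replay_states), -1), replay_states)
    return replay_states[d.argmax()]
```

The learned distance can then also be plugged in as a shaped reward, for example r(s) = -d(s, goal), for whatever off-the-shelf RL algorithm sits on top.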
[ { "end": 7.34, "start": 0, "text": " Hi there! If you look at this robot, this robot has learned to turn this valve by" }, { "end": 12.200000000000001, "start": 7.34, "text": " itself. Now by itself isn't really correct, but it has learned it in a" }, { "end": 17.52, "start": 12.200000000000001, "text": " semi-supervised way with only 10 human inputs along the entire learning" }, { "end": 23.68, "start": 17.52, "text": " trajectory. So only 10 times was there a true reward for this reinforcement" }, { "end": 28.92, "start": 23.68, "text": " learning procedure and the rest is unsupervised discovery of this skill." }, { "end": 33.400000000000006, "start": 28.92, "text": " And the paper we're going to look at today and the technique by which this was" }, { "end": 38.84, "start": 33.400000000000006, "text": " achieved is dynamical distance learning for semi-supervised and unsupervised" }, { "end": 46.08, "start": 38.84, "text": " skill discovery by Kristian Hartikeinen, Xin Yang Geng, Thomas Harnoja and Sergei" }, { "end": 53.2, "start": 46.08, "text": " Levine. So this is a technique for reinforcement learning. So they claim" }, { "end": 58.72, "start": 53.2, "text": " reinforcement learning requires manual specification of a reward function to" }, { "end": 64.44, "start": 58.72, "text": " learn a task. And they say while in principle this reward function only" }, { "end": 70.03999999999999, "start": 64.44, "text": " needs to specify the task goal, in practice reinforcement learning can be" }, { "end": 75.68, "start": 70.03999999999999, "text": " very time-consuming or even infeasible unless the reward function is shaped so" }, { "end": 80.24, "start": 75.68, "text": " as to provide a smooth gradient towards a successful outcome. So what does this" }, { "end": 85.44, "start": 80.24, "text": " mean? Let's look at it. So if you want the robot here to turn the valve to the" }, { "end": 92.03999999999999, "start": 85.44, "text": " right, ideally you simply want to say, so the robot is here, this is the" }, { "end": 97.75999999999999, "start": 92.03999999999999, "text": " start state, ideally you just want to say I want this, I want the" }, { "end": 103.08, "start": 97.75999999999999, "text": " thing to be at the right, so this is good. All of this I don't" }, { "end": 109.68, "start": 103.08, "text": " want any of that. And the reinforcement learning, I mean this" }, { "end": 114.88, "start": 109.68, "text": " is enough, this is a reward function, all of this is zero and this is one." }, { "end": 121.03999999999999, "start": 114.88, "text": " This is a reward function and in theory if you apply any sort of reinforcement" }, { "end": 125.08, "start": 121.03999999999999, "text": " learning algorithm with any sort of guarantee, this should get you there. But" }, { "end": 129.96, "start": 125.08, "text": " of course we all know that it's not that easy, right? There is basically an" }, { "end": 138.28, "start": 129.96, "text": " exploration bottleneck where your robot has these three digits and lots of" }, { "end": 145.08, "start": 138.28, "text": " joints to move around and the probability that by itself it discovered that" }, { "end": 150.16, "start": 145.08, "text": " it needs to do this here and get this reward is very very slim. 
So what you" }, { "end": 154.72, "start": 150.16, "text": " want to do is in your reward function that you're providing to the robot, you" }, { "end": 161.44, "start": 154.72, "text": " would want to say okay so this here I see the blue thing is a bit more" }, { "end": 166.4, "start": 161.44, "text": " turned so I'm maybe going to give this a 0.1 and then here it's a bit more" }, { "end": 173.12, "start": 166.4, "text": " turned so maybe this is 0.2 and this I really like 0.3 here is 0.6 maybe" }, { "end": 179.52, "start": 173.12, "text": " because it's even more right and then one at the end right so this is what" }, { "end": 185.48000000000002, "start": 179.52, "text": " they would call a smooth gradient in the reward function where it's kind of the" }, { "end": 191.24, "start": 185.48000000000002, "text": " reward function ramps up until the goal is reached but oftentimes this isn't" }, { "end": 198.56, "start": 191.24, "text": " really possible because if you already knew how exactly to do the task which" }, { "end": 202.84, "start": 198.56, "text": " then you could you can only shape the reward function truly if you know how to" }, { "end": 208.44, "start": 202.84, "text": " perform the task in the first hand and then why exactly do you do reinforcement" }, { "end": 215.24, "start": 208.44, "text": " learning except for as an academic exercise. So the issue this paper has" }, { "end": 220.68, "start": 215.24, "text": " is clear right? What they want to say is like let's assume that your" }, { "end": 226.6, "start": 220.68, "text": " reward function is actually pretty bad can we provide artificially a way that" }, { "end": 232.8, "start": 226.6, "text": " this discovery of these of these what they call of these new skills is" }, { "end": 240.16, "start": 232.8, "text": " facilitated as if the reward function had some sort of a gradient. So that's" }, { "end": 248.16, "start": 240.16, "text": " the the outset let's actually go back to the to this for a second and they have" }, { "end": 254.84, "start": 248.16, "text": " these mazes as a kind of an example. So if you look at these mazes what we want" }, { "end": 261.76, "start": 254.84, "text": " to keep in mind is let's actually draw this over here. So let's say you have one" }, { "end": 271.8, "start": 261.76, "text": " of these mazes right and always there is a start state so you're here and" }, { "end": 277.68, "start": 271.8, "text": " then there is a goal state right let's say over here and the task is you" }, { "end": 283.92, "start": 277.68, "text": " can move up down left right and the task is to reach the goal right but if the" }, { "end": 287.04, "start": 283.92, "text": " reward function is simply that if you reach the goal you get a reward of one" }, { "end": 291.40000000000003, "start": 287.04, "text": " and otherwise you get a reward of zero then all the agent can do is kind of" }, { "end": 297.92, "start": 291.40000000000003, "text": " explore around right until it reaches the goal. 
Now if you do random" }, { "end": 302.48, "start": 297.92, "text": " exploration like a lot of reinforcement learning algorithms for" }, { "end": 306.6, "start": 302.48, "text": " example Q learning or policy gradient they'll have some sort of a just of a" }, { "end": 312.08000000000004, "start": 306.6, "text": " random exploration element where they if they don't if they don't absent of what" }, { "end": 317.68, "start": 312.08000000000004, "text": " they of the when they know what to do they just kind of boogle around like up" }, { "end": 326, "start": 317.68, "text": " up up right left right left down right up that doesn't work okay down down left" }, { "end": 332.20000000000005, "start": 326, "text": " down so it's sort of and then up again right and then they just kind of wonk" }, { "end": 340.32, "start": 332.2, "text": " around so this this method takes issue with that and it says okay while the" }, { "end": 346.88, "start": 340.32, "text": " agent is doing its thing trying to reach the goal right what we can do is we can" }, { "end": 352.71999999999997, "start": 346.88, "text": " learn a distance function between states now we'll reduce the problem for now and" }, { "end": 358.88, "start": 352.71999999999997, "text": " just say the task is always that the goal state is reached in the shortest" }, { "end": 366.4, "start": 358.88, "text": " amount of steps right so let's say the agent does something right it goes here" }, { "end": 372.32, "start": 366.4, "text": " here here here and then here right it that's that's one rollout of the policy" }, { "end": 376.68, "start": 372.32, "text": " and then it crashes into a wall okay that's bad so it gets a negative reward" }, { "end": 382.12, "start": 376.68, "text": " right but in addition to that we can we can learn so it has visited all of these" }, { "end": 388.68, "start": 382.12, "text": " states here in between right these are intermediate states this paper wants us" }, { "end": 395.36, "start": 388.68, "text": " now to to learn a distance function between the states so this distance" }, { "end": 404.94, "start": 395.36, "text": " function let's call it D it learns how far two states are away so it'll you can" }, { "end": 410.16, "start": 404.94, "text": " you can tell it okay this state here let's call that state a and this state" }, { "end": 417.72, "start": 410.16, "text": " here state B how far are those away now this is not super well defined yet but" }, { "end": 422.72, "start": 417.72, "text": " you want to say how far are they away for this agent here so the agent has" }, { "end": 428.20000000000005, "start": 422.72, "text": " maybe policy pi like that's what it used to create this trajectory under policy" }, { "end": 435.76000000000005, "start": 428.20000000000005, "text": " pi how far away are states a and B and how far away is simply going to be the" }, { "end": 444.96000000000004, "start": 435.76000000000005, "text": " amount of steps that it takes the agent to go from a to B so in this case that" }, { "end": 451.91999999999996, "start": 444.96, "text": " would be two right so the the and you can do this between any two states right" }, { "end": 458.08, "start": 451.91999999999996, "text": " this and this this and this right here here these all of these states you can" }, { "end": 463.76, "start": 458.08, "text": " also start from this state right let's do it in a different color and do every" }, { "end": 469.35999999999996, "start": 463.76, "text": " so the the this distance function D can actually has a pretty tight 
reward" }, { "end": 473.67999999999995, "start": 469.35999999999996, "text": " signal like a pretty wealth of information that it can learn these" }, { "end": 478.6, "start": 473.68, "text": " things from right that so the policy pi in this case can't learn much because it" }, { "end": 484.12, "start": 478.6, "text": " just got a reward of zero or something because it didn't reach the goal but the" }, { "end": 490.6, "start": 484.12, "text": " distance function has very very big reward or a big rework it has a very" }, { "end": 496.44, "start": 490.6, "text": " dense reward signal where it can learn distances between two states right now" }, { "end": 503.66, "start": 496.44, "text": " let's say we've explored a bunch right a bunch we've had many trajectories some" }, { "end": 508.96000000000004, "start": 503.66, "text": " here like to here and then here sometimes we even reach the goal right" }, { "end": 513.84, "start": 508.96000000000004, "text": " so so sometimes we actually reach the goal so we learn the two distances" }, { "end": 520.76, "start": 513.84, "text": " between all of the states now if we had a perfect distance function let's assume" }, { "end": 527.1600000000001, "start": 520.76, "text": " we have a perfect distance function our task now becomes very very simple so" }, { "end": 535.56, "start": 527.16, "text": " let's assume that's so let's assume I am here where the green agent is and I have" }, { "end": 540.7199999999999, "start": 535.56, "text": " these I can either go up or down and let's go that's up let's say that's X" }, { "end": 547.16, "start": 540.7199999999999, "text": " and the down is Y right which one should I choose now without even asking my" }, { "end": 555.76, "start": 547.16, "text": " policy per se what I can do is I can ask hey distance function so I can ask the" }, { "end": 565.48, "start": 555.76, "text": " distance function two different things so first let's do it like this distance" }, { "end": 570.16, "start": 565.48, "text": " function what do you think of the distance between X to the goal and what" }, { "end": 574.52, "start": 570.16, "text": " do you think of the distance from Y to the goal and the distance function if" }, { "end": 578.28, "start": 574.52, "text": " it's learned correctly it will tell you the distance of X to the goal is" }, { "end": 585.08, "start": 578.28, "text": " whatever maybe you need eight steps the distance of white the goal is ten steps" }, { "end": 592.1600000000001, "start": 585.08, "text": " right so definitely you would go with X right so if you had a good distance" }, { "end": 599.24, "start": 592.1600000000001, "text": " function then you could solve the task fairly fairly easily now this by itself" }, { "end": 604.44, "start": 599.24, "text": " isn't super interesting you will quickly notice that if you are able to learn" }, { "end": 609.08, "start": 604.44, "text": " such a good distance function especially with the goal state here then you might" }, { "end": 614.1600000000001, "start": 609.08, "text": " as well learn a good policy because that means you've reached the goal a fair" }, { "end": 619.9599999999999, "start": 614.16, "text": " number of times right so that the kind of information theoretic signal of D" }, { "end": 625.6, "start": 619.9599999999999, "text": " versus the signal on pi if you just want to reach the same goal to me it seems" }, { "end": 632, "start": 625.6, "text": " the same this this paper it tries to talk this up I feel but to me if you are" }, { "end": 637.24, "start": 632, 
"text": " in the situation where you have a fixed goal and that's it then this doesn't" }, { "end": 647.24, "start": 637.24, "text": " seem too interesting or too beneficial with compared to let's say just learning" }, { "end": 652.8, "start": 647.24, "text": " a value function right like you would do in a 3c or something the difference" }, { "end": 659.5600000000001, "start": 652.8, "text": " between this and a value function so if if if the number of steps is actually" }, { "end": 662.64, "start": 659.5600000000001, "text": " your reward so your negative reward you want to reach the goal in the shortest" }, { "end": 670.92, "start": 662.64, "text": " amount of time then learning a value function is the same the difference is" }, { "end": 676.24, "start": 670.92, "text": " for a value function the value function simply takes a state s right and the" }, { "end": 683.36, "start": 676.24, "text": " policy pi while the distance function takes a state s and a goal state for the" }, { "end": 689.52, "start": 683.36, "text": " policy pi the goal state for the value function is implicit right so it" }, { "end": 693.36, "start": 689.52, "text": " implicitly has the goal state because you assume that the goal is always the" }, { "end": 700.16, "start": 693.36, "text": " same with the distance function you can technically change your goal and this is" }, { "end": 706.4399999999999, "start": 700.16, "text": " where it becomes interesting so let's say you've explored but you haven't" }, { "end": 712.4399999999999, "start": 706.4399999999999, "text": " quite reached the goal yet right but we said okay most of these are algorithms" }, { "end": 718.14, "start": 712.4399999999999, "text": " they have some sort of some notion of random of random exploration right in" }, { "end": 725.04, "start": 718.14, "text": " order to to reach the goal what if you went from here to here and to here and" }, { "end": 729.24, "start": 725.04, "text": " to here and you learn the distances fairly well for the trajectories that" }, { "end": 733.5, "start": 729.24, "text": " you can do but you just haven't been able to go any further what you can say" }, { "end": 737.12, "start": 733.5, "text": " is you can go to your replay buffer write your memory of everything you've" }, { "end": 744.3199999999999, "start": 737.12, "text": " done and you can ask which of these states has the furthest distance from my" }, { "end": 749.5600000000001, "start": 744.32, "text": " starting state and the answer will be okay this state here as the furthest" }, { "end": 755.84, "start": 749.5600000000001, "text": " distance so now what you can do is you can make this your goal right you can" }, { "end": 763.08, "start": 755.84, "text": " just try to reach that state right and once you reach the state you can explore" }, { "end": 767.44, "start": 763.08, "text": " from that state right because this is the farthest away from your original" }, { "end": 772.72, "start": 767.44, "text": " starting state that probably means that you know if you that's kind of the" }, { "end": 776.6800000000001, "start": 772.72, "text": " frontier of what you know so if you explore from here you can go even" }, { "end": 781.8000000000001, "start": 776.6800000000001, "text": " further noticeably because it is the farthest that you know so it might turn" }, { "end": 786.2, "start": 781.8000000000001, "text": " out that from here you can only go back right so that's a possibility but" }, { "end": 791.84, "start": 786.2, "text": " probably you could go even further right so 
then you go further and you might" }, { "end": 797.12, "start": 791.84, "text": " reach this state here right and again you ask your your replay buffer it tells" }, { "end": 801.1800000000001, "start": 797.12, "text": " you this state here is the farthest so far so you take this as your new goal" }, { "end": 806.2399999999999, "start": 801.18, "text": " and now you're just trying to reach that and explore from here this is extremely" }, { "end": 812.3199999999999, "start": 806.2399999999999, "text": " similar to an algorithm like go explorer that I already made a video about where" }, { "end": 817.7199999999999, "start": 812.3199999999999, "text": " it remembers what it did and then it it will always travel to the furthest" }, { "end": 823.64, "start": 817.7199999999999, "text": " states it has seen so far and then from there try to go farther right so this" }, { "end": 829.4, "start": 823.64, "text": " this if you if you can learn a good distance function here that will help" }, { "end": 834.4399999999999, "start": 829.4, "text": " you in exploring the space and eventually of course you think you might" }, { "end": 839.24, "start": 834.4399999999999, "text": " actually reach this goal state so you might go far enough into in this maze" }, { "end": 845.4, "start": 839.24, "text": " you might explore it enough such that you you stumble over the goal state by" }, { "end": 851.88, "start": 845.4, "text": " itself alright so this is this is sort of the the goal this can be used in a" }, { "end": 855.88, "start": 851.88, "text": " number of different ways now instead of always going for the furthest what they" }, { "end": 861.6, "start": 855.88, "text": " did in the robot is they just let the algorithm explore right you explore" }, { "end": 867.88, "start": 861.6, "text": " explore explore if this is like a state tree and then at some point it it asked" }, { "end": 873.48, "start": 867.88, "text": " the human which one is closest to what you want and then the human says this" }, { "end": 880.52, "start": 873.48, "text": " one and then they say okay cool so this is now the new goal right so we'll try" }, { "end": 886.6, "start": 880.52, "text": " to reach this as much as possible and then explore from here right so this in" }, { "end": 893.48, "start": 886.6, "text": " the case of the robot the robot simply just like does some things it explores" }, { "end": 897.52, "start": 893.48, "text": " in in the unsupervised manner and then at some point you ask the human which of" }, { "end": 901.68, "start": 897.52, "text": " these things that the robot has done you like the most and then that becomes the" }, { "end": 908.28, "start": 901.68, "text": " new intermediate goal state and the algorithm explores from there right so" }, { "end": 916.24, "start": 908.28, "text": " that's the the main gist and how you can use this now the entire learning thing" }, { "end": 922.3199999999999, "start": 916.24, "text": " is actually pretty simple so what they propose is simply to to learn the" }, { "end": 925.68, "start": 922.3199999999999, "text": " distance function that they put it pretty formal here they say okay if" }, { "end": 931.28, "start": 925.68, "text": " you're two states that were visited after one another in an episode then you" }, { "end": 938.4, "start": 931.28, "text": " can define the distance function as the sum from i to j if if the they were" }, { "end": 944.24, "start": 938.4, "text": " visited at time steps I and J respectively this is a discounted cost" }, { "end": 950.24, "start": 944.24, 
"text": " function across this but ultimately they consider problems where it's shortest" }, { "end": 954.56, "start": 950.24, "text": " path problems so the cost function simply becomes how many steps does it" }, { "end": 962.8399999999999, "start": 954.56, "text": " take you to reach to reach the goal so the cost function so this becomes this" }, { "end": 968.28, "start": 962.8399999999999, "text": " this becomes the identity I guess you can you can set it to to one and this" }, { "end": 974.8399999999999, "start": 968.28, "text": " you can also set to one so this simply becomes J minus I how many steps does" }, { "end": 981.4, "start": 974.8399999999999, "text": " it take you to reach state state in time step J from the state you visited in" }, { "end": 989.48, "start": 981.4, "text": " time step I and then they simply train a pot a neural network or I'm not even" }, { "end": 992.48, "start": 989.48, "text": " sure if it's a neural network but you train a bunch of a parameterized" }, { "end": 1000.48, "start": 992.48, "text": " function that learns to map the distance between these states to how many steps" }, { "end": 1007.1999999999999, "start": 1000.48, "text": " it took you from one to the other right and you do this simply by having by" }, { "end": 1015.9200000000001, "start": 1007.2, "text": " regressing so mean squared regression mean squared loss regression simple as" }, { "end": 1019.44, "start": 1015.9200000000001, "text": " that and that's how you learn the distance function and then you can use" }, { "end": 1023.2800000000001, "start": 1019.44, "text": " the distance function in the ways we discussed to either to improve your" }, { "end": 1030.72, "start": 1023.2800000000001, "text": " shortest path policy by giving it by providing it so what you want to do is" }, { "end": 1037.1200000000001, "start": 1030.72, "text": " you want to provide the distance function as the negative reward right so" }, { "end": 1042.3999999999999, "start": 1037.12, "text": " they say they they they provide the distance function as a negative reward" }, { "end": 1046.9199999999998, "start": 1042.3999999999999, "text": " for this or you can do this in an unsupervised fashion where you always" }, { "end": 1051.28, "start": 1046.9199999999998, "text": " propose the furthest away goals or you can do this in the semi supervised" }, { "end": 1057.8799999999999, "start": 1051.28, "text": " fashion so they have a bunch of things that they did here they have a bunch of" }, { "end": 1065.2399999999998, "start": 1057.8799999999999, "text": " videos of things that they trained this is from the human sorry from the semi" }, { "end": 1072, "start": 1065.24, "text": " supervised where the humans were simply selecting the hoppers that went furthest" }, { "end": 1079.8, "start": 1072, "text": " to the right and you can see over time this hops to the right with very very" }, { "end": 1085.04, "start": 1079.8, "text": " sparse input only so this is semi supervised right and then it goes to the" }, { "end": 1094.24, "start": 1085.04, "text": " right and it also has an unsupervised video where you simply let it perform" }, { "end": 1100.92, "start": 1094.24, "text": " and it on in unsupervised fashion it tries to discover states that are as far" }, { "end": 1106.28, "start": 1100.92, "text": " away as possible from its initial states and you can see it actually learns to" }, { "end": 1112.96, "start": 1106.28, "text": " move to the right and to the left because these are these rich states that" }, { "end": 1117.72, 
"start": 1112.96, "text": " are very far from its original state right so that's it's pretty cool that it" }, { "end": 1125.48, "start": 1117.72, "text": " turns out that the unsupervised method will discover such states alright so" }, { "end": 1131.76, "start": 1125.48, "text": " what to make of this this if you recognize this already it's very" }, { "end": 1140.16, "start": 1131.76, "text": " plausible because I had seen this some sort of this idea in many many papers" }, { "end": 1144.8, "start": 1140.16, "text": " before so and they make some connections in their related work so if you know for" }, { "end": 1152.9199999999998, "start": 1144.8, "text": " example universal value functions sorry universal value estimation universal" }, { "end": 1159.24, "start": 1152.9199999999998, "text": " value functions and so on where basically it's also an unsupervised way" }, { "end": 1163.96, "start": 1159.24, "text": " where you always just you'd select two states you say this and this agent now" }, { "end": 1172.36, "start": 1163.96, "text": " try try to go from here to here right just try that and so it is and then you" }, { "end": 1177.6, "start": 1172.36, "text": " select two new states so you basically teach your agent to go between two" }, { "end": 1182.6399999999999, "start": 1177.6, "text": " states that you choose at random and it's supposed to in an unsupervised" }, { "end": 1186.84, "start": 1182.6399999999999, "text": " fashion learn something about the environment very similar to what we have" }, { "end": 1191.8, "start": 1186.84, "text": " here right also a bunch of other a bunch of other things like just pure value" }, { "end": 1197.28, "start": 1191.8, "text": " functions are also pretty similar I think to this go explore there is a big" }, { "end": 1202, "start": 1197.28, "text": " connection to go explore so this has been around in one way or the other but" }, { "end": 1207.24, "start": 1202, "text": " possibly not in this specific formulation and what I think is cool" }, { "end": 1216.4, "start": 1207.24, "text": " applied to this specific semi supervised task so if I had to formulate a" }, { "end": 1224.36, "start": 1216.4, "text": " criticism to this method I would guess that it probably doesn't work when let's" }, { "end": 1231.04, "start": 1224.36, "text": " say the branching factor of the task is super high you see here you can you can" }, { "end": 1236.44, "start": 1231.04, "text": " only really turn the valve in one way or another of course the digits and the" }, { "end": 1243.1599999999999, "start": 1236.44, "text": " joints are are they have they have degrees of freedom but if you think if" }, { "end": 1249.6399999999999, "start": 1243.1599999999999, "text": " the branching factor is super high right so from a from a given state here you" }, { "end": 1254.3999999999999, "start": 1249.6399999999999, "text": " can go in many many many different ways and then from each of those you can go" }, { "end": 1260.24, "start": 1254.3999999999999, "text": " in many many different ways right then the the notion of something being far" }, { "end": 1265.96, "start": 1260.24, "text": " away right you go to this thing and use what's the farthest away all right is is" }, { "end": 1271.36, "start": 1265.96, "text": " almost meaningless because you have so much not explored right so if you have" }, { "end": 1275.6, "start": 1271.36, "text": " if you are three steps deep here right it will always tell you well this state" }, { "end": 1279.72, "start": 1275.6, "text": " here is the farthest 
away but you haven't explored these you know 15" }, { "end": 1289.96, "start": 1279.72, "text": " directions here right so it might be that you actually miss so that you" }, { "end": 1297.68, "start": 1289.96, "text": " you go so here's the goal and here's the start and you go a long way but you miss" }, { "end": 1304.16, "start": 1297.68, "text": " this obvious shortcut here because you always want to go along the longest path" }, { "end": 1310.3600000000001, "start": 1304.16, "text": " around so it seems like there is there there are probably environments where" }, { "end": 1318.2, "start": 1310.3600000000001, "text": " this works well right but they're right but but it appears that if if either the" }, { "end": 1323.56, "start": 1318.2, "text": " branching factor is super high or if there are maybe this this kind of loops" }, { "end": 1334.3600000000001, "start": 1323.56, "text": " in the game loops between states non obvious combinatorial things it might be" }, { "end": 1339.96, "start": 1334.3600000000001, "text": " somewhat even counterproductive sometimes not not sure about that but it" }, { "end": 1345.88, "start": 1339.96, "text": " seems to be very specific environments where this would work all right so this" }, { "end": 1356.24, "start": 1345.88, "text": " was my commentary I invite you to read the paper check it out and bye bye" } ]
hg2Q_O5b9w4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
[ "Science & Technology" ]
[ "deep learning", "machine learning", "rl", "reinforcement learning", "unsupervised", "contrast", "contrastive", "encoder", "self-supervised", "deep rl", "representation", "representation learning", "query", "key" ]
Contrastive Learning has been an established method in NLP and Image classification. The authors show that with relatively minor adjustments, CL can be used to augment and improve RL dramatically. Paper: https://arxiv.org/abs/2004.04136 Code: https://github.com/MishaLaskin/curl Abstract: We present CURL: Contrastive Unsupervised Representations for Reinforcement Learning. CURL extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features. CURL outperforms prior pixel-based methods, both model-based and model-free, on complex tasks in the DeepMind Control Suite and Atari Games showing 2.8x and 1.6x performance gains respectively at the 100K interaction steps benchmark. On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency and performance of methods that use state-based features. Authors: Aravind Srinivas, Michael Laskin, Pieter Abbeel Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're going to look at CURL, Contrastive Unsupervised Representations for Reinforcement Learning, by Aravind Srinivas, Michael Laskin and Pieter Abbeel. So this is a general framework for unsupervised representation learning for RL. So let's untangle the title a little bit. It is FOR reinforcement learning, which if you don't know what reinforcement learning is, I've done a bunch of videos on RL frameworks. So it's for general reinforcement learning. That means it can be paired with almost any RL algorithm out there. So we're not going to dive into specific RL algorithms today. It is unsupervised, which means it doesn't need any sort of labels, and it also doesn't need a reward signal for RL, which is pretty cool because usually the entire RL pipelines rely on some sort of a reward or auxiliary reward signal. Now there is a training objective here, but it doesn't have to do with the RL reward. And then it is learning representations, which means it learns intermediate representations of the input data that is useful. And in the end it is contrastive, and that is the secret sauce in here. The training objective is what's called contrastive learning, and that's what we're going to spend most of our time on today, exploring what that means. So here's the general framework. You can see it down here. Sorry about that. So you can see that reinforcement learning is just a box, which is we don't care about the RL algorithm you use, that's just what comes at the end. What comes at the beginning, oh, here is the observation. So the observation in an RL algorithm is kind of fundamental. Now if someone explains RL to you, reinforcement learning, usually what they'll say is there is some kind of actor and there is some kind of environment. And the environment will give you an observation, observation O, which is some sort of, let's say here is an image. So in this RL framework specifically, the examples they give are of image-based reinforcement learning. Let's say the Atari game where you have this little spaceship here and there are meteorites up here, and you need to shoot them. So there is a little shot here. You need to shoot those meteorites. So this is the observation O. And then as an age, as an actor, you have to come up with some sort of action. And the actions here can be something like move to the left, move to the right, press the button that does the shooting. So you have to come up with an action somehow given this observation. And then the environment will give you back a reward along with the next observation, like the next frame of the game. And you're going to have to come up with another action in response to that. And the environment is going to give you back another reward and the next observation and so on. So what you want to do is you want to find a mapping from observation to action, such that your reward is going to be as high as possible. This is the fundamental problem of RL. And usually what people do is they take this mapping here from observation to action to be some sort of function, some sort of function that is parameterized maybe. Nowadays, of course, it's often a neural network. But you're trying to learn, given the input observation, what output action you need to do. And you can think of the same here. So you have this input observation up here. And down here, after the reinforcement learning, the output is going to be an action. And so this function we talked about up here is usually implemented. It's usually implemented like this. 
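For reference, the agent-environment loop described here looks roughly like this, assuming a classic Gym-style API (env.reset returning an observation and env.step returning observation, reward, done, info); the environment name and the random placeholder policy are illustrative only:

```python
import gym

env = gym.make("Breakout-v0")   # any image-based environment; the name is just an example

def policy(observation):
    # Placeholder for the learned mapping from observation to action (f_theta).
    # A random action keeps the sketch self-contained and runnable.
    return env.action_space.sample()

obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = policy(obs)
    obs, reward, done, info = env.step(action)  # environment returns reward and next observation
    total_reward += reward
```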
You put the observation into the RL framework. And then the RL framework learns this f of theta function to give you an action. Now here you can see the pipeline is a bit different. We don't want to shove the observation in directly, right? We don't want the observation directly. But what we put into the RL framework is this queue thing. Now the queue is supposed to be a representation of the observation and a useful representation. So if we think of this game here, of this Atari game up here, what could be a useful representation if I had to craft one by hand? How would I construct a useful representation? Keep in mind the goal is to have a representation of the observation that is more useful to the RL algorithm than just the pure pixels of the image. So if I had to craft a representation, let's say it's a vector. Let's say our representations need to be vectors. What I would do is I would probably take the x and y coordinates of the little spaceship, x and y, and put it in the vector. That's pretty useful. Then I would probably take the x and y coordinates of the meteorites that are around. Let's say there's a maximum of two, so x, y, x, y here. I would probably take the angle where my spaceship is pointing to. That should be pretty useful because if I shoot, I want to know where I shoot. So theta here, and then probably the x and y coordinates of the red shot that I fired, if there is one. I'm also going to put that into my representation. So x and y, and maybe delta x, delta y. Something like this. You can see if I had to handcraft something, I can pretty much guarantee that if I put in this representation right here into the RL algorithm, if I put this in here, it would turn out guaranteed, it would turn out to be a better RL agent that learns faster than if I put in the original observation, which is the pixel image of the game. Because, of course, in order to play the game correctly, in order to play the game to win, you need to extract this information. You need to get, ah, there's something like a spaceship, there's something like meteorites. This is all things that the RL algorithm doesn't know per se, and would have to learn from the pixels. But if I already give it the information that is useful, it can learn much faster. So you can see if I handcraft a good representation, it's pretty easy for the RL algorithm to improve. Now we want to come up with a framework that automatically comes up with a good representation. So it alleviates the RL algorithm here, the reinforcement learning. It alleviates that from having to learn a good representation. It already is burdened with learning what a good action is in any given situation. We want to alleviate it of the burden to also extract useful information from the observation space. So how do we do this? This Q here is supposed to be exactly that. It's supposed to be a good representation, but not one that we handcrafted, but used with a technique that can be employed pretty much everywhere. The goal, sorry, the secret sauce here is this contrastive loss thing. Okay, this bombed. Contrastive learning is this kind of magic thing that will make us good representations. What is contrastive learning? In this case, I'm going to explain it. In this case, for image-based reinforcement learning, but just for image-based neural networks, how can we come up with a contrastive loss? So you see there's a two pipeline thing going on here. This and this, and then one of them is going to be the good encoding. So let's check it out. 
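Before the contrastive-learning walkthrough that follows, here is a concrete version of the handcrafted representation sketched above; the field names, the fixed number of meteorites, and the zero-padding for a missing shot are all invented for illustration:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class HandcraftedState:
    ship_x: float
    ship_y: float
    ship_angle: float                              # direction the spaceship is pointing
    meteors: List[Tuple[float, float]]             # (x, y) for up to two meteorites
    shot: Optional[Tuple[float, float]]            # (x, y) of the fired shot, if any
    shot_velocity: Optional[Tuple[float, float]]   # (dx, dy) of the shot, if any

    def to_vector(self) -> List[float]:
        # Flatten everything into the fixed-size vector an RL algorithm could consume
        # instead of raw pixels.
        v = [self.ship_x, self.ship_y, self.ship_angle]
        for x, y in (self.meteors + [(0.0, 0.0)] * 2)[:2]:
            v += [x, y]
        v += list(self.shot or (0.0, 0.0))
        v += list(self.shot_velocity or (0.0, 0.0))
        return v
```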
Let's say we have this image that we had before. Draw it again. This little spaceship. This and this. And shot. We want to do this. What we need to do is we need to produce three different things from it. We need to produce an anchor, what's called an anchor. We need to produce a positive sample. And we need to produce negative samples. Let's just go with one negative sample for now. The goal is to come up with a task where we produce our own labels. Since we're training an encoder, and the encoder is a neural network that is parameterized, we need some sort of loss function. The goal is to come up with a method where we can create our own labels to a task, but we construct the task in a way such that the neural network has no choice and we can create something meaningful, even though we made the task up ourselves. I hope this was kind of clear. How are we going to do this? Our method of choice here is going to be random cropping. Random cropping means that I take an image and I crop a piece from it. A smaller piece from the image. I take a view inside the image. In case of the anchor, I'm going to draw the same picture here. Bear with me, I'm going to draw the same picture here a couple of times. This is all supposed to be the same picture. With the negative sample, I'm just going to leave it empty for now. Ta-da! Two meteorites. Two meteorites. Shot. Shot. For the anchor, we're going to center crop. We're going to take the center image. The assumption is that if I center crop, I won't lose too much of the image. I can actually make the crop bigger, such that almost everything of the image is somewhat contained in this. This is going to be my anchor. The positive sample is going to be a random crop of the same image. I'm just randomly going to select a same size section from that image. Let's say this is up right here. The negative sample is going to be a random crop from a different image. A different image might be from the same game, but there is a meteorite here and there is no shot. I don't shoot. I'm going to take a random crop from this. Let's say I'm going to take a random crop here. Let's put a meteorite here as well, just for fun. These are going to be our three samples. Now the question is going to be if I give the anchor to the neural network. I give you the anchor, but I'm also going to give you this and this thing. I'm not going to give any of this. I'm just going to give whatever I cropped. Just these things. I ask the neural network, I give you the anchor. Which one of these two crops comes from the same image? As a human you look at this and if you just see the center crop, you see down here there is this tip of this thing and then there is the shot. In relation to the shot there is a meteor here. Then you look at the second one and you say I don't see the spaceship, but there is the same relation here from the shot to the meteor. I can kind of see the meteor up here. This also fits with that. The spaceship must be down here somewhere. Then I go over here and I try to do the same thing. Here is the meteor. In the original image it might be over here somewhere. That's possible. I don't see it. That's possible, but then there should be a shot somewhere here. There should be a shot somewhere here. I'm pretty sure because there is one over here and I don't see it. I am fairly sure that this image here is the positive sample, while this image here is the negative sample. This is the task that you ask of the neural network. 
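A small sketch of how the anchor, positive and negative crops described here could be produced, assuming observations are numpy arrays of shape (H, W, C); the crop size of 64 is an arbitrary choice:

```python
import numpy as np

def center_crop(img, size):
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def random_crop(img, size):
    h, w = img.shape[:2]
    top = np.random.randint(0, h - size + 1)
    left = np.random.randint(0, w - size + 1)
    return img[top:top + size, left:left + size]

def contrastive_triplet(obs, other_obs, size=64):
    anchor = center_crop(obs, size)            # query view of the image
    positive = random_crop(obs, size)          # key cropped from the SAME image
    negative = random_crop(other_obs, size)    # key cropped from a DIFFERENT image
    return anchor, positive, negative
```

Whether the anchor is a center crop or another random crop is a design choice; the description above uses a center crop so that most of the scene stays visible in the query.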
Give it the anchor and you ask which one of these two comes from the same image. This is called contrastive learning. It is a bit more complicated in that of course what you do is you encode these things using neural networks. Each of the things you encode. The anchor you are going to encode all of these things using a neural network. Then this is what's going to become the query. These are becoming the keys. Key 1 or key 2. Then you are going to feed always two of them into a bilinear product. A bilinear product is simply an inner product in a perturbed space that you can learn. You are going to have these two here. These go into Q, W, K, 1. Then these two here, sorry, this and this go into Q, W, K, 2. Now W here is a learnable parameter. You have some freedom. Then you basically take whichever one of those two is highest. This might be this high and this might only be this high. Then you say, aha, cool, this one is higher so this one must be the positive. You train the W specifically to make the positive ones higher and the negative ones lower. This is a supervised learning task. These things here are going to be the logits. They are inner products but you basically then pick the one that is highest in a softmax way. They put this in the paper. If we go down here, the objective that they use to do the contrastive learning is this one. As you can see, it's a softmax like in multiclass classification. The inner product, the bilinear product with the positive samples over the bilinear product with the positive samples plus the bilinear product with all of the negative samples. You are going to come up with more than one negative sample. The only thing left that we don't have here is that the encoding, how you are going to come from the image space to this space here, is going to be slightly different depending on whether you are talking on the anchor or on what are called the keys, the things you compare to. This is out of a stability criterion. Maybe you know something like double Q-learning or things like this. Sometimes when you train with your own thing, in Q-learning you are trying to come up with an actor and a critic. It's not the same thing, but you are using the same neural network twice in your setup. Then you compare the outputs to each other, which leads to instability. In our case, we took it three times here, or multiple times. Especially for the same objective here, we have twice something that was encoded by the same neural network and is on the two sides of this bilinear product. If we were to use the same neural network, that tends to be somewhat unstable. We have different neural networks, one that will encode the query, which is this FQ, and one which will encode the keys, sorry, FK. We don't want to learn two neural networks. That's why there's a bit of a compromise, where we say it is the same neural network, but basically this one is the one we learn. Every now and then we transfer over the parameters to that one. In fact, each step we transfer over the parameters and do an exponentially moving average with the parameters of this momentum encoder from the step before. The momentum encoder parameters are a moving average of the parameters of the query encoder. You get the best of both worlds. You don't have to learn a second neural network, but your second neural network is not the same as your first neural network. It kind of lags behind, but it is also performing almost as well. I don't know if that makes sense, but it is the best I can explain it. 
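Here is a rough PyTorch-style sketch of the bilinear scores and the softmax objective just described. The shapes, the choice to put the single positive at index 0, and all variable names are my own framing, not code from the paper:

    import torch
    import torch.nn.functional as F

    d = 50                                        # embedding dimension (an assumption)
    W = torch.randn(d, d, requires_grad=True)     # the learnable bilinear matrix

    q      = torch.randn(1, d)                    # encoded anchor (the query)
    k_pos  = torch.randn(1, d)                    # encoded positive key
    k_negs = torch.randn(8, d)                    # encoded negative keys

    keys = torch.cat([k_pos, k_negs], dim=0)      # the positive sits at index 0
    logits = q @ W @ keys.t()                     # bilinear products q^T W k, shape (1, 9)
    labels = torch.zeros(1, dtype=torch.long)     # "the correct key is the one at index 0"
    loss = F.cross_entropy(logits, labels)        # softmax: positive vs. all negatives
    loss.backward()                               # in a real setup this also trains the encoders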
To recap: you take your observation, you crop it for your anchor, that gets encoded into your query, and then you random crop for your keys into positive and negative samples. You random crop from the same observation or from different observations, and these become your positive and negative samples. Then you push these through your encoders for the query and for the keys respectively. You end up with q, which is the encoded anchor, and the k's, which are the encoded positive and negative samples. Then you learn, you update this encoder here, using the contrastive loss. At the same time, you feed q into the reinforcement learning algorithm, and you learn your reinforcement learning algorithm. Instead of having the observation directly as an input here, you now have q here as an input. The reinforcement learning works exactly the same, except instead of the pixel input O you now have the representation input q. You don't have to worry about anything else in terms of the reinforcement learning algorithm. It works exactly the same. This whole thing here can run either in parallel or beforehand, on-policy or off-policy. It is sort of modular how you fit this in. It simply comes up with a good representation. That is basically the deal here. You hope that the whole procedure of this contrastive learning gives you a good representation of this anchor thing here. If you encode that into q, you hope that this representation is now a good representation as a basis for the RL algorithm. It turns out, at least in their experiments, it is. Here you see the same thing. You can do something more, because in RL you usually deal with a stack of observations, not just a single observation. For example, in Atari, people always concatenate something like the four last frames. Their point is, if we have this stack here and we do this data augmentation, these crops, we need to do them consistently. We need to crop every single image at the same point for the query. Also, if we do a random crop, let's say a random crop down here, we need to do this same random crop for all of the images in the stack. That is the additional thing they introduce with respect to RL that deals with stacked time frames. It's the same diagram as above here. They explain the RL algorithms they use and their exact setup. Here you can see that the anchor is a crop, and the positive sample is a random crop from the same image. This would be up here somewhere. The anchor is cropped from the middle. Then the negative would be a random crop from a different image or a different stack of images. They have pseudocode here. It's pretty simple, we'll just go through it quickly. You start off with FQ and FK. These are the encoders for the query and the keys. You start them off the same. Then you go through your data loader. You do this random augmentation of your query and your keys. I'm not even sure if the random augmentation needs to be a center crop for the anchor, but it's just two different crops from the same image. I guess it's a thing you could choose; I don't know what exactly is the best thing. Then I forward the query through FQ and I forward the keys through FK. It's important to detach this, because I don't want to train FK, I only want to train FQ. Then I do the bilinear product here with the W. These are the bilinear products. Then I put all of this into a cross entropy loss.
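Pulling those steps together, here is a hedged sketch of one such training step. The function and variable names (f_q, f_k, crop_fn) and the momentum value are placeholders, not the paper's code; the important detail is that gradients never flow through the key encoder:

    import torch
    import torch.nn.functional as F

    def curl_step(f_q, f_k, W, optimizer, obs_batch, other_batch, crop_fn, momentum=0.95):
        # f_q, f_k: query and key encoders; the optimizer covers f_q's parameters and W.
        # obs_batch / other_batch: raw pixel observations of shape (B, C, H, W).
        q = f_q(crop_fn(obs_batch))                      # encoded anchors (queries)
        with torch.no_grad():                            # keys are not back-propagated through
            k_pos = f_k(crop_fn(obs_batch))              # positives: other crops of the same images
            k_neg = f_k(crop_fn(other_batch))            # negatives: crops of different images
        keys = torch.cat([k_pos, k_neg], dim=0)          # (2B, d); query i's positive is row i
        logits = q @ W @ keys.t()                        # (B, 2B) bilinear scores
        labels = torch.arange(q.size(0))                 # index of the correct key per query
        loss = F.cross_entropy(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                                 # updates f_q and W only
        with torch.no_grad():                            # exponential moving average for f_k
            for p_q, p_k in zip(f_q.parameters(), f_k.parameters()):
                p_k.mul_(momentum).add_((1 - momentum) * p_q)
        return loss.item()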
In the end I update my FQ and my W, and I do this exponential moving average for my key encoder. They test on two different benchmarks: the DeepMind control tasks and the Atari tasks. They always evaluate at 100k time steps. Their big point is data efficiency. They claim they can learn useful representations with not much data. The task here is: how good are you at 100k time steps? You don't optimize until the end; you get 100k time steps and then the question is how good are you? CURL here outperforms all of the baselines handily in the DeepMind control tasks. It also outperforms a lot of the baselines in the Atari tasks. If you look at the results, it doesn't outperform everything. For example, the red is CURL and the dashed grey is state SAC. State SAC has access to the state; CURL only works from pixels. So the kind of representation I said I would handcraft, state SAC has access to something like that. You see that in many of the tasks, CURL comes close to or performs about as well as state SAC. That's pretty impressive. Especially if you look at pixel SAC, which is the same algorithm but does not have access to the state, it often fails terribly. That is pretty interesting to see. Even to me, it's pretty interesting to see that this kind of self-labeled algorithm comes up with such useful representations. I hope I have explained this satisfactorily. Check out the paper for more experiments, ablation studies and general reading. I wish you a good day.
[ { "end": 7.5, "start": 0, "text": " Hi there! Today we're going to look at CURL, Contrastive Unsupervised Representations for Reinforcement Learning," }, { "end": 12.5, "start": 7.5, "text": " by Aravind Srinivas, Michael Laskin and Pieter Abbeel." }, { "end": 19, "start": 12.5, "text": " So this is a general framework for unsupervised representation learning for RL." }, { "end": 22.5, "start": 19, "text": " So let's untangle the title a little bit." }, { "end": 28.5, "start": 22.5, "text": " It is FOR reinforcement learning, which if you don't know what reinforcement learning is," }, { "end": 32, "start": 28.5, "text": " I've done a bunch of videos on RL frameworks." }, { "end": 35, "start": 32, "text": " So it's for general reinforcement learning." }, { "end": 41, "start": 35, "text": " That means it can be paired with almost any RL algorithm out there." }, { "end": 46, "start": 41, "text": " So we're not going to dive into specific RL algorithms today." }, { "end": 53, "start": 46, "text": " It is unsupervised, which means it doesn't need any sort of labels," }, { "end": 57, "start": 53, "text": " and it also doesn't need a reward signal for RL," }, { "end": 65.5, "start": 57, "text": " which is pretty cool because usually the entire RL pipelines rely on some sort of a reward or auxiliary reward signal." }, { "end": 71, "start": 65.5, "text": " Now there is a training objective here, but it doesn't have to do with the RL reward." }, { "end": 83, "start": 71, "text": " And then it is learning representations, which means it learns intermediate representations of the input data that is useful." }, { "end": 88, "start": 83, "text": " And in the end it is contrastive, and that is the secret sauce in here." }, { "end": 91.5, "start": 88, "text": " The training objective is what's called contrastive learning," }, { "end": 97, "start": 91.5, "text": " and that's what we're going to spend most of our time on today, exploring what that means." }, { "end": 103, "start": 97, "text": " So here's the general framework. You can see it down here." }, { "end": 107, "start": 103, "text": " Sorry about that." }, { "end": 116, "start": 107, "text": " So you can see that reinforcement learning is just a box, which is we don't care about the RL algorithm you use," }, { "end": 120, "start": 116, "text": " that's just what comes at the end." }, { "end": 123.5, "start": 120, "text": " What comes at the beginning, oh, here is the observation." }, { "end": 128, "start": 123.5, "text": " So the observation in an RL algorithm is kind of fundamental." }, { "end": 132, "start": 128, "text": " Now if someone explains RL to you, reinforcement learning," }, { "end": 138, "start": 132, "text": " usually what they'll say is there is some kind of actor and there is some kind of environment." }, { "end": 152, "start": 138, "text": " And the environment will give you an observation, observation O, which is some sort of, let's say here is an image." }, { "end": 158.5, "start": 152, "text": " So in this RL framework specifically, the examples they give are of image-based reinforcement learning." }, { "end": 168, "start": 158.5, "text": " Let's say the Atari game where you have this little spaceship here and there are meteorites up here," }, { "end": 172, "start": 168, "text": " and you need to shoot them. So there is a little shot here." }, { "end": 174, "start": 172, "text": " You need to shoot those meteorites." }, { "end": 176, "start": 174, "text": " So this is the observation O." 
}, { "end": 181, "start": 176, "text": " And then as an age, as an actor, you have to come up with some sort of action." }, { "end": 185, "start": 181, "text": " And the actions here can be something like move to the left, move to the right," }, { "end": 189, "start": 185, "text": " press the button that does the shooting." }, { "end": 194, "start": 189, "text": " So you have to come up with an action somehow given this observation." }, { "end": 200, "start": 194, "text": " And then the environment will give you back a reward along with the next observation," }, { "end": 202, "start": 200, "text": " like the next frame of the game." }, { "end": 206, "start": 202, "text": " And you're going to have to come up with another action in response to that." }, { "end": 211, "start": 206, "text": " And the environment is going to give you back another reward and the next observation and so on." }, { "end": 218.5, "start": 211, "text": " So what you want to do is you want to find a mapping from observation to action," }, { "end": 223, "start": 218.5, "text": " such that your reward is going to be as high as possible." }, { "end": 226, "start": 223, "text": " This is the fundamental problem of RL." }, { "end": 232.5, "start": 226, "text": " And usually what people do is they take this mapping here from observation to action" }, { "end": 239, "start": 232.5, "text": " to be some sort of function, some sort of function that is parameterized maybe." }, { "end": 242, "start": 239, "text": " Nowadays, of course, it's often a neural network." }, { "end": 249, "start": 242, "text": " But you're trying to learn, given the input observation, what output action you need to do." }, { "end": 251, "start": 249, "text": " And you can think of the same here." }, { "end": 254, "start": 251, "text": " So you have this input observation up here." }, { "end": 261, "start": 254, "text": " And down here, after the reinforcement learning, the output is going to be an action." }, { "end": 267, "start": 261, "text": " And so this function we talked about up here is usually implemented." }, { "end": 271, "start": 267, "text": " It's usually implemented like this. You put the observation into the RL framework." }, { "end": 276, "start": 271, "text": " And then the RL framework learns this f of theta function to give you an action." }, { "end": 279, "start": 276, "text": " Now here you can see the pipeline is a bit different." }, { "end": 283, "start": 279, "text": " We don't want to shove the observation in directly, right?" }, { "end": 286, "start": 283, "text": " We don't want the observation directly." }, { "end": 291, "start": 286, "text": " But what we put into the RL framework is this queue thing." }, { "end": 296, "start": 291, "text": " Now the queue is supposed to be a representation of the observation" }, { "end": 298, "start": 296, "text": " and a useful representation." }, { "end": 304, "start": 298, "text": " So if we think of this game here, of this Atari game up here," }, { "end": 310, "start": 304, "text": " what could be a useful representation if I had to craft one by hand?" }, { "end": 314, "start": 310, "text": " How would I construct a useful representation?" }, { "end": 320, "start": 314, "text": " Keep in mind the goal is to have a representation of the observation" }, { "end": 327, "start": 320, "text": " that is more useful to the RL algorithm than just the pure pixels of the image." }, { "end": 331, "start": 327, "text": " So if I had to craft a representation, let's say it's a vector." 
}, { "end": 336, "start": 331, "text": " Let's say our representations need to be vectors." }, { "end": 343, "start": 336, "text": " What I would do is I would probably take the x and y coordinates of the little spaceship," }, { "end": 347, "start": 343, "text": " x and y, and put it in the vector. That's pretty useful." }, { "end": 355, "start": 347, "text": " Then I would probably take the x and y coordinates of the meteorites that are around." }, { "end": 360, "start": 355, "text": " Let's say there's a maximum of two, so x, y, x, y here." }, { "end": 370, "start": 360, "text": " I would probably take the angle where my spaceship is pointing to." }, { "end": 375, "start": 370, "text": " That should be pretty useful because if I shoot, I want to know where I shoot." }, { "end": 386, "start": 375, "text": " So theta here, and then probably the x and y coordinates of the red shot that I fired, if there is one." }, { "end": 389, "start": 386, "text": " I'm also going to put that into my representation." }, { "end": 395, "start": 389, "text": " So x and y, and maybe delta x, delta y." }, { "end": 397, "start": 395, "text": " Something like this." }, { "end": 400, "start": 397, "text": " You can see if I had to handcraft something," }, { "end": 409, "start": 400, "text": " I can pretty much guarantee that if I put in this representation right here into the RL algorithm," }, { "end": 414, "start": 409, "text": " if I put this in here, it would turn out guaranteed," }, { "end": 422, "start": 414, "text": " it would turn out to be a better RL agent that learns faster than if I put in the original observation," }, { "end": 427, "start": 422, "text": " which is the pixel image of the game." }, { "end": 433, "start": 427, "text": " Because, of course, in order to play the game correctly, in order to play the game to win," }, { "end": 436, "start": 433, "text": " you need to extract this information." }, { "end": 441, "start": 436, "text": " You need to get, ah, there's something like a spaceship, there's something like meteorites." }, { "end": 448, "start": 441, "text": " This is all things that the RL algorithm doesn't know per se, and would have to learn from the pixels." }, { "end": 453, "start": 448, "text": " But if I already give it the information that is useful, it can learn much faster." }, { "end": 461, "start": 453, "text": " So you can see if I handcraft a good representation, it's pretty easy for the RL algorithm to improve." }, { "end": 468, "start": 461, "text": " Now we want to come up with a framework that automatically comes up with a good representation." }, { "end": 473, "start": 468, "text": " So it alleviates the RL algorithm here, the reinforcement learning." }, { "end": 480, "start": 473, "text": " It alleviates that from having to learn a good representation." }, { "end": 487, "start": 480, "text": " It already is burdened with learning what a good action is in any given situation." }, { "end": 498, "start": 487, "text": " We want to alleviate it of the burden to also extract useful information from the observation space." }, { "end": 500, "start": 498, "text": " So how do we do this?" }, { "end": 504, "start": 500, "text": " This Q here is supposed to be exactly that." }, { "end": 510, "start": 504, "text": " It's supposed to be a good representation, but not one that we handcrafted," }, { "end": 516, "start": 510, "text": " but used with a technique that can be employed pretty much everywhere." 
}, { "end": 522, "start": 516, "text": " The goal, sorry, the secret sauce here is this contrastive loss thing." }, { "end": 524, "start": 522, "text": " Okay, this bombed." }, { "end": 532, "start": 524, "text": " Contrastive learning is this kind of magic thing that will make us good representations." }, { "end": 534, "start": 532, "text": " What is contrastive learning?" }, { "end": 537, "start": 534, "text": " In this case, I'm going to explain it." }, { "end": 550, "start": 537, "text": " In this case, for image-based reinforcement learning, but just for image-based neural networks," }, { "end": 554, "start": 550, "text": " how can we come up with a contrastive loss?" }, { "end": 558, "start": 554, "text": " So you see there's a two pipeline thing going on here." }, { "end": 566, "start": 558, "text": " This and this, and then one of them is going to be the good encoding." }, { "end": 569, "start": 566, "text": " So let's check it out." }, { "end": 575, "start": 569, "text": " Let's say we have this image that we had before." }, { "end": 578, "start": 575, "text": " Draw it again." }, { "end": 583, "start": 578, "text": " This little spaceship." }, { "end": 585, "start": 583, "text": " This and this." }, { "end": 588, "start": 585, "text": " And shot." }, { "end": 590, "start": 588, "text": " We want to do this." }, { "end": 595, "start": 590, "text": " What we need to do is we need to produce three different things from it." }, { "end": 602, "start": 595, "text": " We need to produce an anchor, what's called an anchor." }, { "end": 607, "start": 602, "text": " We need to produce a positive sample." }, { "end": 610, "start": 607, "text": " And we need to produce negative samples." }, { "end": 614, "start": 610, "text": " Let's just go with one negative sample for now." }, { "end": 621, "start": 614, "text": " The goal is to come up with a task where we produce our own labels." }, { "end": 627, "start": 621, "text": " Since we're training an encoder, and the encoder is a neural network that is parameterized," }, { "end": 629, "start": 627, "text": " we need some sort of loss function." }, { "end": 635, "start": 629, "text": " The goal is to come up with a method where we can create our own labels to a task," }, { "end": 640, "start": 635, "text": " but we construct the task in a way such that the neural network has no choice" }, { "end": 645, "start": 640, "text": " and we can create something meaningful, even though we made the task up ourselves." }, { "end": 649, "start": 645, "text": " I hope this was kind of clear." }, { "end": 651, "start": 649, "text": " How are we going to do this?" }, { "end": 655, "start": 651, "text": " Our method of choice here is going to be random cropping." }, { "end": 664, "start": 655, "text": " Random cropping means that I take an image and I crop a piece from it." }, { "end": 667, "start": 664, "text": " A smaller piece from the image." }, { "end": 670, "start": 667, "text": " I take a view inside the image." }, { "end": 676, "start": 670, "text": " In case of the anchor, I'm going to draw the same picture here." }, { "end": 680, "start": 676, "text": " Bear with me, I'm going to draw the same picture here a couple of times." }, { "end": 684, "start": 680, "text": " This is all supposed to be the same picture." }, { "end": 689, "start": 684, "text": " With the negative sample, I'm just going to leave it empty for now." }, { "end": 694, "start": 689, "text": " Ta-da! Two meteorites. Two meteorites." }, { "end": 696, "start": 694, "text": " Shot. Shot." 
}, { "end": 702, "start": 696, "text": " For the anchor, we're going to center crop." }, { "end": 708, "start": 702, "text": " We're going to take the center image." }, { "end": 716, "start": 708, "text": " The assumption is that if I center crop, I won't lose too much of the image." }, { "end": 721, "start": 716, "text": " I can actually make the crop bigger, such that almost everything of the image" }, { "end": 726, "start": 721, "text": " is somewhat contained in this." }, { "end": 728, "start": 726, "text": " This is going to be my anchor." }, { "end": 734, "start": 728, "text": " The positive sample is going to be a random crop of the same image." }, { "end": 743, "start": 734, "text": " I'm just randomly going to select a same size section from that image." }, { "end": 747, "start": 743, "text": " Let's say this is up right here." }, { "end": 753, "start": 747, "text": " The negative sample is going to be a random crop from a different image." }, { "end": 757, "start": 753, "text": " A different image might be from the same game," }, { "end": 763, "start": 757, "text": " but there is a meteorite here and there is no shot." }, { "end": 765, "start": 763, "text": " I don't shoot." }, { "end": 768, "start": 765, "text": " I'm going to take a random crop from this." }, { "end": 772, "start": 768, "text": " Let's say I'm going to take a random crop here." }, { "end": 777, "start": 772, "text": " Let's put a meteorite here as well, just for fun." }, { "end": 784, "start": 777, "text": " These are going to be our three samples." }, { "end": 792, "start": 784, "text": " Now the question is going to be if I give the anchor to the neural network." }, { "end": 801, "start": 792, "text": " I give you the anchor, but I'm also going to give you this and this thing." }, { "end": 803, "start": 801, "text": " I'm not going to give any of this." }, { "end": 813, "start": 803, "text": " I'm just going to give whatever I cropped." }, { "end": 816, "start": 813, "text": " Just these things." }, { "end": 820, "start": 816, "text": " I ask the neural network, I give you the anchor." }, { "end": 829, "start": 820, "text": " Which one of these two crops comes from the same image?" }, { "end": 833, "start": 829, "text": " As a human you look at this and if you just see the center crop," }, { "end": 838, "start": 833, "text": " you see down here there is this tip of this thing and then there is the shot." }, { "end": 842, "start": 838, "text": " In relation to the shot there is a meteor here." }, { "end": 847, "start": 842, "text": " Then you look at the second one and you say I don't see the spaceship," }, { "end": 851, "start": 847, "text": " but there is the same relation here from the shot to the meteor." }, { "end": 854, "start": 851, "text": " I can kind of see the meteor up here." }, { "end": 857, "start": 854, "text": " This also fits with that." }, { "end": 861, "start": 857, "text": " The spaceship must be down here somewhere." }, { "end": 865, "start": 861, "text": " Then I go over here and I try to do the same thing." }, { "end": 867, "start": 865, "text": " Here is the meteor." }, { "end": 874, "start": 867, "text": " In the original image it might be over here somewhere." }, { "end": 877, "start": 874, "text": " That's possible. I don't see it." }, { "end": 887, "start": 877, "text": " That's possible, but then there should be a shot somewhere here." }, { "end": 893, "start": 887, "text": " There should be a shot somewhere here." 
}, { "end": 898, "start": 893, "text": " I'm pretty sure because there is one over here and I don't see it." }, { "end": 905, "start": 898, "text": " I am fairly sure that this image here is the positive sample," }, { "end": 909, "start": 905, "text": " while this image here is the negative sample." }, { "end": 912, "start": 909, "text": " This is the task that you ask of the neural network." }, { "end": 921, "start": 912, "text": " Give it the anchor and you ask which one of these two comes from the same image." }, { "end": 925, "start": 921, "text": " This is called contrastive learning." }, { "end": 934, "start": 925, "text": " It is a bit more complicated in that of course what you do is you encode these things using neural networks." }, { "end": 939, "start": 934, "text": " Each of the things you encode." }, { "end": 947, "start": 939, "text": " The anchor you are going to encode all of these things using a neural network." }, { "end": 952, "start": 947, "text": " Then this is what's going to become the query." }, { "end": 956, "start": 952, "text": " These are becoming the keys. Key 1 or key 2." }, { "end": 963, "start": 956, "text": " Then you are going to feed always two of them into a bilinear product." }, { "end": 970, "start": 963, "text": " A bilinear product is simply an inner product in a perturbed space that you can learn." }, { "end": 975, "start": 970, "text": " You are going to have these two here." }, { "end": 979, "start": 975, "text": " These go into Q, W, K, 1." }, { "end": 986, "start": 979, "text": " Then these two here, sorry, this and this go into Q, W, K, 2." }, { "end": 990, "start": 986, "text": " Now W here is a learnable parameter." }, { "end": 993, "start": 990, "text": " You have some freedom." }, { "end": 999, "start": 993, "text": " Then you basically take whichever one of those two is highest." }, { "end": 1004, "start": 999, "text": " This might be this high and this might only be this high." }, { "end": 1010, "start": 1004, "text": " Then you say, aha, cool, this one is higher so this one must be the positive." }, { "end": 1019, "start": 1010, "text": " You train the W specifically to make the positive ones higher and the negative ones lower." }, { "end": 1023, "start": 1019, "text": " This is a supervised learning task." }, { "end": 1030, "start": 1023, "text": " These things here are going to be the logits." }, { "end": 1037, "start": 1030, "text": " They are inner products but you basically then pick the one that is highest in a softmax way." }, { "end": 1040, "start": 1037, "text": " They put this in the paper." }, { "end": 1048, "start": 1040, "text": " If we go down here, the objective that they use to do the contrastive learning is this one." }, { "end": 1054, "start": 1048, "text": " As you can see, it's a softmax like in multiclass classification." }, { "end": 1061, "start": 1054, "text": " The inner product, the bilinear product with the positive samples" }, { "end": 1067, "start": 1061, "text": " over the bilinear product with the positive samples plus the bilinear product with all of the negative samples." }, { "end": 1071, "start": 1067, "text": " You are going to come up with more than one negative sample." 
}, { "end": 1078, "start": 1071, "text": " The only thing left that we don't have here is that the encoding," }, { "end": 1086, "start": 1078, "text": " how you are going to come from the image space to this space here," }, { "end": 1092, "start": 1086, "text": " is going to be slightly different depending on whether you are talking on the anchor" }, { "end": 1097, "start": 1092, "text": " or on what are called the keys, the things you compare to." }, { "end": 1100, "start": 1097, "text": " This is out of a stability criterion." }, { "end": 1106, "start": 1100, "text": " Maybe you know something like double Q-learning or things like this." }, { "end": 1112, "start": 1106, "text": " Sometimes when you train with your own thing," }, { "end": 1119, "start": 1112, "text": " in Q-learning you are trying to come up with an actor and a critic." }, { "end": 1130, "start": 1119, "text": " It's not the same thing, but you are using the same neural network twice in your setup." }, { "end": 1138, "start": 1130, "text": " Then you compare the outputs to each other, which leads to instability." }, { "end": 1145, "start": 1138, "text": " In our case, we took it three times here, or multiple times." }, { "end": 1151, "start": 1145, "text": " Especially for the same objective here, we have twice something that was encoded by the same neural network" }, { "end": 1154, "start": 1151, "text": " and is on the two sides of this bilinear product." }, { "end": 1160, "start": 1154, "text": " If we were to use the same neural network, that tends to be somewhat unstable." }, { "end": 1166, "start": 1160, "text": " We have different neural networks, one that will encode the query, which is this FQ," }, { "end": 1172, "start": 1166, "text": " and one which will encode the keys, sorry, FK." }, { "end": 1176, "start": 1172, "text": " We don't want to learn two neural networks." }, { "end": 1181, "start": 1176, "text": " That's why there's a bit of a compromise, where we say it is the same neural network," }, { "end": 1188, "start": 1181, "text": " but basically this one is the one we learn." }, { "end": 1196, "start": 1188, "text": " Every now and then we transfer over the parameters to that one." }, { "end": 1203, "start": 1196, "text": " In fact, each step we transfer over the parameters and do an exponentially moving average" }, { "end": 1209, "start": 1203, "text": " with the parameters of this momentum encoder from the step before." }, { "end": 1217, "start": 1209, "text": " The momentum encoder parameters are a moving average of the parameters of the query encoder." }, { "end": 1221, "start": 1217, "text": " You get the best of both worlds." }, { "end": 1227, "start": 1221, "text": " You don't have to learn a second neural network, but your second neural network" }, { "end": 1231, "start": 1227, "text": " is not the same as your first neural network." }, { "end": 1239, "start": 1231, "text": " It kind of lags behind, but it is also performing almost as well." }, { "end": 1246, "start": 1239, "text": " I don't know if that makes sense, but it is the best I can explain it." }, { "end": 1253, "start": 1246, "text": " To recap, you take your observation, you encode it as a query, sorry," }, { "end": 1260, "start": 1253, "text": " you crop here for your anchor, that gets your query," }, { "end": 1269, "start": 1260, "text": " and then you random crop for your keys into positive and negative samples." }, { "end": 1274, "start": 1269, "text": " Random crop from the same observation or from different observations." 
}, { "end": 1277, "start": 1274, "text": " These become your positive and negative samples." }, { "end": 1286, "start": 1277, "text": " Then you push these through your encoders for the query and for the keys respectively." }, { "end": 1291, "start": 1286, "text": " You end up with the queue, which is the encoded anchor," }, { "end": 1296, "start": 1291, "text": " and the k's, which are the encoded positive and negative samples." }, { "end": 1307, "start": 1296, "text": " Then you learn, you update this encoder here using the contrastive loss." }, { "end": 1316, "start": 1307, "text": " At the same time, you feed the queue into the reinforcement learning algorithm," }, { "end": 1321, "start": 1316, "text": " and you learn your reinforcement learning algorithm." }, { "end": 1326, "start": 1321, "text": " Instead of having the observation directly as an input here," }, { "end": 1332, "start": 1326, "text": " you now have the queue here as an input." }, { "end": 1336, "start": 1332, "text": " The reinforcement learning works exactly the same," }, { "end": 1343, "start": 1336, "text": " except having the pixel input O, you now have the representation input Q." }, { "end": 1348, "start": 1343, "text": " You don't have to worry about anything else in terms of the reinforcement learning algorithm." }, { "end": 1351, "start": 1348, "text": " It works exactly the same." }, { "end": 1357, "start": 1351, "text": " This whole thing here can run either in parallel, or you can think of it before," }, { "end": 1360, "start": 1357, "text": " you can think of it off-policy, on-policy." }, { "end": 1363, "start": 1360, "text": " It is sort of modular how you fit this in." }, { "end": 1366, "start": 1363, "text": " It simply comes up with good representation." }, { "end": 1371, "start": 1366, "text": " That is basically the deal here." }, { "end": 1381, "start": 1371, "text": " You hope that the whole procedure of this contrastive learning then gives you good representation of this anchor thing here." }, { "end": 1391, "start": 1381, "text": " If you encode that to the queue, you hope that this representation now is a good representation as a basis for the RL algorithm." }, { "end": 1396, "start": 1391, "text": " It turns out, at least in their experiments, it is." }, { "end": 1398, "start": 1396, "text": " Here you see the same thing." }, { "end": 1404, "start": 1398, "text": " You can do something more where in RL you usually deal with a stack of observations," }, { "end": 1407, "start": 1404, "text": " not just a single observation." }, { "end": 1413, "start": 1407, "text": " For example, in Atari, people always concatenate something like the four last frames." }, { "end": 1420, "start": 1413, "text": " Their point is, if we have this stack here, if we do this data augmentation, these crops," }, { "end": 1422, "start": 1420, "text": " we need to do them consistently." }, { "end": 1429, "start": 1422, "text": " We need to crop every single image at the same point for the query." }, { "end": 1433, "start": 1429, "text": " Also, if we do a random crop, let's say a random crop down here," }, { "end": 1440, "start": 1433, "text": " we need to do this same random crop for all of the stack of images here." }, { "end": 1453, "start": 1440, "text": " That is the additional thing they introduce with respect to RL that deals with stacked timeframes." }, { "end": 1460, "start": 1453, "text": " It's the same diagram as above here." 
}, { "end": 1467, "start": 1460, "text": " They explain the RL algorithms they use and exactly their thing." }, { "end": 1475, "start": 1467, "text": " Here you can see that the anchor is a crop, and the positive sample is a random crop from the same image." }, { "end": 1477, "start": 1475, "text": " This would be up here somewhere." }, { "end": 1479, "start": 1477, "text": " The anchor is cropped from the middle." }, { "end": 1485, "start": 1479, "text": " Then the negative would be a random crop from a different image or a different stack of images." }, { "end": 1488, "start": 1485, "text": " They have a pseudocode here." }, { "end": 1494, "start": 1488, "text": " It's pretty simple. We'll just go through it quickly." }, { "end": 1500, "start": 1494, "text": " You start off with FQ and FK. These are the encoders for the query and keys." }, { "end": 1503, "start": 1500, "text": " You start them off the same." }, { "end": 1505, "start": 1503, "text": " Then you go through your data loader." }, { "end": 1511, "start": 1505, "text": " You do this random augmentation of your query and your keys." }, { "end": 1517, "start": 1511, "text": " I'm not even sure if the random augmentation needs to be a center crop for the anchor," }, { "end": 1527, "start": 1517, "text": " but it's just two different crops from the same image." }, { "end": 1532, "start": 1527, "text": " I guess it's a thing you could choose. I don't know what exactly is the best thing." }, { "end": 1541, "start": 1532, "text": " Then I forward the query through the FQ and I forward the keys through the FK." }, { "end": 1547, "start": 1541, "text": " It's important to detach this so I don't want to train the FK." }, { "end": 1550, "start": 1547, "text": " I only want to train the FQ." }, { "end": 1557, "start": 1550, "text": " Then I do the bilinear product here with the W." }, { "end": 1559, "start": 1557, "text": " These are the bilinear product." }, { "end": 1569, "start": 1559, "text": " Then I put all of this into a cross entropy loss." }, { "end": 1578, "start": 1569, "text": " In the end I update my FQ and my W and I do this exponentially moving average for my key encoder." }, { "end": 1581, "start": 1578, "text": " They test on two different things." }, { "end": 1586, "start": 1581, "text": " They test on the DeepMind control tasks." }, { "end": 1591, "start": 1586, "text": " They always test 100k time steps." }, { "end": 1594, "start": 1591, "text": " Their big point is data efficiency." }, { "end": 1600, "start": 1594, "text": " They claim they can learn useful representations with not much data." }, { "end": 1606, "start": 1600, "text": " The task is here, how good are you at 100k time steps?" }, { "end": 1608, "start": 1606, "text": " You don't optimize until the end." }, { "end": 1614, "start": 1608, "text": " You get 100k time steps and then the question is how good are you?" }, { "end": 1623, "start": 1614, "text": " The curl here outperforms all of the baselines handily in the DeepMind control tasks." }, { "end": 1631, "start": 1623, "text": " It also outperforms a lot of the baselines in the Atari tasks." }, { "end": 1638, "start": 1631, "text": " If you look at the results, it doesn't outperform everything." }, { "end": 1645, "start": 1638, "text": " For example, the red is curl and the dashed grey is stateSAC." }, { "end": 1651, "start": 1645, "text": " StateSAC has access to the state." }, { "end": 1654, "start": 1651, "text": " Curl only works from pixels." 
}, { "end": 1661, "start": 1654, "text": " If I had to craft a representation, stateSAC has access to that." }, { "end": 1673, "start": 1661, "text": " You see that in many of the tasks, the curl comes close or performs equally well to stateSAC." }, { "end": 1676, "start": 1673, "text": " That's pretty impressive." }, { "end": 1684, "start": 1676, "text": " Especially if you look at pixelSAC, which is the same algorithm but does not have access to the state," }, { "end": 1690, "start": 1684, "text": " it often fails terribly." }, { "end": 1693, "start": 1690, "text": " That is pretty interesting to see." }, { "end": 1705, "start": 1693, "text": " Even to me, it's pretty interesting to see that this kind of self-labeled algorithm comes up with such useful representations." }, { "end": 1713, "start": 1705, "text": " I hope I have explained this satisfactorily." }, { "end": 1720, "start": 1713, "text": " Check out the paper for more experiments, ablation studies and general reading." }, { "end": 1735, "start": 1720, "text": " I wish you a good day." } ]
gbG1X8Xq-T8
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Enhanced POET: Open-Ended RL through Unbounded Invention of Learning Challenges and their Solutions
[ "Science & Technology" ]
[ "deep learning", "machine learning", "unbounded", "open-ended", "evolution", "evolutionary", "uber", "uber ai", "distributed", "reinforcement learning", "rl", "generative" ]
The enhanced POET makes some substantial and well-crafted improvements over the original POET algorithm and excels at open-ended learning like no system before. https://arxiv.org/abs/2003.08536 https://youtu.be/RX0sKDRq400 Abstract: Creating open-ended algorithms, which generate their own never-ending stream of novel and appropriately challenging learning opportunities, could help to automate and accelerate progress in machine learning. A recent step in this direction is the Paired Open-Ended Trailblazer (POET), an algorithm that generates and solves its own challenges, and allows solutions to goal-switch between challenges to avoid local optima. However, the original POET was unable to demonstrate its full creative potential because of limitations of the algorithm itself and because of external issues including a limited problem space and lack of a universal progress measure. Importantly, both limitations pose impediments not only for POET, but for the pursuit of open-endedness in general. Here we introduce and empirically validate two new innovations to the original algorithm, as well as two external innovations designed to help elucidate its full potential. Together, these four advances enable the most open-ended algorithmic demonstration to date. The algorithmic innovations are (1) a domain-general measure of how meaningfully novel new challenges are, enabling the system to potentially create and solve interesting challenges endlessly, and (2) an efficient heuristic for determining when agents should goal-switch from one problem to another (helping open-ended search better scale). Outside the algorithm itself, to enable a more definitive demonstration of open-endedness, we introduce (3) a novel, more flexible way to encode environmental challenges, and (4) a generic measure of the extent to which a system continues to exhibit open-ended innovation. Enhanced POET produces a diverse range of sophisticated behaviors that solve a wide range of environmental challenges, many of which cannot be solved through other means. Authors: Rui Wang, Joel Lehman, Aditya Rawal, Jiale Zhi, Yulun Li, Jeff Clune, Kenneth O. Stanley Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Before we jump into today's paper, I just want to give a shout out to Machine Learning Street Talk, where every week we talk about current or big trends or topics in machine learning. The first discussion that we launched is actually on today's paper, the Enhanced POET. So if you like the following video, you might want to jump over to Machine Learning Street Talk and check out our discussion about it. It's very interesting. Alright, have fun. Hi there. What you're seeing here are many different environments from a single run of a system that's called the Enhanced POET. Last time we've taken a look at a system called POET, and the Enhanced POET is kind of an improvement over the original POET, fixing some of its shortcomings. And you see here that the agent is able to solve this very, very diverse set of environments. And the notable thing is, this is from a single run of this algorithm. So one run will produce all these different environments and will produce agents that are able to solve all the different environments at the same time, in parallel. So it's a population-based method. If you haven't seen the video I did on POET, I suggest you go and see that now. This is simply an enhancement to it, and I expect people to know kind of what I'm talking about. Alright, it's going to be a short video, but I think it is a good addendum to POET. So it's Enhanced POET: Open-Ended Reinforcement Learning through Unbounded Invention of Learning Challenges and their Solutions, by Rui Wang, Joel Lehman, Aditya Rawal, Jiale Zhi, Yulun Li, Jeff Clune, and Kenneth O. Stanley. So we'll jump right in. They make a number of improvements to the original POET, and I simply want to discuss the most important ones. So you know, they have a nice graphic down here of what happens in POET. POET builds up this tree of environments, and for each environment it has an agent that it trains to solve that environment, all at the same time. So it will kind of start out here. It will generate offspring, it will continuously generate offspring, and then it will also continuously train agents in each environment that it produced, in order to solve that environment. And it keeps doing that while producing more and more offspring. And then once in a while it does what is called a transfer. So that means that, for example, you see here the offspring produced here from this environment. You kind of see that the lineage here kind of focuses on squiggly environments, right? You see that there's a bit of a squiggle here and a bit of a squiggle here. And then the offspring all of a sudden is a bit more smooth, but has this little step here. And then this offspring of this environment has this large step here. Now the agents that come from here have kind of been optimized to solve the squiggliness problem. But over here, this lineage has specialized more and more in kind of these large drops or steep hills. So the agent that was trained over here was found to be very effective in this environment and therefore can be transferred. So this kind of population branching out into the different trees, and then transferring solutions between the parts of the trees, that's what makes POET a very, very powerful mechanism to solve these kinds of tasks. All right, so how does this improve? Now the first thing that POET does is it generates these environments, and it always wants to generate new environments. So it always generates offspring to each environment.
So let's say we are here, it will generate offspring to each environment here, each that we have. Let's see, we have only seen so far. And then it only picks the most novel ones, the ones that are most novel, which is this, probably this. Then there are other criteria, namely that it can be solved by some agents, but it cannot be solved by others. It's not too difficult, but also not too hard. But one of the aspects is it must be novel, right? So we're not seeing any here, which means that those weren't novel enough. How does it measure novel? In the original implementation of Poet, you had this environment generator, which was like a level generator, which made these gaps here and the stumps here. And you could specify, I believe, five numbers. So there was a five-point scale in which you could specify how high the stumps were. You get this kind of pentagon here, how high the stumps were and how deep the gaps were and how rough the terrain was. And the level generator would generate this level. And so basically your distance metric between environments was a vector of size five, right? This is environment one. And you had environment two, which if it's more, it has higher stumps, right? Than this particular number here, maybe would be higher than this number here. So it was kind of limited to taking the Euclidean distance between two environment encodings in order to measure the distance between environments. This is very, very domain specific. And the authors here argue what we should rather do is have a general environment agnostic distance metric, right? So here is what they propose. They propose the following. Why don't we, if we have a new environment, right? Let's say we have a new environment. We measure all of the agents, the current agents and the ones we've already seen, right? We measure all the agents in our database on this new environment. That's this. And they come up with scores, right? Each of them gets a score. And then we, you know, clip and bound the score. So the max here is 300 and the minimum is 50. But in any case, we then rank them, right? So we evaluate them and then we rank them from best to worst. And then we normalize, which simply means that the best one gets a score of 0.5 and the worst one gets a score of negative 0.5. And now this vector here, this is now used to compare environments. So if we have another environment, right? Right here, we have E2 and that gets a different ordering, right? So maybe agent one is now the best agent two is really bad and so on, right? That gets a different ordering. Then the resulting vector here will be very, very different from from this vector right here. And this is very agnostic. So no matter which environment it is, if the ordering of agents in it, the score they get, the order of it is the same, the environments aren't really different from each other, the authors argue. But if the scores are very differently ranked, right? So imagine the environment is harder but essentially the same, then the scores will be lower, but still the agents would be ranked the same. So you can argue, well, that's just kind of the same environment, except a step like this now has a super steep step, right? It's not very different. But if instead of that, you get an environment that is like this, like you say, wow, that's qualitatively different. 
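A hedged sketch of this ranking-based comparison, using the clipping bounds mentioned above; the function names and the Euclidean distance at the end are my own framing (the argument continues below):

    import numpy as np

    def environment_signature(env, all_agents, evaluate, lo=50.0, hi=300.0):
        # evaluate(agent, env) -> raw score; clip to the stated bounds, then rank the agents.
        scores = np.clip([evaluate(agent, env) for agent in all_agents], lo, hi)
        ranks = scores.argsort().argsort()           # 0 = worst agent, n-1 = best agent
        n = len(all_agents)
        return ranks / (n - 1) - 0.5                 # normalized: best +0.5, worst -0.5

    def environment_distance(env_a, env_b, all_agents, evaluate):
        # environments count as different when they rank the same agents differently
        sig_a = environment_signature(env_a, all_agents, evaluate)
        sig_b = environment_signature(env_b, all_agents, evaluate)
        return np.linalg.norm(sig_a - sig_b)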
Coming back to the examples: you would expect from this one to this one that the agents would be ranked the same, the agents' performance would roughly stay the same, but from the middle one to this one you would expect that an entirely different set of agents might perform well. So that's how novelty is measured, and I think it's a pretty cool way. All right, so the first enhancement they make is that they now measure novelty in this domain-agnostic way. Pretty cool so far. And what this allows them to do, this allows them to not rely on this level generator with the five parameters in order to generate these levels. The levels can now be produced however they want, with different generators, and that's exactly what they do. They now employ neural networks. It's called a CPPN, a compositional pattern-producing network, that generates these things. You might have seen in the examples that the Enhanced POET doesn't have these gaps and stumps anymore. It simply has these landscapes that are super diverse, but they're still just landscapes. And what it does is it evolves these neural networks at the same time as it evolves the population. So the architecture of these networks isn't fixed. It's actually evolving along with the agents to make the challenges harder and harder. So you see there are like cosines and sines in here, and you can add them and subtract them and so on. And that will give you a mapping from x, which is the x coordinate here, to y, which is the y coordinate. And that will give you kind of a continuous landscape, depending on the architecture here and on the internal parameters, of course. I guess there would also be a node somewhere here, like times a lambda factor, and then the lambda would also be a thing that is evolved. So pretty cool. Of course the internals of this now aren't just described by a fixed vector of five numbers anymore, but you don't need that anymore, because we have a method to compare environments even if they come from completely different architectures of generators. So it's pretty cool that the agnostic comparison of environments allows you to now have a much more general level generator, and of course to produce much more diverse environments. And that's exactly what they see. Of course you see here the environments get super crazy. They also propose kind of a novel metric to measure novelty, sorry, to measure progress. So the question is, how do we measure progress in these algorithms, in these open-ended algorithms? And what they propose is this ANNECS score, the accumulated number of novel environments created and solved. The question is whether a system continues to generate interesting new things, and the way they measure it is by the accumulated number of novel environments created and solved. So accumulated means that over the entire run they count up how many environments they've seen that are novel, and we've already had the definition of novel. Created means, in this case, that the environment must pass the minimal criterion, it's neither too hard nor too easy; we've already seen this in how the offspring of environments is generated. And it must eventually be solved. So the score counts how many new environments are created and then, at a later point, solved.
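Going back to the CPPN-style generator for a moment, here is a toy sketch of the idea of composing sine and cosine nodes into a terrain function. This is a deliberately simplified stand-in (random fixed-depth composition), not the evolving graph structure the paper actually uses:

    import numpy as np

    def make_cppn(depth=3, seed=0):
        # A toy CPPN-like generator: a random composition of sin / cos / identity nodes.
        rng = np.random.default_rng(seed)
        ops = [np.sin, np.cos, lambda z: z]
        layers = [(ops[rng.integers(len(ops))], rng.normal(), rng.normal())
                  for _ in range(depth)]
        def terrain_height(x):
            y = x
            for op, w, b in layers:                  # each node computes op(w * y + b)
                y = op(w * y + b)
            return y
        return terrain_height

    cppn = make_cppn()
    xs = np.linspace(0.0, 10.0, 200)                 # x coordinate along the level
    heights = np.array([cppn(x) for x in xs])        # ground height y at each x
    # Mutating the network (adding nodes, changing weights) yields ever new landscapes.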
You can see the difference to the original poet in this graph. So the original poet eventually runs out of new environments because its generator is just not powerful enough. It can only modify these five variables and eventually the environments aren't substantially novel from the old environments. Whereas the enhanced poet you can see even after this run, and I'm sure they have large infrastructure to do these experiments, it just continues to innovate new more elaborate environments continuously. So this I think are the main things. They also do some improvement to the transfers and so on. I don't want to go into that. I just wanted to show these improvements so that you have the complete picture of how such an algorithm runs. My criticism to this is that if you just look at their thing is that with the leaving out of the gaps and the stumps and so on, in a weird way, of course the levels are diverse, but they have become even more similar it seems. Like you're really relying on your ability to kind of continuously create these levels. Kind of like a GAN for levels, right? And you're relying on your ability to smoothly increase the difficulty of the levels, right? To actually have a diversity in your level generator, but also a kind of a smoothness with regard to the difficulty in order to build this whole curriculum. And I think even though the environments look more diverse, it might be going into a direction where you kind of engineer yourself into a corner where you are now even more and more relying on these evolving and parameterizable generators. Nonetheless, the ideas I think are pretty cool and that's all I have to say about it. Bye bye!
[ { "end": 5.44, "start": 0, "text": " There, before we jump into today's paper, I just want to give a shout out to Machine Learning Street" }, { "end": 11.68, "start": 5.44, "text": " Talk, where every week we talk about current or big trends or topics in machine learning." }, { "end": 19.36, "start": 12.4, "text": " The first discussion that we launched is actually on today's paper, The Enhanced Poet. So if you like" }, { "end": 24.8, "start": 19.36, "text": " the following video, you might want to jump over to Machine Learning Street Talk and check out our" }, { "end": 32.8, "start": 24.8, "text": " discussion about it. It's very interesting. Alright, have fun. Hi there. What you're seeing here are" }, { "end": 38.4, "start": 32.8, "text": " many different environments from a single run of a system that's called The Enhanced Poet." }, { "end": 46.56, "start": 39.760000000000005, "text": " Last time we've taken a look at a system called Poet, and The Enhanced Poet is kind of an improvement" }, { "end": 53.84, "start": 47.28, "text": " over the original Poet, fixing some of its shortcomings. And you see here that the" }, { "end": 66, "start": 53.84, "text": " agent is able to solve this very, very diverse set of environments. And the notable thing is," }, { "end": 71.84, "start": 66, "text": " this is from a single run of this algorithm. So one run will produce all these different" }, { "end": 77.04, "start": 71.84, "text": " environments and will produce agents that are able to solve all the different environments" }, { "end": 82.16, "start": 77.92, "text": " at the same time in parallel. So it's a population-based method. If you haven't" }, { "end": 89.36, "start": 82.16, "text": " seen the video I did on Poet, I suggest you go and see that now. This is simply an enhancement to it," }, { "end": 95.03999999999999, "start": 89.36, "text": " and I expect people to know kind of what I'm talking about. Alright, it's going to be a short" }, { "end": 101.28, "start": 95.03999999999999, "text": " video, but I think it is a good addendum to Poet. So it's The Enhanced Poet, Open-ended" }, { "end": 107.67999999999999, "start": 101.28, "text": " Reinforcement Learning Through Unbounded Invention of Learning Challenges and Their Solutions" }, { "end": 115.76, "start": 107.68, "text": " by Rui Wangchou, Leymann Adhitar Wahl, Jial Qi, Julun Li, Jeff Klun, and Kenneth O. Stanley." }, { "end": 124.64000000000001, "start": 117.76, "text": " So we'll jump right in. They make a number of improvements to the original Poet, and I simply" }, { "end": 132.4, "start": 124.64000000000001, "text": " want to discuss the most important ones. So you know, they have a nice graphic down here of what" }, { "end": 139.92000000000002, "start": 132.4, "text": " happens in Poet. Poet builds up this tree of environments, and to each environment it has an" }, { "end": 146.16, "start": 139.92000000000002, "text": " agent that it trains to solve that environment at the same time. So at the same time it will kind of" }, { "end": 152.96, "start": 146.16, "text": " start out here. It will generate offspring. It will continuously generate offspring, and then it will" }, { "end": 160.24, "start": 152.96, "text": " also continuously train agents in each environment that it produced in order to solve that environment." }, { "end": 165.20000000000002, "start": 160.24, "text": " And it keeps doing that while producing more and more offspring. 
And then once in a while" }, { "end": 174.4, "start": 167.28, "text": " it does what is called a transfer. So that means that, for example, you see here the offspring" }, { "end": 183.36, "start": 174.4, "text": " produced here from this environment. You kind of see that the lineage here kind of focuses on" }, { "end": 188.4, "start": 183.36, "text": " squiggly environments, right? You see that there's a bit of a squiggle here and a bit of a squiggle" }, { "end": 193.68, "start": 188.4, "text": " here. And then the offspring all of a sudden is a bit more smooth, but has this little step here." }, { "end": 199.92000000000002, "start": 194.24, "text": " And then this offspring of this environment has this large step here. Now the agents that come" }, { "end": 208.8, "start": 199.92000000000002, "text": " from here have kind of been optimized to solve the squiggliness problem. But here, over here," }, { "end": 215.28, "start": 208.8, "text": " this lineage has specified or specialized more and more in kind of like these kind of large" }, { "end": 223.92000000000002, "start": 215.28, "text": " drops or steep hills. So the agent that was trained over here was found to be very effective" }, { "end": 232, "start": 223.92000000000002, "text": " in this environment and therefore can be transferred. So this kind of population branching out into the" }, { "end": 239.44, "start": 232, "text": " different trees and then transferring solutions between the parts of the trees, that's what makes" }, { "end": 250.96, "start": 239.44, "text": " Poet very very powerful mechanism to solve these kind of tasks. All right, so how does this improve?" }, { "end": 258.48, "start": 250.96, "text": " Now the first thing that Poet does is it generates these environments and it always wants to generate" }, { "end": 265.44, "start": 258.48, "text": " new environments. So it always generates offspring to each environment. So let's say we are here," }, { "end": 271.84, "start": 265.44, "text": " it will generate offspring to each environment here, each that we have. Let's see, we have only seen" }, { "end": 279.36, "start": 271.84, "text": " so far. And then it only picks the most novel ones, the ones that are most novel, which is this," }, { "end": 285.12, "start": 279.36, "text": " probably this. Then there are other criteria, namely that it can be solved by some agents," }, { "end": 290.96, "start": 285.12, "text": " but it cannot be solved by others. It's not too difficult, but also not too hard. But one of the" }, { "end": 297.35999999999996, "start": 290.96, "text": " aspects is it must be novel, right? So we're not seeing any here, which means that those weren't" }, { "end": 304.4, "start": 297.35999999999996, "text": " novel enough. How does it measure novel? In the original implementation of Poet, you had this" }, { "end": 311.76, "start": 304.4, "text": " environment generator, which was like a level generator, which made these gaps here and the" }, { "end": 319.76, "start": 311.76, "text": " stumps here. And you could specify, I believe, five numbers. So there was a five-point scale in" }, { "end": 326.08, "start": 319.76, "text": " which you could specify how high the stumps were. You get this kind of pentagon here, how high the" }, { "end": 332.64, "start": 326.08, "text": " stumps were and how deep the gaps were and how rough the terrain was. And the level generator" }, { "end": 340.4, "start": 332.64, "text": " would generate this level. 
And so basically your distance metric between environments was" }, { "end": 348.08, "start": 341.12, "text": " a vector of size five, right? This is environment one. And you had environment two, which if it's" }, { "end": 353.84, "start": 348.08, "text": " more, it has higher stumps, right? Than this particular number here, maybe would be higher" }, { "end": 361.68, "start": 353.84, "text": " than this number here. So it was kind of limited to taking the Euclidean distance between two" }, { "end": 370.47999999999996, "start": 361.68, "text": " environment encodings in order to measure the distance between environments. This is very," }, { "end": 379.44, "start": 370.48, "text": " very domain specific. And the authors here argue what we should rather do is have a general" }, { "end": 387.68, "start": 379.44, "text": " environment agnostic distance metric, right? So here is what they propose. They propose the" }, { "end": 394.8, "start": 387.68, "text": " following. Why don't we, if we have a new environment, right? Let's say we have a new" }, { "end": 401.6, "start": 394.8, "text": " environment. We measure all of the agents, the current agents and the ones we've already seen," }, { "end": 407.92, "start": 401.6, "text": " right? We measure all the agents in our database on this new environment. That's this. And they" }, { "end": 414.08000000000004, "start": 407.92, "text": " come up with scores, right? Each of them gets a score. And then we, you know, clip and bound the" }, { "end": 422, "start": 414.08000000000004, "text": " score. So the max here is 300 and the minimum is 50. But in any case, we then rank them, right? So" }, { "end": 430.16, "start": 422, "text": " we evaluate them and then we rank them from best to worst. And then we normalize, which simply" }, { "end": 441.04, "start": 430.16, "text": " means that the best one gets a score of 0.5 and the worst one gets a score of negative 0.5. And" }, { "end": 447.6, "start": 441.04, "text": " now this vector here, this is now used to compare environments. So if we have another environment," }, { "end": 458.32000000000005, "start": 447.6, "text": " right? Right here, we have E2 and that gets a different ordering, right? So maybe agent one is" }, { "end": 464.32000000000005, "start": 458.32000000000005, "text": " now the best agent two is really bad and so on, right? That gets a different ordering. Then the" }, { "end": 473.92, "start": 464.32000000000005, "text": " resulting vector here will be very, very different from from this vector right here. And this is very" }, { "end": 482.08000000000004, "start": 473.92, "text": " agnostic. So no matter which environment it is, if the ordering of agents in it, the score they get," }, { "end": 487.92, "start": 482.08000000000004, "text": " the order of it is the same, the environments aren't really different from each other, the" }, { "end": 495.20000000000005, "start": 487.92, "text": " authors argue. But if the scores are very differently ranked, right? So imagine the" }, { "end": 502, "start": 495.20000000000005, "text": " environment is harder but essentially the same, then the scores will be lower, but still the" }, { "end": 507.52, "start": 502, "text": " agents would be ranked the same. So you can argue, well, that's just kind of the same environment," }, { "end": 515.76, "start": 507.52, "text": " except a step like this now has a super steep step, right? It's not very different. 
But if" }, { "end": 524.24, "start": 516.72, "text": " instead of that, you get an environment that is like this, like you say, wow, that's qualitatively" }, { "end": 531.28, "start": 524.24, "text": " different. And you would expect from this one to this one that the agents would be ranked" }, { "end": 536.9599999999999, "start": 531.28, "text": " the same, the agents performance would roughly stay the same, but you would expect from the middle" }, { "end": 543.36, "start": 536.9599999999999, "text": " one to this one that an entirely different set of agents might perform well right in this one." }, { "end": 551.76, "start": 543.36, "text": " So that's how novelty is measured and I think it's a pretty cool way. I don't have coronavirus," }, { "end": 564.3199999999999, "start": 551.76, "text": " by the way, maybe, who knows? No, I just have a dry throat. All right, so this is the first" }, { "end": 570.48, "start": 564.3199999999999, "text": " enhancement they make is that they now measure novelty in this domain agnostic way. Pretty cool" }, { "end": 577.76, "start": 570.48, "text": " so far. And what this allows them to do, this allows them to actually not rely on this level" }, { "end": 586.24, "start": 577.76, "text": " generator with the five parameters in order to generate these levels. But these levels can now" }, { "end": 591.52, "start": 586.24, "text": " be produced however they want with different generators and that's exactly what they do." }, { "end": 602.4, "start": 591.52, "text": " They now employ neural networks. Well, it's kind of a prototypical, it's called a CP&N that generates" }, { "end": 608.56, "start": 602.4, "text": " these things. You might have seen in the examples the enhanced poet doesn't have these gaps and" }, { "end": 614.8, "start": 608.56, "text": " stumps anymore. It simply has these landscapes that are super diverse, but they're still just" }, { "end": 623.04, "start": 614.8, "text": " their landscapes. And what it does is it evolves neural networks at the same time as it evolves" }, { "end": 629.36, "start": 623.04, "text": " the population. It evolves these, so the architecture of these networks isn't fixed. It's actually" }, { "end": 636.16, "start": 629.36, "text": " evolving along with the agent to make the challenges harder and harder. So you see there" }, { "end": 641.6800000000001, "start": 636.16, "text": " are like cosines and sines in here and you can add them and subtract and so on. And that will give" }, { "end": 649.92, "start": 641.6800000000001, "text": " you a mapping from x, which is the x coordinate here, to y, which is the y coordinate. And that" }, { "end": 657.36, "start": 649.92, "text": " will give you kind of a continuous landscape depending on the architecture here and on the" }, { "end": 663.92, "start": 657.36, "text": " internal parameters of course. I guess there would also be a node, some here like times a lambda" }, { "end": 671.92, "start": 663.92, "text": " factor and then the lambda would also be a thing that is evolved. So pretty cool. Of course the" }, { "end": 677.2, "start": 671.92, "text": " internals of this now aren't just described by a fixed vector anymore, but you don't need that" }, { "end": 682.32, "start": 677.2, "text": " anymore because we have a method to compare environments even if they come from completely" }, { "end": 691.12, "start": 682.32, "text": " different architectures of generators. 
So it's pretty cool that the agnostic" }, { "end": 698.5600000000001, "start": 691.12, "text": " comparison of environments allows you to now have a much more general level generator and of course" }, { "end": 704.8000000000001, "start": 698.5600000000001, "text": " now produce much more diverse environments. And that's exactly what they see. Of course you see" }, { "end": 715.52, "start": 704.8, "text": " here the environments get super crazy. So they also propose kind of a novel metric to measure" }, { "end": 722.56, "start": 715.52, "text": " novelty, sorry to measure progress. So the question is how do we measure progress in these" }, { "end": 729.76, "start": 722.56, "text": " algorithms, in these open-ended algorithms? And what they propose is this ANNX score, which is," }, { "end": 739.52, "start": 729.76, "text": " I have to go and look it up, the ANNX score I think is the number of new environments that are solved." }, { "end": 754.56, "start": 746.08, "text": " Yes, so exactly. The question is whether a system continues to generate interesting new things." }, { "end": 762.88, "start": 754.56, "text": " And the way they measure it is by the accumulated number of novel environments created and solved." }, { "end": 771.4399999999999, "start": 764, "text": " So the question here is accumulated, that means over the entire run they count up how many" }, { "end": 778.88, "start": 771.4399999999999, "text": " environments that they've seen that are novel, and we've already had the definition of novel." }, { "end": 787.12, "start": 778.88, "text": " And in this case it basically means that it must pass the minimal criterion. It's neither too hard" }, { "end": 792.48, "start": 787.12, "text": " nor too easy. We've already seen this in how the offspring of environments is generated." }, { "end": 801.84, "start": 792.48, "text": " There's a minimal criterion and it must be eventually solved. So that means the novel" }, { "end": 808.8, "start": 801.84, "text": " environments created and solved. So how many new environments are created and solved?" }, { "end": 815.92, "start": 808.8, "text": " And then at a later point solved. You can see the difference to the original poet in this graph." }, { "end": 825.52, "start": 817.04, "text": " So the original poet eventually runs out of new environments because its generator is just not" }, { "end": 831.8399999999999, "start": 825.52, "text": " powerful enough. It can only modify these five variables and eventually the environments aren't" }, { "end": 837.92, "start": 831.8399999999999, "text": " substantially novel from the old environments. Whereas the enhanced poet you can see even after" }, { "end": 842.9599999999999, "start": 837.92, "text": " this run, and I'm sure they have large infrastructure to do these experiments," }, { "end": 850.9599999999999, "start": 842.9599999999999, "text": " it just continues to innovate new more elaborate environments continuously." }, { "end": 858.24, "start": 852, "text": " So this I think are the main things. They also do some improvement to the transfers and so on." }, { "end": 862.0799999999999, "start": 858.24, "text": " I don't want to go into that. I just wanted to show these improvements so that you have" }, { "end": 870.32, "start": 862.08, "text": " the complete picture of how such an algorithm runs. 
My criticism to this is that if you just" }, { "end": 877.76, "start": 870.32, "text": " look at their thing is that with the leaving out of the gaps and the stumps and so on," }, { "end": 884.72, "start": 878.8000000000001, "text": " in a weird way, of course the levels are diverse, but they have become even more similar it seems." }, { "end": 891.5200000000001, "start": 884.72, "text": " Like you're really relying on your ability to kind of continuously create these levels. Kind of like" }, { "end": 902.64, "start": 891.52, "text": " a GAN for levels, right? And you're relying on your ability to smoothly increase the difficulty" }, { "end": 909.36, "start": 902.64, "text": " of the levels, right? To actually have a diversity in your level generator, but also a kind of a" }, { "end": 916.8, "start": 909.36, "text": " smoothness with regard to the difficulty in order to build this whole curriculum. And I think" }, { "end": 922.16, "start": 916.8, "text": " even though the environments look more diverse, it might be going into a direction where you kind of" }, { "end": 930, "start": 922.16, "text": " engineer yourself into a corner where you are now even more and more relying on these evolving" }, { "end": 936.4799999999999, "start": 930, "text": " and parameterizable generators. Nonetheless, the ideas I think are pretty cool and that's" }, { "end": 947.84, "start": 936.48, "text": " all I have to say about it. Bye bye!" } ]
klPuEHCKG9M
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Evolving Normalization-Activation Layers
[ "Science & Technology" ]
[ "deep learning", "machine learning", "cnn", "resnet", "residual", "efficientnet", "mobilenet", "cifar10", "imagenet", "batch normalization", "batchnorm", "relu", "sigmoid", "evolution", "architecture", "transfer", "image classification", "supervised learning", "population", "activation", "normalization", "google", "deepmind" ]
Normalization and activation layers have seen a long history of hand-crafted variants with various results. This paper proposes an evolutionary search to determine the ultimate, final and best combined normalization-activation layer... in a very specific setting. https://arxiv.org/abs/2004.02967 Abstract: Normalization layers and activation functions are critical components in deep neural networks that frequently co-locate with each other. Instead of designing them separately, we unify them into a single computation graph, and evolve its structure starting from low-level primitives. Our layer search algorithm leads to the discovery of EvoNorms, a set of new normalization-activation layers that go beyond existing design patterns. Several of these layers enjoy the property of being independent from the batch statistics. Our experiments show that EvoNorms not only excel on a variety of image classification models including ResNets, MobileNets and EfficientNets, but also transfer well to Mask R-CNN for instance segmentation and BigGAN for image synthesis, outperforming BatchNorm and GroupNorm based layers by a significant margin in many cases. Authors: Hanxiao Liu, Andrew Brock, Karen Simonyan, Quoc V. Le Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Evolving Normalization-Activation Layers by Hanxiao Liu, Andrew Brock, Karen Simonyan and Quoc V. Le. These are people from Google Brain and Google DeepMind. The topic of this paper is, as you can see, normalization-activation layers, and we want to evolve them. I think the title says a lot, but let's go down here and see what this is about. We'll look at image neural networks, and current architectures are kind of built around the same principles. What they'll have is, ever since ResNet, these neural networks are composed of these kinds of blocks that come one after another. There will be a block up here, then the signal will propagate, and there will be another block down here. These blocks usually contain what's called a skip connection. This skip connection is the fundamental ingredient of ResNets; its introduction seems to be what made ResNets so effective. You see all of these have the skip connection here. These are variants on ResNets, and then we see that these always alternate between convolutional layers and other things that here are called EvoNorm. In a classic ResNet you would have something like a convolutional layer, then you would have a batch normalization, then you would have a non-linearity, for example a ReLU, and then you would go on to the next convolutional layer. You see that the paper mainly cares about these two layers here, the batch norm and the ReLU, and it combines them into what is called an EvoNorm. The EvoNorm layers here are supposed to replace the normalization and the activation layers, combine them and make them better. How does it do that? Through evolutionary search. These three models here are the ResNet, MobileNet and EfficientNet architectures. They're all image classifier architectures. Let's see how it does that. What it does is it evolves these layers from simple primitives. If you've seen the batch normalization paper, then you know that batch normalization is just kind of a formula you can write down. People have developed other normalization methods besides batch norm; for example, this is GroupNorm with a ReLU activation function. You can write these two layers down as this mathematical expression. It features things like: this is the input signal, this I think is the mean across some groups, this is a bias term that you can train, this is the standard deviation across the same groups, and so on. This here is the ReLU term. So you can write this down as a combination of these primitives, and you can write it as a graph. This graph here is actually an activation layer that this paper has found. It's called EvoNorm-S0, and the mathematical equation is the thing down here. It's not that different from previous activations, as you can see. It also has the input signal, it has this variance or standard deviation across groups, it has a non-linearity here, and the graph here is simply a graph of mathematical operations made out of primitives; I'll put a small code sketch of this particular layer a bit further down. This paper takes all of the primitives that they can think of and puts them in this table, and they say: okay, our search space for these layers, since we want to evolve these layers, our search space is a combination of any of these primitives. You can see here you have something like addition, multiplication, negation so you can subtract things, you can take the log of things, you can take the square root, and so on.
Here you have a max, which with 0 as one of its arguments is the ReLU activation function, and you have the sigmoid, which is another non-linearity. Then you can also do something like: I want to compute the batch mean, or I want to compute a group standard deviation. Pretty much anything that current hand-crafted activation functions and normalizations use is available as a primitive in this search. So how does this method search? This is the process of how the method searches. It does this in an evolutionary way, and evolutionary here means that you don't develop one layer like you would if you were doing something like gradient descent. You develop a whole population of layers, so these can be maybe a couple of hundred or a couple of thousand different layer architectures that you develop at the same time. What you do is, each time, you put them into a tournament, which basically means you sample a couple of them; I don't think they evaluate all of them at the same time, they sample a couple of them, and then they train on what they call a proxy task. Their proxy task is CIFAR-10. CIFAR-10 is a fairly small image classification task, and they train on CIFAR-10 because it's pretty fast: you can train something on CIFAR-10 in a couple of minutes or an hour or so and get a pretty good feeling for how good the final accuracy will be. You can get that pretty fast. So this is a fast classification task, which matters because they need to do this a lot: the population is large and they need to repeat this over time. In any case, they take a sample, they train it on CIFAR-10, and then the winner, the winning layer, is picked from this sample, and only the winning layer is allowed to mutate. So the winning layer is then mutated, and mutation means you change it a bit. Now, you don't change it in an informed way, you just change it at random, and then you put the mutated layers back into the population. Of course, the hope is that by repeating this process over and over, simply picking the winning layers again and again creates a selective pressure, such that through the random mutations combined with the tournament-style evaluation and picking of the winner, over time the best performing models in your population will get better and better and better. So the assumption is that this isn't a pure combinatorial optimization problem or a pure random function; the assumption is that if I take something that works well, there are ways I can perturb it that make it work even better. So even if most of the perturbations are worse, there are some good ones, and the tournament style will always find the ones that perform better, and then I can modify these again at random, and among those I can again find the ones that perform even better.
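To make that EvoNorm-S0 expression a bit more concrete, here is a minimal sketch of what such a layer could look like. Note that this is my reconstruction from the formula discussed here (x times sigmoid(v times x), divided by a grouped standard deviation, then a learned scale and shift); the group size, epsilon and initialization are assumptions and not taken from the authors' released code.

```python
import torch
import torch.nn as nn

class EvoNormS0(nn.Module):
    """Sketch of an EvoNorm-S0-style layer: x * sigmoid(v * x) / group_std(x) * gamma + beta.
    Assumes the channel count is divisible by the number of groups."""

    def __init__(self, channels, groups=8, eps=1e-5):
        super().__init__()
        self.groups, self.eps = groups, eps
        # per-channel trainable parameters, as in common normalization layers
        self.v = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def group_std(self, x):
        n, c, h, w = x.shape
        grouped = x.view(n, self.groups, c // self.groups, h, w)
        std = (grouped.var(dim=(2, 3, 4), keepdim=True) + self.eps).sqrt()
        return std.expand_as(grouped).reshape(n, c, h, w)

    def forward(self, x):
        # numerator: a swish-like non-linearity; denominator: a per-group statistic
        return x * torch.sigmoid(self.v * x) / self.group_std(x) * self.gamma + self.beta
```

A layer like this would simply replace the BatchNorm plus ReLU pair after each convolution, and note that it does not depend on batch statistics, which is one of the properties the paper highlights.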
So that is the method, and now there are two questions: how do you mutate a layer, and how exactly do you determine the winner? Mutation, I believe, is done in sort of different ways here, but if you look at this expression here: the input is this signal here, and you always start out, I believe, with the input, with a layer that just emits the number one, with a layer that just emits the number zero, and then you have two trainable vectors that you can include. You just start out with these basic nodes, and then every time you mutate, you add one of these blocks. I believe there's also some randomness for changing individual parts or for actually starting from scratch again, which is pretty important, otherwise you just grow bigger and bigger monsters. But the way you mutate is the following: you add a new block, let's say I add one here, and you decide on one of the primitives from the table. Here I'm going to simply decide on a minus operation, so a subtraction. Then, once you've done that, you choose two parents (children, parents, however you see it); you choose two parents because the minus operation needs two arguments. You choose two of the existing nodes at random as parents, so I'm going to choose this thing here to be a parent and this thing here to be a parent, at random, and then this new node becomes the new output of the layer. So you see that this was the previous output here, this multiplication node between this and this; now this is no longer the output, now this is obsolete, it is no longer part of the final mathematical expression. You see, all the gray nodes here are actually sort of obsolete nodes, but they are still kept, because in subsequent steps you can choose them as parents and then they become part of the expression again. You can see here this tanh node: it was just a node that was sort of a dead end in the expression before, but now with the new mutation it is again included in the expression, because I've randomly selected it as a parent. But then this node here and this node here are now obsolete, because they are no longer part of the expression. The expression in this case would go from here to here, including this node, and it would go from here over here, so these nodes are now part of the expression. So this is how you mutate, and as I said, you can also mutate such that you start from scratch. The second part is how exactly you determine the winner, and what the tournament is. The tournament is exactly what we've seen before when we looked at the different layers: we said we train on CIFAR-10, and what we do is we train these three architectures on CIFAR-10, so the ResNet, the MobileNet and the EfficientNet, with the EvoNorm layer instantiated by that particular sample from the population. Then we look at their accuracies, and we determine what is called the Pareto frontier of the population. I think this is further up... oh, right here, okay. So the dots here, the red and the grey dots, would be our samples, so all of this would be our samples and their performance. Here it's actually shown on two models, but in practice we have three; two is just easier to graph. So we plot them here and we determine the Pareto frontier. Now here A, B and C are part of the Pareto frontier, because A outperforms
everything else on model 1, C outperforms everything else on model 2, and B outperforms C on model 1 but also outperforms A on model 2. So that's what's called the Pareto frontier, and we pick one of those as the winner; they are all kind of one-third winners here. So this is how you do the tournament: you pick the winner like this, and then you allow the winner to mutate. The last part, which is not drawn in here, is actually somewhere here-ish and is called the rejection step. The rejection step is important because, as they say, we have these mutated layers, but some of the mutations are probably going to be just terrible: not trainable, destroying everything, such that the layer is useless. They don't want to put those back into the population, because that might either deteriorate or severely slow down this progress here. So they want to stop them, and only the good ones, only the ones that are at least somewhat okay, get back into the population. They don't always have to improve, but they have to be at least minimally useful. So the rejection step is described down here in the rejection protocols; they have two criteria for rejecting mutated architectures. First they have a quality criterion: they discard layers that achieve less than 20% validation accuracy in 100 training steps on any of the three anchor architectures. The reasoning behind this is that if you train for a hundred steps and you achieve less than 20% validation accuracy, you're not going anywhere, because 10% is already random performance on CIFAR-10. If you are below 20% after a hundred steps, your layer is pretty useless and can be discarded. They say this simple mechanism ensures that the compute resources concentrate on the full training process of a small subset of promising candidates. The hundred training steps are of course not enough to train fully, but you can see after a hundred training steps whether or not the layer even does something, so you reject those. This makes pretty much sense. The second criterion is what they call stability: they say we reject layers that are subject to numerical instability. And how do they find numerical instability?
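Before getting to that stability test, here is a tiny sketch of how the random mutation step described above could be implemented. This is purely illustrative: the node structure, the reduced primitive list and the restart probability are my own assumptions, not the authors' actual implementation.

```python
import random

# a toy subset of the primitive table: name -> (arity, function)
PRIMITIVES = {
    "add": (2, lambda a, b: a + b),
    "sub": (2, lambda a, b: a - b),
    "mul": (2, lambda a, b: a * b),
    "max": (2, max),
    "neg": (1, lambda a: -a),
}

class Node:
    def __init__(self, op, parents):
        self.op = op            # primitive name, or "input"/"const0"/"const1"/"v0"/"v1"
        self.parents = parents  # list of parent Nodes

def initial_graph():
    # the search starts from the input plus a few constants / trainable vectors
    return [Node(op, []) for op in ("input", "const0", "const1", "v0", "v1")]

def mutate(nodes, restart_prob=0.05):
    """Add one random primitive node with randomly chosen parents.

    The new node becomes the output of the layer; older nodes stay around as
    potential parents for later mutations, even if currently unused."""
    if random.random() < restart_prob:
        nodes = initial_graph()          # occasionally restart from scratch
    op = random.choice(list(PRIMITIVES))
    arity = PRIMITIVES[op][0]
    parents = [random.choice(nodes) for _ in range(arity)]
    return nodes + [Node(op, parents)]   # the last node is the new output
```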
So, numerical instability: they define it like this. What they do is they take the parameters of the model: these theta are the convolutional weights, and G is the computation graph, which is the EvoNorm layer in this case, and there is of course a loss defined across them; this is the loss of the neural network on the samples. So these are the convolutional layers and these are the normalization layers. Now what we want to see is how this loss changes when we change the convolutional layers. You have to imagine: here are the convolutional layers, then there are these weird normalization layers, and then again there are convolutional layers. We want to see how the loss changes if we change the weights of the convolution by a little bit; we just change them a little bit and see how the loss changes. This is basically the gradient with respect to the weights, and this entire thing here is how you train the neural network. So you want to see how large this gradient is, and you kind of want to do this in an adversarial way: you want to find the maximum perturbation you can achieve. You say, okay, if I change this a little bit in the worst direction I possibly can, how large does the gradient get? That's how they define numerical instability. It basically means that if this is very high, the network might be doing well right where it is, but changing it just a little bit will make it terrible. So they say: we ascend this value in that direction for 100 steps, and layers with a worst-case gradient norm greater than 10 to the 8th are rejected. Now this seems a bit strange: the quality criterion made sense, but the stability criterion seems reasonable yet oddly specific. The reason: the two tests are complementary with each other. For example, they found a layer like this that is able to achieve reasonable accuracies on CIFAR-10 across all the anchor architectures, so it passes the quality criterion above, but its gradients quickly explode on ImageNet, possibly due to the absence of normalization operations. And then you see, aha, okay, so what probably happened is the following: they did their experiments without this, just with the quality criterion, which I guess makes sense; they trained on CIFAR-10, that's how they do their evolutionary search; then they took their best performing candidates, among them this one, and went to ImageNet and said, let's test these now on ImageNet classification, we found these new layers, let's see, and then they got exploding gradients. And then they went back into their original problem formulation: okay, what can we build into the evolution such that this won't happen? And here you already see kind of the problem with this. What you would like to have is an algorithm that is general, such that it does not depend on the architectures and so on that are used. But you see already here that the authors, while they don't direct the search itself, the search is evolution, the evolution is very much guided by what these rejection protocols are. And you see here the authors tailoring their rejection protocols to the specific data sets and architectures that they use, and to the specific problems they experienced when trying to apply their method. And that, I think, weakens the applicability of this method a bit, because
it seems that this particular form of rejection protocols is very much tailored to this setting: let's do these three architectures on CIFAR-10 and then go to ImageNet. And that tells me that if I wanted to do this in a very different domain, it is not at all clear that I could just plop in whatever they found works and have it outperform the others as in their experiment. It tells me that there is a somewhat large dependence on the specifics here. Yeah, but that being said, these are the rejection criteria: at each step they reject the worst ones, the rest go back into the population, and then that process repeats and repeats and repeats, and at the end you hopefully end up with some very good normalization layers. Now, if you compare these found normalization layers with the classic variant, so the classic thing here is this red line, this is batch norm and ReLU, the classic activation-normalization combo you put in a neural network, you see that these new methods outperform it on a fairly consistent basis. So that's pretty cool, but that is, as we said, on CIFAR-10, which is the exact thing that they searched over, so it's not really a surprise that if I search over a bunch of combinations and always keep the best ones, I would outperform just one fixed one. The interesting thing is what happens if we take what we found and put it into a different architecture on a different data set. Now, here the architecture isn't really different, because it's kind of the same, but they do evaluate it on ImageNet. ImageNet is a different data set than CIFAR-10, much larger, and so they put their architectures, with the EvoNorm layers, onto ImageNet and evaluate them, and you can see that they have fairly competitive results across the board. So I find it fairly cool that the best performing ones on CIFAR-10 would also perform better than the corresponding baselines on ImageNet. But you already see as well that the margin is not super high. The differences here, I would say, show it is improving, but sometimes it's the same and sometimes it's actually worse. To me those kinds of results are not super convincing, especially because this is the paper that suggests these methods, so they're naturally going to present them in the best possible way. So it seems like the massive outperformance on CIFAR translates only marginally to ImageNet, and these are the same architectures, right, the ResNet-50 and MobileNet and EfficientNet; these were already the architectures that they searched over. So my trust that this new normalization layer would help when put into an actually different architecture is lower still. Now, they do actually do some experiments on that as well, but this is just my thought when reading this. And this I find very interesting: this column here is random search. So if you just do a random search, which means you just produce random layers, then it doesn't work at all.
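Since the next part compares random search with and without this rejection step, here is a rough sketch of what the two rejection checks amount to. The quoted thresholds (less than 20% accuracy after 100 steps, worst-case gradient norm above 10^8) come from the discussion above; everything else, including the train_briefly helper and the exact ascent procedure, is an assumption for illustration.

```python
import torch

def passes_quality(layer_builder, anchor_models, train_steps=100, acc_threshold=0.2):
    """Quality check: reject a layer if any anchor architecture stays below ~20%
    CIFAR-10 validation accuracy after 100 training steps.
    train_briefly is a hypothetical stand-in for a real short training loop."""
    for make_model in anchor_models:
        model = make_model(layer_builder)
        acc = train_briefly(model, steps=train_steps)   # hypothetical helper
        if acc < acc_threshold:
            return False
    return True

def passes_stability(model, loss_fn, batch_x, batch_y,
                     ascent_steps=100, lr=1e-2, max_norm=1e8):
    """Stability check: adversarially adjust the weights to maximize the gradient
    norm of the loss, and reject if the worst-case norm exceeds 1e8."""
    params = [p for p in model.parameters() if p.requires_grad]
    for _ in range(ascent_steps):
        loss = loss_fn(model(batch_x), batch_y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
        if grad_norm.item() > max_norm:
            return False
        # step the weights in the direction that increases the gradient norm
        ascent_dirs = torch.autograd.grad(grad_norm, params)
        with torch.no_grad():
            for p, d in zip(params, ascent_dirs):
                p.add_(lr * d)
    return True
```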
So you take the best ones of the random ones you found, and they don't transfer at all. But interestingly, if you do random search plus rejection, so the same rejection that they do, just without the tournament evolution-mutation style, you simply do random search and then rejection, that gives you fairly competitive numbers. And in some cases, you see here, it even outperforms some of the classic methods. So just that will give you fairly decent results, and to me that seems to be even more a sign that what this method is mostly doing is just searching like mad for something that works on these particular architectures. And of course you can find things that work better if you search like mad, but then what do you do with it, what does it mean, can we generalize? Now they do two additional tasks to show that it does generalize to other architectures and tasks. First of all, they do object detection and instance segmentation on COCO. So this is a very different task; this is a Mask R-CNN, and they just put their layer in there, and you can see here that they generally outperform the baseline. I can't speak to how much this outperformance means; it seems like the numbers are fairly close together, but they are consistently better. And again, I don't necessarily trust these kinds of experiments too much, because who knows how much effort you can spend on making your own method look better, but in any case they show that they are better, which is already something. But again, here the R50 indicates that we're again dealing with ResNet-50 and ResNet-101 architectures, which are fairly similar to the ones that the method was searching over. The second thing is, they say we generalize to GAN training. So they take a BigGAN, a BigGAN-deep, and they show that their method will outperform these other methods on the IS and FID metrics, that is the Inception Score and the Fréchet Inception Distance. So it will outperform them, but in kind of a weird way: okay, here it outperforms them consistently, but then on the Inception Score this batch norm plus ReLU still seems to be a lot higher than this EvoNorm-B0, and then this thing here that was performing worse on ImageNet is now performing somewhat better. So it is a cool result, and it's definitely cool that you can pop this in here. I just think that the things that come out here are tuned to very specific architectures and very specific tasks. So I think with the BigGAN-deep, the kind of architectures will always be kind of the same, it will always be kind of ResNet-ish style neural networks, and the tasks here will always be sort of CIFAR and ImageNet style things. And therefore I believe, with the results we've seen, the fact that it outperforms so much on CIFAR-10 but the gains on ImageNet become more marginal, that indicates that the gains most probably don't translate the further away you go. So I'm not sure that the EvoNorm that they find, this particular thing here, will remain the best thing across tasks. I think they just found this to work well in their particular setting here, and if I ran the same thing with slightly different architectures and slightly different tasks, I would come up with a different best thing. All right, so these were my comments. They also do some interesting experiments where they show that if they just use random layers it's not as performant, which I can believe; if you just jumble these
things around it's probably not as good, so you need some kind of search criterion. And yeah, those were my thoughts on this paper. I invite you to read it, look at it, look at the additional experiments; it is a very well evaluated paper. And with that, bye bye!
[ { "end": 5.76, "start": 0, "text": " Hi there! Today we're looking at evolving normalization activation layers by" }, { "end": 13.44, "start": 5.76, "text": " Hanjiao Liu, Andrew Brock, Karen Simonian and Guo Vili. These are people from" }, { "end": 20.080000000000002, "start": 13.44, "text": " Google Brain and Google DeepMind. The topic of this paper is, as you can see," }, { "end": 26.080000000000002, "start": 20.080000000000002, "text": " it's about normalization activation layers and we want to evolve them." }, { "end": 31.04, "start": 26.08, "text": " I think the title says a lot, but let's go down here and see what this is about." }, { "end": 41.92, "start": 31.04, "text": " We'll look at image neural networks and current architectures are kind of" }, { "end": 46.959999999999994, "start": 41.92, "text": " focused around the same principles. What they'll have is, ever since ResNet," }, { "end": 52.239999999999995, "start": 46.959999999999994, "text": " these neural networks will be composed of these kind of blocks that come" }, { "end": 56.56, "start": 52.24, "text": " one after another. There will be a block up here and then the signal will" }, { "end": 61.760000000000005, "start": 56.56, "text": " propagate and there will be another block down here. These blocks usually" }, { "end": 67.2, "start": 61.760000000000005, "text": " consist of what's called a skip connection. This is the fundamental" }, { "end": 74.72, "start": 67.2, "text": " ingredient of ResNets that made ResNets so effective, it seems to be the" }, { "end": 78.64, "start": 74.72, "text": " introduction of this skip connection. You see all of these have the skip" }, { "end": 85.76, "start": 78.64, "text": " connection here. These are variants on ResNets and then we see that these" }, { "end": 90.88, "start": 85.76, "text": " are always mixed between convolutional layers and then other things that here" }, { "end": 95.84, "start": 90.88, "text": " are called evoNorm. In a classic ResNet you would have something like a" }, { "end": 101.92, "start": 95.84, "text": " convolutional layer, then you would have a batch normalization and then you" }, { "end": 105.92, "start": 101.92, "text": " would have a non-linearity, for example a ReLU, and then you would go on to the" }, { "end": 113.36, "start": 105.92, "text": " next convolutional layer. You see that the paper mainly cares about these two" }, { "end": 118.4, "start": 113.36, "text": " layers here, the batch norm and the ReLU, and it combines them into what it's" }, { "end": 125.36, "start": 118.4, "text": " called an evoNorm. The evoNorm layers here are supposed to replace" }, { "end": 132.88, "start": 125.36, "text": " the normalization and the activation layers, combine them and make them" }, { "end": 140.4, "start": 132.88, "text": " better. How does it do that? Through evolutionary search. These three" }, { "end": 147.51999999999998, "start": 140.4, "text": " models here are the ResNet, MobileNet and EfficientNet architectures. They're all" }, { "end": 154.88, "start": 147.51999999999998, "text": " image classifier architectures. Let's see how it does that. What it does" }, { "end": 162.32, "start": 154.88, "text": " is it evolves these layers from single primitives. If you've seen the batch" }, { "end": 171.28, "start": 162.32, "text": " normalization paper, then you know that the batch normalization is" }, { "end": 176.79999999999998, "start": 171.28, "text": " just kind of a formula you can write down. 
These other normalization" }, { "end": 181.04, "start": 176.79999999999998, "text": " methods, people have developed other ones than batch norm, for example this is" }, { "end": 186.79999999999998, "start": 181.04, "text": " groupNorm with a ReLU activation function. You can write these two layers" }, { "end": 191.51999999999998, "start": 186.79999999999998, "text": " down as this mathematical expression. It features things like, this is the" }, { "end": 198.08, "start": 191.52, "text": " input signal, I think this is the mean across some groups, this is a bias term" }, { "end": 203.44, "start": 198.08, "text": " that you can train, this is the standard deviation across the same groups and so" }, { "end": 210.56, "start": 203.44, "text": " on. This here is the ReLU term. You can write this down as a" }, { "end": 218.4, "start": 210.56, "text": " combination of these primitives. You can write it in a graph. This" }, { "end": 225.84, "start": 218.4, "text": " graph here is actually an activation function that this paper has found. It's" }, { "end": 233.48000000000002, "start": 225.84, "text": " called EVO norm S0 and the mathematical equation is the thing down here. It's not" }, { "end": 238.12, "start": 233.48000000000002, "text": " that different, you can see from previous activations. It also has the input signal," }, { "end": 244.64000000000001, "start": 238.12, "text": " it has this variance or standard deviation across groups, it has a" }, { "end": 252.92, "start": 244.64, "text": " non-linearity here and the graph here is simply a graph of mathematical" }, { "end": 259.59999999999997, "start": 252.92, "text": " operations made out of primitives. This paper takes all of the" }, { "end": 264.28, "start": 259.59999999999997, "text": " primitives that they can think of and puts them in this table and they say," }, { "end": 269.64, "start": 264.28, "text": " okay, our search space for these layers, so we want to evolve these layers, our" }, { "end": 275.15999999999997, "start": 269.64, "text": " search space is a combination of any of these primitives. You can see here" }, { "end": 282.47999999999996, "start": 275.15999999999997, "text": " you have something like addition, multiplication, negation, so you" }, { "end": 287.84, "start": 282.47999999999996, "text": " can subtract things, you can take the log of things, you can take the square" }, { "end": 294.52, "start": 287.84, "text": " root and so on. Here you have a max which is the ReLU activation function," }, { "end": 299.56, "start": 294.52, "text": " but if you put 0 as one of them, you have the sigmoid which is another" }, { "end": 304, "start": 299.56, "text": " non-linearity. Then you can also do something like I want to compute the" }, { "end": 309.32, "start": 304, "text": " batch mean or I want to compute a group standard deviation, pretty much anything" }, { "end": 315.04, "start": 309.32, "text": " that current activation functions that have been handcrafted use are available" }, { "end": 321.92, "start": 315.04, "text": " as primitives across this search. So how does this method search? This is the" }, { "end": 326.2, "start": 321.92, "text": " process of how the method searches, it does this in an evolutionary way and" }, { "end": 332.28, "start": 326.2, "text": " evolutionary protocols it means that you don't develop one layer like you would" }, { "end": 336.96, "start": 332.28, "text": " do if you were to do something like gradient descent. 
You develop a whole" }, { "end": 341.88, "start": 336.96, "text": " population of layers, so these can be maybe a couple of hundred or a couple of" }, { "end": 347.88, "start": 341.88, "text": " thousands, different layer architectures that you develop at the same time." }, { "end": 353.08, "start": 347.88, "text": " What you do each time you put them into a tournament, which basically means you" }, { "end": 359.71999999999997, "start": 353.08, "text": " want to sample a couple of them, I don't think they do all at the same time, they" }, { "end": 364.8, "start": 359.71999999999997, "text": " sample a couple of them, right, and then they train on what they call a proxy" }, { "end": 371.12, "start": 364.8, "text": " task. Their proxy task is CIFAR-10. So CIFAR-10 is a fairly small image" }, { "end": 378.4, "start": 371.12, "text": " classification task and they train on CIFAR-10 because it's pretty fast, right," }, { "end": 382.08, "start": 378.4, "text": " you can train something on CIFAR-10 in like a couple of minutes or an hour or" }, { "end": 391.52, "start": 382.08, "text": " so and get a pretty good feeling for how good the final accuracy will be. You can" }, { "end": 395.59999999999997, "start": 391.52, "text": " get that pretty fast. So this is a fast classification task because they need to" }, { "end": 400.47999999999996, "start": 395.59999999999997, "text": " do this a lot, right, the population is large and they need to repeat this over" }, { "end": 405.18, "start": 400.47999999999996, "text": " time, right. In any case they take a sample, they train it on CIFAR-10 and then the" }, { "end": 411.08, "start": 405.18, "text": " winner, the winning layer is picked from this sample and only the winning layer" }, { "end": 416.56, "start": 411.08, "text": " is allowed to mutate, right. So the winning layer is mutated then and" }, { "end": 420.4, "start": 416.56, "text": " mutation means you kind of change it a bit. Now you don't change it in an" }, { "end": 426.44, "start": 420.4, "text": " informed way, you just change it at random and of course the, and then you" }, { "end": 433.03999999999996, "start": 426.44, "text": " put the mutated layers back into the population. Of course the hope is" }, { "end": 437.79999999999995, "start": 433.03999999999996, "text": " that by repeating this process, right, you repeat and repeat and repeat that the," }, { "end": 441.84000000000003, "start": 437.8, "text": " simply by picking the winning layers over and over and over again is a" }, { "end": 447.36, "start": 441.84000000000003, "text": " selective pressure such that through the random mutations but the tournament" }, { "end": 454.12, "start": 447.36, "text": " style evaluation and picking of the winner, that over time the best" }, { "end": 458.56, "start": 454.12, "text": " performing models in your population, right, the best scoring model here will" }, { "end": 462.7, "start": 458.56, "text": " get better and better and better, right. So the assumption is that this isn't like" }, { "end": 469, "start": 462.7, "text": " a pure combinatorial optimization or like a pure random function, is that if" }, { "end": 475.08, "start": 469, "text": " I take something that works well there are ways that I can perturb it that make" }, { "end": 479.64, "start": 475.08, "text": " it work even better, right. 
So even if most of the perturbations are worse" }, { "end": 485.56, "start": 479.64, "text": " there are some and the tournament style will always find these ones" }, { "end": 490.15999999999997, "start": 485.56, "text": " for me that perform better and then I can modify these again at random and" }, { "end": 497.84000000000003, "start": 490.16, "text": " then among these I can again find the ones that perform even better. So that" }, { "end": 503.36, "start": 497.84000000000003, "text": " is the method and so the question, there are two questions, how do you" }, { "end": 509.6, "start": 503.36, "text": " mutate a layer, right, and mutation I believe is done in sort of different ways" }, { "end": 515.64, "start": 509.6, "text": " here but if you look at this here, at this expression, so what you have here" }, { "end": 523.28, "start": 515.64, "text": " is the input is this signal here, right, and you always start out I believe with" }, { "end": 530.28, "start": 523.28, "text": " the input with a layer that just emits the number one, with a layer that just" }, { "end": 537.4, "start": 530.28, "text": " emits the number zero or a component and then you have two trainable vectors that" }, { "end": 544.2, "start": 537.4, "text": " you can include and you just start out with these four things and then every" }, { "end": 548.88, "start": 544.2, "text": " time you mutate you add one of these blocks and I believe there's also a" }, { "end": 553.6, "start": 548.88, "text": " method like a randomness of changing the individual things or of actually" }, { "end": 558.5600000000001, "start": 553.6, "text": " starting from scratch again, it's pretty important otherwise you just grow bigger" }, { "end": 568, "start": 558.5600000000001, "text": " and bigger monsters and but the way you mutate is the following, you add a new" }, { "end": 574.1800000000001, "start": 568, "text": " block, let's say I add one here, and you decide on one of the primitives from the" }, { "end": 578.8, "start": 574.18, "text": " table, right, here I'm going to simply decide on a minus operation, so a" }, { "end": 586, "start": 578.8, "text": " subtraction operation and then once you've done that you choose two" }, { "end": 591.12, "start": 586, "text": " children, sorry two parents, however you see it, you choose two parents because" }, { "end": 597.12, "start": 591.12, "text": " the minus operation needs two parents, you choose two of the parents at random" }, { "end": 603.04, "start": 597.12, "text": " here, so I'm going to choose this thing here to be a parent and I'm going to" }, { "end": 610.56, "start": 603.04, "text": " choose this thing here to be a parent at random, right, and then this new node will" }, { "end": 616.68, "start": 610.56, "text": " become the new output of the layer, so you see that this was the previous" }, { "end": 622.4399999999999, "start": 616.68, "text": " output here, this multiplication node between this and this, now this is no" }, { "end": 626.5999999999999, "start": 622.4399999999999, "text": " longer the output, now this is obsolete, right, this is no longer part of the" }, { "end": 632.68, "start": 626.5999999999999, "text": " final mathematical expression here, so you see all the gray nodes here were" }, { "end": 638.12, "start": 632.68, "text": " actually sort of obsolete nodes but they are still kept because in subsequent" }, { "end": 643.0799999999999, "start": 638.12, "text": " steps you can choose them as parents and then they become part of the" }, { "end": 651.4799999999999, "start": 
643.0799999999999, "text": " expression again, you can see here this tanh node, it was just a node that" }, { "end": 657.8, "start": 651.4799999999999, "text": " was sort of a dead end in the expression before but now with the new mutation it" }, { "end": 662.1999999999999, "start": 657.8, "text": " is again included in the expression because I've randomly selected it as a" }, { "end": 667.5600000000001, "start": 662.2, "text": " parent but then this node here and that was reset this node here, they are now" }, { "end": 671.24, "start": 667.5600000000001, "text": " obsolete nodes because they are no longer part of the expression, the" }, { "end": 678.1600000000001, "start": 671.24, "text": " expression in this case would go from here to here, right, including this node" }, { "end": 688.2, "start": 678.1600000000001, "text": " and it would go from here over here, right, so these nodes are now part of the" }, { "end": 692.6800000000001, "start": 688.2, "text": " expression, so this is how you mutate and as I said you can also mutate such" }, { "end": 700.5200000000001, "start": 692.6800000000001, "text": " that you start from scratch and so that's how you mutate, the second part in this" }, { "end": 708.08, "start": 700.5200000000001, "text": " thing is how do you exactly determine the winner and what is the tournament, so" }, { "end": 714.0400000000001, "start": 708.08, "text": " how do you do that, the tournament exactly is what we've seen before when" }, { "end": 718.5999999999999, "start": 714.04, "text": " we looked at the different layers, so we said we train on CIFAR-10 and what we do" }, { "end": 724.88, "start": 718.5999999999999, "text": " is we train these three architectures on CIFAR-10, so the ResNet, the MobileNet" }, { "end": 731.76, "start": 724.88, "text": " and the EfficientNet, we train these three architectures on CIFAR-10 with the" }, { "end": 737.3199999999999, "start": 731.76, "text": " EVO norm layer instantiated by you know that particular sample from the" }, { "end": 743.56, "start": 737.3199999999999, "text": " population and then we look at their accuracies and we do, we determine what" }, { "end": 750.7199999999999, "start": 743.56, "text": " is called the Pareto frontier of the population, so I think this is further up," }, { "end": 758.5999999999999, "start": 750.7199999999999, "text": " oh right here, okay, so the dots here, the red and the grey dots would be our sample," }, { "end": 764.56, "start": 758.5999999999999, "text": " so all of this would be our samples and their performance, here it's on" }, { "end": 770.7199999999999, "start": 764.56, "text": " actually on two models but in practice we have three just to graph it better, so" }, { "end": 774.76, "start": 770.72, "text": " we plot them here and we determine the Pareto frontier, now here A, B and C are" }, { "end": 779.8000000000001, "start": 774.76, "text": " part of the Pareto frontier because A outperforms everything else on" }, { "end": 787.64, "start": 779.8000000000001, "text": " model 1, C outperforms everything else on model 2 and B outperforms C on model 1" }, { "end": 793.0400000000001, "start": 787.64, "text": " but also outperforms A on model 2, so it's what's called the Pareto frontier" }, { "end": 800, "start": 793.0400000000001, "text": " and we pick one of those as the winner, so they all are kind of one-third winners" }, { "end": 805.88, "start": 800, "text": " here, so this is how you do the tournament, you pick the winner like this" }, { "end": 814.88, "start": 805.88, "text": " 
and then you allow the winner to mutate, the last part that is not drawn in here" }, { "end": 824.12, "start": 814.88, "text": " actually is somewhere here-ish which is called the rejection step, so the" }, { "end": 831.36, "start": 824.12, "text": " rejection step is important because what they want to do is they say, hi we have" }, { "end": 836.96, "start": 831.36, "text": " these mutated layers but some of the mutations are probably going to be just" }, { "end": 843.08, "start": 836.96, "text": " terrible, like destroying everything, not trainable layers, it's just" }, { "end": 849.64, "start": 843.08, "text": " horrible, horrible, such that the layer is useless, they don't" }, { "end": 853.72, "start": 849.64, "text": " want to keep, they don't want to put them back here" }, { "end": 860.9200000000001, "start": 853.72, "text": " into the population because that might either deteriorate or severely slow" }, { "end": 866.24, "start": 860.9200000000001, "text": " down this progress here, so they want to stop them and only the good ones," }, { "end": 873.08, "start": 866.24, "text": " only the ones that are somewhat fairly okay get back to the population, right," }, { "end": 878.98, "start": 873.08, "text": " they don't always have to improve but they have to be at minimally useful, so" }, { "end": 887.4, "start": 878.98, "text": " the rejection step they describe down here in the rejection protocols, they" }, { "end": 893.4, "start": 887.4, "text": " have two criteria for rejecting mutated architectures, first they have a quality" }, { "end": 899.6800000000001, "start": 893.4, "text": " criterion, say we discard layers that achieve less than 20% validation" }, { "end": 905.88, "start": 899.6800000000001, "text": " accuracy in 100 training steps on any of the three anchor architectures, right, so" }, { "end": 910.92, "start": 905.88, "text": " the reasoning behind this is if you have a hundred training steps and you achieve" }, { "end": 917.12, "start": 910.92, "text": " less than 20% validation accuracy you're not going anywhere, right, you're just" }, { "end": 923.04, "start": 917.12, "text": " because 10% is already random performance, if you are less than 20%" }, { "end": 928.72, "start": 923.04, "text": " after a hundred steps your layer is pretty useless and can be discarded," }, { "end": 934.4, "start": 928.72, "text": " right, so they say this simple mechanism ensures the compute resources to" }, { "end": 939.28, "start": 934.4, "text": " concentrate on the full training process of a small subset of promising candidates," }, { "end": 945.68, "start": 939.28, "text": " oh sorry, yeah, so the hundred training steps of course is not enough to train" }, { "end": 949.56, "start": 945.68, "text": " fully but you can see after a hundred training steps whether or not the layer" }, { "end": 954.8, "start": 949.56, "text": " even does something, so you reject those, so this makes pretty much sense, right," }, { "end": 961.28, "start": 954.8, "text": " the second criterion is what they call stability, right, they say we reject" }, { "end": 967.6, "start": 961.28, "text": " layers that are subject to numerical instability, right, and how do they find" }, { "end": 975.4399999999999, "start": 967.6, "text": " numerical instability? 
They define it like this, so what they do is they take" }, { "end": 986.36, "start": 975.4399999999999, "text": " the parameters, so the layers, and this is an architecture, yeah, so the model," }, { "end": 996, "start": 986.36, "text": " the model, these are the convolutional weights, are the theta, right, and the G" }, { "end": 1003.16, "start": 996, "text": " is the computation graph which is the EVO norm in this case and there is a" }, { "end": 1007.04, "start": 1003.16, "text": " loss defined across them, of course, right, this is the loss of the neural" }, { "end": 1011.96, "start": 1007.04, "text": " network on the samples, right, so these are the convolutional" }, { "end": 1015.64, "start": 1011.96, "text": " layers and these are the normalization layers, now what we want to do is we" }, { "end": 1021.28, "start": 1015.64, "text": " want to see how does this loss change when we change the convolutional layers," }, { "end": 1027, "start": 1021.28, "text": " so you have to imagine, here are the convolutional layers and then there are" }, { "end": 1031.12, "start": 1027, "text": " these weird normalization layers and then again there are the convolutional" }, { "end": 1041.96, "start": 1031.12, "text": " layers, now we want to see how does the loss change if we change the weights" }, { "end": 1046.1200000000001, "start": 1041.96, "text": " of the convolution by a little bit, right, we just change it a little bit and see" }, { "end": 1051.16, "start": 1046.1200000000001, "text": " how does the loss change, this is the gradient of the weights basically," }, { "end": 1056.8, "start": 1051.16, "text": " this is how you train, this entire thing here is how you train the" }, { "end": 1063.1200000000001, "start": 1056.8, "text": " neural network, right, so you want to see how large is this gradient and you kind" }, { "end": 1067.72, "start": 1063.1200000000001, "text": " of want to do this in an adversarial way, so you want to find the maximum" }, { "end": 1074.56, "start": 1067.72, "text": " perturbation you can achieve, right, you say okay if I change this a little" }, { "end": 1082.68, "start": 1074.56, "text": " bit in the worst direction I possibly can, how large is the" }, { "end": 1088.44, "start": 1082.68, "text": " perturbation going to be and that's how they define numerical" }, { "end": 1095, "start": 1088.44, "text": " instability, so it basically means if this is very high then the network might" }, { "end": 1101.36, "start": 1095, "text": " be doing well right where it is but just a little bit changing it will make it" }, { "end": 1111.92, "start": 1101.36, "text": " terrible, right, so they say we ascend the value on this direction for 100 steps and" }, { "end": 1116.52, "start": 1111.92, "text": " layer with the worst-case gradient norm greater than 10 to the 8th are rejected," }, { "end": 1121.68, "start": 1116.52, "text": " in addition, so as a reason, this seems pretty strange, right, this" }, { "end": 1128.0800000000002, "start": 1121.68, "text": " quality criterion, it made sense but the stability criterion, it kind of seems, I" }, { "end": 1135.8400000000001, "start": 1128.0800000000002, "text": " mean reasonable but strange in here, the reason now, so the two tests are" }, { "end": 1140.28, "start": 1135.8400000000001, "text": " complementary with each other, for example we found a layer like this is" }, { "end": 1145.3600000000001, "start": 1140.28, "text": " able to achieve reasonable accuracies on C for 10 across all the anchor" }, { "end": 1152.08, "start": 
1145.36, "text": " architectures, so it passes the quality criterion above but its gradients" }, { "end": 1156.6399999999999, "start": 1152.08, "text": " quickly explode on ImageNet possibly due to the absence of normalization" }, { "end": 1162, "start": 1156.6399999999999, "text": " operations, so and then you see aha, okay, so what probably happened is the" }, { "end": 1166.9199999999998, "start": 1162, "text": " following, they did their experiment without this, right, just with this quality" }, { "end": 1172.12, "start": 1166.9199999999998, "text": " criterion which I guess makes sense, they did this, right, they trained on C for 10" }, { "end": 1175.4799999999998, "start": 1172.12, "text": " that's how they do their evolutionary research, then they took their best" }, { "end": 1181.8799999999999, "start": 1175.4799999999998, "text": " performing things among them is this one and they went to ImageNet and they said" }, { "end": 1186.2399999999998, "start": 1181.8799999999999, "text": " let's test these now on ImageNet class first, like we found these new" }, { "end": 1192.12, "start": 1186.2399999999998, "text": " architectures, let's see, and then they got exploding gradients, right, and then" }, { "end": 1196.6999999999998, "start": 1192.12, "text": " they went back into their original problem formulation, okay, what can we" }, { "end": 1201.84, "start": 1196.6999999999998, "text": " build in to the evolution such that this won't happen and here you already see" }, { "end": 1206.1599999999999, "start": 1201.84, "text": " kind of the problem with this, what you would like to have is kind of an" }, { "end": 1212.28, "start": 1206.1599999999999, "text": " algorithm that is general such as to not depend on the architectures and so on" }, { "end": 1218.84, "start": 1212.28, "text": " that is used but you see already here that the authors, they don't direct the" }, { "end": 1223.8, "start": 1218.84, "text": " search, right, the search is evolution but they guide, the evolution is very much" }, { "end": 1228.6399999999999, "start": 1223.8, "text": " guided by what these rejection protocols are and you see here the authors" }, { "end": 1233.3600000000001, "start": 1228.64, "text": " tailoring their rejection protocols to the specific data sets and" }, { "end": 1239.16, "start": 1233.3600000000001, "text": " architectures that they use and the specific problems they experienced when" }, { "end": 1245.5600000000002, "start": 1239.16, "text": " trying to apply their method and that I think weakens a bit the" }, { "end": 1251.48, "start": 1245.5600000000002, "text": " application of this method because it seems that this particular form of" }, { "end": 1256.6000000000001, "start": 1251.48, "text": " protocols, of this particular form of rejection protocols is very much" }, { "end": 1262, "start": 1256.6, "text": " tailored to this, let's do these three architectures on CIFAR-10 and then go to" }, { "end": 1269.8, "start": 1262, "text": " ImageNet and that tells me if I want to do this in a very different domain that" }, { "end": 1277.8, "start": 1269.8, "text": " I would have to, couldn't, it is not very clear that I could just to plop whatever" }, { "end": 1283.08, "start": 1277.8, "text": " they found works in and it would just work just as outperformingly of the" }, { "end": 1290.96, "start": 1283.08, "text": " others as in their experiment, it tells me that there is pretty like a somewhat" }, { "end": 1301.52, "start": 1290.96, "text": " large dependence on the specifics here. 
Yeah so but that being said these are" }, { "end": 1308.08, "start": 1301.52, "text": " the rejection criteria so they reject each step here, the worst ones and they" }, { "end": 1312.52, "start": 1308.08, "text": " go back into the population and then that process repeats and repeats and" }, { "end": 1316.92, "start": 1312.52, "text": " repeats and then at the end you hopefully end up with some very good" }, { "end": 1328.24, "start": 1316.92, "text": " normalization layers. Now I have to see here if you compare now these these" }, { "end": 1334.36, "start": 1328.24, "text": " found normalization layers with the classic variant so the classic thing" }, { "end": 1339.72, "start": 1334.36, "text": " here is this red line this is batch norm and relu, this is a classic" }, { "end": 1344.48, "start": 1339.72, "text": " activation normalization combo you put in a neural network and you see that" }, { "end": 1355.16, "start": 1344.48, "text": " these methods outperform this on a very kind of a stable basis right. So that's" }, { "end": 1359.48, "start": 1355.16, "text": " pretty cool but that is as we said on CIFAR-10 that is on the exact thing" }, { "end": 1364.48, "start": 1359.48, "text": " that they search over right there so it's not really a surprise that if I you" }, { "end": 1368.72, "start": 1364.48, "text": " know search a bunch of combinations and always get the best ones I would" }, { "end": 1376.24, "start": 1368.72, "text": " outperform just one of them. The interesting thing is what happens now if" }, { "end": 1383.96, "start": 1376.24, "text": " we take what we found and put them into a different architecture for a different" }, { "end": 1388.92, "start": 1383.96, "text": " data set. Now here the architecture isn't really different because it's kind of" }, { "end": 1393.72, "start": 1388.92, "text": " the same but they do evaluate it on ImageNet right. ImageNet different" }, { "end": 1401.4, "start": 1393.72, "text": " data set than CIFAR-10 much larger and so they put their their architectures" }, { "end": 1407.64, "start": 1401.4, "text": " which here evoNorm into ImageNet and evaluate it and you can see that it has" }, { "end": 1417, "start": 1407.64, "text": " fairly competitive results across right. So I find that to be to be fairly cool" }, { "end": 1424.48, "start": 1417, "text": " that the best performing ones on CIFAR-10 would also perform better than the" }, { "end": 1431.28, "start": 1424.48, "text": " corresponding ones on ImageNet. But you already see as well that it's not super" }, { "end": 1439.48, "start": 1431.28, "text": " high right. So the the differences here are I would say it is improving but" }, { "end": 1446.04, "start": 1439.48, "text": " sometimes you know it's the same sometimes it's actually worse. It doesn't" }, { "end": 1452.2, "start": 1446.04, "text": " it doesn't appear to know it to me that those kind of things are not super" }, { "end": 1455.8, "start": 1452.2, "text": " convincing especially because this is the paper that suggests these methods so" }, { "end": 1462.8, "start": 1455.8, "text": " they're naturally going to present them in the best possible way. So it seems" }, { "end": 1470.08, "start": 1462.8, "text": " like the the massive outperformance on CIFAR translates only marginally to" }, { "end": 1474, "start": 1470.08, "text": " ImageNet and these are the same architectures right the ResNet-50 and" }, { "end": 1477.32, "start": 1474, "text": " MobileNet and EfficientNet. 
These were already the architectures that they" }, { "end": 1483.04, "start": 1477.32, "text": " searched over so my trust that this new normalization layer put into a an" }, { "end": 1488.8, "start": 1483.04, "text": " actual different architecture is less still. Now they do actually do" }, { "end": 1494.32, "start": 1488.8, "text": " some experiments on that as well but I just this is my thoughts when reading" }, { "end": 1501.08, "start": 1494.32, "text": " this and as well and this I find very interesting this column here are random" }, { "end": 1505.6, "start": 1501.08, "text": " search so if you just do a random search which means you just produce" }, { "end": 1511.1599999999999, "start": 1505.6, "text": " random layers then it doesn't work at all right. So you take the best ones of" }, { "end": 1518.8799999999999, "start": 1511.1599999999999, "text": " the random ones you found and it doesn't transfer at all but interestingly if you" }, { "end": 1526.3999999999999, "start": 1518.8799999999999, "text": " do random search plus rejection so the same rejection that they do just you" }, { "end": 1533.48, "start": 1526.4, "text": " don't do this tournament evolution mutation style you simply random search" }, { "end": 1541.2, "start": 1533.48, "text": " and then do rejection that gives you fairly competitive numbers right and in" }, { "end": 1549.96, "start": 1541.2, "text": " some cases even see here it does it outperforms some of the classic methods" }, { "end": 1558.2, "start": 1549.96, "text": " so just that will give you fairly decent results right and that is to me" }, { "end": 1567.3600000000001, "start": 1558.2, "text": " that that seems to be even more a sign of okay this what this method is mostly" }, { "end": 1571.2, "start": 1567.3600000000001, "text": " doing is just searching like mad for something that works on these" }, { "end": 1577.4, "start": 1571.2, "text": " particular architectures and of course you can find things that work better if" }, { "end": 1584.88, "start": 1577.4, "text": " you search like mad but then what do you do with it like what does it mean it can" }, { "end": 1591.5600000000002, "start": 1584.88, "text": " we generalize now they do two additional tasks to show that it does generalize" }, { "end": 1597.72, "start": 1591.5600000000002, "text": " to other architecture and tasks so first of all they do object detection and" }, { "end": 1605.88, "start": 1597.72, "text": " instance segmentation right on cocoa so this is a very different task this is a" }, { "end": 1611.68, "start": 1605.88, "text": " mask or CNN right and they just put in their layer there and you can see here" }, { "end": 1618.3200000000002, "start": 1611.68, "text": " that they generally outperform the baseline I don't I can't speak to how" }, { "end": 1624.1200000000001, "start": 1618.3200000000002, "text": " much this is this outperformance is here it seems like the numbers are fairly" }, { "end": 1629.64, "start": 1624.1200000000001, "text": " close together but they are consistently better and again I don't I don't" }, { "end": 1635.48, "start": 1629.64, "text": " necessarily trust these kind of experiments too much because who knows" }, { "end": 1640.32, "start": 1635.48, "text": " how much effort you can spend on making your method better but in any case they" }, { "end": 1643.68, "start": 1640.32, "text": " show that they are better which is already something but again here the" }, { "end": 1648.64, "start": 1643.68, "text": " the r50 indicates that we're again dealing with like 
resin at 50 a resident" }, { "end": 1655.28, "start": 1648.64, "text": " 101 architectures which are fairly similar to the ones that we that the" }, { "end": 1662.2, "start": 1655.28, "text": " method was searching over so the second thing is they say we generalize to gan" }, { "end": 1672.3600000000001, "start": 1662.2, "text": " training so they take a big gan a big gan deep and they show that their method" }, { "end": 1681.0800000000002, "start": 1672.3600000000001, "text": " will outperform these other methods on the IS and FID metrics I don't even know" }, { "end": 1688.6000000000001, "start": 1681.0800000000002, "text": " what inception score and fresh lit inception distance yay so it will out" }, { "end": 1696.08, "start": 1688.6, "text": " perform them but in kind of a weird way okay here it outperforms them" }, { "end": 1703.1599999999999, "start": 1696.08, "text": " consistently but then in the inception score this batch norm plus reluces still" }, { "end": 1711.76, "start": 1703.1599999999999, "text": " seems to be like a lot higher than this evil norm be zero and then this thing" }, { "end": 1718.92, "start": 1711.76, "text": " here that was performing worse in the image net is now performing somewhat" }, { "end": 1727.28, "start": 1718.92, "text": " better it just so it is a cool result and definitely cool that you can pop" }, { "end": 1733.76, "start": 1727.28, "text": " this in here I I just think that the things that turn out here that they are" }, { "end": 1740.44, "start": 1733.76, "text": " tuned to very specific architectures to very specific tasks so I think the big" }, { "end": 1745.16, "start": 1740.44, "text": " gan deep the kind of architectures will always be kind of the same it will" }, { "end": 1750.16, "start": 1745.16, "text": " always be kind of resonant ish style neural networks and the tasks here will" }, { "end": 1758.4, "start": 1750.16, "text": " always be sort of C for image net style things and therefore I believe with the" }, { "end": 1762.6000000000001, "start": 1758.4, "text": " results we've seen the fact that it outperforms so much on C for 10 but then" }, { "end": 1768.64, "start": 1762.6000000000001, "text": " the gains on image net become more marginal I think that indicates that the" }, { "end": 1775.96, "start": 1768.64, "text": " gains here most probably don't translate the further away you go so I'm not sure" }, { "end": 1783.88, "start": 1775.96, "text": " that the evil norm that they find like that this particular thing here will" }, { "end": 1791.2800000000002, "start": 1783.88, "text": " remain the best thing across across tasks I think they just found this to" }, { "end": 1797.3000000000002, "start": 1791.2800000000002, "text": " work well in their particular setting here and if I run the same thing with" }, { "end": 1800.68, "start": 1797.3, "text": " the slightly different architectures and slightly different tasks I will come up" }, { "end": 1807.62, "start": 1800.68, "text": " with a different best thing yeah all right so these were my comments they do" }, { "end": 1811.6, "start": 1807.62, "text": " some interesting experiments where they show that if they just do random layers" }, { "end": 1818.6, "start": 1811.6, "text": " it it's not as performant which I can believe if you just jumble these things" }, { "end": 1826.12, "start": 1818.6, "text": " around probably not as good so you need some kind of search criterion and yeah" }, { "end": 1831.04, "start": 1826.12, "text": " that was my thoughts on this paper I invite you to read 
it look at it look at" }, { "end": 1857.52, "start": 1831.04, "text": " the additional experiment it is a very good evaluated paper and that bye bye" } ]
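The segments above describe the search procedure in words: sample candidate layers, keep the Pareto frontier of accuracies across the three anchor architectures on CIFAR-10, mutate a winner, and reject mutants that fail the quality test (below roughly 20% validation accuracy after 100 steps) or the stability test (worst-case gradient norm above about 1e8 under adversarial ascent). The sketch below renders only that selection-and-rejection logic in minimal, self-contained form; the Candidate fields, the toy expressions, and the concrete numbers are illustrative stand-ins, not the paper's actual implementation, and the mutation and training steps are left out.

```python
import random
from dataclasses import dataclass, field
from typing import List


@dataclass
class Candidate:
    expression: str                 # the layer's computation graph, written as a readable formula
    anchor_acc: List[float] = field(default_factory=list)  # short-run accuracy on each anchor model
    worst_grad_norm: float = 0.0    # worst-case gradient norm found by adversarial ascent


def dominates(a: Candidate, b: Candidate) -> bool:
    # a dominates b if it is at least as good on every anchor and strictly better on one
    return all(x >= y for x, y in zip(a.anchor_acc, b.anchor_acc)) and any(
        x > y for x, y in zip(a.anchor_acc, b.anchor_acc)
    )


def pareto_frontier(sample: List[Candidate]) -> List[Candidate]:
    return [c for c in sample if not any(dominates(o, c) for o in sample if o is not c)]


def passes_rejection(c: Candidate, min_acc: float = 0.20, max_norm: float = 1e8) -> bool:
    # quality: above chance on every anchor after a short run; stability: bounded gradient norm
    return min(c.anchor_acc) >= min_acc and c.worst_grad_norm <= max_norm


def tournament_winner(population: List[Candidate], sample_size: int = 4) -> Candidate:
    survivors = [c for c in population if passes_rejection(c)]
    sample = random.sample(survivors, k=min(sample_size, len(survivors)))
    return random.choice(pareto_frontier(sample))


if __name__ == "__main__":
    random.seed(0)
    population = [
        Candidate("x / sqrt(var(x) + eps) * v0", [0.61, 0.55, 0.58], 3e2),
        Candidate("x * tanh(mean(x)) + v1",      [0.52, 0.63, 0.49], 9e3),
        Candidate("x - max(x, 0)",               [0.11, 0.09, 0.10], 5e1),  # rejected: quality
        Candidate("x * exp(x) * v0",             [0.57, 0.51, 0.44], 4e9),  # rejected: stability
    ]
    winner = tournament_winner(population)
    print("selected for mutation:", winner.expression)
```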
DRy_Mr732yA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Drama] Who invented Contrast Sets?
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "arxiv", "twitter", "drama", "credit", "related", "lipton", "gardner", "counterfactual", "augmentation", "plagiarism" ]
Funny Twitter spat between researchers arguing who was the first to invent an idea that has probably been around since 1990 :D References: https://arxiv.org/abs/2004.02709 https://twitter.com/nlpmattg/status/1247326213296672768 https://arxiv.org/abs/1909.12434 https://twitter.com/zacharylipton/status/1247357810410762240 https://twitter.com/nlpmattg/status/1247373386839252992 https://twitter.com/zacharylipton/status/1247383141075083267 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
I love me some good Twitter drama. Look at this, this is awesome. So after this contrast set paper appeared, and I've done a video on that, the author of it tweeted it out with one of these long Twitter threads with screenshots and all. This seems to be the new marketing tool of academics. And as you know, I'm not a fan of this paper; I think that the number that comes out of such a contrast set is either useless or counterproductive, and you can see my video on that. In any case, there was another researcher, Zachary Lipton, who felt like he needed to jump in here, saying: before the media blitz and retweet party gets out of control, this idea exists, has been published, it has a name and a clear justification, it's called counterfactually augmented data. This is amazing, look at that. And here's the published paper, of course, and if we look at the published paper, this is it right here. Of course Zach Lipton is an author on that paper. So let's just read the abstract. I haven't read the paper, but let's just read the abstract. So I have it, classically, I have it here, my nifty thing here, so we can analyze it. So this paper, if you read the abstract, it does sound similar, right? "Despite alarm over the reliance of machine learning systems on..." blah blah blah, spurious correlations. So it talks about the same problems. Now what do they say? "Given documents and their initial labels, we task humans with revising each document so that it accords with a counterfactual target label, retains internal coherence, and avoids unnecessary changes." Right, so this sounds very similar to what these contrast sets do. The counterfactual target label would be the requirement in the contrast set to change the label; retaining internal coherence is, in the contrast set, simply given by it being supposed to conform to the intent of the data set makers, and that intent probably includes internal coherence; and avoiding unnecessary changes conforms to the contrast set only searching in the local environment of a test set sample. So you see that the definition of these is pretty similar. Then we go on, and they say: "classifiers trained on original data fail on their counterfactually revised counterparts and vice versa." This experiment was also done by the contrast set paper. And then they say "classifiers trained on combined data sets performed remarkably well, just shy of those specialized to either domain." So immediately we see some differences as well, right? The main difference I see is they say "we task humans", and then they train on the counterfactually revised counterparts, which probably means they use some Mechanical Turkers here when they say humans, because if you want to create a training data set you need lots of data. So they probably take a data set and run its training data set again through something like Mechanical Turk to get annotations. This is exactly what the people of the contrast sets claim is wrong with the current pipeline. So here we have this thing, counterfactually augmented stuff. The contrast sets, what they say is: we actually need the experts to do this, these humans are exactly the wrong people to make the data sets. So the counterfactually augmented data has some elements that are the same, namely how they construct these labels, but not who constructs the labels and for what reason. Here it's experts, and for what reason: it's for testing. They say the experts that make the data set should provide an additional contrast test set. So this is, I mean, this is just my opinion: if this is the same idea, of course it's very similar, but if this counts as the same idea, then 95% of all research counts as the same idea as something that Jürgen Schmidhuber has done in the 1990s, which of course Jürgen Schmidhuber will eloquently argue; exactly, he invented GANs, basically the same thing. So yeah, if this is... it's not the same. I have to say this is very close, but it's not, and as I understand it they even cited the other one. So then the bickering starts, and this is funny, this is just funny to me. So Zach Lipton jumps in here and says: this has been published, has a name and a clearer justification, it's called counterfactually augmented data, here is the published paper. We just looked at this, right? And then Matt Gardner answers and he says: Zach and Divyansh's work is excellent, recommend you all go look at it, our work provides a different, concurrent take on similar issues. Right, and I think here someone comments on that, so he says it is in the related work section, although mischaracterized and misattributed as contemporary work. So the position is really that it is kind of a stolen idea, and they were apparently in contact with each other during that. So this Matt Gardner here says what the differences are, he says: we take a geometrical view, we demonstrate such a wider variety. I mean, for all intents and purposes, if you go through any of the research, go to computer vision, go to NLP, you'll find like the exact... I review two papers each year that want to produce data that better defines the decision boundary, like these people here. I mean, this idea just gets rehashed over and over in slightly different form; these two are particularly close, but... And then see how they pick it: our paper was finished two months after theirs, and then they say we started the project well before, and so on, why do we feel defensive. And then he answers again with: this is absolutely false, our paper was drafted in July, your paper was finished the night before the ACL deadline, this is not two months but half a year, it has nothing to do with it. He says: why do you presume to know when we started, drop the nonsense, we did this work in May 2019, presented the public results in July, posted it. Better drop the posturing; so much of what you're doing here is the very cancer in the system. I mean, I agree, you know, slightly refining ideas that were previously there is a very bad problem in academia, so this is actually correct to point out, but I don't think that this particular instance is particularly bad. And then he says: I'm afraid you're simply mistaken, I have a history of publishing similar... so, I've... something like that. The last thing I say is, I just invite you to read this, it's beautiful. But the last thing to say here: if this counterfactually augmented data, if this is in fact the first instance of this general idea, to produce counterfactually augmented data that does actually fulfill these criteria, I would be extremely surprised, because this has nothing to do with deep learning, right, and the real novelty in our field is mostly deep learning. So I'm pretty sure someone must have thought of something like this when everyone was just doing grammars and manual features and things like this. I would be extremely surprised if this hasn't been there in one form or another, and why shouldn't the authors of that make exactly the same argument. That being said, it is fairly close. The fun part here is that it is actually a fairly similar idea, except... so the idea itself is fairly similar, but here the focus is on different things, and it's also on different data sets. And I believe, yeah, as I said, 95% of research falls into exactly this category. So much fun, check it out. Yeah, bye bye.
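To make the cross-evaluation from the quoted abstract concrete (classifiers trained on original data failing on the counterfactually revised counterparts and vice versa, while training on the combination works well), here is a toy sketch of that protocol. The two review pairs and the bag-of-words classifier are purely illustrative assumptions; the real experiments use crowd-revised movie reviews and stronger models.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# purely illustrative (review, sentiment) pairs; a revised example flips the gold label
# while keeping the text coherent and minimally changed
original = [("the plot was thin but the acting carried it", 1),
            ("the acting was fine but the plot ruined everything", 0)]
revised = [("the plot was thin and not even the acting could carry it", 0),
           ("the acting was fine and the plot made everything work", 1)]


def fit(pairs):
    texts, labels = zip(*pairs)
    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
    return model.fit(texts, labels)


def accuracy(model, pairs):
    texts, labels = zip(*pairs)
    return model.score(list(texts), list(labels))


for name, train in [("original", original), ("revised", revised), ("combined", original + revised)]:
    m = fit(train)
    print(f"train on {name}: acc on original={accuracy(m, original):.2f}, "
          f"acc on revised={accuracy(m, revised):.2f}")
```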
[ { "end": 7.24, "start": 0, "text": " I love me some good Twitter drama look at this this is awesome so after this" }, { "end": 13.4, "start": 7.24, "text": " contrast set paper appeared and I've done a video on that the author of it" }, { "end": 19.72, "start": 13.4, "text": " tweeted it out with one of the long Twitter threads with screenshots and all" }, { "end": 26.2, "start": 19.72, "text": " this seems to be the new marketing tool of academics and as you know I'm not a" }, { "end": 30.4, "start": 26.2, "text": " fan of this paper I think that the number that comes out of such a contrast" }, { "end": 35.6, "start": 30.4, "text": " set is very either useless or counterproductive and you can see my" }, { "end": 42.96, "start": 35.6, "text": " video on that in any case there there was another researcher Zachary Lipton" }, { "end": 50, "start": 42.96, "text": " who felt like he needed to jump in here saying before the media blitz and" }, { "end": 56, "start": 50, "text": " retweet party gets out of control this idea exists has been published it has" }, { "end": 61.6, "start": 56, "text": " a name and a clear justification is called counterfactually augmented data" }, { "end": 69.36, "start": 61.6, "text": " this is amazing look at that and here's the published paper of course and if we" }, { "end": 76.08, "start": 69.36, "text": " look at the published paper this is it right here of course Zach Lipton is an" }, { "end": 82.8, "start": 76.08, "text": " author on that paper and so let's just just read the abstract I haven't read" }, { "end": 88.12, "start": 82.8, "text": " the paper but let's just read the abstract it so I have it classically I" }, { "end": 98.03999999999999, "start": 88.12, "text": " have it here my nifty thing here so we can analyze it so this paper if you read" }, { "end": 105, "start": 98.03999999999999, "text": " the abstract it does sound similar right despite alarm over the reliance of" }, { "end": 108.12, "start": 105, "text": " union learning systems blah blah blah blah spurious correlations so it talks" }, { "end": 114, "start": 108.12, "text": " about the same problems now what do they say given documents and their initial" }, { "end": 119.08000000000001, "start": 114, "text": " labels we task humans with revising each document so that it accords with a" }, { "end": 123.36, "start": 119.08000000000001, "text": " counterfactual target label retains internal coherence and avoids" }, { "end": 129.64000000000001, "start": 123.36, "text": " unnecessary changes right so this sounds very similar to what these contrast sets" }, { "end": 135.8, "start": 129.64000000000001, "text": " do so the counterfactual target label would be the necessary of the" }, { "end": 143.36, "start": 135.8, "text": " contrast set to change the label retains internal coherence which is the in the" }, { "end": 148.92000000000002, "start": 143.36, "text": " contrast that this simply given by it supposed to conform to the intent of the" }, { "end": 154.52, "start": 148.92000000000002, "text": " data set makers which the intent probably includes internal coherence and" }, { "end": 160.12, "start": 154.52, "text": " it avoids unnecessary changes that conforms to the contrast set only" }, { "end": 166.72, "start": 160.12, "text": " searching in the local environment of a test set sample so you see that the" }, { "end": 174.04, "start": 166.72, "text": " definition of these is pretty similar then we go on and say they say class" }, { "end": 177.16, "start": 174.04, "text": " first trained on original 
data fail on their counterfactually revised" }, { "end": 180.72, "start": 177.16, "text": " counterparts and vice versa this experiment was also done by the" }, { "end": 186.56, "start": 180.72, "text": " contrast that paper and then they say class first trained on combined data" }, { "end": 190.44, "start": 186.56, "text": " sets performed remarkably well just chive those specialized in either" }, { "end": 197.76, "start": 190.44, "text": " domain so immediately we see some differences as well right the main" }, { "end": 203.92000000000002, "start": 197.76, "text": " difference I see is they say we task humans and then they train on the the" }, { "end": 208.24, "start": 203.92000000000002, "text": " train on the counterfactually revised counterparts which probably means they" }, { "end": 213.28, "start": 208.24, "text": " use some mechanical Turks here when they say humans because if you want to create" }, { "end": 218.48, "start": 213.28, "text": " a training data set you need lots of data so they probably take a data set" }, { "end": 222.24, "start": 218.48, "text": " and run its training data set again through something like mechanical Turk" }, { "end": 230.72, "start": 222.24, "text": " to get annotations this is exactly what the people of the of the contrast sets" }, { "end": 237.88, "start": 230.72, "text": " claim is wrong with the current pipeline and they so here we have this this thing" }, { "end": 243.2, "start": 237.88, "text": " counterfactually augmented stuff so the contrast sets what they say is we" }, { "end": 248.44, "start": 243.2, "text": " actually need the experts to do this that this the these humans are exactly" }, { "end": 255, "start": 248.44, "text": " the wrong people to make the data sets so it has the CFA has some elements" }, { "end": 260.8, "start": 255, "text": " correctly the same namely how they construct these labels but who construct" }, { "end": 266.24, "start": 260.8, "text": " the labels and for what reason so here it's experts and for what reason it's" }, { "end": 271.91999999999996, "start": 266.24, "text": " for testing it's they say the experts that make the data set should provide an" }, { "end": 280.44, "start": 271.92, "text": " additional contrast test set so this is I mean if if this is just my opinion if" }, { "end": 285.04, "start": 280.44, "text": " this is the same idea of course it's very similar but if this counts as the" }, { "end": 290.20000000000005, "start": 285.04, "text": " same idea then 95% of all research counts as the same idea as something" }, { "end": 294.88, "start": 290.20000000000005, "text": " that Jürgen Schmidhuber has done in the 1990s which of course Jürgen Schmidhuber" }, { "end": 304, "start": 294.88, "text": " will eloquently argue exactly he invented GANs basically the same thing" }, { "end": 310.52, "start": 304, "text": " so yeah so if this is it's not the same like I have to say this is very close" }, { "end": 316.86, "start": 310.52, "text": " but it's not as I understand they even cited the other ones so then the" }, { "end": 321.15999999999997, "start": 316.86, "text": " bickering starts and this is funny I'm like this this is just funny to me so" }, { "end": 326.40000000000003, "start": 321.16, "text": " Zach Lippen jumps here it says this has been published has a name and a clearer" }, { "end": 330.48, "start": 326.40000000000003, "text": " justification it's called contractual augmented data here is the published" }, { "end": 336.52000000000004, "start": 330.48, "text": " paper we just looked at this 
right and then Matt Gardner answers and he says" }, { "end": 345.64000000000004, "start": 336.52000000000004, "text": " Zach and Divyansh work is excellent recommend you all go look at it our work" }, { "end": 350.84000000000003, "start": 345.64000000000004, "text": " provides a different concurrent take on similar issues right and I think here" }, { "end": 356.91999999999996, "start": 350.84, "text": " someone comments that so he says it is in the related work section although" }, { "end": 362.35999999999996, "start": 356.91999999999996, "text": " mischaracterized and misattributed as contemporary work so position is really" }, { "end": 369.47999999999996, "start": 362.35999999999996, "text": " that it is kind of a stolen idea and they were apparently in contact with" }, { "end": 374.55999999999995, "start": 369.47999999999996, "text": " each other during that so this Matt Gardner here says what the differences" }, { "end": 379.64, "start": 374.55999999999995, "text": " are he says we take a geometrical view we demonstrate such a wider variety I" }, { "end": 383.32, "start": 379.64, "text": " mean for all intents and purposes if you go through any of the research go to" }, { "end": 388.71999999999997, "start": 383.32, "text": " computer vision go to NLP you'll find like the exact I have like I have I" }, { "end": 396.96, "start": 388.71999999999997, "text": " review two papers each year that want to produce data that better defines the" }, { "end": 401.96, "start": 396.96, "text": " decision boundary like these people here I mean this is this idea is just get" }, { "end": 409.2, "start": 401.96, "text": " rehashed over and over in the slightly different form these two are" }, { "end": 414.47999999999996, "start": 409.2, "text": " particularly close but and then see how they pick our paper was finished two" }, { "end": 423.15999999999997, "start": 414.47999999999996, "text": " months after theirs and then they say we started the project well before and so" }, { "end": 434.24, "start": 423.15999999999997, "text": " on why do we feel defensive and then he answers again with this is absolutely" }, { "end": 438.91999999999996, "start": 434.24, "text": " false our paper was drafted in July your paper was finished the night before the" }, { "end": 443.8, "start": 438.92, "text": " ACL deadline this is not two months ago but a half a year it is nothing to do" }, { "end": 450.08000000000004, "start": 443.8, "text": " it says why do you presume to know when we started drop the nonsense we did this" }, { "end": 454.88, "start": 450.08000000000004, "text": " work in May 2019 present the public results in July posted it's a better" }, { "end": 460.32, "start": 454.88, "text": " drop the posturing so much of what you're doing here is the very cancer in" }, { "end": 468.76, "start": 460.32, "text": " the system I mean I agree just you know slightly refining ideas that previously" }, { "end": 473.71999999999997, "start": 468.76, "text": " were there is very bad problem in academia so this is actually correct to" }, { "end": 478, "start": 473.71999999999997, "text": " point out but I don't think that this particular instance is particularly bad" }, { "end": 480.71999999999997, "start": 478, "text": " and then he says I'm afraid you're simply mistaken I have a history of" }, { "end": 485.24, "start": 480.71999999999997, "text": " publishing similar so I've I've something like the last thing I say I" }, { "end": 492.32, "start": 485.24, "text": " just invite you to read this beautiful but the last thing to 
say here if if" }, { "end": 499.15999999999997, "start": 492.32, "text": " this counterfactually augmented data if this is in fact the first instance of" }, { "end": 505.08, "start": 499.15999999999997, "text": " this general idea to produce counterfactually augmented data that" }, { "end": 512.4, "start": 505.08, "text": " that that does actually fulfill these criteria I would be extremely surprised" }, { "end": 517.96, "start": 512.4, "text": " because this is nothing to do with deep learning right and the real novelty in" }, { "end": 523.48, "start": 517.96, "text": " her field is mostly deep learning so I'm pretty sure someone must have thought of" }, { "end": 529.48, "start": 523.48, "text": " something like this when everyone was just doing grammars and manual features" }, { "end": 536.5600000000001, "start": 529.48, "text": " and things like this so I'm I would be extremely surprised if this hasn't been" }, { "end": 541.2, "start": 536.5600000000001, "text": " there in one form or another and why the authors of that shouldn't make exactly" }, { "end": 545.72, "start": 541.2, "text": " the same argument that being said it is fairly close like that the fun part here" }, { "end": 551.76, "start": 545.72, "text": " is that it is actually a fairly similar idea except after so the idea itself is" }, { "end": 558.12, "start": 551.76, "text": " fairly similar but here the focus is on different things and it's also on" }, { "end": 563.36, "start": 558.12, "text": " different data sets and I believe yeah as I said 95% of research falls into" }, { "end": 576.88, "start": 563.36, "text": " exactly this category so much fun check it out yeah bye bye" } ]
qeEO2GECQk0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Evaluating NLP Models via Contrast Sets
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "arxiv", "attention", "evaluation", "cheat", "easy", "hard", "adversarial", "counterfactual", "hand-crafted", "test set", "supervised" ]
Current NLP models are often "cheating" on supervised learning tasks by exploiting correlations that arise from the particularities of the dataset. Therefore they often fail to learn the original intent of the dataset creators. This paper argues that NLP models should be evaluated on Contrast Sets, which are hand-crafted perturbations by the dataset authors that capture their intent in a meaningful way. https://arxiv.org/abs/2004.02709 Abstract: Standard test sets for supervised learning evaluate in-distribution generalization. Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities. We propose a new annotation paradigm for NLP that helps to close systematic gaps in the test data. In particular, after a dataset is constructed, we recommend that the dataset authors manually perturb the test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets. Contrast sets provide a local view of a model's decision boundary, which can be used to more accurately evaluate a model's true linguistic capabilities. We demonstrate the efficacy of contrast sets by creating them for 10 diverse NLP datasets (e.g., DROP reading comprehension, UD parsing, IMDb sentiment analysis). Although our contrast sets are not explicitly adversarial, model performance is significantly lower on them than on the original test sets---up to 25\% in some cases. We release our contrast sets as new evaluation benchmarks and encourage future dataset construction efforts to follow similar annotation processes. Authors: Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, Ben Zhou Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at evaluating NLP models via contrast sets. These are too many authors from too many places for me to read out. We'll just jump right into the problem. What is the problem? Let's jump into the solution. Here you see a visual question answering task. Visual question answering in this case. You have two pictures right here. Picture one, picture two and a sentence. Two similarly colored and similarly posed chow dogs are face-to-face in one image. I guess the task here is to have the system answer. Is this correct or incorrect? As you see here I believe that's a correct statement. Or you're maybe tasked to ask which is the image that this applies to. Is it image one or image two? Of course here it's image one. The problem with such systems is that there are a lot of easy things that the models can do that will usually get them the answer. What we like to imagine is that the model will look at this and recognize that this is a dog here. This is a dog. Here is its face and this is a dog and here is its face. It will see there's a count. There's two of them. There's two of them. There's a notion of face and there's notion of pose and so on. Usually there are tricks that the models can do to get this easier. For example I know that in a particular visual question answering system whenever there is a question of what is the ground covered in or something like this. The answer is always snow. You don't even have to look at the image. Similarly there are a lot of these kind of tricks that the models learn and the authors recognize correctly that this is mostly a data set problem. Usually what you do in these data sets is you have an image that you scrape from the web or something and it has some mountains and then there's snow on the mountains, on the ground. You give this to a bunch of mechanical turks or someone like a raider and you instruct them. You produce a question to this image. You give them a couple of examples and they're usually kind of lazy and they will just look at it and be like what questions could I ask? You need to ask something. Usually the instructions are it must be visual and it must maybe be answerable with a one word answer or something like this. Or it must be a multiple choice question. There are these number of instructions and they will usually be like what's kind of special about this picture? There's snow so I'm gonna ask about that. Snow right? The problem is mainly the process of data set generation. That will lead to biases and easy solutions for the models where the models will simply learn statistical correlations between things and the intention. We have a big divergence between the intention of what the data set creators want. The intention is in this case is visual understanding, visual of the world. There's a big difference between this and between how the data set is really constructed. The authors are trying to address this with what they call contrast sets. They say you get out of this process a data set. You get a training data set and a test data set. Maybe here a smaller test data set. What they say is what we should do is we should additionally have these things called contrast sets. This is train and this is test. Usually these two come from the same distribution. You simply make them and then you split them somehow and you take the test from the train. But these here are not from the same distribution. This is the contrast. What they argue is that the authors of the data set should create the contrast set. 
You see that there's a split here where the data set comes from. They argue that the authors of the data set with knowing what intention they have, they should create the contrast data set manually by hand in order to make really hard examples that show what they want out of a system. They capture this here in their example. If we go back to the example, here are things. They suggest to do this via perturbations. What they would do is they would start at this example up here. They would start and they would perturb it textually or via image. They would perturb it to make it change the gold label. This is different from adversarial examples. In adversarial examples you would want to perturb a sample such that it is still the same but to the classifier it's different. Here you have the opposite gold. You want to make something that is means kind of the opposite but you want to test whether your classifier can pick up on that. In this case the one example would be two similarly colored and similarly posed cats instead of dogs are face to face in one image. That would change the label. Whereas before the answer was yes that's a correct sentence. Now it's no that's not a correct sentence. There are no cats in these images. Also here three similarly colored dogs. The intention of the authors, you have to view it through this lens, the intention here is that the system can recognize the species of the entities in the images. The system can count and the system can compare right compare in this case colors. You want to kind of make perturbations on these attributes from a test image. You can also think about image perturbations where you keep the sentence but you modify the image such that there are still two dogs and they're still facing each other. But they're not similarly colored anymore. So the similarly colored here would be the attribute that where before it was true now it's false with the new image. You get the gist that the people that created the data set that know their intention will create manually these samples. The authors they propose a new metric to track this but essentially the authors propose how well the models do on these contrast sets will be a reflection. It should be kind of an additional thing that people do with their NLP models. Alright so you get the picture. That is I believe the entire gist of this paper and I have some problems. First of all here they say alright let's give a toy example in two dimensions. Say you have this data set right and the red one is the correct decision boundary right and you want to capture that but because you only have limited training data and because you in in this generation processes you have systematic biases. So if we had non-systematic biases we would think that okay we maybe pick this and this one and this one here and this one here and this one here right. We don't get all of them but we kind of get an IID sample right. That wouldn't be so much of a problem. You could still kind of recover the decision boundary but because we have systematic biases the authors argue we actually introduce biases. So the systematic bias here is that we would of the blue ones we would only capture things on this sorry on the on this layer up here and of the red ones orange ones we'd only capture things of the level down here and thereby we introduce the kind of data set, the bias. It doesn't require this complex decision boundary anymore. 
Right and if we now the problem is if we collect the data set like this and we simply say well these ones are the test set and these ones are the train set right it will generalize well to the test set but it will not generalize well to what we actually want and therefore the authors say well if we introduce these contrast sets here then you see that the decision boundary that we found will not perform well on these contrast sets right. So they would say we take one example of the test set right. This is you can see this is this example right here and we would perturb it to make it change its label or in this case one of them where the label remains. We would kind of perturb it meaningfully and yeah so as I said I have multiple problems with this. First 2D toy examples very very bad for NLP models. First of all low-dimensional intuition does not generalize to high-dimensional intuition like very very little. Second of all even worse usually these NLP models have so many parameters much more parameters than you have data set which means that your decision boundary is incidentally going to be simple even if you had all the data you could possibly want. It's just a very different kind of problem and then the next problem is if even with by doing this contrast set and you already see it here right you already see it you can only kind of bicker about the data okay but with the contrast that you only really capture this one aspect so if that was actually well adhered to you could measure very locally whether or not this this would work or not and the ability to come up with meaningful contrast sets to ever capture what the model is doing is almost impossible because you have to create them manually and then you suggest that the authors themselves make these contrast sets. 
Remember the authors are the ones that gave these instructions right these instructions right here the authors provided them to the to the data set annotators so the authors will probably be even more biased if they have to do their own right if they have to now create their own contrast examples they will probably even though they know their intention they will probably be like more biased than if you at least this here at least this here is a distributed process across people right so you get things that you wouldn't have thought of but if just the three authors of the date of the paper make the contrast examples I would argue that that's an even more biased measure often so all of this it just strikes me as as the paper is basically saying let's try on a few things and I think the fundamental problem is much much deeper and it goes with this intention part like I get it the the visual question answering data set doesn't capture the doesn't capture what you want it doesn't make the model suddenly understand that there are dogs and there are species of animal and so on it simply makes it correlate things but that's what deep learning and especially NLP does so right it's like it's like saying you you build a build an image net classifier and it can't fly and identify if I try it on my tests that it requires my computer to fly and my image net model can't do this then it doesn't serve my intention right and I mean it's it's a crass example but ultimately you the correct approach should be to better encapsulate your intention into the data set generating process and then correctly interpreting the results that mean okay on this data set as far as we can tell the way we created it this is the performance of the model it doesn't the model will never learn to fulfill your intention and I get it that's what you're saying but still even with this contrast set I think it's a really bad measure to formally propose it's I think you should much more propose how is the data set generating process different from what you want and what are the limitations there right and so that's that that I think that will lead to much more meaningful meaningful results than simply the authors providing a few manually put examples that they feel capture their intention it will not will not the reason we do deep learning instead of straightforward if else programming is because we cannot capture even our intentions and therefore data set generation is the only is the only method we have so to say all right so ultimately I believe these these whole NLP especially the visual question answering and so on the natural language understanding part needs to have a grounding so ultimately I think grounding grounded NLP it means basically that you're not only doing NLP which is simply you take text and you take images and you correlate them somehow right you just make a statistical connection grounded NLP models is the hope that you could build something that actually understands the world understands that there's entities that is interacted there's something like a pose that there is something like what the color means right what a dog is and so on and as entities I think we're not there yet and I think that will be the ultimate solution to these kind of tasks not not any sort of local very local very low dimensional perturbation I mean yeah let's say you create a contrast set you will be able to capture one tiny little bit of your intention one tiny little bit even though you know your intention you will capture a tiny little 
bit all of the thousand other degrees of freedom of your own intention you won't be able in there to capture in the contrast set I guarantee you all right that was my quarrels with that I invite you to read the whole paper they actually do this for NLP datasets it's a lot of work and they show that the models perform much worse on their contrast sets and interestingly the humans don't the humans are able to solve the contrast set of course of course because you tell the humans what the task is right that's like humans succeed on contrasts at like how surprising what you should do is you should just provide the humans with the data set not tell them what the task is even worse just provide them with the encoded data set like not the text itself but actually the token IDs right and then and then make them do the thing and the humans will just as well make a statistical correlation between the tokens and the images or whatnot and the humans will fail just as well on the test on these contrast sets because the humans maybe they'll figure out what the task is but probably not so humans succeed on contrasts at how surprising you tell them the intention while you don't tell it to the model yes I see critical but yeah please read the paper it's an interesting paper and with that goodbye
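As a rough sketch of how the proposed evaluation could look in code: each original test example is grouped with its author-written perturbations, and besides plain accuracy one can report how often a model gets an entire contrast set right, which is the consistency-style metric the transcript alludes to. The class names, the predict() interface, and the toy image-caption sentences are assumptions for illustration, not the authors' released code.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    text: str
    label: str


@dataclass
class ContrastSet:
    original: Example
    perturbations: List[Example]    # small, meaningful edits that (typically) change the gold label

    def members(self) -> List[Example]:
        return [self.original] + self.perturbations


def evaluate(predict: Callable[[str], str], sets: List[ContrastSet]) -> dict:
    per_example, consistent = [], 0
    for cs in sets:
        correct = [predict(ex.text) == ex.label for ex in cs.members()]
        per_example.extend(correct)
        consistent += all(correct)   # the whole set must be solved to count as consistent
    return {
        "contrast_accuracy": sum(per_example) / len(per_example),
        "contrast_consistency": consistent / len(sets),
    }


if __name__ == "__main__":
    sets = [ContrastSet(
        Example("Two similarly colored chow dogs are face to face in one image.", "true"),
        [Example("Two similarly colored cats are face to face in one image.", "false"),
         Example("Three similarly colored chow dogs are face to face in one image.", "false")],
    )]
    majority_baseline = lambda text: "true"   # stand-in for a trained model
    print(evaluate(majority_baseline, sets))
```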
[ { "end": 5.68, "start": 0, "text": " Hi there! Today we're looking at evaluating NLP models via contrast sets." }, { "end": 12.8, "start": 5.68, "text": " These are too many authors from too many places for me to read out." }, { "end": 22.32, "start": 12.8, "text": " We'll just jump right into the problem. What is the problem? Let's jump into" }, { "end": 28.92, "start": 22.32, "text": " the solution. Here you see a visual question answering task. Visual question" }, { "end": 34.32, "start": 28.92, "text": " answering in this case. You have two pictures right here. Picture one, picture" }, { "end": 42.6, "start": 34.32, "text": " two and a sentence. Two similarly colored and similarly posed chow dogs are" }, { "end": 51.72, "start": 42.6, "text": " face-to-face in one image. I guess the task here is to have the" }, { "end": 57.68000000000001, "start": 51.72, "text": " system answer. Is this correct or incorrect? As you see here I believe" }, { "end": 65.48, "start": 57.68, "text": " that's a correct statement. Or you're maybe tasked to ask which is the" }, { "end": 70.2, "start": 65.48, "text": " image that this applies to. Is it image one or image two? Of course" }, { "end": 78.52, "start": 70.2, "text": " here it's image one. The problem with such systems is that there are a" }, { "end": 84.16, "start": 78.52, "text": " lot of easy things that the models can do that will usually get them the" }, { "end": 89, "start": 84.16, "text": " answer. What we like to imagine is that the model will look at this and recognize" }, { "end": 94.39999999999999, "start": 89, "text": " that this is a dog here. This is a dog. Here is its face and this is a dog and" }, { "end": 100.47999999999999, "start": 94.39999999999999, "text": " here is its face. It will see there's a count. There's two of them." }, { "end": 110.19999999999999, "start": 100.47999999999999, "text": " There's two of them. There's a notion of face and there's notion of pose and so" }, { "end": 117.72, "start": 110.2, "text": " on. Usually there are tricks that the models can do to get this easier." }, { "end": 122.24000000000001, "start": 117.72, "text": " For example I know that in a particular visual question answering system" }, { "end": 135.12, "start": 122.24000000000001, "text": " whenever there is a question of what is the ground covered in or something like" }, { "end": 142.64000000000001, "start": 135.12, "text": " this. The answer is always snow. You don't even have to look at the image." }, { "end": 148.88, "start": 142.64000000000001, "text": " Similarly there are a lot of these kind of tricks that the models learn and the" }, { "end": 154.20000000000002, "start": 148.88, "text": " authors recognize correctly that this is mostly a data set problem." }, { "end": 160.08, "start": 154.20000000000002, "text": " Usually what you do in these data sets is you have an image" }, { "end": 163.8, "start": 160.08, "text": " that you scrape from the web or something" }, { "end": 170.92000000000002, "start": 163.8, "text": " and it has some mountains and then there's snow on the mountains, on the ground." }, { "end": 181, "start": 170.92000000000002, "text": " You give this to a bunch of mechanical turks or someone like a raider and you" }, { "end": 186.36, "start": 181, "text": " instruct them. You produce a question to this image. 
You give them a couple of" }, { "end": 190.92000000000002, "start": 186.36, "text": " examples and they're usually kind of lazy and they will just look at it and" }, { "end": 196.72, "start": 190.92, "text": " be like what questions could I ask? You need to ask something." }, { "end": 204.32, "start": 196.72, "text": " Usually the instructions are it must be visual and it must maybe be answerable" }, { "end": 210.79999999999998, "start": 204.32, "text": " with a one word answer or something like this. Or it must be a" }, { "end": 214.79999999999998, "start": 210.79999999999998, "text": " multiple choice question. There are these number of instructions and they will" }, { "end": 218.92, "start": 214.79999999999998, "text": " usually be like what's kind of special about this picture? There's snow" }, { "end": 228.44, "start": 218.92, "text": " so I'm gonna ask about that. Snow right? The problem is mainly the process" }, { "end": 235.64, "start": 228.44, "text": " of data set generation. That will lead to biases and easy" }, { "end": 240.16, "start": 235.64, "text": " solutions for the models where the models will simply learn" }, { "end": 245.72, "start": 240.16, "text": " statistical correlations between things and the intention. We have a big" }, { "end": 257.64, "start": 245.72, "text": " divergence between the intention of what the data set creators" }, { "end": 268.88, "start": 257.64, "text": " want. The intention is in this case is visual understanding, visual of the" }, { "end": 275.96, "start": 268.88, "text": " world. There's a big difference between this and between how the data" }, { "end": 282.68, "start": 275.96, "text": " set is really constructed. The authors are trying to address this with" }, { "end": 287.64, "start": 282.68, "text": " what they call contrast sets. They say you get out of this process" }, { "end": 292.56, "start": 287.64, "text": " a data set. You get a training data set and a test data set." }, { "end": 298.68, "start": 292.56, "text": " Maybe here a smaller test data set. What they say is what we should do is we" }, { "end": 306.88, "start": 298.68, "text": " should additionally have these things called contrast sets. This is train" }, { "end": 313.56, "start": 306.88, "text": " and this is test. Usually these two come from the same distribution. You" }, { "end": 318.76, "start": 313.56, "text": " simply make them and then you split them somehow and you take the test from the" }, { "end": 326.15999999999997, "start": 318.76, "text": " train. But these here are not from the same distribution. This is the contrast." }, { "end": 334.08, "start": 326.15999999999997, "text": " What they argue is that the authors of the data set should create the contrast" }, { "end": 341.08, "start": 334.08, "text": " set. You see that there's a split here where the data set comes from." }, { "end": 345.64, "start": 341.08, "text": " They argue that the authors of the data set with knowing what intention they" }, { "end": 351.88, "start": 345.64, "text": " have, they should create the contrast data set manually by hand in order to" }, { "end": 357.71999999999997, "start": 351.88, "text": " make really hard examples that show what they want out of a system." }, { "end": 364.96, "start": 357.71999999999997, "text": " They capture this here in their example. If we go back to the example, here" }, { "end": 371.74, "start": 364.96, "text": " are things. They suggest to do this via perturbations. 
What they would do" }, { "end": 377.56, "start": 371.74, "text": " is they would start at this example up here. They would start and they would" }, { "end": 386.56, "start": 377.56, "text": " perturb it textually or via image. They would perturb it to make it change" }, { "end": 391.28000000000003, "start": 386.56, "text": " the gold label. This is different from adversarial examples. In" }, { "end": 397.40000000000003, "start": 391.28000000000003, "text": " adversarial examples you would want to perturb a sample such that it is still" }, { "end": 402.03999999999996, "start": 397.4, "text": " the same but to the classifier it's different. Here you have the opposite gold." }, { "end": 408.67999999999995, "start": 402.03999999999996, "text": " You want to make something that is means kind of the opposite but you want to" }, { "end": 414.12, "start": 408.67999999999995, "text": " test whether your classifier can pick up on that. In this case the one example" }, { "end": 418.28, "start": 414.12, "text": " would be two similarly colored and similarly posed cats instead of dogs" }, { "end": 423.59999999999997, "start": 418.28, "text": " are face to face in one image. That would change the label. Whereas" }, { "end": 429.16, "start": 423.6, "text": " before the answer was yes that's a correct sentence. Now it's no that's not" }, { "end": 435.44, "start": 429.16, "text": " a correct sentence. There are no cats in these images. Also here three similarly" }, { "end": 440.28000000000003, "start": 435.44, "text": " colored dogs. The intention of the authors, you have to view it through" }, { "end": 446.92, "start": 440.28000000000003, "text": " this lens, the intention here is that the system can recognize the species of" }, { "end": 454.04, "start": 446.92, "text": " the entities in the images. The system can count and the system can compare" }, { "end": 460.08000000000004, "start": 454.04, "text": " right compare in this case colors. You want to kind of make perturbations on" }, { "end": 465.36, "start": 460.08000000000004, "text": " these attributes from a test image. You can also think about image" }, { "end": 471.08000000000004, "start": 465.36, "text": " perturbations where you keep the sentence but you modify the image such" }, { "end": 475.84000000000003, "start": 471.08000000000004, "text": " that there are still two dogs and they're still facing each other." }, { "end": 481.59999999999997, "start": 475.84, "text": " But they're not similarly colored anymore. So the similarly" }, { "end": 489.23999999999995, "start": 481.59999999999997, "text": " colored here would be the attribute that where before it was true now it's false" }, { "end": 495.28, "start": 489.23999999999995, "text": " with the new image. You get the gist that the people that created the" }, { "end": 503.2, "start": 495.28, "text": " data set that know their intention will create manually these samples. The" }, { "end": 508.64, "start": 503.2, "text": " authors they propose a new metric to track this but essentially the authors" }, { "end": 515.04, "start": 508.64, "text": " propose how well the models do on these contrast sets will be a reflection." }, { "end": 521.16, "start": 515.04, "text": " It should be kind of an additional thing that people do with their NLP" }, { "end": 530.24, "start": 521.16, "text": " models. Alright so you get the picture. That is I believe the entire gist of" }, { "end": 540.04, "start": 530.24, "text": " this paper and I have some problems. 
First of all here they say alright let's" }, { "end": 544.6, "start": 540.04, "text": " give a toy example in two dimensions. Say you have this data set right and the red" }, { "end": 549.16, "start": 544.6, "text": " one is the correct decision boundary right and you want to capture that but" }, { "end": 555.48, "start": 549.16, "text": " because you only have limited training data and because you in in this" }, { "end": 562.6, "start": 555.48, "text": " generation processes you have systematic biases. So if we had non-systematic" }, { "end": 569.32, "start": 562.6, "text": " biases we would think that okay we maybe pick this and this one and this one here" }, { "end": 573.16, "start": 569.32, "text": " and this one here and this one here right. We don't get all of them but we" }, { "end": 577.4, "start": 573.16, "text": " kind of get an IID sample right. That wouldn't be so much of a problem. You" }, { "end": 580.88, "start": 577.4, "text": " could still kind of recover the decision boundary but because we have" }, { "end": 588.96, "start": 580.88, "text": " systematic biases the authors argue we actually introduce biases. So the" }, { "end": 594.04, "start": 588.96, "text": " systematic bias here is that we would of the blue ones we would only capture" }, { "end": 603.04, "start": 594.04, "text": " things on this sorry on the on this layer up here and of the red ones orange" }, { "end": 608.72, "start": 603.04, "text": " ones we'd only capture things of the level down here and thereby we introduce" }, { "end": 615.64, "start": 608.72, "text": " the kind of data set, the bias. It doesn't require this complex decision" }, { "end": 623.6, "start": 615.64, "text": " boundary anymore. Right and if we now the problem is if we collect the data set" }, { "end": 628.9200000000001, "start": 623.6, "text": " like this and we simply say well these ones are the test set and these ones" }, { "end": 633.12, "start": 628.9200000000001, "text": " are the train set right it will generalize well to the test set but it" }, { "end": 640.4, "start": 633.12, "text": " will not generalize well to what we actually want and therefore the authors" }, { "end": 645.84, "start": 640.4, "text": " say well if we introduce these contrast sets here then you see that the decision" }, { "end": 652.96, "start": 645.84, "text": " boundary that we found will not perform well on these contrast sets right. So" }, { "end": 659.12, "start": 652.96, "text": " they would say we take one example of the test set right. This is you can see" }, { "end": 665.36, "start": 659.12, "text": " this is this example right here and we would perturb it to make it change its" }, { "end": 670.8, "start": 665.36, "text": " label or in this case one of them where the label remains. We would kind of" }, { "end": 678.76, "start": 670.8, "text": " perturb it meaningfully and yeah so as I said I have multiple problems with this." }, { "end": 687.12, "start": 678.76, "text": " First 2D toy examples very very bad for NLP models. First of all low-dimensional" }, { "end": 692.2, "start": 687.12, "text": " intuition does not generalize to high-dimensional intuition like very very" }, { "end": 699.8, "start": 692.2, "text": " little. 
Second of all even worse usually these NLP models have so many parameters" }, { "end": 704.84, "start": 699.8, "text": " much more parameters than you have data set which means that your decision" }, { "end": 710.92, "start": 704.84, "text": " boundary is incidentally going to be simple even if you had all the data you" }, { "end": 719.8399999999999, "start": 710.92, "text": " could possibly want. It's just a very different kind of problem and then the" }, { "end": 727.68, "start": 719.8399999999999, "text": " next problem is if even with by doing this contrast set and you already see it" }, { "end": 733.18, "start": 727.68, "text": " here right you already see it you can only kind of bicker about the data okay" }, { "end": 737.68, "start": 733.18, "text": " but with the contrast that you only really capture this one aspect so if" }, { "end": 746.1999999999999, "start": 737.68, "text": " that was actually well adhered to you could measure very locally whether or" }, { "end": 752.16, "start": 746.1999999999999, "text": " not this this would work or not and the ability to come up with meaningful" }, { "end": 758.16, "start": 752.16, "text": " contrast sets to ever capture what the model is doing is almost impossible" }, { "end": 764.7199999999999, "start": 758.16, "text": " because you have to create them manually and then you suggest that the authors" }, { "end": 769.64, "start": 764.72, "text": " themselves make these contrast sets. Remember the authors are the ones that" }, { "end": 774.28, "start": 769.64, "text": " gave these instructions right these instructions right here the authors" }, { "end": 782.48, "start": 774.28, "text": " provided them to the to the data set annotators so the authors will probably" }, { "end": 787.72, "start": 782.48, "text": " be even more biased if they have to do their own right if they have to now" }, { "end": 793.4, "start": 787.72, "text": " create their own contrast examples they will probably even though they know" }, { "end": 799.0799999999999, "start": 793.4, "text": " their intention they will probably be like more biased than if you at least" }, { "end": 803.4399999999999, "start": 799.0799999999999, "text": " this here at least this here is a distributed process across people right" }, { "end": 807.36, "start": 803.4399999999999, "text": " so you get things that you wouldn't have thought of but if just the three authors" }, { "end": 811.28, "start": 807.36, "text": " of the date of the paper make the contrast examples I would argue that" }, { "end": 819.56, "start": 811.28, "text": " that's an even more biased measure often so all of this it just strikes me as as" }, { "end": 825.4799999999999, "start": 819.56, "text": " the paper is basically saying let's try on a few things and I think the" }, { "end": 831.4, "start": 825.4799999999999, "text": " fundamental problem is much much deeper and it goes with this intention part" }, { "end": 839.92, "start": 831.4, "text": " like I get it the the visual question answering data set doesn't capture the" }, { "end": 845.52, "start": 839.92, "text": " doesn't capture what you want it doesn't make the model suddenly understand that" }, { "end": 849.1999999999999, "start": 845.52, "text": " there are dogs and there are species of animal and so on it simply makes it" }, { "end": 855, "start": 849.2, "text": " correlate things but that's what deep learning and especially NLP does so" }, { "end": 861.72, "start": 855, "text": " right it's like it's like saying you you build a build an image net classifier" }, 
{ "end": 870.24, "start": 861.72, "text": " and it can't fly and identify if I try it on my tests that it requires my" }, { "end": 876.2, "start": 870.24, "text": " computer to fly and my image net model can't do this then it doesn't serve my" }, { "end": 883.12, "start": 876.2, "text": " intention right and I mean it's it's a crass example but ultimately you the" }, { "end": 889.8000000000001, "start": 883.12, "text": " correct approach should be to better encapsulate your intention into the" }, { "end": 894.76, "start": 889.8000000000001, "text": " data set generating process and then correctly interpreting the results that" }, { "end": 900.08, "start": 894.76, "text": " mean okay on this data set as far as we can tell the way we created it this is" }, { "end": 906.24, "start": 900.08, "text": " the performance of the model it doesn't the model will never learn to fulfill" }, { "end": 910.6, "start": 906.24, "text": " your intention and I get it that's what you're saying but still even with this" }, { "end": 919.2, "start": 910.6, "text": " contrast set I think it's a really bad measure to formally propose it's I think" }, { "end": 923.96, "start": 919.2, "text": " you should much more propose how is the data set generating process different" }, { "end": 931.4000000000001, "start": 923.96, "text": " from what you want and what are the limitations there right and so that's" }, { "end": 938.5600000000001, "start": 931.4000000000001, "text": " that that I think that will lead to much more meaningful meaningful results than" }, { "end": 943.9200000000001, "start": 938.5600000000001, "text": " simply the authors providing a few manually put examples that they feel" }, { "end": 948.76, "start": 943.9200000000001, "text": " capture their intention it will not will not the reason we do deep learning" }, { "end": 954.64, "start": 948.76, "text": " instead of straightforward if else programming is because we cannot" }, { "end": 961.2, "start": 954.64, "text": " capture even our intentions and therefore data set generation is the" }, { "end": 969.64, "start": 961.2, "text": " only is the only method we have so to say all right so ultimately I believe" }, { "end": 973.8, "start": 969.64, "text": " these these whole NLP especially the visual question answering and so on the" }, { "end": 980.5999999999999, "start": 973.8, "text": " natural language understanding part needs to have a grounding so ultimately I" }, { "end": 988.8399999999999, "start": 980.5999999999999, "text": " think grounding grounded NLP it means basically that you're not only doing NLP" }, { "end": 992.76, "start": 988.8399999999999, "text": " which is simply you take text and you take images and you correlate them" }, { "end": 997.92, "start": 992.76, "text": " somehow right you just make a statistical connection grounded NLP" }, { "end": 1001.92, "start": 997.92, "text": " models is the hope that you could build something that actually understands the" }, { "end": 1005.76, "start": 1001.92, "text": " world understands that there's entities that is interacted there's something" }, { "end": 1011, "start": 1005.76, "text": " like a pose that there is something like what the color means right what a dog is" }, { "end": 1017.4399999999999, "start": 1011, "text": " and so on and as entities I think we're not there yet and I think that will be" }, { "end": 1026.6, "start": 1017.4399999999999, "text": " the ultimate solution to these kind of tasks not not any sort of local very" }, { "end": 1032.08, "start": 1026.6, "text": " local very 
low dimensional perturbation I mean yeah let's say you create a" }, { "end": 1039.08, "start": 1032.08, "text": " contrast set you will be able to capture one tiny little bit of your intention" }, { "end": 1043.36, "start": 1039.08, "text": " one tiny little bit even though you know your intention you will capture a tiny" }, { "end": 1048.7199999999998, "start": 1043.36, "text": " little bit all of the thousand other degrees of freedom of your own intention" }, { "end": 1053.76, "start": 1048.7199999999998, "text": " you won't be able in there to capture in the contrast set I guarantee you all" }, { "end": 1058.96, "start": 1053.76, "text": " right that was my quarrels with that I invite you to read the whole paper they" }, { "end": 1065.4, "start": 1058.96, "text": " actually do this for NLP datasets it's a lot of work and they show that the" }, { "end": 1070.08, "start": 1065.4, "text": " models perform much worse on their contrast sets and interestingly the" }, { "end": 1073.8799999999999, "start": 1070.08, "text": " humans don't the humans are able to solve the contrast set of course of" }, { "end": 1080.36, "start": 1073.8799999999999, "text": " course because you tell the humans what the task is right that's like humans" }, { "end": 1087, "start": 1080.36, "text": " succeed on contrasts at like how surprising what you should do is you" }, { "end": 1091.8799999999999, "start": 1087, "text": " should just provide the humans with the data set not tell them what the task is" }, { "end": 1096.6399999999999, "start": 1091.8799999999999, "text": " even worse just provide them with the encoded data set like not the text" }, { "end": 1102.1599999999999, "start": 1096.6399999999999, "text": " itself but actually the token IDs right and then and then make them do the thing" }, { "end": 1107.56, "start": 1102.1599999999999, "text": " and the humans will just as well make a statistical correlation between the" }, { "end": 1113.32, "start": 1107.56, "text": " tokens and the images or whatnot and the humans will fail just as well on the" }, { "end": 1118.98, "start": 1113.32, "text": " test on these contrast sets because the humans maybe they'll figure out what the" }, { "end": 1124.32, "start": 1118.98, "text": " task is but probably not so humans succeed on contrasts at how surprising" }, { "end": 1131.6799999999998, "start": 1124.32, "text": " you tell them the intention while you don't tell it to the model yes I see" }, { "end": 1136.6, "start": 1131.6799999999998, "text": " critical but yeah please read the paper it's an interesting paper and with that" }, { "end": 1139.6, "start": 1136.6, "text": " goodbye" } ]
8wkgDnNxiVs
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and Solutions
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "evolution", "reinforcement learning", "neat", "open-ended", "never ending", "population", "bipedal walker" ]
From the makers of Go-Explore, POET is a mixture of ideas from novelty search, evolutionary methods, open-ended learning and curriculum learning. https://arxiv.org/abs/1901.01753 Abstract: While the history of machine learning so far largely encompasses a series of problems posed by researchers and algorithms that learn their solutions, an important question is whether the problems themselves can be generated by the algorithm at the same time as they are being solved. Such a process would in effect build its own diverse and expanding curricula, and the solutions to problems at various stages would become stepping stones towards solving even more challenging problems later in the process. The Paired Open-Ended Trailblazer (POET) algorithm introduced in this paper does just that: it pairs the generation of environmental challenges and the optimization of agents to solve those challenges. It simultaneously explores many different paths through the space of possible problems and solutions and, critically, allows these stepping-stone solutions to transfer between problems if better, catalyzing innovation. The term open-ended signifies the intriguing potential for algorithms like POET to continue to create novel and increasingly complex capabilities without bound. Our results show that POET produces a diverse range of sophisticated behaviors that solve a wide range of environmental challenges, many of which cannot be solved by direct optimization alone, or even through a direct-path curriculum-building control algorithm introduced to highlight the critical role of open-endedness in solving ambitious challenges. The ability to transfer solutions from one environment to another proves essential to unlocking the full potential of the system as a whole, demonstrating the unpredictable nature of fortuitous stepping stones. We hope that POET will inspire a new push towards open-ended discovery across many domains, where algorithms like POET can blaze a trail through their interesting possible manifestations and solutions. Authors: Rui Wang, Joel Lehman, Jeff Clune, Kenneth O. Stanley Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Alright, so what you're seeing here are solutions found to this bipedal walker problem by a new algorithm called POET. As you might guess, the challenge is to keep this little thing here walking to the right as far as you can while it encounters various obstacles. It is, and remains, a challenging reinforcement learning problem to have an agent learn to overcome various obstacles and walk well in different environments. So the paper we're going to look at is called POET. It's by Uber Engineering, and the full title is the Paired Open-Ended Trailblazer: Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions, by Rui Wang, Joel Lehman, Jeff Clune and Kenneth O. Stanley, as I said, from Uber AI Labs. As you already saw, the challenge they take on is this bipedal walker problem. Their method is very general and not limited to this problem, but it is the problem they focus on. I'm going to skip some of the explanations here and dig right into the problem. As you can see, the problem is the following. You have this thing here, which is the walker; it has two legs and, specifically, four joints. The four joints are here and here, and here and here, and you can apply torque to all four of them, so it's basically a four-output problem. And you do have sensors as input. The input, I believe, is a LIDAR; the LIDAR is this red line you see here, and I think there are 16 of those rays at various angles. It also has pressure detection on the feet, I believe, to tell whether or not they are in contact with the ground, and it might also have a gyroscope that tells you the angle of the head with respect to the ground. So you have various sensors on this thing, you're able to control what the legs are doing, and your goal is to make it go as far to the right as fast as possible. You see the reward down here: it is negative 100 if the robot falls over, that is, if the head hits the ground. Otherwise it is 130 times delta x, which is how far you go to the right, minus the hull angle; the hull angle, as I said, is this angle here. You want to keep it as stable as possible, because if the angle changes from step to step, you get penalized. You also get penalized for each torque you apply, so you want to use minimal force on the joints while still going very far, but by far the most important term is to go to the right as far and as fast as you can. There is an end here somewhere, and if you reach it, you get a score above 230. They choose the threshold of 230 to define success: if the agent gets 230 or more, it has solved the environment; that's what they claim, from experience. As you see, the environment has various obstacles. There are holes you can fall into that you need to jump or step over, and there are these kinds of stumps, which can be of various heights, so this one is a bit shorter and this one a bit taller. The general terrain also has a roughness, which can range from very smooth to very rough. So this is a parameterized environment, and they are able to generate these environments from parameters. The goal now is to have an agent that walks well in any environment you can think of. Here on the left, for example, this one is very challenging, going down the stairs, and this one also isn't too easy because there is a gap here. There are five parameters to these environments. The first is the general roughness of the terrain.
That is, how many hills there are and how quickly they come. Then there are the stump lower bound and stump upper bound, I believe, so how high the stumps are, and also how wide the gaps are. With these parameters you control how difficult an environment is. So the straightforward thing to do is simply to sample environments and throw a reinforcement learning approach at them, and that usually doesn't work. I already want to show this before talking about what the algorithm is. This baseline is called evolution strategies, but you can think of it as just a straightforward optimization procedure: there is an agent and there is an environment, and you try to solve the environment using direct optimization. Evolution strategies are not your classic RL algorithm, but you can compare them to one; it's just that these people, I have a feeling, like the more esoteric learning algorithms. In any case, you see these environments here: large gap, rough surface, and so on. These are meant to be the showcase figures, so these two environments and also these environments here. The evolution strategy, the classic approach where you just optimize directly, gets very low scores on average, whereas POET gets very high scores, above the 230 threshold. So what's happening? If you try to solve these environments from scratch, you basically don't have much of a chance. Say you're here and you're trying to move to the right. You might learn how to do that, and you can see the from-scratch solution actually manages to move to the right. But as soon as you reach this point, you're at this gap and you just fall into it, because all you've learned so far is how to move right. What you would need to do is plan ahead, like POET does: you need to see that there is a gap, lift up a leg in advance, step over the gap here, and then do a little jump right here. This sequence of actions, this kind of planning ahead, is very difficult for a classic RL algorithm to learn, because you get reward for everything you do. Initially you get reward for moving to the right: that's 10 if you reach here, another 10 if you reach here, another 10 here and another 10 here. Whereas if you lift up your leg, that's like minus five, because you've changed this angle, and we saw that this gives negative reward. So a classic optimization algorithm will always fall into the hole, because that is where the immediate reward is, whereas you would have to perform a sequence of actions that doesn't give you reward right now but gives you more reward later. In order to learn this, we need a better algorithm than just straightforward optimization. Maybe I can explain it with a maze: here is the start, here is the goal, and there are walls, something like this. What you need to do is go around here, but a classic optimization algorithm always goes straight here, because that's ever so slightly closer to the goal, and then it gets stuck because it can't fathom that it needs to go around, that it needs to move farther away from the goal before it can get closer. These people have talked about this before, in things like open-ended learning and novelty search.
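Before getting into how POET addresses this, here is a minimal sketch to make the setup concrete: a container for the five-ish environment parameters and the shape of the per-step reward. The field names, value ranges, and all coefficients other than the 130 and the minus 100 mentioned above are my own placeholders, not the exact encoding used in the paper or in the underlying Gym environment (the gap bounds in particular are an assumption about how the fifth parameter is split).

from dataclasses import dataclass

@dataclass
class EnvParams:
    # The terrain knobs described above; names are shorthand, not the paper's encoding.
    roughness: float           # how hilly the ground is
    stump_height_low: float    # lower bound on stump height
    stump_height_high: float   # upper bound on stump height
    gap_width_low: float       # lower bound on gap width (assumed split)
    gap_width_high: float      # upper bound on gap width (assumed split)

def step_reward(delta_x, delta_hull_angle, torques, head_hit_ground):
    # Go right as far and as fast as possible, keep the hull stable,
    # use little torque, and never let the head touch the ground.
    if head_hit_ground:
        return -100.0
    reward = 130.0 * delta_x                            # progress to the right
    reward -= 5.0 * abs(delta_hull_angle)               # stability penalty (coefficient assumed)
    reward -= 0.00035 * sum(abs(t) for t in torques)    # torque penalty (coefficient assumed)
    return reward

A full episode then counts as solved when the accumulated reward exceeds the 230 threshold mentioned above.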
What you would want to do instead is gradually build up solutions that can explore the space: go here, then here, then here, and basically build up these solutions step by step. There are two components to what the POET algorithm does. The first component is curriculum learning. What does curriculum learning mean? It means that you start off with easy tasks and increasingly build up to more and more complex tasks. Let's say I have an environment here; at the beginning we just start off with a flat surface, and here is our little walker. We train it to move right on that, which should be doable with a classic approach. Then we gradually move to more difficult environments; maybe we make the terrain a bit rougher. An agent that can already walk to the right has a useful starting point; think of it as pre-training in NLP. You can then make things more and more challenging, and at some point you can build in a gap. So you build in one of these gaps, and since the agent already knows how to move to the right, it might actually learn to jump a small gap, if you make it small at the beginning, not like the very large one down here. If the gap is small, the agent might stumble over it by accident and then continuously learn to master it. That is the curriculum learning approach: from environment to environment you get harder and harder challenges, first flat, then rougher, then rough with a gap, and so on. The second ingredient of POET is what they call stepping-stone learning, or transfer learning, or something along those lines. Here you have to think of this not as a single agent optimizing, but as a population of agents. Say you do this curriculum learning and you're getting fairly good at rough terrains, rougher and rougher terrains. In parallel, you also have a second optimization procedure: it also starts out flat, but in this branch you keep the terrain flat and just increase the number of gaps, whereas over here you just keep making the terrain rougher and rougher. The philosophy is that an agent able to master the rougher terrain might have picked up a skill that transfers, because this rough part kind of looks like a gap: the skill of hopping over it might transfer to the environment over here that has a proper gap. Or the skill you learn in an environment with one of these stumps, where you have to climb over it, might transfer over here and help you get over this peaky terrain. So the idea of POET is to start off with a generic, flat, very easy environment and then spawn new ones; you want to spawn new environments in a hereditary way.
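As a rough illustration of what spawning environments in a hereditary way could look like on the parameter vector (treated here as a plain list of numbers), one possible mutation routine is sketched below. The step size, mutation probability, and clipping are invented for illustration; the paper uses its own per-parameter mutation scheme.

import random

def mutate_env_params(parent, step=0.2, p_mutate=0.5):
    # A child copies the parent's parameter vector and perturbs some entries
    # slightly, so it is similar to, but a bit harder or easier than, the
    # environment it was spawned from.
    child = []
    for value in parent:
        if random.random() < p_mutate:
            value = max(0.0, value + random.choice([-step, step]))
        child.append(value)
    return child

So a parent vector might produce several children, each differing from it in only a few entries, which is exactly the kind of small hereditary change described next.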
So this one might get a bit rougher, this one might include a stump, and this one might include a gap, or something like this. Then you again spawn new environments: rougher, rougher still, rough with a stump here, this one retains the gap, and this one now gets two gaps, and so on. You continuously train all of these, and you always check whether the skill learned over here might actually transfer to anyone over there. So you get this continuous tree of solutions, and once you improve on one branch, that improvement might actually be useful on another branch. They always make the comparison to biological evolution, where a strategy that works for, say, birds can all of a sudden be cross-adopted by mammals for an entirely different problem, because the same skill might be valuable there. So these are basically the two ingredients of POET, and now I want to show you the complete POET algorithm. What does it do? You start off with an initial environment, and in POET every environment is paired with an agent, so there is one agent per environment. Then, over the time steps, you do the following. First, you go through your environments and mutate them. We have already seen that these environments can be generated from a parameter vector; we have five numbers describing how rough the terrain is, how high the stumps are, and how wide the gaps are. Let's say, to keep it simple, we have three numbers: this might be one, this might be two, this might be five. You mutate them, meaning you spawn children, and each of these parameters has a chance of mutating: one child might be one, three, five, another might be one, four, six, and another might be two, two, five. You spawn new ones. You can already see that the requirement here is that the environments can actually be procedurally generated and mutated like this, where a small mutation probably leads to a small change in the environment. In any case, you mutate them, and then you optimize each agent. Each of these environments is paired with a new agent that always tries to solve that particular environment, so within one environment you simply do your classic optimization; we already saw that the evolution strategy is akin to a classic optimization algorithm from reinforcement learning. Each agent is optimized for a couple of steps, not to convergence every time, just a couple of steps. So each agent, including the one in the original environment, is continuously trained on its own environment throughout the process. Of course you have bounded computation, so you need to drop the very old ones at some point, but in principle, as all of this goes on, all the agents are always trained on their environments: the walker here will always try to solve this particular environment, and the walker that is newly created when a new environment is generated will only ever try to solve that particular environment throughout the whole algorithm. So: you do mutations and spawn new environments, then you do a couple of optimization steps, an ES step, and then you do a transfer attempt. What you want to do there is evaluate all the candidates on all the environments. In principle you can cut this down, but ideally you go through the environments and say: okay, for this environment right here, I'm going to evaluate all of the other agents.
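Pulling those three steps together, here is a compressed, illustrative sketch of the outer loop: mutate environments, give every paired agent a few optimization steps on its own environment, and periodically attempt transfers. The intervals and the helper functions (es_step, evaluate, spawn_children) are placeholders standing in for the paper's actual components and hyperparameters, not the reference implementation.

def poet_outer_loop(init_env, init_agent, iterations, es_step, evaluate,
                    spawn_children, mutate_every=20, transfer_every=25):
    # Each entry pairs one environment with the single agent trained on it.
    population = [(init_env, init_agent)]
    for t in range(1, iterations + 1):
        # 1) Occasionally spawn child environments from the current population.
        if t % mutate_every == 0:
            population += spawn_children(population)
        # 2) Every agent takes a few ES steps on its own environment only.
        population = [(env, es_step(agent, env)) for env, agent in population]
        # 3) Transfer attempt: for each environment, keep whichever agent in the
        #    whole population currently scores best on it (copying its parameters
        #    in a real implementation).
        if t % transfer_every == 0:
            agents = [agent for _, agent in population]
            population = [(env, max(agents, key=lambda a: evaluate(a, env)))
                          for env, _ in population]
    return population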
You can evaluate them in a couple of different ways: you can just try them straight away, or optimize them for a few steps to see whether they can be adapted easily to that environment. Ultimately you have to come up with a criterion that says, for each agent, whether it is better or worse than the agent that is continuously trained on this environment. If it's worse, you keep the current one; if any other agent is better, you transfer that better one to replace it, basically copying it over to this environment. That's where the transfer learning comes in: you continuously try all the agents on all the environments, and if one is better, you transfer it. So here you say: if its score on the environment is better than the one you currently have, you transfer it. Now, there is a lot hidden here. For example, in this mutate-environment step they do check that the newly mutated environments are not too hard and not too easy, which basically means the agents can solve them, but not solve them too easily. They also check whether the environments are novel enough. So you need a couple of checks: solvable, meaning not too easy and not too hard, so the paired agent has to reach a certain score but must not exceed an upper one, so there's a score range; and novel, meaning they check whether the mutated environments are novel enough, and I believe they do this by calculating the distance between environments in terms of their parameter vectors. And I don't mean just the distance between two environments, but the distance to all of the ones you've seen so far. So if we go back to my original, very beautiful drawing, where is my tree: if you create a new environment, say right here, you want to check it against all environments you've seen so far to determine whether it is new. You compute the distance to all of them, and if you have enough distance to your nearest neighbors, then you are novel. That's roughly how they determine whether an environment is new. All right, so that's basically the POET algorithm: you continuously create new environments by mutation, you ensure that they are solvable, not too hard but hard enough, you ensure that they are novel, and you optimize each agent on its own environment continuously as the process goes on. And, I want to stress this, it's not only the frontier: you're not only looking at the newest generation, you're always looking at all of the generations, because the older ones, while their environments are easier, have been optimized for longer, so their skills might be very handy. You always want to look at your entire population, and then, crucially, you do these transfer attempts. That's the POET algorithm. There is a lot hidden here, and I want to stress that: just look at the number of hyperparameters. There are so many hyperparameters in this, how often you transfer, how often you mutate, how many steps you do; each of these subroutines has a billion hyperparameters and learning rates and so on. If I look at this algorithm, I am very scared: if I attempted to do something like this myself, it would be a long and hard process to evaluate all of these different hyperparameters.
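To make the two checks on newly mutated environments a little more tangible, here is one way they could be written down. The score band, the number of neighbors, and the novelty threshold are made-up values; only the general mechanism (a score range for the paired agent, and distance to the nearest previously seen parameter vectors) follows the description above.

import numpy as np

def passes_minimal_criterion(child_score, lower=50.0, upper=200.0):
    # Not too hard, not too easy: the agent paired with the child must do
    # reasonably well, but the environment must not already be trivially solved.
    return lower <= child_score <= upper

def novelty(candidate, seen, k=5):
    # Mean distance to the k nearest environments seen so far, measured
    # directly on the environment parameter vectors.
    if len(seen) == 0:
        return float("inf")
    dists = np.linalg.norm(np.asarray(seen, float) - np.asarray(candidate, float), axis=1)
    k = min(k, len(dists))
    return float(np.sort(dists)[:k].mean())

def child_is_accepted(candidate, child_score, seen, min_novelty=0.3):
    return passes_minimal_criterion(child_score) and novelty(candidate, seen) > min_novelty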
I briefly want to dip into what the evolution strategy does, just so you know it, because you might be familiar with the classic REINFORCE algorithm. In policy gradient methods, what you do is scale the gradient of your policy network's parameters according to the reward: in classic reinforcement learning, this term here would be the reward you got, which basically means that if you took an action and got a higher reward, you make your network take that action more often. In evolution strategies you do a different version of the same thing: you spawn different agents. You take your current parameters, spawn a number of noisy versions of those parameters, and evaluate each one. Then you adjust your parameters in the direction of the versions that did well. So you are here with your parameters, you create a bunch of noisy copies, and, say, these two performed really well; then you move your parameters in the direction of those two. That's basically what this says: this is the noisy version, and this is the noise that produced it, so if the return of that noisy version is high, you adjust your parameters in that direction. It's a fairly neat method, especially if you can't backpropagate through your policy. So this is the ES step, but you can think of it as just another RL optimization algorithm.
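Written out, one such ES update might look like the following. The population size, noise scale, and learning rate are arbitrary, and the score normalization is a common stabilization trick rather than something spelled out here; the part that matches the description above is the update direction, toward the noise vectors that scored well.

import numpy as np

def es_step(theta, env, evaluate, n_samples=64, sigma=0.1, lr=0.01):
    # Sample noisy copies of the parameter vector, score each on the environment,
    # and move the parameters toward the noise directions that scored well.
    noise = np.random.randn(n_samples, theta.size)
    scores = np.array([evaluate(theta + sigma * eps, env) for eps in noise])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)   # normalize returns
    gradient_estimate = noise.T @ scores / (n_samples * sigma)  # estimated ascent direction
    return theta + lr * gradient_estimate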
All right, so they do various experiments to show that this actually has merit. I've already shown you that if you take the same environments and try to solve them directly with this evolution step, it will not succeed, because of the problems we've discussed before. Now, the comparison is a bit unfair, because with POET you can't ask it to solve a particular environment: the environments constantly change, you constantly mutate them, you never know where it's going, it's not directed. So if your goal is to solve one particular environment, you cannot do it with POET; you can only hope that an agent that comes out will perform well on it. And I believe the environments they test on here are ones that appeared during the POET run, so it feels like an unfair comparison to evaluate on an environment that the POET agent actually comes from, an environment that POET generated in its mutation-tree curriculum while building it up, while the poor ES algorithm is simply tasked with solving that particular environment from scratch. So always keep in mind: the one can be given a goal, the other cannot; that's kind of the drawback. But as you can see, POET does get super high scores, whereas ES, the classic algorithm, completely fails. They also investigate the importance of transfer learning. For that they compare to a classic curriculum learning algorithm: there are curriculum learning algorithms where you continuously build up the difficulty of the environments, but in a goal-directed way. As I said, if you have a target environment that has, say, a gap and then a high stump, or two high stumps, you start out flat and then maybe build in a small gap and a small stump, and so on, until you're at the target. It's very much goal-directed, but it doesn't have this population-with-transfer-learning aspect of POET. When they compare this, you can see here that the red one, sorry, I stupidly colored it blue, the red one is whatever POET was able to solve. These are the five dimensions of the parameters, and the further toward the outside a point is, the harder the environment. For the same target environment, the blue one is what the curriculum learning algorithm managed, so it's the hardest environment the curriculum learning algorithm was able to solve while trying to build up to POET's environment. So if we take this one, here is the environment that POET solved; again, the comparison is somewhat unfair because we're starting from an environment that POET has already solved and then trying to build our way up to it with the classic algorithm, so we're comparing a non-goal-directed thing, something that just happened, to a goal-directed process that needs to get this particular environment to work. In any case, at some point the curriculum learning algorithm will fail; let's say that's here, the environment that has somewhat of a gap but no stump, and that would be the blue line here. They do about five runs and plot them, and you can see that every time, the classic curriculum learning algorithm only manages to solve a much, much less challenging environment than what the POET algorithm achieved, even though it is trying to reach exactly that. Here they show the difference: if the target environment is merely classified as challenging, the curriculum learning algorithm can solve it somewhat, so the distance is close to zero, but as the environments become more and more challenging, the distance between POET and the classic approach becomes larger and larger. They also give some examples of what this transfer learning does. They have a parent environment where the agent just kind of slouches forward on the ground, and then a child environment whose mutation now has little stumps in it. The parent's gait can't get over those at first, but the child agent, because the stumps are small, might stumble across them, learns to lift its leg, and this is transferred back to the parent at a later iteration, which is pretty cool; the parent then gets even better as a result of that transfer. So we have two transfer events here that mutually help these agents; remember, both the parent and the child are continuously trained as the process goes on. They also run POET itself, not a classic algorithm, but without transfer learning, and they see that POET without transfer is able to solve some of the very challenging problems but never reaches the extremely challenging stage, and that's their argument for why the transfer learning is necessary. In total I would say this is a cool algorithm, but it has many, many hyperparameters, and experimental results with that many hyperparameters need to be taken with a grain of salt, because it's always possible that the authors simply haven't put as much effort into tuning their comparisons as into getting their own method to work. All right, with that I wish you a nice day. Check out the paper, it has lots of descriptions, check out the blog post where they have animations, and the YouTube video. And with that, bye bye.
[ { "end": 6.88, "start": 0, "text": " Alright, so what you're seeing here are solutions found to this bipedal walker problem by a" }, { "end": 10.52, "start": 6.88, "text": " new algorithm called PoET." }, { "end": 16.84, "start": 10.52, "text": " So as you might guess, the challenge is to keep this little thing here walking to the" }, { "end": 21.72, "start": 16.84, "text": " right as far as you can while it encounters various obstacles." }, { "end": 30.92, "start": 21.72, "text": " And it is and remains a challenging reinforcement learning problem to have an agent learn to" }, { "end": 35.96, "start": 30.92, "text": " overcome various obstacles and walk well in different environments." }, { "end": 41.2, "start": 35.96, "text": " So the paper we're going to look at is called PoET." }, { "end": 46.08, "start": 41.2, "text": " It's by Uber Engineering." }, { "end": 52.96, "start": 46.08, "text": " And the full pronunciation is the Paired Open-Ended Trail Blazer, endlessly generating increasingly" }, { "end": 57.96, "start": 52.96, "text": " complex and diverse learning environments and their solutions by Roy Wang, Joel Lehmann," }, { "end": 64.56, "start": 57.96, "text": " Jeff Klun and Kenneth O. Stanley, as I said from Uber AI Labs." }, { "end": 70.48, "start": 64.56, "text": " So as you already saw, the challenge they take on is this bipedal walker problem." }, { "end": 75.6, "start": 70.48, "text": " Now their method is very general and not limited to this problem, but this is the problem that" }, { "end": 76.6, "start": 75.6, "text": " they focus on." }, { "end": 83.67999999999999, "start": 76.6, "text": " I'm going to jump some of the explanations here and dig right into the problem." }, { "end": 86.03999999999999, "start": 83.67999999999999, "text": " As you can see, the problem is the following." }, { "end": 91.36, "start": 86.03999999999999, "text": " You have this thing here, which is the walker, and it has two legs and specifically it has" }, { "end": 93.03999999999999, "start": 91.36, "text": " four joints." }, { "end": 97.56, "start": 93.03999999999999, "text": " So the four joints are here too, and here too." }, { "end": 102.56, "start": 97.56, "text": " And you can give torque on all of the four joints." }, { "end": 109.68, "start": 102.56, "text": " So it's basically a four output problem." }, { "end": 112.72, "start": 109.68, "text": " And you do have sensors as input." }, { "end": 116.26, "start": 112.72, "text": " So the inputs, I believe, is a LIDAR." }, { "end": 118.84, "start": 116.26, "text": " So the LIDAR is this red line you see here." }, { "end": 123.48, "start": 118.84, "text": " I think it has 16 of those in various angles." }, { "end": 129.88, "start": 123.48, "text": " And also it has pressure detection on the feet, I believe, to see whether or not they" }, { "end": 132.76, "start": 129.88, "text": " are in contact with the ground." }, { "end": 143.72, "start": 132.76, "text": " And it might also have a gyroscope in that tells you which angle with respect to the" }, { "end": 146.68, "start": 143.72, "text": " ground the head is." }, { "end": 151.07999999999998, "start": 146.68, "text": " So you have various sensors on these things, and you're able to basically control what" }, { "end": 153.68, "start": 151.07999999999998, "text": " the legs are doing." }, { "end": 161.76000000000002, "start": 153.68, "text": " And your goal is to make this go as far to the right and as fast as possible." 
}, { "end": 170.06, "start": 161.76000000000002, "text": " You see the reward down here is negative 100 if the robot falls over." }, { "end": 173.52, "start": 170.06, "text": " That means if the head hits the ground." }, { "end": 178.20000000000002, "start": 173.52, "text": " And then it is 130 times delta x." }, { "end": 184.64, "start": 178.2, "text": " That's how far you go to the right minus the whole angle." }, { "end": 187.39999999999998, "start": 184.64, "text": " And the whole angle, as I said, is this angle here." }, { "end": 190.48, "start": 187.39999999999998, "text": " So you want to keep it as stable as possible." }, { "end": 197.07999999999998, "start": 190.48, "text": " Because if there's a difference in the angle per step, then you get penalized." }, { "end": 200.94, "start": 197.07999999999998, "text": " And also you get penalized for each torque you apply." }, { "end": 209.64, "start": 200.94, "text": " So you want to kind of apply minimal force on the joints in order to go very far." }, { "end": 216.35999999999999, "start": 209.64, "text": " But by far the most important point is to go to the right as far and as fast as you" }, { "end": 217.36, "start": 216.35999999999999, "text": " can." }, { "end": 220.56, "start": 217.36, "text": " There is an end here somewhere." }, { "end": 227.36, "start": 220.56, "text": " And if you reach it, you get a score that is above 230." }, { "end": 233.24, "start": 227.36, "text": " They choose the limit of 230 here to determine." }, { "end": 238.12, "start": 233.24, "text": " So if the agent gets 230 or more, then it has solved the environment." }, { "end": 240.72000000000003, "start": 238.12, "text": " That's what they claim." }, { "end": 242.04000000000002, "start": 240.72000000000003, "text": " That's from experience." }, { "end": 244.76000000000002, "start": 242.04000000000002, "text": " So as you see, the environment has various obstacles here." }, { "end": 251.24, "start": 244.76000000000002, "text": " There are holes that you can fall into that you need to jump or step over." }, { "end": 253.76000000000002, "start": 251.24, "text": " There are these kind of stumps here." }, { "end": 255.86, "start": 253.76000000000002, "text": " They can be of various height." }, { "end": 259.36, "start": 255.86, "text": " So this is a bit shorter and this is a bit longer." }, { "end": 262.28000000000003, "start": 259.36, "text": " And the general terrain has a roughness." }, { "end": 268.7, "start": 262.28000000000003, "text": " And this can go to very rough from very smooth." }, { "end": 273.04, "start": 268.7, "text": " So this is a parameterized environment." }, { "end": 280.88, "start": 273.04, "text": " And obviously they are able to generate these environments from parameters." }, { "end": 288.71999999999997, "start": 280.88, "text": " And the goal now is to have an agent that walks well in any environment that you can" }, { "end": 289.71999999999997, "start": 288.71999999999997, "text": " think of." }, { "end": 295.48, "start": 289.71999999999997, "text": " Right, so here on the left you see this is very challenging down the stairs." }, { "end": 301.8, "start": 295.48, "text": " This also isn't too easy because there is a gap here." }, { "end": 306.96, "start": 301.8, "text": " And there are five parameters of these environments." }, { "end": 310.32, "start": 306.96, "text": " So there is the general roughness of the terrain." }, { "end": 314.2, "start": 310.32, "text": " That means how many hills it has and how fast they are coming." 
}, { "end": 319.4, "start": 314.2, "text": " There is the stump lower bound and stump upper bound, I believe." }, { "end": 322.52, "start": 319.4, "text": " So how high the stumps are." }, { "end": 326.84, "start": 322.52, "text": " And also how long the gaps are." }, { "end": 332.76, "start": 326.84, "text": " And with these parameters you control how difficult an environment is." }, { "end": 342.03999999999996, "start": 332.76, "text": " So the straightforward thing to do is simply to sample environments and have a reinforcement" }, { "end": 344.44, "start": 342.03999999999996, "text": " learn approach to this." }, { "end": 347.2, "start": 344.44, "text": " And that usually doesn't work." }, { "end": 353.8, "start": 347.2, "text": " I already want to see this without having talked about what the algorithm is." }, { "end": 358.24, "start": 353.8, "text": " This is the approach where you try this thing." }, { "end": 360.5, "start": 358.24, "text": " It's called evolution strategies." }, { "end": 364.68, "start": 360.5, "text": " But you can think of it as just a straightforward optimization procedure." }, { "end": 371.4, "start": 364.68, "text": " So there is an agent and there is an environment and you are trying to solve the environment" }, { "end": 374.76, "start": 371.4, "text": " using just straightforward optimization." }, { "end": 380.72, "start": 374.76, "text": " Now the evolution strategies are not your classic algorithm but you can compare it to" }, { "end": 381.72, "start": 380.72, "text": " it." }, { "end": 386.08, "start": 381.72, "text": " It's just that these people, they like the more, I have a feeling they like the more" }, { "end": 390.96, "start": 386.08, "text": " esoteric learning algorithms." }, { "end": 399, "start": 390.96, "text": " In any case, you see in these environments large gap, rough surface and so on." }, { "end": 402.68, "start": 399, "text": " These are supposed to be the platinum figures." }, { "end": 408.88, "start": 402.68, "text": " So these two environments and also these environments here." }, { "end": 414.91999999999996, "start": 408.88, "text": " The evolution strategy, so the classic approach if you just straight forward optimize, they" }, { "end": 424.6, "start": 414.92, "text": " get very low scores on average, whereas poet gets here very high scores above the 230 threshold." }, { "end": 426.28000000000003, "start": 424.6, "text": " So what's happening?" }, { "end": 434.34000000000003, "start": 426.28000000000003, "text": " If you're trying to just solve these environments from scratch, you basically don't really have" }, { "end": 437.02000000000004, "start": 434.34000000000003, "text": " a big chance of solving them." }, { "end": 441.68, "start": 437.02000000000004, "text": " Because let's say you're here and you're trying to move to the right, you know, you might" }, { "end": 447.72, "start": 441.68, "text": " learn how to do this and you see this from scratch solution actually manages to get to" }, { "end": 448.72, "start": 447.72, "text": " the right." }, { "end": 453.84000000000003, "start": 448.72, "text": " But then as soon as you reach this, you're in this gap and you just fall down the gap" }, { "end": 457.82, "start": 453.84000000000003, "text": " because all you've learned so far is how to move right." }, { "end": 464.74, "start": 457.82, "text": " So what you would need to do is you would need to plan ahead like what poet does." }, { "end": 466.24, "start": 464.74, "text": " You need to see that there is a gap." 
}, { "end": 472.92, "start": 466.24, "text": " You need to plan ahead and already lift up a leg in order to then step over the gap here" }, { "end": 476, "start": 472.92, "text": " and then do a little jump right here." }, { "end": 481.24, "start": 476, "text": " And this sequence of action, this kind of planning ahead, it is very difficult to learn" }, { "end": 488.06, "start": 481.24, "text": " this for a classic RL algorithm because you basically get reward for everything you do." }, { "end": 490.76, "start": 488.06, "text": " So initially you get reward for moving to the right." }, { "end": 494.72, "start": 490.76, "text": " So that's 10 if you reach here, another 10 if you reach here." }, { "end": 502.8, "start": 494.72, "text": " And so there is another 10 if you reach here and another 10 if you reach here." }, { "end": 508.12, "start": 502.8, "text": " Whereas if you lift up your leg, that's like minus five because now this you've changed" }, { "end": 512.44, "start": 508.12, "text": " this angle and we saw this is negative reward, right?" }, { "end": 517.24, "start": 512.44, "text": " So a classic optimization algorithm will always fall into the hole because that is where you" }, { "end": 519.6800000000001, "start": 517.24, "text": " get the immediate reward." }, { "end": 524.5600000000001, "start": 519.6800000000001, "text": " Whereas you'd have to you'd have to do a sequence of action that doesn't give you a reward right" }, { "end": 528.7199999999999, "start": 524.56, "text": " now, but it gives you more reward later." }, { "end": 534.8399999999999, "start": 528.7199999999999, "text": " And in order to learn this, we need a kind of a better algorithm that just straightforward" }, { "end": 536.4399999999999, "start": 534.8399999999999, "text": " optimization." }, { "end": 542.68, "start": 536.4399999999999, "text": " So maybe I can explain this if you have a maze, here is the start and here is the goal" }, { "end": 547.66, "start": 542.68, "text": " and there is like walls and the walls are something like this." }, { "end": 550.3599999999999, "start": 547.66, "text": " What you need to do is go around here." }, { "end": 554.52, "start": 550.3599999999999, "text": " But what a classic optimization algorithm does is always like goes here because that's" }, { "end": 557.12, "start": 554.52, "text": " ever so closer to the goal." }, { "end": 563.6, "start": 557.12, "text": " And then it just gets stuck because it can't fathom that it needs to go around here." }, { "end": 567.96, "start": 563.6, "text": " So it needs to go farther away before it gets closer." }, { "end": 574.0799999999999, "start": 567.96, "text": " So these people we've talked about this before in like open ended learning novelty search." }, { "end": 581.24, "start": 574.0799999999999, "text": " What you would want to do is you would want to gradually build up solutions that can explore" }, { "end": 589.28, "start": 581.24, "text": " the space like to go here, go here, go here and basically build up these solutions." }, { "end": 595.24, "start": 589.28, "text": " And there are two components to what this poet algorithm does." }, { "end": 602.6800000000001, "start": 595.24, "text": " So the first component is curriculum learning." }, { "end": 606.62, "start": 602.6800000000001, "text": " Curriculum learning." }, { "end": 608.42, "start": 606.62, "text": " What does curriculum learning mean?" 
}, { "end": 615.04, "start": 608.42, "text": " Curriculum learning means that you start off with easy tasks and you increasingly build" }, { "end": 620.28, "start": 615.04, "text": " up more and more and more complex tasks." }, { "end": 627.68, "start": 620.28, "text": " So let's say I have an environment here and I'm going to draw and at the beginning we" }, { "end": 632.8399999999999, "start": 627.68, "text": " just kind of start off with this flat surface right and here is our little walker right" }, { "end": 633.9599999999999, "start": 632.8399999999999, "text": " here." }, { "end": 644, "start": 633.96, "text": " And we'll just train it to move right on that and that should be doable with kind of a classic" }, { "end": 645.6800000000001, "start": 644, "text": " approach." }, { "end": 649.94, "start": 645.6800000000001, "text": " And then we gradually move to more difficult environments." }, { "end": 653.4000000000001, "start": 649.94, "text": " So maybe we'll make it a bit more rough right." }, { "end": 657.88, "start": 653.4000000000001, "text": " And an agent that can already walk to the right already kind of has think of it as a" }, { "end": 661.36, "start": 657.88, "text": " pre-training in like NLP." }, { "end": 666.92, "start": 661.36, "text": " You can then get more and more challenging and then maybe at some point you can build" }, { "end": 670.64, "start": 666.92, "text": " in a gap right." }, { "end": 675.44, "start": 670.64, "text": " So you build in one of these gaps and now it already knows how to move to the right" }, { "end": 682.32, "start": 675.44, "text": " and now it might actually learn to jump a small gap right if you make it small at the" }, { "end": 684.5600000000001, "start": 682.32, "text": " beginning not like this one down here." }, { "end": 686.5600000000001, "start": 684.5600000000001, "text": " There's a very large gap." }, { "end": 692.7199999999999, "start": 686.56, "text": " But if you make it small by accident it might stumble over it and then learn and continuously" }, { "end": 695.4399999999999, "start": 692.7199999999999, "text": " how to master the gap." }, { "end": 698.2399999999999, "start": 695.4399999999999, "text": " So this is the curriculum learning approach." }, { "end": 703.9599999999999, "start": 698.2399999999999, "text": " It means that from environment to environment you get harder, harder and harder challenges." }, { "end": 711.1199999999999, "start": 703.9599999999999, "text": " So first flat then more rough then more rough with a gap and so on." }, { "end": 721.36, "start": 711.12, "text": " The second approach, the second ingredient to POET is what they call stepping stone learning" }, { "end": 726, "start": 721.36, "text": " or transfer learning or things like this." }, { "end": 731.82, "start": 726, "text": " And that's where you kind of have to think of this not as a single agent optimizing but" }, { "end": 734.64, "start": 731.82, "text": " as a population of agents." }, { "end": 738.1800000000001, "start": 734.64, "text": " So let's say you do this curriculum learning right." }, { "end": 744.8, "start": 738.18, "text": " And you're getting fairly well here at rough terrains right." }, { "end": 746.04, "start": 744.8, "text": " More and more rough terrains." }, { "end": 751.12, "start": 746.04, "text": " But in parallel you also have a second optimization procedure." 
}, { "end": 761.76, "start": 751.12, "text": " You also start out kind of flat but with this thing you go as we said before small gap you" }, { "end": 770.16, "start": 761.76, "text": " keep it flat but you just increase the number of gaps here right." }, { "end": 776.96, "start": 770.16, "text": " Whereas over here you just keep making the terrain rougher and rougher." }, { "end": 786.4399999999999, "start": 776.96, "text": " So what the philosophy is that an agent that might be able to master this rougher terrain" }, { "end": 791.6, "start": 786.44, "text": " it might actually that skill because here you this kind of this kind of looks like a" }, { "end": 793.8800000000001, "start": 791.6, "text": " gap here." }, { "end": 802.6800000000001, "start": 793.8800000000001, "text": " The skill of hopping over this gap here might actually transfer to the environment over" }, { "end": 809.4000000000001, "start": 802.6800000000001, "text": " here where you do have a proper you know a gap in the environment or the skill that you" }, { "end": 813.32, "start": 809.4000000000001, "text": " learn from an environment where you have one of these stumps right." }, { "end": 821.9200000000001, "start": 813.32, "text": " So here let's draw in one of these stumps where you have to go over and if you have" }, { "end": 830.72, "start": 821.9200000000001, "text": " a walker that can successfully walk over this that skill now might transfer over here in" }, { "end": 836.88, "start": 830.72, "text": " order to get over this over this peaky terrain here." }, { "end": 849.8, "start": 836.88, "text": " So the idea of poet is to start off with a generic flat very easy environment and then" }, { "end": 859.28, "start": 849.8, "text": " spawn new ones so you want to spawn new environments in kind of a hereditary way." }, { "end": 869.68, "start": 859.28, "text": " So this one might get a bit rougher this one might include this and this one might include" }, { "end": 876.6, "start": 869.68, "text": " a gap or something like this and then again you want to spawn new environments and more" }, { "end": 887.72, "start": 876.6, "text": " rough more rough more rough with a stump here and this one retains the gap sorry and um" }, { "end": 897.08, "start": 887.72, "text": " this one now gets two gaps and so on and you want to continuously train these and then" }, { "end": 902.52, "start": 897.08, "text": " always you want to check whether or not the skill that you learn over here might actually" }, { "end": 905.26, "start": 902.52, "text": " transfer to anyone over here." }, { "end": 914.48, "start": 905.26, "text": " So you get this tree of this continuous tree of solutions and once you improve on one branch" }, { "end": 920.32, "start": 914.48, "text": " this might actually be good on another branch right they always make the comparison to let's" }, { "end": 926.88, "start": 920.32, "text": " say biological evolution where a strategy that works over here for birds is all of a" }, { "end": 935, "start": 926.88, "text": " sudden can be cross adopted by mammals for an entirely different problem but the same" }, { "end": 938.6, "start": 935, "text": " skill might be valuable." }, { "end": 948.64, "start": 938.6, "text": " Yeah so this this is basically the two ingredients of poet and now I want to show you the complete" }, { "end": 951.88, "start": 948.64, "text": " poet algorithm." 
}, { "end": 960.64, "start": 951.88, "text": " So what does it do you start off with an initial environment right and in poet every environment" }, { "end": 969.9399999999999, "start": 960.64, "text": " is paired with an agent so there is one agent per environment right so for the time steps" }, { "end": 976.68, "start": 969.9399999999999, "text": " what you do is first of all you go through your environments and you mutate them and" }, { "end": 982.48, "start": 976.68, "text": " we already seen these environments they can be generated from a parameter vector so we" }, { "end": 994.76, "start": 982.48, "text": " have five numbers right how rough how stumpy and how wide the gaps are let's say we have" }, { "end": 999.6800000000001, "start": 994.76, "text": " three numbers to two and this might be one this might be two this might be five right" }, { "end": 1006.64, "start": 999.6800000000001, "text": " so what you want to do is you want to mutate them right you want to spawn children and" }, { "end": 1013.4399999999999, "start": 1006.64, "text": " each of these parameters has a chance of mutating this might be one three five and this environment" }, { "end": 1025.92, "start": 1013.4399999999999, "text": " might be one four six and this one might be two two five right you spawn new ones you" }, { "end": 1031.52, "start": 1025.92, "text": " already see that the requirement here is that you can actually have environments that are" }, { "end": 1038.92, "start": 1031.52, "text": " procedurally generated and mutated like this where a small mutation probably is going to" }, { "end": 1050.52, "start": 1038.92, "text": " lead to a small change in the environment in any case you mutate them and then you you" }, { "end": 1061.84, "start": 1050.52, "text": " want to let's you want to optimize your eight each agent so each of these environments is" }, { "end": 1069.82, "start": 1061.84, "text": " paired with a new agent that always tries to solve that particular environment so now" }, { "end": 1075.74, "start": 1069.82, "text": " within one environment you simply do your classic optimization we already saw here the" }, { "end": 1084.16, "start": 1075.74, "text": " evolution strategy is akin to a classic optimization algorithm from reinforcement learning all" }, { "end": 1090.36, "start": 1084.16, "text": " right so each agent you optimize for a couple of steps right not fully every time but for" }, { "end": 1097, "start": 1090.36, "text": " a couple of steps so each agent including the one in the original environment each agent" }, { "end": 1104.36, "start": 1097, "text": " is continuously trained on its environment throughout the process of course you like" }, { "end": 1110.2199999999998, "start": 1104.36, "text": " you have to be you have bounded computation so you need to drop out the very old ones" }, { "end": 1117.32, "start": 1110.2199999999998, "text": " but in principle continuously as all of this goes on all the agents are always trained" }, { "end": 1122.8799999999999, "start": 1117.32, "text": " on their environments so the agent here this Walker will always try to solve this particular" }, { "end": 1128.6999999999998, "start": 1122.8799999999999, "text": " environment and the Walker here that is now newly generated when the environment is generated" }, { "end": 1135.28, "start": 1128.7, "text": " will only try to solve this particular environment throughout the whole algorithm right and then" }, { "end": 1144.88, "start": 1135.28, "text": " all right so you do mutations you spawn new ones and 
then you do a couple of steps in" }, { "end": 1153.04, "start": 1144.88, "text": " optimization right and yes step and then you do this transfer attempt right what you want" }, { "end": 1159.32, "start": 1153.04, "text": " to do is you want to evaluate all the candidates on all the environments in principle you can" }, { "end": 1167.6, "start": 1159.32, "text": " you can cut this down but in principle you want to go through the environments and say" }, { "end": 1174.32, "start": 1167.6, "text": " okay this environment right here I'm going to evaluate all of the other agents in this" }, { "end": 1179.5, "start": 1174.32, "text": " environment you can do this in a couple of different ways where you just straight up" }, { "end": 1186.52, "start": 1179.5, "text": " try them or try to optimize them for a few steps to see whether they can be adapted easily" }, { "end": 1193.52, "start": 1186.52, "text": " to that environment but ultimately you have to come up with a criterion to say for each" }, { "end": 1199.6, "start": 1193.52, "text": " agent is the agent better or worse than the agent that is continuously trained on this" }, { "end": 1208.5, "start": 1199.6, "text": " environment if it's worse then you keep this one if if anyone is better then you transfer" }, { "end": 1215.8, "start": 1208.5, "text": " that better one to replace this one right and you basically copy it over to this new" }, { "end": 1220.7, "start": 1215.8, "text": " environment and that's where this transfer learning comes in so you're continuously trying" }, { "end": 1228.08, "start": 1220.7, "text": " all the agents on all the environments and if they are better you transfer them right" }, { "end": 1235.72, "start": 1228.08, "text": " so here you say if the environment score is better than the one that you have you transfer" }, { "end": 1245.88, "start": 1235.72, "text": " it all right now there is a lot hidden here for example in this mutate environment step" }, { "end": 1252.92, "start": 1245.88, "text": " they do check whether or not the new mutated environments are not too hard and not too" }, { "end": 1262.08, "start": 1252.92, "text": " easy and that basically means whether or not the agents can solve them but not solve them" }, { "end": 1268.9199999999998, "start": 1262.08, "text": " too easily they also check whether the environments are enough novel so you need a couple of checks" }, { "end": 1279.04, "start": 1268.9199999999998, "text": " here you solvable and that that means not too easy and not too hard right so they need" }, { "end": 1285.96, "start": 1279.04, "text": " to pass like a certain score but they need to be kind of solvable to a to an okay score" }, { "end": 1293.04, "start": 1285.96, "text": " so there's a score range and also novel they check whether or not the out the mutated environments" }, { "end": 1299.72, "start": 1293.04, "text": " are novel enough and I believe they just do this by calculating the the distance between" }, { "end": 1307.3600000000001, "start": 1299.72, "text": " two environments in terms of their parameter vectors so to determine whether or not these" }, { "end": 1313.76, "start": 1307.3600000000001, "text": " are novel and sorry I don't mean the distance just between two but the distance of all of" }, { "end": 1323.44, "start": 1313.76, "text": " the ones you've seen so far so if we go to original very beautiful drawing here where" }, { "end": 1329.4, "start": 1323.44, "text": " is my tree if you create a new environment let's say you create a new environment right" }, { 
"end": 1337.56, "start": 1329.4, "text": " here then you want to check it against all environments you've seen so far to determine" }, { "end": 1342.96, "start": 1337.56, "text": " whether or not it is new or not so you want to create the distance to all of these and" }, { "end": 1348.1200000000001, "start": 1342.96, "text": " if you have enough distance to your nearest neighbors then you are novel and that's kind" }, { "end": 1356.64, "start": 1348.1200000000001, "text": " of how they they determine whether environment is new all right so that's basically the poet" }, { "end": 1363.72, "start": 1356.64, "text": " algorithm you continuously create new environments by mutation you ensure that they are solvable" }, { "end": 1371.54, "start": 1363.72, "text": " not hard enough sorry not too hard but hard enough ensure that they are novel and then" }, { "end": 1380.72, "start": 1371.54, "text": " you optimize each agent for its own environment continuously as the process goes on and so" }, { "end": 1385.76, "start": 1380.72, "text": " it's not I want to stress this it's not only the frontier so you're not only looking at" }, { "end": 1391.44, "start": 1385.76, "text": " the newest generation but you're always looking at all of the generation of the because the" }, { "end": 1397.52, "start": 1391.44, "text": " older ones while the environments are easier they have been optimized for longer on this" }, { "end": 1403.16, "start": 1397.52, "text": " environment so the skills might be very handy so you always want to look at your entire" }, { "end": 1411.96, "start": 1403.16, "text": " population and then you do crucially you do this these transfer attempts so that's the" }, { "end": 1418.48, "start": 1411.96, "text": " poet algorithm there is a lot hidden here and I kind of want to stress that just if" }, { "end": 1427.04, "start": 1418.48, "text": " you just look at the amount of hyper parameters there is so many hyper parameters in this" }, { "end": 1433.08, "start": 1427.04, "text": " how much you transfer how much you mutate how many steps you do each of these subroutines" }, { "end": 1443.08, "start": 1433.08, "text": " here has a billion hyper parameters and learning rates and and so on so to me that's a that" }, { "end": 1449.3999999999999, "start": 1443.08, "text": " is kind of if I look at this algorithm I am very scared if I attempted to do something" }, { "end": 1457.64, "start": 1449.4, "text": " like this myself it's it's going to be a long and hard thing to evaluate all of these different" }, { "end": 1465.1200000000001, "start": 1457.64, "text": " hyper parameters that you have to do shortly want to dip into what the evolution strategy" }, { "end": 1473.68, "start": 1465.1200000000001, "text": " does just so you know because you just might be familiar with your classic your classic" }, { "end": 1482.72, "start": 1473.68, "text": " reinforce algorithm so in policy gradient methods what you do is you scale your parameters" }, { "end": 1492.88, "start": 1482.72, "text": " of your neural network which is you can if this is your policy then your policy network" }, { "end": 1501.76, "start": 1492.88, "text": " here you want to scale the gradient according to your reward so in classic reinforcement" }, { "end": 1507.04, "start": 1501.76, "text": " learning this here would be the reward you got which basically means if you did an action" }, { "end": 1514.44, "start": 1507.04, "text": " and you got higher reward you want to make your network do that action more right here" }, { "end": 
1521.92, "start": 1514.44, "text": " in evolution strategies what you do is you spawn it's a different way of doing the same" }, { "end": 1530.9, "start": 1521.92, "text": " thing basically you spawn different environments and sorry you spawn you spawn different agents" }, { "end": 1537.68, "start": 1530.9, "text": " so you have your current parameters and you want to spawn a number of noisy versions of" }, { "end": 1545.76, "start": 1537.68, "text": " those parameters and then you want to evaluate each one right and now you want to adjust" }, { "end": 1553.74, "start": 1545.76, "text": " your parameters into the direction of that particular so basically you are here with" }, { "end": 1564.16, "start": 1553.74, "text": " your parameters you create a bunch of noisy versions of it right and let's say these two" }, { "end": 1571.84, "start": 1564.16, "text": " performed really well you want to adjust your parameters into the direction of those two" }, { "end": 1579.36, "start": 1571.84, "text": " right that's basically what this says so this is the noisy version and then this is the" }, { "end": 1586.3999999999999, "start": 1579.36, "text": " noise that produced the noisy version so if this is high if this number here is high" }, { "end": 1594.56, "start": 1586.3999999999999, "text": " then you will adjust your parameters into that direction it's a fairly cool way if you" }, { "end": 1603.52, "start": 1594.56, "text": " especially if you can't back prop through your policy as it's pretty neat thing so this" }, { "end": 1614, "start": 1603.52, "text": " is the ES step algorithm but you can think of it just as a RL algorithm all right so" }, { "end": 1619.28, "start": 1614, "text": " they do various experiments to show that this actually has merits I've already shown you" }, { "end": 1626.28, "start": 1619.28, "text": " if you're trying if you take the same environments and try to solve them directly by this evolution" }, { "end": 1633, "start": 1626.28, "text": " step then it will not succeed because of the problems we've discussed before now the comparison" }, { "end": 1641.04, "start": 1633, "text": " is a bit unfair because um of course these environments for poet poet the problem here" }, { "end": 1646, "start": 1641.04, "text": " is you can't have it solve a particular environments because the environments they constantly change" }, { "end": 1651.32, "start": 1646, "text": " right you constantly mutate the environments you never know where it's going it's not directed" }, { "end": 1657.26, "start": 1651.32, "text": " so if your goal is to solve a particular environment you cannot do it with poet you can hope that" }, { "end": 1662.48, "start": 1657.26, "text": " the agent that comes out will perform well right you can do something like this but I" }, { "end": 1672, "start": 1662.48, "text": " believe I believe that these environments that they test on here are ones that appeared" }, { "end": 1680.1200000000001, "start": 1672, "text": " during the poet run right so it's kind of an unfair comparison I feel to to do this" }, { "end": 1685.64, "start": 1680.1200000000001, "text": " on an environment that you know this environment this poet agent actually comes from an environment" }, { "end": 1692.44, "start": 1685.64, "text": " that poet has generated in its all mutation tree curriculum while building it up and then" }, { "end": 1699.56, "start": 1692.44, "text": " the poor ES algorithm is simply tasked with solving that particular environment from scratch" }, { "end": 1706.76, "start": 
1699.56, "text": " so yes always keep in mind this is this can have a goal this doesn't have a goal right" }, { "end": 1713.8, "start": 1706.76, "text": " that's kind of the drawback but as you can see poet does get super high scores whereas" }, { "end": 1722.72, "start": 1713.8, "text": " es the classic algorithm completely fails and they also investigate the importance of transfer" }, { "end": 1733.2, "start": 1722.72, "text": " learning so they compare to like a classic classic curriculum learning algorithms there" }, { "end": 1738.44, "start": 1733.2, "text": " are curriculum learning algorithms where you can continuously try to build up the difficulties" }, { "end": 1744.04, "start": 1738.44, "text": " of these environments but you also do it in a goal-directed way so as I said if you have" }, { "end": 1751.16, "start": 1744.04, "text": " an environment that has like a gap and then a stump a high stump or two high stumps you" }, { "end": 1758.68, "start": 1751.16, "text": " want to start out flat and then maybe build in a small gap and a small stump and so on" }, { "end": 1764.96, "start": 1758.68, "text": " until you're here it's very much goal-directed but it doesn't have this kind of population" }, { "end": 1774.64, "start": 1764.96, "text": " with transfer learning aspect of poet so if they compare this you can see here the red" }, { "end": 1785.1200000000001, "start": 1774.64, "text": " the red the red one sorry colored it blue stupidly the red one is whatever poet was" }, { "end": 1791.96, "start": 1785.1200000000001, "text": " able to solve now these are the five dimensions of the parameters and the more on the outside" }, { "end": 1802.72, "start": 1791.96, "text": " it is the harder the environment and for the same for the same environment the blue one" }, { "end": 1808.24, "start": 1802.72, "text": " is what the curriculum learning algorithm has managed so it's the best environment the" }, { "end": 1815.24, "start": 1808.24, "text": " curriculum learning algorithm has been able to solve while trying to build up to the so" }, { "end": 1821.56, "start": 1815.24, "text": " if we take this here is the environment that poet solved again the comparison is kind of" }, { "end": 1826.12, "start": 1821.56, "text": " unfair because we're starting out from an environment that poet has already solved and" }, { "end": 1833.28, "start": 1826.12, "text": " then we're trying to build our way up to it with the classic algorithm by basically again" }, { "end": 1840.9199999999998, "start": 1833.28, "text": " this is it's comparing a non goal-directed thing something that just happened to a goal-directed" }, { "end": 1848.8, "start": 1840.9199999999998, "text": " process that needs to get this particular environment to work in any case at some point" }, { "end": 1853.76, "start": 1848.8, "text": " this curriculum learning algorithm will fail like let's say that's here that's the environment" }, { "end": 1861.8, "start": 1853.76, "text": " that has somewhat of a gap but no stump right and that would be the the blue line here they" }, { "end": 1868.76, "start": 1861.8, "text": " do like five runs and they plot them here and you can see every time the classic curriculum" }, { "end": 1874.48, "start": 1868.76, "text": " learning algorithm manages to only solve a much much less challenging environment than" }, { "end": 1884.08, "start": 1874.48, "text": " the poet algorithm achieved even though it's it's trying to reach exactly that right and" }, { "end": 1889.08, "start": 1884.08, "text": " so here 
they show the difference so if you just the classified environment if it's just" }, { "end": 1895.24, "start": 1889.08, "text": " challenging then the classic algorithm the curriculum learning algorithm can solve it" }, { "end": 1900.96, "start": 1895.24, "text": " somewhat so the distance is close to zero but as you go more and more challenging the" }, { "end": 1911, "start": 1900.96, "text": " distance between poet and the classic becomes larger and larger they do give some examples" }, { "end": 1917.8, "start": 1911, "text": " of what this transfer learning does so they have this parent environment that just kind" }, { "end": 1923.4, "start": 1917.8, "text": " of slouches forward on the ground and then the child environment has a mutation that" }, { "end": 1930.16, "start": 1923.4, "text": " has now little stumps in it right so you can't get over it right now but the child environment" }, { "end": 1936.52, "start": 1930.16, "text": " because it's it's a small stump so it might stumble across learns to lift its leg here" }, { "end": 1943.3200000000002, "start": 1936.52, "text": " and it transfers this back to the parent right at a later iteration which is pretty cool" }, { "end": 1949.0800000000002, "start": 1943.3200000000002, "text": " and then the parent gets even better as a result of that transfer so we have two transfer" }, { "end": 1955.8400000000001, "start": 1949.0800000000002, "text": " learning events here that mutually help these agents remember both the parent and the child" }, { "end": 1964.6799999999998, "start": 1955.84, "text": " are continuously trained as the process goes on all right and they do some more things" }, { "end": 1970.76, "start": 1964.6799999999998, "text": " where they do actual poet not a classic algorithm but poet without transfer learning and they" }, { "end": 1977.48, "start": 1970.76, "text": " see that okay the poet without transfer is able to solve some of the very challenging" }, { "end": 1983.36, "start": 1977.48, "text": " problems but never reaches the extremely challenging stage and that's kind of their argument why" }, { "end": 1991.7199999999998, "start": 1983.36, "text": " the transfer learning is necessary so in total I would say this is a cool algorithm it has" }, { "end": 1999.56, "start": 1991.7199999999998, "text": " many many many many many many hyper parameters and these experimental results with that many" }, { "end": 2004.6399999999999, "start": 1999.56, "text": " hyper parameters you need to take it with a grain of salt because it's always possible" }, { "end": 2010.84, "start": 2004.6399999999999, "text": " that they just haven't put as much effort into their comparisons as they have into their" }, { "end": 2019.76, "start": 2010.84, "text": " own thing to get it to work all right with that I wish you a nice day and check out the" }, { "end": 2025.1999999999998, "start": 2019.76, "text": " paper they have lots of descriptions check out the blog post where they have animations" }, { "end": 2041.88, "start": 2025.2, "text": " and the YouTube video and with that bye bye" } ]
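The evolution strategies step described in the PoET transcript above (spawn noisy copies of the current parameters, score each noisy copy on the environment, then move the parameters toward the noise directions that scored well) can be sketched roughly as follows. This is only a toy NumPy illustration under my own assumptions, not Uber AI's implementation: the function name `es_step`, the hyperparameter values, and the quadratic toy fitness are all made up for this example.

```python
import numpy as np

def es_step(theta, fitness_fn, rng, sigma=0.1, lr=0.03, n_samples=50):
    """One Evolution Strategies update: perturb the parameters with Gaussian noise,
    score each noisy copy, and step toward the perturbations that scored well."""
    eps = rng.standard_normal((n_samples, theta.size))           # noise directions
    returns = np.array([fitness_fn(theta + sigma * e) for e in eps])
    adv = (returns - returns.mean()) / (returns.std() + 1e-8)    # normalize the scores
    grad_est = (adv[:, None] * eps).mean(axis=0) / sigma         # ES gradient estimate
    return theta + lr * grad_est

# Toy usage: climb the made-up fitness -||theta - 3||^2, whose optimum is all threes.
rng = np.random.default_rng(0)
theta = np.zeros(5)
for _ in range(300):
    theta = es_step(theta, lambda p: -np.sum((p - 3.0) ** 2), rng)
print(np.round(theta, 2))  # should land near [3. 3. 3. 3. 3.]
```

In PoET this kind of update would sit inside the outer loop sketched in the transcript: each environment keeps its own parameter vector, is optimized for a few such steps per iteration, and is periodically compared against agents transferred from the other environments.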
awyuuJoHawo
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Dream to Control: Learning Behaviors by Latent Imagination
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "google", "rnn", "recurrent", "reinforcement learning", "deep reinforcement learning", "imagination", "latent space", "world model", "control", "deepmind", "deep mind" ]
Dreamer is a new RL agent by DeepMind that learns a continuous control task through forward-imagination in latent space. https://arxiv.org/abs/1912.01603 Videos: https://dreamrl.github.io/ Abstract: Learned world models summarize an agent's experience to facilitate learning complex behaviors. While learning world models from high-dimensional sensory inputs is becoming feasible through deep learning, there are many potential ways for deriving behaviors from them. We present Dreamer, a reinforcement learning agent that solves long-horizon tasks from images purely by latent imagination. We efficiently learn behaviors by propagating analytic gradients of learned state values back through trajectories imagined in the compact state space of a learned world model. On 20 challenging visual control tasks, Dreamer exceeds existing approaches in data-efficiency, computation time, and final performance. Authors: Danijar Hafner, Timothy Lillicrap, Jimmy Ba, Mohammad Norouzi Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at Dream to Control: Learning Behaviors by Latent Imagination by Danijar Hafner, Timothy Lillicrap, Jimmy Ba and Mohammad Norouzi. This is a reinforcement learning paper that iterates on a series of previous papers where the goal is to learn a policy. In this case they want to learn policies for these kinds of continuous control tasks of these physics-based robots, these hopper or walker types of tasks where you have to control these joints in order to move forward. The goal is that you have multiple observations as you do in reinforcement learning and from each observation you need to somehow come up with an action of what to do. Then that will give you the next observation as well as a reward. If your goal is to move this spider, maybe the reward is proportional to how far you move. So your goal is to collect the maximum reward, which would mean you have to move the spider as far as possible simply by doing the correct actions. The goal of this paper now is to do this by learning to plan ahead in this latent space. As you can see here, the way they do it is they take the observation and they feed it through an encoder. You can think of this as maybe a convolutional neural network or something. Anything that can work, that can take an image as an input and give you a hidden representation. This here is the hidden representation. From this hidden representation you can determine what the next action is going to be. Then you get a new observation and then again you can feed that along with the last hidden state into a new hidden state. Previous models do this a lot. You encode your observation and you have a recurrent neural network that incorporates all of the observations into a hidden state along with the actions you take. Then you always decide on a next action to do. What does this model do differently? This model wants to do this all in hidden space. This model wants to say I am here, I have this observation. Now my encoder tells me that this is going to give me this hidden state. Now what it wants to do is it wants to take in the action that it's doing and without seeing the next observation, it wants to predict it. It wants to say if I am here and I do this action, what might the next state be? The action might be to put the joystick to the right. It will learn the hidden state corresponding to the spider being a bit more to the right. This is a bit more to the right than it is right now. It will need to do so a number of time steps into the future and it will learn from its own imagination. It will imagine into the future how the hidden states look and then it will learn from that instead of having to really do the actions in the real world. We've already looked at a number of papers including something like MuZero or I2A or something like this. This now is slightly different. You can see what's different here. What is different is in MuZero we used this latent model in order to plan ahead, like in order to do our decision tree planning ahead and so on. This model doesn't do this. This model still wants to come up with a single policy where you encode your state. On the right is the final result. You encode your state, it gets you to a hidden representation and then from that you determine what your action is going to be and you have your next state and so on. The final goal is simply going to be a policy like a single shot policy without any Monte Carlo tree expansion and so on.
What it wants to do is it wants to learn this policy not by interacting in the real world like here on the left but actually by interacting only in the dream world right here. The crucial part if you want to learn from your dreams is to make sure that your dreams are an accurate representation of the real world. We already saw this in a paper called World Models by Jürgen Schmidhuber. In that paper what they did was they first collected experience, like this one, and then they learned from the one observation to predict the next ones or to predict the next hidden states. They did so by basically moving in the world at random. They have this little spider thingy and they just do random movements. They randomly move around and thus they collect these trajectories and then they learn from the random trajectories. The difference that this paper does is it does these steps iteratively. It will not learn from random policy but it will actually first start out learning this random, learning a good policy for its environment model, then acting going back and using that policy in order to learn a better environment model and then again learn using the better environment model in order to learn a better policy. If this wasn't clear enough we'll jump to the algorithm. The algorithm isn't actually too complicated. As I said I think it's a relatively minor iteration on previous research but it appears to work and it works in these kind of continuous control tasks. You see you have three models here that you need to learn and that's what you see over here. There is representation, transition and reward and you'll see they all have the same parameters. That gives you an indication that these things are a single model. Now what is the model representation, transition and reward? This is the thing on the left here. In this part of the algorithm you assume that you have a policy. You already know what action you do or you can even assume that you have some experience. You have your agent is running with a given policy and you simply collect that and now you're trying to learn. Let me scratch all of this. What do you have given? Given is the observation sequence and the actions you took and the rewards you got. That's also given. Each action gives you reward. These things are given, provided to you and now what do you want to learn? You want to learn a representation and a transition and let's say a reward. You also want to predict the next reward. This thing, this thing. As we already said you can do this by encoding the state using for example a CNN and then using an LSTM in order to incorporate this over time. What you learn is the transition from one hidden state to the next hidden state and you also learn how the observation goes into the hidden state. Thirdly you learn that if I'm in this hidden state and I take this particular action I will get this reward in the future. You can learn this from just a set of pre-computed or from a set of experience that you have in your let's say your replay buffer. This is one model and you learn this here in this first step in this called dynamics learning section. You see while not converged, you do dynamics learning, you draw data sequences from your experience, then you compute the model states. These are the hidden states and then you update this parameter theta using representation learning. They don't really specify what representation learning is but they do give examples of what you can do. 
I think their point is whatever you need to do in order to learn this representation. One example is actually drawn here. One example is you can learn a model that reconstructs the next state or actually sorry reconstructs the same state. You can learn a model that predicts. If you give the observation as an input it goes through the hidden state. You can learn a decoder that reconstructs that observation. This is usually done in things like variational auto encoders in order to produce generative models. This part here would be the generator and that would be kind of the thing of interest if you are doing a variational auto encoder. Of course here our quantity of interest is this encoder model because we want a good representation of the state. It comes down to the same thing. If you can learn a model that learns to accurately reconstruct the observation then your representation here in the middle is probably an informative one. Because you learn the same model across multiple observations that means it can accurately encode what makes one observation different from another one. This is how you learn the theta parameters. The other models here are the action and the value parameters. This is here in the step called behavior learning. In the behavior learning what they say is imagine trajectories from each of the states that you have. What you're going to do is from each of the observations here you're going to obtain the hidden states. From each of the hidden states here, here is an observation from its hidden state, you're going to use the model that you learned here through the LSTM. This is terrible. Through the LSTM you're going to use that model to imagine future trajectories of hidden states. You have given, or now is the observation here, and the hidden state. You're going to imagine future hidden states, you're also going to imagine future rewards. You are going to use your policy in order to determine which actions you're going to take. The ultimate goal here is to learn a good policy, so a policy that will give you better rewards in the future. This is regular reinforcement learning, except that the difference is in regular reinforcement learning I have my observation, I encode it and then I determine what action I want to take. Then I feed that action back into the environment, which would give me the next observation. Then I'd use that to determine, maybe in conjunction with the last hidden state, the next action. In this thing, since we learned a dynamics model of the hidden states, we can simply determine the action and then simply compute what the probable next hidden state is going to be. Then use that to determine an action again and so on. There's no need to go through the environment, which means potentially we can learn much faster without having to expensively interact with the environment. That allows us to basically... Also these models here, they might be quite large, so our backprop now only needs to happen through this path basically, if we want to, or through this path here, in case we have discrete actions. That will be the dynamics learning. As you can see, we predict the rewards and the values and compute value estimates. Then we update these parameters. What we have is here a value function. The value function is dependent on this psi here. This we update using a gradient of its output minus the true value. This here is an estimate of the value. As you know, a value function is supposed to tell you the complete future reward given a state. 
It's important for us that we have a function that can estimate that, because of course then we can take actions. If we can make this function go high and this is an accurate function, that means we get a lot of reward in the future. It's important to learn this function. Here you can see we adjust it into the direction of matching this quantity better. We'll get to this quantity in a second. You can also see we update this parameter, which is the action model. Here you see that the action model depends on this. This is our policy. This thing here determines which action we take. We update it into the direction. This is a gradient with respect to this value function. We train the policy to maximize the value, which is all the future rewards that we get. Of course we can do this because we can now back propagate through all of these time steps. We have this transition model. We can back propagate through all of this, which is pretty cool. I think in my opinion the workhorse of this paper might be this quantity here. How exactly do you compute the value of a state? Especially in these continuous control tasks you sometimes have a lot of steps. These trajectories might be pretty long and they might be longer than what you can back propagate here reasonably from time step to time step. Even an LSTM might only be able to back prop through a couple of dozen or maybe a few hundred steps in time. Maybe you have longer trajectories here. I think this value estimate here is a main component of extending that range. They say this is according to equation 6 and this is what it does. This is my opinion that this here is the workhorse of the method. It's a three-step process actually. It's pretty heavy. You see this is the quantity they estimate with the value function. It is set between an average over... H is the time horizon that you're looking for. It is set between these two things across the sum over the time horizon. Now each of those things again here is a sum over this tau here, which is this tau and H minus 1. H here is the minimum of tau plus K and tau plus horizon. This quantity looks K steps into the future. For each step to the horizon we look K steps into the future. For each step we look into the future we sum again across these quantities here. These quantities here, what is that? It's a mixture of the reward you get in that particular step plus your own your estimate of the value function at the at the horizon step discounted by that. So it's a pretty... Imagine you have like a time number of steps that you took and each time you get a reward. This is a very complicated way of going into the future, summing up the rewards, going more steps, summing up the rewards again in different fashion and then mixing these individual quantities. So this one, this one, this one that you got from accumulating all of these in a weird fashion. That allows you to look way beyond. Especially you see here your estimate of the value function will actually include your own value function that again probably looks into the future. So what you accumulate from the last step in your time horizon already includes information from all the future steps because you take your own value estimate into account. This is I think it's very convoluted but again I think this complicated value estimate allows you to have a better value estimate far into the future. They do show some kind of samples here of what they can do. I haven't found any videos of it unfortunately but it appears to work pretty well. 
They have a discussion of different representation learning methods and different experiments and ablations and so on. So I invite you to look at this paper and I hope this was somewhat clear. Bye bye.
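The value estimate the transcript calls the workhorse of the method (k-step imagined returns mixed together and bootstrapped with the learned value function at the horizon) is essentially a λ-return computed over the imagined latent trajectory. Below is a rough sketch of how such a target is commonly computed; it is not claimed to be the paper's exact equation 6, and the function name, the indexing convention (values[t] is v(s_t), bootstrap is the value of the state just past the horizon), and the toy numbers are assumptions made for this illustration.

```python
import numpy as np

def lambda_returns(rewards, values, bootstrap, gamma=0.99, lam=0.95):
    """Blend 1-step targets with longer imagined returns (TD(lambda)-style),
    working backwards from a bootstrapped value at the imagination horizon."""
    H = len(rewards)
    targets = np.zeros(H)
    next_target = bootstrap   # value of the state just past the horizon
    next_value = bootstrap
    for t in reversed(range(H)):
        # One-step piece r_t + gamma * v(s_{t+1}), mixed with the longer return.
        targets[t] = rewards[t] + gamma * ((1 - lam) * next_value + lam * next_target)
        next_target = targets[t]
        next_value = values[t]
    return targets

# Toy usage with made-up imagined rewards and predicted values.
r = np.array([1.0, 0.0, 0.5])
v = np.array([2.0, 1.5, 1.0])
print(lambda_returns(r, v, bootstrap=0.8))
```

The policy is then trained to make these targets large and the value function is regressed toward them, with gradients flowing back through the imagined latent transitions rather than through the real environment.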
[ { "end": 5.92, "start": 0, "text": " Hi there! Today we're looking at Dream to Control Learning Behaviors by Latent" }, { "end": 13.08, "start": 5.92, "text": " Imagination by Dani Jarhofner, Timothy Lillikrup, Jimmy Baa and" }, { "end": 21.2, "start": 13.08, "text": " Mohamed Nerozi. This is a reinforcement learning paper that iterates on a" }, { "end": 31.439999999999998, "start": 21.2, "text": " series of previous papers where the goal is to learn a policy. In this" }, { "end": 35.76, "start": 31.439999999999998, "text": " case they want to learn policies for these kind of continuous control tasks" }, { "end": 42.76, "start": 35.76, "text": " of these physics-based robots, these hopper or walker types of tasks where" }, { "end": 53.26, "start": 42.76, "text": " you have to control these joints in order to move forward. The" }, { "end": 57.72, "start": 53.26, "text": " goal is that you have multiple observations as you do in reinforcement" }, { "end": 64.08, "start": 57.72, "text": " learning and from each observation you need to somehow come up with an action" }, { "end": 71.88, "start": 64.08, "text": " of what to do. Then that will give you the next observation as well as a" }, { "end": 80.52, "start": 71.88, "text": " reward. If your goal is to move this spider, maybe the reward is" }, { "end": 85.64, "start": 80.52, "text": " proportional to how far you move. So your goal is to collect the maximum reward," }, { "end": 91.47999999999999, "start": 85.64, "text": " which would mean you have to move the spider as far as possible simply by" }, { "end": 100.08, "start": 91.47999999999999, "text": " doing the correct actions. The goal of this paper now is to do this by" }, { "end": 108.6, "start": 100.08, "text": " learning to plan ahead in this latent space. As you can see" }, { "end": 115.28, "start": 108.6, "text": " here, the way they do it is they take the observation and they feed it through an" }, { "end": 121.03999999999999, "start": 115.28, "text": " encoder. You can think of this as maybe a convolutional neural network or" }, { "end": 125.92, "start": 121.03999999999999, "text": " something. Anything that can work, that can take an image as an input and give" }, { "end": 132.72, "start": 125.92, "text": " you a hidden representation. This here is the hidden representation. From" }, { "end": 137.64000000000001, "start": 132.72, "text": " this hidden representation you can determine what the next action is going" }, { "end": 144.24, "start": 137.64000000000001, "text": " to be. Then you get a new observation and then again you can feed that along" }, { "end": 151.08, "start": 144.24, "text": " with the last hidden state into a new hidden state. Previous" }, { "end": 157.52, "start": 151.08, "text": " models do this a lot. You encode your observation and you have a" }, { "end": 163.72000000000003, "start": 157.52, "text": " recurrent neural network that incorporates all of the observations" }, { "end": 167.8, "start": 163.72000000000003, "text": " into a hidden state along with the actions you take. Then you always" }, { "end": 176, "start": 167.8, "text": " decide on a next action to do. What does this model do differently? This model" }, { "end": 187.16, "start": 176, "text": " wants to do this all in hidden space. This model wants to say" }, { "end": 193.16, "start": 187.16, "text": " I am here, I have this observation. Now my encoder tells me that this is going to" }, { "end": 198.44, "start": 193.16, "text": " give me this hidden state. 
Now what it wants to do is it wants to take in the" }, { "end": 205.04, "start": 198.44, "text": " action that it's doing and without seeing the next observation, it wants to" }, { "end": 211.56, "start": 205.04, "text": " predict it. It wants to say if I am here and I do this action, what" }, { "end": 215.72, "start": 211.56, "text": " might the action be? The action might be to put the joystick to the right. It will" }, { "end": 221.88, "start": 215.72, "text": " learn the hidden state corresponding to the spider being a bit more to the right." }, { "end": 228.68, "start": 221.88, "text": " This is a bit more to the right than it is right now. It will need to" }, { "end": 235.28, "start": 228.68, "text": " do so a number of time steps into the future and it will learn from" }, { "end": 243.4, "start": 235.28, "text": " its own imagination. It will imagine into the future how the hidden" }, { "end": 250.16, "start": 243.4, "text": " states look and then it will learn from that instead of having to really do the" }, { "end": 254.72, "start": 250.16, "text": " actions in the real world. We've already looked at a number of papers" }, { "end": 262.88, "start": 254.72, "text": " including something like mu0 or I2A or something like this. This now is" }, { "end": 268.64, "start": 262.88, "text": " slightly different. You can see what's different here." }, { "end": 275.44, "start": 268.64, "text": " What is different is in mu0 we used this latent model in order to" }, { "end": 280.24, "start": 275.44, "text": " plan ahead, like in order to do our decision tree planning ahead and so on." }, { "end": 284.88, "start": 280.24, "text": " This model doesn't do this. This model still wants to come up with a single" }, { "end": 291.04, "start": 284.88, "text": " policy where you encode your state. On the right is the final result." }, { "end": 295.28000000000003, "start": 291.04, "text": " You encode your state, it gets you to a hidden representation and then from that" }, { "end": 301.8, "start": 295.28000000000003, "text": " you determine what your actions going to be and you have your next state and so on." }, { "end": 308.24, "start": 301.8, "text": " The final goal is simply going to be a policy like a single shot policy" }, { "end": 315.92, "start": 308.24, "text": " without any Monte Carlo tree expansion and so on. What it wants to do is it" }, { "end": 321.64, "start": 315.92, "text": " wants to learn this policy not by interacting in the real world like here" }, { "end": 330.76, "start": 321.64, "text": " on the left but actually by interacting only in the dream world right here." }, { "end": 335.88, "start": 330.76, "text": " The crucial part if you want to learn from your dreams is to make sure" }, { "end": 345.2, "start": 335.88, "text": " that your dreams are an accurate representation of the real world." }, { "end": 351.12, "start": 345.2, "text": " We already saw this in a paper called World Models by Jürgen Schmidhuber." }, { "end": 359.96, "start": 351.12, "text": " In that paper what they did was they first collected experience," }, { "end": 367.08, "start": 359.96, "text": " like this one, and then they learned from the one observation" }, { "end": 376.52, "start": 367.08, "text": " to predict the next ones or to predict the next hidden states." }, { "end": 383.03999999999996, "start": 376.52, "text": " They did so by basically moving in the world at random. 
They have this" }, { "end": 389.4, "start": 383.03999999999996, "text": " little spider thingy and they just do random movements. They randomly" }, { "end": 394.35999999999996, "start": 389.4, "text": " move around and thus they collect these trajectories and then they learn from" }, { "end": 399.91999999999996, "start": 394.35999999999996, "text": " the random trajectories. The difference that this paper does is it does these" }, { "end": 405.56, "start": 399.91999999999996, "text": " steps iteratively. It will not learn from random policy but it will" }, { "end": 412.59999999999997, "start": 405.56, "text": " actually first start out learning this random, learning a good policy for its" }, { "end": 420.24, "start": 412.6, "text": " environment model, then acting going back and using that policy in order to learn" }, { "end": 425.12, "start": 420.24, "text": " a better environment model and then again learn using the better environment" }, { "end": 433.28000000000003, "start": 425.12, "text": " model in order to learn a better policy. If this wasn't clear enough we'll jump" }, { "end": 441.64000000000004, "start": 433.28000000000003, "text": " to the algorithm. The algorithm isn't actually too complicated. As I said" }, { "end": 447.76, "start": 441.64, "text": " I think it's a relatively minor iteration on previous research but it" }, { "end": 454.03999999999996, "start": 447.76, "text": " appears to work and it works in these kind of continuous control tasks." }, { "end": 458.44, "start": 454.03999999999996, "text": " You see you have three models here that you need to learn and that's what you see" }, { "end": 463.32, "start": 458.44, "text": " over here. There is representation, transition and reward and you'll see" }, { "end": 468.24, "start": 463.32, "text": " they all have the same parameters. That gives you an indication that these" }, { "end": 474.16, "start": 468.24, "text": " things are a single model. Now what is the model representation," }, { "end": 482.64, "start": 474.16, "text": " transition and reward? This is the thing on the left here." }, { "end": 491.24, "start": 482.64, "text": " In this part of the algorithm you assume that you have a policy. You" }, { "end": 497.76, "start": 491.24, "text": " already know what action you do or you can even assume that you have some" }, { "end": 503.92, "start": 497.76, "text": " experience. You have your agent is running with a given policy and you" }, { "end": 512.28, "start": 503.92, "text": " simply collect that and now you're trying to learn. Let me scratch all of" }, { "end": 523.48, "start": 512.28, "text": " this. What do you have given? Given is the observation sequence and the actions" }, { "end": 534.32, "start": 523.48, "text": " you took and the rewards you got. That's also given. Each action gives" }, { "end": 542.36, "start": 534.32, "text": " you reward. These things are given, provided to you and now what do" }, { "end": 552.32, "start": 542.36, "text": " you want to learn? You want to learn a representation and a transition and" }, { "end": 562.48, "start": 552.32, "text": " let's say a reward. You also want to predict the next reward. This thing," }, { "end": 573.12, "start": 562.48, "text": " this thing. As we already said you can do this by encoding the state using" }, { "end": 580.6, "start": 573.12, "text": " for example a CNN and then using an LSTM in order to incorporate this over time." 
}, { "end": 587.28, "start": 580.6, "text": " What you learn is the transition from one hidden state to the next hidden" }, { "end": 594.6800000000001, "start": 587.28, "text": " state and you also learn how the observation goes into the hidden state." }, { "end": 602.2, "start": 594.6800000000001, "text": " Thirdly you learn that if I'm in this hidden state and I take this particular" }, { "end": 608.8000000000001, "start": 602.2, "text": " action I will get this reward in the future. You can learn this from" }, { "end": 615.04, "start": 608.8, "text": " just a set of pre-computed or from a set of experience that you have in your" }, { "end": 621.28, "start": 615.04, "text": " let's say your replay buffer. This is one model and you learn this here" }, { "end": 627.3199999999999, "start": 621.28, "text": " in this first step in this called dynamics learning section. You see" }, { "end": 637.56, "start": 627.3199999999999, "text": " while not converged, you do dynamics learning, you draw data sequences from" }, { "end": 643.64, "start": 637.56, "text": " your experience, then you compute the model states. These are the hidden" }, { "end": 651.68, "start": 643.64, "text": " states and then you update this parameter theta using representation" }, { "end": 656.64, "start": 651.68, "text": " learning. They don't really specify what representation learning is but they" }, { "end": 663.0799999999999, "start": 656.64, "text": " do give examples of what you can do. I think their point is whatever you need" }, { "end": 668.84, "start": 663.08, "text": " to do in order to learn this representation. One example is" }, { "end": 679.2800000000001, "start": 668.84, "text": " actually drawn here. One example is you can learn a model that reconstructs the" }, { "end": 685.2800000000001, "start": 679.2800000000001, "text": " next state or actually sorry reconstructs the same state. You can learn a" }, { "end": 691.72, "start": 685.2800000000001, "text": " model that predicts. If you give the observation as an input it goes" }, { "end": 699.4, "start": 691.72, "text": " through the hidden state. You can learn a decoder that reconstructs that" }, { "end": 705.24, "start": 699.4, "text": " observation. This is usually done in things like variational auto encoders in" }, { "end": 710.44, "start": 705.24, "text": " order to produce generative models. This part here would be the" }, { "end": 714.64, "start": 710.44, "text": " generator and that would be kind of the thing of interest if you are doing a" }, { "end": 720.9200000000001, "start": 714.64, "text": " variational auto encoder. Of course here our quantity of interest is this" }, { "end": 729.1999999999999, "start": 720.92, "text": " encoder model because we want a good representation of the state." }, { "end": 734.4799999999999, "start": 729.1999999999999, "text": " It comes down to the same thing. If you can learn a model that learns to" }, { "end": 740.68, "start": 734.4799999999999, "text": " accurately reconstruct the observation then your representation here in the" }, { "end": 746.76, "start": 740.68, "text": " middle is probably an informative one. Because you learn the same model" }, { "end": 753.28, "start": 746.76, "text": " across multiple observations that means it can accurately encode what makes one" }, { "end": 759.3, "start": 753.28, "text": " observation different from another one. This is how you learn the" }, { "end": 768.36, "start": 759.3, "text": " theta parameters. 
The other models here are the action and the value" }, { "end": 775.08, "start": 768.36, "text": " parameters. This is here in the step called behavior learning. In the" }, { "end": 780.2800000000001, "start": 775.08, "text": " behavior learning what they say is imagine trajectories from each of the" }, { "end": 785.32, "start": 780.2800000000001, "text": " states that you have. What you're going to do is from each of the observations" }, { "end": 791.64, "start": 785.32, "text": " here you're going to obtain the hidden states. From each" }, { "end": 797.48, "start": 791.64, "text": " of the hidden states here, here is an observation from its hidden state," }, { "end": 806.12, "start": 797.48, "text": " you're going to use the model that you learned here through the LSTM." }, { "end": 812.52, "start": 806.12, "text": " This is terrible. Through the LSTM you're going to use that model to imagine future" }, { "end": 820.9200000000001, "start": 812.52, "text": " trajectories of hidden states. You have given, or now is the" }, { "end": 826.72, "start": 820.9200000000001, "text": " observation here, and the hidden state. You're going to imagine future hidden" }, { "end": 838.6, "start": 826.72, "text": " states, you're also going to imagine future rewards. You are going to use" }, { "end": 846.4, "start": 838.6, "text": " your policy in order to determine which actions you're" }, { "end": 852.88, "start": 846.4, "text": " going to take. The ultimate goal here is to learn a good policy, so a" }, { "end": 858.56, "start": 852.88, "text": " policy that will give you better rewards in the future. This is" }, { "end": 867.36, "start": 858.56, "text": " regular reinforcement learning, except that the difference is in regular" }, { "end": 873.36, "start": 867.36, "text": " reinforcement learning I have my observation, I encode it and then I" }, { "end": 878, "start": 873.36, "text": " determine what action I want to take. Then I feed that action back into the" }, { "end": 883.28, "start": 878, "text": " environment, which would give me the next observation. Then I'd use that to" }, { "end": 888.48, "start": 883.28, "text": " determine, maybe in conjunction with the last hidden state, the next action." }, { "end": 894, "start": 888.48, "text": " In this thing, since we learned a dynamics model of the hidden states, we can simply" }, { "end": 899.76, "start": 894, "text": " determine the action and then simply compute what the probable next hidden" }, { "end": 906.32, "start": 899.76, "text": " state is going to be. Then use that to determine an action again and so on." }, { "end": 910.7600000000001, "start": 906.32, "text": " There's no need to go through the environment, which means potentially we" }, { "end": 916.36, "start": 910.7600000000001, "text": " can learn much faster without having to expensively interact with the" }, { "end": 925.7600000000001, "start": 916.36, "text": " environment. That allows us to basically... Also these models here, they might be" }, { "end": 931.72, "start": 925.7600000000001, "text": " quite large, so our backprop now only needs to happen through this path" }, { "end": 938.6, "start": 931.72, "text": " basically, if we want to, or through this path here, in case we have" }, { "end": 948.28, "start": 938.6, "text": " discrete actions. That will be the dynamics learning." }, { "end": 957.76, "start": 948.28, "text": " As you can see, we predict the rewards and the values and" }, { "end": 964.8, "start": 957.76, "text": " compute value estimates. 
Then we update these parameters. What we have" }, { "end": 971.4399999999999, "start": 964.8, "text": " is here a value function. The value function is dependent on this psi here." }, { "end": 981.52, "start": 971.4399999999999, "text": " This we update using a gradient of its output minus the true value." }, { "end": 985.68, "start": 981.52, "text": " This here is an estimate of the value. As you know, a value function is" }, { "end": 993.28, "start": 985.68, "text": " supposed to tell you the complete future reward given a state." }, { "end": 998.0799999999999, "start": 993.28, "text": " It's important for us that we have a function that can estimate that, because of" }, { "end": 1004.16, "start": 998.0799999999999, "text": " course then we can take actions. If we can make this function go high and this" }, { "end": 1011.12, "start": 1004.16, "text": " is an accurate function, that means we get a lot of reward in the future." }, { "end": 1015, "start": 1011.12, "text": " It's important to learn this function. Here you can see we adjust it into the" }, { "end": 1020.76, "start": 1015, "text": " direction of matching this quantity better. We'll get to this quantity in a" }, { "end": 1028.92, "start": 1020.76, "text": " second. You can also see we update this parameter, which is the action model." }, { "end": 1034.96, "start": 1028.92, "text": " Here you see that the action model depends on this. This is our policy." }, { "end": 1042.16, "start": 1034.96, "text": " This thing here determines which action we take. We update it into the" }, { "end": 1046.88, "start": 1042.16, "text": " direction. This is a gradient with respect to this value function." }, { "end": 1053.68, "start": 1046.88, "text": " We train the policy to maximize the value, which is all the future rewards that we get." }, { "end": 1059.52, "start": 1053.68, "text": " Of course we can do this because we can now back propagate through all of these" }, { "end": 1065.52, "start": 1059.52, "text": " time steps. We have this transition model. We can back" }, { "end": 1073.8, "start": 1065.52, "text": " propagate through all of this, which is pretty cool. I think in my opinion the" }, { "end": 1080.4, "start": 1073.8, "text": " workhorse of this paper might be this quantity here." }, { "end": 1088.6399999999999, "start": 1080.4, "text": " How exactly do you compute the value of a state? Especially in these continuous" }, { "end": 1096.3600000000001, "start": 1088.64, "text": " control tasks you sometimes have a lot of steps. These trajectories" }, { "end": 1101.96, "start": 1096.3600000000001, "text": " might be pretty long and they might be longer than what you can back propagate" }, { "end": 1111.6000000000001, "start": 1101.96, "text": " here reasonably from time step to time step. Even an LSTM might only be" }, { "end": 1117.2, "start": 1111.6000000000001, "text": " able to back prop through a couple of dozen or maybe a few hundred steps in" }, { "end": 1125.1200000000001, "start": 1117.2, "text": " time. Maybe you have longer trajectories here. I think this" }, { "end": 1132.88, "start": 1125.1200000000001, "text": " value estimate here is a main component of extending that range. They say this" }, { "end": 1140.32, "start": 1132.88, "text": " is according to equation 6 and this is what it does. This is my" }, { "end": 1145.48, "start": 1140.32, "text": " opinion that this here is the workhorse of the method. It's a" }, { "end": 1151.28, "start": 1145.48, "text": " three-step process actually. It's pretty heavy. 
You see this is the" }, { "end": 1160.24, "start": 1151.28, "text": " quantity they estimate with the value function. It is set between an" }, { "end": 1167.6, "start": 1160.24, "text": " average over... H is the time horizon that you're looking for. It is" }, { "end": 1177, "start": 1167.6, "text": " set between these two things across the sum over the time horizon. Now each of" }, { "end": 1189.6, "start": 1177, "text": " those things again here is a sum over this tau here, which is this" }, { "end": 1199.8, "start": 1189.6, "text": " tau and H minus 1. H here is the minimum of tau plus K and tau plus horizon." }, { "end": 1206.84, "start": 1199.8, "text": " This quantity looks K steps into the future. For each" }, { "end": 1219.8, "start": 1206.84, "text": " step to the horizon we look K steps into the future. For each step we" }, { "end": 1225.76, "start": 1219.8, "text": " look into the future we sum again across these quantities here. These" }, { "end": 1231.12, "start": 1225.76, "text": " quantities here, what is that? It's a mixture of the reward you get in that" }, { "end": 1239.6, "start": 1231.12, "text": " particular step plus your own your estimate of the value function at the" }, { "end": 1246.36, "start": 1239.6, "text": " at the horizon step discounted by that. So it's a pretty... Imagine you have" }, { "end": 1252.12, "start": 1246.36, "text": " like a time number of steps that you took and each time you get a reward." }, { "end": 1258.32, "start": 1252.12, "text": " This is a very complicated way of going into the future," }, { "end": 1264.24, "start": 1258.32, "text": " summing up the rewards, going more steps, summing up the rewards again in different" }, { "end": 1269.4399999999998, "start": 1264.24, "text": " fashion and then mixing these individual quantities. So this one, this" }, { "end": 1273.36, "start": 1269.4399999999998, "text": " one, this one that you got from accumulating all of these in a weird" }, { "end": 1282.2, "start": 1273.36, "text": " fashion. That allows you to look way beyond. Especially you see here your" }, { "end": 1290.64, "start": 1282.2, "text": " estimate of the value function will actually include your own value function" }, { "end": 1298.1200000000001, "start": 1290.64, "text": " that again probably looks into the future. So what you accumulate from the" }, { "end": 1304.16, "start": 1298.1200000000001, "text": " last step in your time horizon already includes information from all the future" }, { "end": 1311.0800000000002, "start": 1304.16, "text": " steps because you take your own value estimate into account. This is I think" }, { "end": 1319.24, "start": 1311.08, "text": " it's very convoluted but again I think this complicated value" }, { "end": 1326.9199999999998, "start": 1319.24, "text": " estimate allows you to have a better value estimate far into the future." }, { "end": 1336.48, "start": 1327.72, "text": " They do show some kind of samples here of what they can do. I haven't found any" }, { "end": 1342.92, "start": 1336.48, "text": " videos of it unfortunately but it appears to work pretty well. They have a" }, { "end": 1346.8, "start": 1342.92, "text": " discussion of different representation learning methods and different" }, { "end": 1353.24, "start": 1346.8, "text": " experiments and ablations and so on. So I invite you to look at this paper and I" }, { "end": 1369.24, "start": 1353.24, "text": " hope this was somewhat clear. Bye bye." } ]
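One note on the transcript above: the value estimate it calls the workhorse of the method (equation 6, discussed near the end) is easier to see in code. Below is a minimal Python sketch of that kind of estimate, written from the description in the transcript and my reading of the Dream to Control paper, so treat it as an illustration rather than the authors' implementation: k-step returns sum imagined rewards and bootstrap with the learned value function at a clipped horizon, and those k-step returns are then blended with exponentially decaying weights. Function and variable names, and the default gamma and lambda values, are my own assumptions.

def k_step_return(rewards, values, tau, k, gamma):
    # Discounted imagined rewards from step tau up to h-1, then bootstrap with
    # the learned value estimate at step h, where h is clipped to the horizon.
    T = len(rewards)                   # values is assumed to have length T + 1
    h = min(tau + k, T)
    ret = sum(gamma ** (n - tau) * rewards[n] for n in range(tau, h))
    return ret + gamma ** (h - tau) * values[h]

def lambda_return(rewards, values, tau, gamma=0.99, lam=0.95):
    # Blend all k-step returns with exponentially decaying weights; this is what
    # lets the value target look far beyond the truncated imagination horizon.
    H = len(rewards) - tau
    if H == 0:
        return values[tau]
    mix = (1 - lam) * sum(lam ** (k - 1) * k_step_return(rewards, values, tau, k, gamma)
                          for k in range(1, H))
    return mix + lam ** (H - 1) * k_step_return(rewards, values, tau, H, gamma)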
XdpF9ZixIbI
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Can we Contain Covid-19 without Locking-down the Economy?
[ "Science & Technology" ]
[ "machine learning", "epidemiology", "worst case", "statistics", "hypothesis test", "covid", "corona", "coronavirus" ]
My thoughts on the let-the-young-get-infected argument. https://medium.com/amnon-shashua/can-we-contain-covid-19-without-locking-down-the-economy-2a134a71873f Abstract: In this article, we present an analysis of a risk-based selective quarantine model where the population is divided into low and high-risk groups. The high-risk group is quarantined until the low-risk group achieves herd-immunity. We tackle the question of whether this model is safe, in the sense that the health system can contain the number of low-risk people that require severe ICU care (such as life support systems). Authors: Shai Shalev-Shwartz, Amnon Shashua Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Can we contain COVID-19 without locking down the economy? That is the question, and I do care about this article because Shai Shalev-Shwartz is one of the bigger names in machine learning theory. So it was interesting for me to see what he and his collaborator had to say about the outbreak and the strategy to contain it. Now, 'contain' maybe isn't quite the right word for what they ask. I think the way they ask the question is: how are we going to get through this the best? And this is in no way an endorsement by me. I'm not a medical professional. Please just view this as a commentary and an explanation of what they are saying. I'll give my opinions along the way, of course. So they identify three different models for handling the spread of COVID-19, and we'll start with the third one, because they argue for the first one and this builds more suspense. The third model is a countrywide lockdown until the spread of the virus is under control. They say it could take anywhere from weeks to months. It is the safest route, but it does not prevent a second wave from occurring. Of course, if you have people, let's say these are people, the idea is that everyone just stays in their house until the virus is basically gone. Now they correctly say there is a risk of a second wave: because there is no immunity, even a single infected person still has the potential of creating another epicenter. So they don't consider this option. The next option is called containment-based selective quarantine, which means: find all the positive cases and put them in quarantine. So let's say we let you roam around freely, but we can test people and we know that some of them are positive, so we simply tell those people to stay at home. Now this depends a lot on how well you can test people, and it also depends on what they call the contagious time interval. We know that there are people who are contagious without showing symptoms. So unless you can test every single person all the time, this is likely to not really help a lot. There is data from various countries showing that it can actually reduce the load, but the authors basically argue against this option because of those contagious people, and because you can never test fast enough, accurately enough or thoroughly enough. And then they say there is risk-based selective quarantine, which means what? It means that some of these people are going to be at risk, and in this case we obviously mean old people. So old people, I'm going to draw them with a cane, not because old people aren't fit, just because they have better taste in canes. And then there are young people, and they run a smartphone with TikTok. And what we're going to say is: you youngsters, you're not really at risk from this, so you go out, you sneeze on each other, you go about your life normally, and you old people basically stay at home until all the young people have immunity. So we ramp up the cases, and then it flattens out eventually in the low-risk population. And at that point there is enough herd immunity: all these people are now immune, so the old person here, even if they now go out again, won't catch it, because everyone has already had it. So they argue for this particular strategy, or at least they analyze this particular strategy. Now, I have to say at the beginning that the core assumption here is that this quarantine of the high-risk people can basically be done in a perfect way.
So the assumption here is that you are able to perfectly quarantine all the high-risk people, and that the level of infection in the low-risk population has no influence on the level of infection in the high-risk population. And in my opinion, I simply don't believe that. I simply don't believe you can build this quarantine. Even these old people need food sometimes, and the nursing home needs staff. So even if they can reduce their contact with the outside world, they cannot be fully sheltered. And that means the more infections you have in the low-risk population, the more infections you will have in the high-risk population. So I think the fundamental core assumption of this model is quite flawed. That being said, let's analyze it. So we assume that none of the high-risk people are going to get sick, because they all stay at home. The math in this paper is actually pretty basic, so we'll go through it in a bit more detail to understand the core argument. They introduce the following quantities. M is the low-risk population, that is, its population size. Then there is nu. Nu is the probability that if you are sick, you need to go to the ICU. Sick simply means you have the virus, and ICU means that the symptoms are so bad that you need help from the medical system in order to overcome the disease. So if we multiply the population size by the probability that if you get sick you need to go to the ICU, what do we get? We get a worst-case scenario. And this, I find, is the good part of this analysis: the authors really don't rely on any kind of pandemic dynamics, epidemiology, exponential growth and so on. They simply consider the worst case. So M times nu, if you multiply these two numbers, what does that mean? That is the number of severe cases, severe meaning you need the ICU, if everybody gets sick at the same time. So this is the worst case. Let's say we all go out, the whole low-risk population, and we all sneeze in each other's faces as much as we can, and we just all get sick at the same time. Then this is the number of people going to the ICU. And they introduce the quantity B, which is the number of beds in the ICU. If the number of beds in the ICU is larger than this worst-case number of severe cases, then we are safe. So that's the argument. Basically, 'safe' doesn't mean nobody gets sick; it means no one will die from lack of an ICU bed, which is kind of the lever we have as a population if you assume everyone is going to get sick anyway. If the number of beds is larger than the worst-case number of ICU patients, we are safe. That's at least how they define safe. Alright, so that's their premise. Now what are they going to do? They're going to find a quantity with which they can bound this thing. So they are going to find an upper bound on the number of severe cases, and if this upper bound is lower than the number of beds, then they can say we're safe with this method. This is a worst-case analysis under their assumptions. Alright, so as I said, they don't resort to any kind of epidemiological dynamics; they simply estimate this thing from current numbers. They introduce two more quantities here, P star and K. Now K is the current number of severe cases, so it is kind of an analog to the worst-case number of severe cases from before; these two are connected.
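To make that criterion concrete before moving on, here is a minimal Python sketch of the worst-case check just described. The quantities are the ones from the article (M, nu and B); the function name and the numbers plugged in at the bottom are made up purely for illustration and do not come from the article.

def worst_case_safe(M, nu, B):
    # M  : size of the low-risk population
    # nu : probability that a sick low-risk person ends up needing the ICU
    # B  : number of available ICU beds
    worst_case_severe = M * nu   # everybody gets sick at the same time
    return worst_case_severe <= B

# Purely illustrative numbers, not taken from the article:
print(worst_case_safe(M=5_000_000, nu=0.001, B=8_000))   # 5,000 severe cases vs. 8,000 beds -> True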
K, again, is the current number of severe cases, and the quantity up here is the total possible, worst-case number of severe cases in the future. Likewise, P star is the percentage of people that are currently sick, and they correctly state that this is unknown, because we would only know it if we could test everybody who is sick, and sick here means just sick, not severe. Up here there is no corresponding factor, because of course you could imagine another factor, call it P plus or something, which would be the percentage of people who are sick in the worst case, and in our worst-case scenario that is one; that's why they don't include it there. Keep in mind that P star is a percentage, while K is an actual number. All right, now we do some basic reformulation. If we take this P star and multiply it by the total size of the population, we get the number of people who are currently sick: the percentage of currently sick people times the total population size. If we put that in the denominator and put K, the number of people who are currently severe, in the numerator, then we get an estimate of this quantity nu. Remember what nu is: nu is the probability that if you are sick, you go to the ICU, and going to the ICU means you're severe. So these are the current number of sick people and the current number of severe people, and this gives you an estimate of: if you are sick, what's the probability that you're severe? Now they argue that this number doesn't change over time, so this quantity is a constant. The probability that you go to the ICU if you are sick from this virus doesn't change over time, so we can estimate it with current numbers, which is a pretty reasonable thing to assume, unless the virus mutates or something. So we know the total size of the population and we know the current number of severe cases. You can make an argument about that: do we really know the current number of severe cases? Because there is exponential growth involved, this might be difficult to estimate, and they say the same thing. This is the only time they reference the dynamics of the situation: it grows at an exponential rate. So what we can do, they say, is take a worst-case upper bound, to be on the safe side, and perform a worst-case analysis. So instead of taking K, they add a confidence interval to it that is based on concentration inequalities. They don't use K; they use this K tilde, which has two additional summands. That is supposed to be an upper bound with confidence at least one minus delta, and delta you can set, for example, to 0.05, which gives you a 95 percent confidence that this is an upper bound. Now, this comes from some concentration bound, and there are certain assumptions behind this upper bound which I don't know enough about to critique here. I'm going to assume they are reasonable. If they are not, then of course that is an additional point of criticism of this work.
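To keep the notation straight, here is that estimator as a tiny Python sketch. The exact form of the concentration bound that produces K tilde is not spelled out above, so K tilde is simply taken as an input; the function name is my own.

def nu_upper_bound(K_tilde, p_star, M):
    # K_tilde : high-confidence upper bound on the current number of severe cases
    # p_star  : fraction of the population that is currently sick (unknown in practice)
    # M       : size of the low-risk population
    current_sick = p_star * M          # current number of sick people
    return K_tilde / current_sick      # conservative estimate of P(severe | sick)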
So they say: if we plug in this upper bound K tilde, then with this probability we can upper bound the quantity nu, which is exactly what we wanted, because we need to upper bound M times nu. That's what they say here. Since at the top we saw that M times nu is the worst-case number of severe cases, and we want to upper bound that, we can rearrange this. If we plug these two together, we see that the M cancels out, and we can upper bound M times nu by this quantity: the upper bound on the current severe cases divided by the percentage of currently sick people. So again, they reformulate and they plug in. This, of course, needs to be smaller than the number of beds. So they plug this in and say: what we have to check is whether this quantity, the number of beds, is larger than this quantity made up of the two quantities we know; if it is, then we are safe. Now, again, our goal is going to be to find a quantity that lower bounds P star but is still larger than this threshold here, and they do this via hypothesis testing. They call this threshold quantity P tilde, and they do a classic statistical hypothesis test where they ask: is P star significantly larger than P tilde? If that's the case, then we're safe. If not, we can't say that we are. And how do they do that? They say, OK, we have the population; I did draw this at one point, let's go back there. We have the population, and what we can do is just go out and uniformly test people, that is, randomly select people. Now, this is an old person, and old people stay at home. So we randomly select people to test and their test results come back: this one's healthy, this one's healthy, this one's healthy, this one's not healthy. So we have four tests, and out of the four, one was positive. Can we work out a hypothesis test from that? Can we decide whether P star is probably much larger than P tilde or not? And the answer is yes, because this is a uniform sample, so you can work out using classic statistical tools whether or not you can reject the null hypothesis. And they actually work this out and give a number, which is this: if we test N people, where N is roughly four and a half times the quantity B divided by K tilde, so the number of beds divided by the upper bound on the current severe cases, and among them we find at least 10 positive cases, then with a probability of 95 percent we know that the risk-based model is safe. The more infected people you find here, the better, because the number of severe cases is what it is at any given time; if many more people are infected, the probability that a sick person becomes severe must be lower. That's why it says 'at least'. So again, you go out, you test N people according to this formula, plugging in the numbers for your current situation, and if you find at least 10 positive people, then with a probability of at least 95 percent you know that this model is safe. Cool. And this is done using the classic statistical hypothesis testing literature. So I think that is a pretty cool result. But I do severely criticize the underlying assumption, which is that you can perfectly enforce this quarantine.
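Stated as code, the resulting procedure is very compact. The constants 4.5 and 10 below are the ones quoted above for the 95 percent confidence level; the hypothesis-testing derivation behind them is not reproduced, the function names are my own, and the example numbers at the bottom are made up.

import math

def sample_size(B, K_tilde):
    # Number of uniformly sampled people to test: about 4.5 * B / K_tilde
    return math.ceil(4.5 * B / K_tilde)

def declared_safe(num_positives):
    # At least 10 positives among the sampled people -> with ~95% confidence
    # the worst-case number of severe cases fits into the available ICU beds
    return num_positives >= 10

# Purely illustrative numbers:
N = sample_size(B=8_000, K_tilde=400)        # -> 90 people to test
print(N, declared_safe(num_positives=12))    # prints: 90 True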
Of course, if you can't perfectly enforce that quarantine, it means there is a direct correlation between the number of sick people in your low-risk population and the number of sick people in your high-risk population, which means that more of the high-risk population are going to get infected as well. That in turn means your number B of available ICU beds is going to drop severely, because high-risk people have a higher hospitalization rate, which makes the entire model we developed above less valid: B used to be a constant in the model, and now it's no longer a constant, it's sinking, and the worse it gets, the more it sinks. So that may turn what you initially thought was a safe model into a very unsafe model very quickly. And that doesn't even include all the high-risk people who are additionally in danger because you can't enforce the quarantine. All right, so this was my take on that. Take it for what it's worth, and I wish you a healthy pandemic. Bye bye.
[ { "end": 6, "start": 0, "text": " Can we contain COVID-19 without locking down the economy?" }, { "end": 11, "start": 6, "text": " This is a question and I do care about this article because" }, { "end": 16, "start": 11, "text": " Shai Shalef-Schwarz is one of the bigger names in machine learning theory." }, { "end": 21, "start": 16, "text": " So it was interesting for me to see what he and his collaborator here" }, { "end": 28, "start": 21, "text": " had to say about the kind of outbreak and the strategy to contain it." }, { "end": 35, "start": 28, "text": " So contain maybe isn't the right word that they ask." }, { "end": 44, "start": 35, "text": " I think the way they ask the question is how are we going to survive this the best?" }, { "end": 49, "start": 44, "text": " And so this in no means is an endorsement by me." }, { "end": 51, "start": 49, "text": " I'm not a medical professional." }, { "end": 59, "start": 51, "text": " Please just view this as a commentary and an explanation of what they are saying." }, { "end": 63, "start": 59, "text": " I'll give my opinions along the way, of course." }, { "end": 70, "start": 63, "text": " So they identify three different models for handling the spread of COVID-19." }, { "end": 78, "start": 70, "text": " And we'll start with the third one because they argue for the first one and this builds more suspense." }, { "end": 86, "start": 78, "text": " So they say there is countrywide lockdown, right, until the spread of the virus is under control." }, { "end": 89, "start": 86, "text": " They say it could take anywhere from weeks to months." }, { "end": 96, "start": 89, "text": " It is the safest route, but it does not prevent a second wave from occurring." }, { "end": 103, "start": 96, "text": " Now, of course, if you have people, let's say these are people, right," }, { "end": 115, "start": 103, "text": " then the thing is everyone just stays in there, stay in your house, right, everybody, right, until it's kind of gone." }, { "end": 122, "start": 115, "text": " Now they say correctly there is a risk of a second wave because only a single infected person," }, { "end": 129, "start": 122, "text": " because there's no immunity, still has the potential of creating another epicenter." }, { "end": 133, "start": 129, "text": " So they don't consider this option." }, { "end": 138, "start": 133, "text": " The next option is called containment-based selective quarantine," }, { "end": 144, "start": 138, "text": " which means find all the positive cases and put them in quarantine." }, { "end": 150, "start": 144, "text": " So let's say we go here and we let you roam around freely," }, { "end": 158, "start": 150, "text": " but we know we can test people and we know that some of them are positive." }, { "end": 162, "start": 158, "text": " So we simply tell them to stay at home, right." }, { "end": 167, "start": 162, "text": " Now this depends a lot on how well you can test people," }, { "end": 173, "start": 167, "text": " and it also depends on what they claim the contagious time interval." }, { "end": 178, "start": 173, "text": " We know that there are people that are contagious without showing symptoms." }, { "end": 187, "start": 178, "text": " So unless you can test every single person all the time, this is likely to not really help a lot." 
}, { "end": 192, "start": 187, "text": " There's various data from various countries that actually shows it can reduce the load," }, { "end": 200, "start": 192, "text": " but they basically argue against that because there are these contagious people" }, { "end": 206, "start": 200, "text": " and you can never test fast enough or accurate or thoroughly enough." }, { "end": 213, "start": 206, "text": " And then they say there is risk-based selective quarantine, which means what?" }, { "end": 219, "start": 213, "text": " It means that some of these people are going to be at risk." }, { "end": 223, "start": 219, "text": " And in this case, we obviously mean old people." }, { "end": 230, "start": 223, "text": " So old people, I'm going to draw them with a cane, not because old people aren't fit," }, { "end": 235, "start": 230, "text": " just because they have better tastes in canes." }, { "end": 241, "start": 235, "text": " And then there are young people and they run a smartphone with TikTok." }, { "end": 248, "start": 241, "text": " And what we're going to say is that you youngsters, you're not really at risk from this." }, { "end": 253, "start": 248, "text": " So you go out, you sneeze on each other, you go about your life normally," }, { "end": 262, "start": 253, "text": " and you old people basically stay at home until all the young people have immunity." }, { "end": 270, "start": 262, "text": " So we ramp up the cases and then it flattens out eventually in the low-risk population." }, { "end": 273, "start": 270, "text": " And at that point, there is enough herd immunity, right?" }, { "end": 279, "start": 273, "text": " All these people are now immune so that the old person here," }, { "end": 284, "start": 279, "text": " even if they now go out again, they won't catch it because everyone's already had it." }, { "end": 295, "start": 284, "text": " So they argue for this particular strategy, or at least they analyze this particular strategy." }, { "end": 304, "start": 295, "text": " Now, I have to say at the beginning that the core assumption here is that this quarantine of the high-risk people," }, { "end": 309, "start": 304, "text": " you can do basically in a perfect way." }, { "end": 317, "start": 309, "text": " So the assumption here is that you are able to perfectly quarantine all the high-risk people" }, { "end": 328, "start": 317, "text": " and that the level of infection in the low-risk population has no influence on the level of infection in the high-risk population." }, { "end": 332, "start": 328, "text": " And in my opinion, I simply don't believe that." }, { "end": 335, "start": 332, "text": " I simply don't believe you can build this quarantine." }, { "end": 343, "start": 335, "text": " I think even these old people, they need food sometimes, the nursing home needs staff." }, { "end": 350, "start": 343, "text": " So even if they can reduce their contact to the outside world, they cannot fully be sheltered." }, { "end": 356, "start": 350, "text": " And that means the more infections you have in the low-risk population," }, { "end": 360, "start": 356, "text": " the more infections you will have in the high-risk population." }, { "end": 366, "start": 360, "text": " So I think the fundamental core assumption of this model is quite flawed." }, { "end": 368, "start": 366, "text": " That being said, let's analyze it." }, { "end": 379, "start": 368, "text": " So we assume that all the high-risk people, none of them is going to get sick because they all stay at home." 
}, { "end": 383, "start": 379, "text": " So the math in this paper is actually pretty basic." }, { "end": 386, "start": 383, "text": " So we'll go through it a bit more detailed." }, { "end": 389, "start": 386, "text": " So we'll understand the core argument." }, { "end": 393, "start": 389, "text": " So they introduce the following quantities, M here." }, { "end": 397, "start": 393, "text": " M is the low-risk population, right?" }, { "end": 402, "start": 397, "text": " This is the population size." }, { "end": 408, "start": 402, "text": " V or new, let's call it new." }, { "end": 412, "start": 408, "text": " New here is the probability." }, { "end": 421, "start": 412, "text": " So that's the probability that if you are sick, you need to go to the ICU." }, { "end": 424, "start": 421, "text": " Right? So sick means simply you have the virus." }, { "end": 435, "start": 424, "text": " And ICU means that the symptoms are so bad that you need help from the medical system in order to overcome the disease." }, { "end": 445, "start": 435, "text": " So if we multiply the population size by the probability that if you get sick, you need to go to the ICU, what do we get?" }, { "end": 448, "start": 445, "text": " We get a worst-case scenario." }, { "end": 455, "start": 448, "text": " So basically the authors here, and I find this is the good part of this analysis." }, { "end": 464, "start": 455, "text": " They really don't rely on kind of pandemic dynamics, epidemiology, exponential growth and so on." }, { "end": 468, "start": 464, "text": " They simply consider the worst case." }, { "end": 473, "start": 468, "text": " So MD here, if you multiply these two numbers, what does that mean?" }, { "end": 478, "start": 473, "text": " That is the number of severe cases." }, { "end": 482, "start": 478, "text": " Severe meaning you need ICU cases." }, { "end": 492, "start": 482, "text": " If everybody gets sick, if all get sick." }, { "end": 498, "start": 492, "text": " If everybody gets sick at the same time, right?" }, { "end": 501, "start": 498, "text": " Same time." }, { "end": 510, "start": 501, "text": " So this is the work. So let's say we all go out, the lowest population, and we all sneeze in each other's faces as much as we can." }, { "end": 513, "start": 510, "text": " And we just all get sick at the same time." }, { "end": 519, "start": 513, "text": " Then this here is the number of people going to the ICU." }, { "end": 530, "start": 519, "text": " Right? And if this, so they introduce this quantity B here, B is the number of beds in the ICU." }, { "end": 539, "start": 530, "text": " If the number of beds in the ICU is larger than the worst case, severe cases, right?" }, { "end": 541, "start": 539, "text": " Then we are safe." }, { "end": 543, "start": 541, "text": " So that's the argument." }, { "end": 545, "start": 543, "text": " Basically it's not that we are safe." }, { "end": 549, "start": 545, "text": " It is no one will die from lack of an ICU bed." }, { "end": 553, "start": 549, "text": " Which is kind of the lever we have as a population." }, { "end": 557, "start": 553, "text": " If you assume everyone's going to get sick anyway and so on." }, { "end": 566, "start": 557, "text": " If the number of beds is larger than the worst case number of ICU patients, we are safe." }, { "end": 569, "start": 566, "text": " That's at least how they define safe." }, { "end": 572, "start": 569, "text": " Alright, so that's their premise." 
}, { "end": 574, "start": 572, "text": " Now what are they going to do?" }, { "end": 579, "start": 574, "text": " They're going to find a quantity where they can bound this thing." }, { "end": 585, "start": 579, "text": " So they are going to find a bound, an upper bound on the number of severe cases." }, { "end": 594, "start": 585, "text": " And if this upper bound is lower than the number of beds, then they can say we're safe with this method." }, { "end": 601, "start": 594, "text": " See this is a worst case analysis under their assumptions." }, { "end": 607, "start": 601, "text": " Alright, so I said they don't resort to any kind of epidemiological dynamics." }, { "end": 611, "start": 607, "text": " They simply estimate this thing from current numbers." }, { "end": 615, "start": 611, "text": " I'm going to introduce two more quantities here. P star and K." }, { "end": 626, "start": 615, "text": " Now K is the current number of severe cases." }, { "end": 632, "start": 626, "text": " So this is kind of an analog to this thing here." }, { "end": 635, "start": 632, "text": " So these two are connected." }, { "end": 646, "start": 635, "text": " This is the current number of severe cases and this up here is the total possible, like the worst case number of severe cases in the future." }, { "end": 661, "start": 646, "text": " Likewise, P star here is the percentage of people, the percent of people that are sick." }, { "end": 666, "start": 661, "text": " And they claim correctly this is unknown." }, { "end": 672, "start": 666, "text": " So if we could test everybody who is sick, not severe, just sick." }, { "end": 685, "start": 672, "text": " And up here this has no connection because of course you can imagine here another factor times, let's call this P plus or something." }, { "end": 693, "start": 685, "text": " Which is the number of people who are sick in the worst case, which of course in our worst case scenario is one." }, { "end": 695, "start": 693, "text": " So that's why they don't include it here." }, { "end": 702, "start": 695, "text": " So this is the current percentage of sick people." }, { "end": 706, "start": 702, "text": " So this here is a percentage and this here is an actual number." }, { "end": 709, "start": 706, "text": " Keep that in mind." }, { "end": 726, "start": 709, "text": " All right, now if we do some basic reformulation here, if we take this P star and multiply that by, you see it in this corner here," }, { "end": 731, "start": 726, "text": " multiply it by the total size of the population, right?" }, { "end": 739, "start": 731, "text": " We get the number of people who are currently sick." }, { "end": 743, "start": 739, "text": " This is a percentage of current sick ones, this is the total size of the population." }, { "end": 746, "start": 743, "text": " Get the number of people who are currently sick." }, { "end": 760, "start": 746, "text": " If we take that in the denominator and put K here, which is the number of people who are currently severe," }, { "end": 764, "start": 760, "text": " then we get an estimate of this quantity new." }, { "end": 766, "start": 764, "text": " So remember what new is?" }, { "end": 773, "start": 766, "text": " New is the probability if you are sick, you go to the ICU." }, { "end": 777, "start": 773, "text": " So the ICU means you're severe, right?" }, { "end": 783, "start": 777, "text": " So these are the current number of sick people and these are the current number of severe people." 
}, { "end": 791, "start": 783, "text": " This gives you an estimate of if you are sick, what's the probability that you're severe?" }, { "end": 800, "start": 791, "text": " Now they argue that this number, it doesn't change, independent of, so this quantity here is a constant." }, { "end": 805, "start": 800, "text": " So the probability that if you are sick from this virus, you go to the ICU, doesn't change over time." }, { "end": 809, "start": 805, "text": " So we can estimate it with current numbers, right?" }, { "end": 817, "start": 809, "text": " Which is a pretty reasonable thing to assume that this stays constant unless the virus mutates or something." }, { "end": 820, "start": 817, "text": " So we know the total size of the population." }, { "end": 826, "start": 820, "text": " We know the current number of severe cases." }, { "end": 829, "start": 826, "text": " You can make an argument about that." }, { "end": 832, "start": 829, "text": " So do we really know the current number of severe cases?" }, { "end": 837, "start": 832, "text": " Because there is an exponential growth involved, this might be difficult to estimate." }, { "end": 840, "start": 837, "text": " And they say the same thing." }, { "end": 845, "start": 840, "text": " So they say this is the only time where they reference the dynamics of the situation." }, { "end": 847, "start": 845, "text": " It grows at an exponential rate." }, { "end": 857, "start": 847, "text": " So what we can do is we can take a worst case upper bound, they say, to be on the safe side, perform a worst case analysis." }, { "end": 869, "start": 857, "text": " So instead of taking K, they add this confidence interval on it that is based on concentration inequalities." }, { "end": 878, "start": 869, "text": " So they don't use K, they use this K tilde here, which has two additional summons here." }, { "end": 885, "start": 878, "text": " That is supposed to be an upper bound with confidence, at least one over delta here." }, { "end": 888, "start": 885, "text": " And this you can set, for example, to be 0.05." }, { "end": 906, "start": 888, "text": " That gives you a 95 percent confidence that this is an upper bound on that." }, { "end": 910, "start": 906, "text": " Now, this comes from some concentration bound." }, { "end": 921, "start": 910, "text": " And there are certain assumptions behind this upper bound here, which I don't know enough about to critique them here." }, { "end": 923, "start": 921, "text": " I'm going to assume they are reasonable." }, { "end": 929, "start": 923, "text": " If they are not, then of course, that is an additional point of criticism of this work." }, { "end": 938, "start": 929, "text": " All right. So instead of using K here, we are saying we're on the safe side and we use this K tilde." }, { "end": 940, "start": 938, "text": " So we know this as well." }, { "end": 945, "start": 940, "text": " Now, the unknown quantity, of course, is this thing here, P star." }, { "end": 956, "start": 945, "text": " What is the percentage of people that are currently sick?" }, { "end": 960, "start": 956, "text": " So the goal is now to find that." }, { "end": 967, "start": 960, "text": " So they say, OK, if we plug in this upper bound of K tilde, then with this probability," }, { "end": 977, "start": 967, "text": " we can upper bound this quantity, nu, which is exactly what we wanted, because we need to upper bound MD." }, { "end": 979, "start": 977, "text": " That's what they say here." 
}, { "end": 990, "start": 979, "text": " So since at the top we saw that M times nu equals MD and we want to upper bound MD," }, { "end": 1001, "start": 990, "text": " we can rearrange this thing. If we plug in these two together, we see that the M cancels out." }, { "end": 1005, "start": 1001, "text": " We can upper bound MD by this quantity here." }, { "end": 1019, "start": 1005, "text": " The upper bound on the current severe cases divided by the percentage of the currently sick people." }, { "end": 1027, "start": 1019, "text": " So again, they reformulate and they plug in." }, { "end": 1033, "start": 1027, "text": " This, of course, needs to be smaller than the number of beds." }, { "end": 1043, "start": 1033, "text": " So they plug this in here and they say, now what we have to do to see is if this quantity is larger than this quantity of two quantities we know, then we are safe." }, { "end": 1056, "start": 1043, "text": " Now, again, our goal is going to be to find a quantity that lower bounds P star, but up, but is larger than this quantity here." }, { "end": 1064, "start": 1056, "text": " And they do this via hypothesis testing. They call this quantity here, they call it P tilde." }, { "end": 1075, "start": 1064, "text": " And they do a hypothesis test for classic statistics where they ask, is P star significantly larger than P tilde?" }, { "end": 1081, "start": 1075, "text": " If that's the case, then we're safe. If not, we're not." }, { "end": 1087, "start": 1081, "text": " And how do they do that? They say, OK, we have the population." }, { "end": 1091, "start": 1087, "text": " I did draw this at one point." }, { "end": 1095, "start": 1091, "text": " Let's go back there. We have the population here, right?" }, { "end": 1105, "start": 1095, "text": " And what we can do is we can just go out and uniformly, uniformly test people, like just randomly select people." }, { "end": 1108, "start": 1105, "text": " Now, this is an old person, old people stay at home." }, { "end": 1113, "start": 1108, "text": " So we randomly select people to test and their test results come back." }, { "end": 1120, "start": 1113, "text": " And this one, this one's healthy, this one's healthy, this one's healthy, this one's not healthy." }, { "end": 1127, "start": 1120, "text": " And so we have four tests and out of the four, one was positive." }, { "end": 1131, "start": 1127, "text": " Can we work out a hypothesis test from that?" }, { "end": 1139, "start": 1131, "text": " So can we decide whether P star is probably much larger than P tilde or not?" }, { "end": 1143, "start": 1139, "text": " And the answer is yes, because this is a uniform sample." }, { "end": 1152, "start": 1143, "text": " You can work out using classic statistical tools whether or not you can reject an old hypothesis or not." }, { "end": 1160, "start": 1152, "text": " And they actually work this out and they do give a number here." }, { "end": 1162, "start": 1160, "text": " And that's this." }, { "end": 1173, "start": 1162, "text": " So they say if we test N, which is four, let's say four and a half times this quantity B divided by K." }, { "end": 1178, "start": 1173, "text": " So the number of beds divided by the upper bound on the current severe cases." }, { "end": 1184, "start": 1178, "text": " So we test four point five times this many people." 
}, { "end": 1196, "start": 1184, "text": " Then if we find at least 10 positive cases or more, then with a probability of 95 percent," }, { "end": 1200, "start": 1196, "text": " we know that the risk based model is safe." }, { "end": 1205, "start": 1200, "text": " So the more, of course, the more infected people you find in this case, the better," }, { "end": 1212, "start": 1205, "text": " because that means because the number of severe cases stays constant at any given time." }, { "end": 1214, "start": 1212, "text": " It means that a lot more people are infected." }, { "end": 1219, "start": 1214, "text": " That means the probability that you are going to become severe is lower." }, { "end": 1222, "start": 1219, "text": " That's why it says at least." }, { "end": 1226, "start": 1222, "text": " So again, you go out, you test N people and according to this formula," }, { "end": 1229, "start": 1226, "text": " plug in the numbers here for your current situation." }, { "end": 1237, "start": 1229, "text": " If you find at least 10 people, then with a probability of at least 95 percent, you know that this model is safe." }, { "end": 1239, "start": 1237, "text": " Cool." }, { "end": 1248, "start": 1239, "text": " And this is done using, you know, classic statistical testing hypothesis testing literature." }, { "end": 1252, "start": 1248, "text": " So I think that is a pretty cool result." }, { "end": 1262, "start": 1252, "text": " But I do severely criticize the underlying assumption, which is that you can perfectly enforce this quarantine." }, { "end": 1268, "start": 1262, "text": " Of course, if you can't, it means that there is a direct correlation between the number of sick people" }, { "end": 1273, "start": 1268, "text": " in your low risk population, the number of sick people in your high risk population," }, { "end": 1279, "start": 1273, "text": " which means that more of the high risk population are going to get infected as well," }, { "end": 1288, "start": 1279, "text": " which again means that your number B of ICU beds is going to drop severely because they have a higher hospitalization rate," }, { "end": 1295, "start": 1288, "text": " which makes your entire model that we developed down there less valid," }, { "end": 1298, "start": 1295, "text": " because now this used to be a constant in the model." }, { "end": 1300, "start": 1298, "text": " It's now no longer a constant." }, { "end": 1301, "start": 1300, "text": " It's sinking." }, { "end": 1303, "start": 1301, "text": " And the worse it gets, the more it's sinking." }, { "end": 1316, "start": 1303, "text": " And yes, so that that may make what you initially thought was a safe model into a very non safe model very quickly." }, { "end": 1322, "start": 1316, "text": " And that doesn't include all the high risk people that are going to be in danger additionally" }, { "end": 1325, "start": 1322, "text": " because you can't enforce the quarantine." }, { "end": 1326, "start": 1325, "text": " All right." }, { "end": 1328, "start": 1326, "text": " So this was my take on that." }, { "end": 1330, "start": 1328, "text": " Take it for what it's worth." }, { "end": 1334, "start": 1330, "text": " And I wish you a healthy pandemic." }, { "end": 1353, "start": 1334, "text": " Bye bye." } ]
lqtlua-Ylts
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
State-of-Art-Reviewing: A Radical Proposal to Improve Scientific Publication
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "arxiv", "attention", "peer review", "automate", "distributed", "scalable", "neurips", "score", "objective" ]
Peer Review is outdated and ineffective. SOAR is a new and revolutionary way to distribute scientific reviewing and scale to the new age of faster, better and more significant research. https://arxiv.org/abs/2003.14415 Abstract: Peer review forms the backbone of modern scientific manuscript evaluation. But after two hundred and eighty-nine years of egalitarian service to the scientific community, does this protocol remain fit for purpose in 2020? In this work, we answer this question in the negative (strong reject, high confidence) and propose instead State-Of-the-Art Review (SOAR), a neoteric reviewing pipeline that serves as a 'plug-and-play' replacement for peer review. At the heart of our approach is an interpretation of the review process as a multi-objective, massively distributed and extremely-high-latency optimisation, which we scalarise and solve efficiently for PAC and CMT-optimal solutions. We make the following contributions: (1) We propose a highly scalable, fully automatic methodology for review, drawing inspiration from best-practices from premier computer vision and machine learning conferences; (2) We explore several instantiations of our approach and demonstrate that SOAR can be used to both review prints and pre-review pre-prints; (3) We wander listlessly in vain search of catharsis from our latest rounds of savage CVPR rejections. Authors: Samuel Albanie, Jaime Thewmore, Robert McCraith, Joao F. Henriques Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi everyone. Today we're looking at State-of-the-Art Reviewing, a radical proposal to improve scientific publication. This has been on my mind for a while: the review process for modern science, especially machine learning, is just broken, and I've spoken numerous times about the fact that we need to replace it with a better system. Samuel Albanie et al. have actually come up with such a system, and we're going to explore it today. I am a big fan of this work and I'm 100% on board with it. They basically say peer review forms the backbone of modern scientific manuscript evaluation. If you don't know what peer review is in machine learning right now: if you have some genius idea (here is your idea, that's a light bulb by the way), you write it up into an eight-page PDF. Yes, it must be a PDF, and yes, it must be eight pages. You submit it to be accepted into a conference proceeding. The conference organizers, of course, are just a bunch of people; they can't review the thousands of submissions that come in by themselves. So what they do is recruit experts, which are called peers. Peers are other people who have usually written up their own papers; they critique each other's papers and decide what gets accepted and what doesn't. Now, I've spoken numerous times about how this is super noisy right now. There are way, way too few peers, and they're not experienced enough. So whether or not your particular idea gets accepted is extremely dependent on chance, on a coin flip usually. It's just overloaded and just makes no sense. And they ask the same question: is this fit for purpose in 2020? We need to replace it, and I support that. You can already see they kind of want to automate this away with their state-of-the-art review score, and the score will be an out-of-10 score that can be integrated into something like arXiv and displayed right away. So they have some requirements for this new system. What should it do? It should have the ability to scale. Very important. Our current review system doesn't have this, right? It relies on other humans reviewing your paper, and that means the reviewers need to scale with the amount of papers, which just isn't the case currently. So a new review system must have the ability to scale, and automating the reviews away, or scaling them up in a distributed fashion, does this. Speed: yes, because right now, if I submit my manuscript for review, it takes months to get it reviewed, and scientific progress is faster than that. So a speedier version of peer review is definitely required. And then consistency, and this is the most shocking part: there is the grand 2014 NeurIPS experiment, which concluded that 57 percent of papers accepted by one committee were rejected by another committee and vice versa. Reviewing the exact same papers, different committees came to completely different conclusions to an astounding degree. So basically you're flipping a coin on whether or not your paper gets accepted, and I think that is just not acceptable. So they propose these three things, speed, scale, consistency, and their new method certainly has them. Now, let's jump down to where they introduce this State-Of-the-Art Review, SOAR. They say, OK, the quality of a scientific work can be judged along three axes: efficacy, significance and novelty.
So there are these three pillars. Efficacy means, roughly, how effective is your work at achieving its goal; in machine learning that's usually to train some good classifier or something like this. The second one is significance: how relevant is what you've done to the field. And the third one is novelty: your scientific work should be an original contribution to the knowledge of mankind, and therefore it should be novel. The more of these three things you have, of course, the higher your score should be, and here in the middle is where the highest scores should be; imagine this is kind of a landscape. So you want to grade papers along these three axes, and they have a pretty good method of assessing them in an automated fashion. First of all, assessing efficacy. Efficacy, they say, is best assessed by determining if the proposed method achieves a new state of the art. I don't think you can really doubt this: it is kind of the gold standard of whether your paper is effective, whether or not it achieves state of the art. It might be a bit of a controversial opinion, but if a paper doesn't achieve a state of the art, you know, why do you even care? Like, no one cares. From an implementation perspective, they say, they can kind of exploit a fact of the current research environment: you don't actually have to assess this yourself, because the authors themselves can be relied upon to state it repeatedly in the text. And this is important: the authors will state that they have state of the art many, many times in the text if they have actually achieved it; if they haven't achieved it, or are not so sure about it, they probably won't repeat it as many times. This can now be exploited to distribute the work: imagine all of these reviewers, they don't have to do this work anymore, because it's effectively distributed to the authors of the papers themselves, who put the claim in the text. By the way, this is kind of an NLP approach to reviewing, NLP mixed with game theory. So the authors themselves, if they have state of the art (you have to do some stemming and stuff), will put that into the text a lot. So, a bit controversially, the authors here propose to simply count the number of occurrences of "state of the art" (case-sensitive, very important) in the text. It stands to reason that a higher state-of-the-art count is preferable. Of course. All right, the second thing, and this might be a bit controversial: significance. They now make the claim that significance is measured by efficacy, so they simply reuse the efficacy term. If your paper is effective at achieving its goal, you can also say it's significant for the community, because, the reasoning goes, if you have state of the art, then your paper is significant, and if you do not have state of the art, then your paper is obviously not significant: why should it matter if you don't have state of the art on a given task? It's useless. All right, so we weigh that term twice. That's pretty good. And then novelty. Here they take much the same approach: they say the authors probably state this themselves, so how much they use the word "novel" in their manuscript will dictate the novelty score.
So here they say, okay... wow, this pen is failing me, hello... how much they use the word "novel" in the text will probably be an indication. I don't think so, though. They do do the smart thing of... they include the works... sorry, they exclude the related work section. They say: we make the key observation that the individuals best placed to make the judgment are the authors themselves, since they have likely read at least one of the works cited in the bibliography. I don't agree here. I think a better method would be to simply count the number of references: the lower the number of references to related work, the higher the novelty. Because if you think about it, if these are current papers and your paper sits here among them, you'll have a lot of related work, so it's not as novel; if you're way out here, you'll have maybe one or two related works, so it's way more novel if you have fewer references. That would be my criticism: I think this novelty term should be replaced by a graph centrality measure, or simply a count of how many references you have would be enough. All right, so they define their score. Their score, as we saw, is a geometric mean between the SOTA term, weighted twice, and the novelty term, which I've criticized. They attach the suffix "out of 10" because an out-of-10 score is pretty interpretable, so they divide by 10 here. And as you saw in the kind of arXiv mock-up right here, this will then be easy to integrate. They even give code, right? They give code in the paper itself of how to implement this; it's pretty easy. And I think, yeah, even though it's quite a short paper, it's thorough and it's a good new method, and I think this could revolutionize publishing. As a bit of a bonus, they even give the official pronunciation of State-Of-the-Art Reviewing, which sounds something like "soar"; pretty smooth. And yeah, with that, I hope you enjoyed this. And if the authors could just be a little more subtle next time, that would be great. Yeah, nothing more. Bye.
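Since the transcript mentions that the paper ships its own implementation but doesn't show it, here is a rough re-creation of the scoring rule purely from the spoken description above. Reading "SOTA weighted twice, geometric mean with the novelty term, reported out of 10" as the cube root of the squared SOTA count times the novelty count, divided by 10, is my interpretation, and the handling of zero counts is my own choice; the code in the paper is the authoritative version.

```python
# Rough re-creation of the SOAR score from the description above; not the authors' code.
# The cube-root formula and the zero-count handling are my own interpretation of
# "SOTA weighted twice, geometric mean with novelty, out of 10".

def soar_score(manuscript: str, related_work: str = "") -> float:
    sota_count = manuscript.count("state of the art")      # case-sensitive, as stressed above
    body = manuscript.replace(related_work, "")             # novelty is judged outside related work
    novelty_count = body.count("novel")
    geo_mean = (max(sota_count, 1) ** 2 * max(novelty_count, 1)) ** (1 / 3)
    return geo_mean / 10                                     # the "out of 10" suffix


paper = "Our novel method achieves a new state of the art. We beat the state of the art by 3%."
print(f"SOAR: {soar_score(paper):.2f} / 10")                 # ~0.16 for this tiny example
```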
[ { "end": 8.6, "start": 0, "text": " Hi everyone. Today we're looking at state-of-the-art reviewing a radical proposal to improve scientific publication." }, { "end": 17.8, "start": 8.6, "text": " This has been on my mind for a while. The review process for modern science, especially machine learning, is just broken." }, { "end": 23.400000000000002, "start": 17.8, "text": " I've spoken numerous times about the fact that we need to replace it with a better system." }, { "end": 30, "start": 23.4, "text": " Samuel Albany at Al have actually come up with such a system and we're going to explore it today." }, { "end": 35.6, "start": 30, "text": " I am a big fan of this work and I'm 100% on board with this." }, { "end": 42.4, "start": 35.6, "text": " They basically say peer review forms the backbone of modern scientific manuscript evaluation." }, { "end": 47.8, "start": 42.4, "text": " If you don't know what peer review is in machine learning right now, if you have some genius idea," }, { "end": 54.199999999999996, "start": 47.8, "text": " so here is your idea, that's a light bulb by the way, you write it up into an eight-page PDF." }, { "end": 64.19999999999999, "start": 54.199999999999996, "text": " Yes, it must be a PDF and yes, it must be eight pages. You submit it to a conference, which is some kind of..." }, { "end": 68.6, "start": 64.19999999999999, "text": " So you submit it to be accepted in a conference proceeding." }, { "end": 79.39999999999999, "start": 68.6, "text": " And if conference organizers, of course, they're just a bunch of people, they can't review these 1,000 million applications that come by themselves." }, { "end": 87, "start": 79.39999999999999, "text": " So what they do is they recruit experts, which are called peers. So peers are other people." }, { "end": 95, "start": 87, "text": " These are called peers and they have usually written up their own papers and they can critique each other's paper" }, { "end": 102.2, "start": 95, "text": " and they decide what gets accepted and what doesn't. Now, I've spoken numerous times of how this is super noisy right now." }, { "end": 106.4, "start": 102.2, "text": " They're way, way, they're not enough peers, they're not experienced enough." }, { "end": 117.6, "start": 106.4, "text": " So whether or not your particular idea gets accepted is extremely dependent on probability, on a coin flip usually." }, { "end": 126, "start": 117.6, "text": " And it's just overloaded and just makes no sense. And they ask the same question, is this fit for purpose in 2020?" }, { "end": 136.79999999999998, "start": 126, "text": " And we need to replace it and I support. So they, you can already see they kind of want to automate this away" }, { "end": 145.4, "start": 136.79999999999998, "text": " with their state of the art review score and the score will be an out of 10 score that can be integrated into something like archive." }, { "end": 156.4, "start": 145.4, "text": " And, you know, display right away. So they have some requirements to this new system." }, { "end": 164.20000000000002, "start": 156.4, "text": " What should it be done? It should have the ability to scale. Very important. Our current review system doesn't have this, right?" }, { "end": 171.8, "start": 164.20000000000002, "text": " Our current review system relies on other humans reviewing your paper." }, { "end": 178, "start": 171.8, "text": " And that means that the reviewers need to scale with the amount of papers, which just isn't the case currently." 
}, { "end": 182.60000000000002, "start": 178, "text": " So a new review system must have the ability to scale. Right." }, { "end": 190, "start": 182.60000000000002, "text": " And then, you know, automating the reviews away or scaling it up in a distributed fashion does this." }, { "end": 197.4, "start": 190, "text": " Speed. Yes, because right now, if I submit my manuscript for review, it takes them months to review it." }, { "end": 207.6, "start": 197.4, "text": " And our science progress is faster than that. So a speedy, more speedy version of peer review is definitely required." }, { "end": 211, "start": 207.6, "text": " And then consistency. And this is the most shocking part, right?" }, { "end": 219, "start": 211, "text": " There is a the grand 2014 NURIPS experiment, which concluded that" }, { "end": 229, "start": 219, "text": " 57 percent of papers accepted by one committee were rejected by another committee and vice versa. Reviewing the exact same paper," }, { "end": 233.8, "start": 229, "text": " different committees came to completely different conclusions to an astounding degree." }, { "end": 239.6, "start": 233.8, "text": " So basically, you're flipping a coin of whether or not your paper gets accepted or not." }, { "end": 243.2, "start": 239.6, "text": " And I think this is just not acceptable." }, { "end": 251.39999999999998, "start": 243.2, "text": " And so they propose these three things, speed, scale, consistency, and their new method certainly has this." }, { "end": 260.8, "start": 251.39999999999998, "text": " Now, let's jump down here where they introduce this state of the art reviewing SOAR." }, { "end": 270.8, "start": 260.8, "text": " So they say, OK, the quality of a scientific work can be judged along three axes, efficacy, significance and novelty." }, { "end": 275.8, "start": 270.8, "text": " So there are these three pillars, right?" }, { "end": 286, "start": 275.8, "text": " Efficacy, which means is is kind of how how effective is your work in achieving the goal in machine learning?" }, { "end": 292.2, "start": 286, "text": " That's usually to train some good classifier or something like this." }, { "end": 299.8, "start": 292.2, "text": " Then the other one, sorry, is significance, right?" }, { "end": 308.40000000000003, "start": 299.8, "text": " Significance is how relevant is what you've done to the to the field." }, { "end": 314.2, "start": 308.40000000000003, "text": " Right. And the third one is novelty." }, { "end": 323, "start": 314.2, "text": " So, you know, in your scientific work should be an original contribution to the knowledge of mankind and therefore it should be novel." }, { "end": 327.8, "start": 323, "text": " Right. So the more of these three things you have, of course," }, { "end": 333.6, "start": 327.8, "text": " the higher your score should be. And here in the middle is where the highest scores should be." }, { "end": 340.40000000000003, "start": 333.6, "text": " So imagine this is kind of a landscape. And so you want to grade papers along these three axes." }, { "end": 348.7, "start": 340.40000000000003, "text": " But they have a pretty good method of of of assessing these in an automated fashion." }, { "end": 355.5, "start": 348.7, "text": " So, first of all, assessing efficacy, efficacy, they say," }, { "end": 361.9, "start": 355.5, "text": " is best assessed by determining if the proposed method achieves a new state of the art." }, { "end": 367, "start": 361.9, "text": " I think that's not really I don't think you can really doubt this." 
}, { "end": 373.9, "start": 367, "text": " I mean, this this is this is kind of the gold standard of of whether your paper is effective," }, { "end": 376.3, "start": 373.9, "text": " is whether or not it achieves state of the art." }, { "end": 381, "start": 376.3, "text": " I mean, I it might be a bit of a controversial opinion," }, { "end": 385.9, "start": 381, "text": " but if a paper doesn't achieve a state of the art, it's you know, why?" }, { "end": 389.6, "start": 385.9, "text": " Why do you even care? Like no one cares." }, { "end": 393.1, "start": 389.6, "text": " So from they say from an implementation perspective," }, { "end": 402.3, "start": 393.1, "text": " they can they can use they can kind of abuse a fact of the current research environment is that you don't actually have to review this yourself." }, { "end": 409.5, "start": 402.3, "text": " But the authors themselves can be relied upon to state this repeatedly in the text." }, { "end": 412.1, "start": 409.5, "text": " Right. And this this is important." }, { "end": 416.2, "start": 412.1, "text": " So the authors will state that they have state of the art many," }, { "end": 419.8, "start": 416.2, "text": " many times in the text if they have actually achieved it." }, { "end": 422.2, "start": 419.8, "text": " If they haven't achieved it or not so sure about it," }, { "end": 424.6, "start": 422.2, "text": " they probably won't repeat it as many times." }, { "end": 431.3, "start": 424.6, "text": " But this is can be can kind of abuse now to distribute it." }, { "end": 436.8, "start": 431.3, "text": " Basically, you don't imagine now these these all of these reviewers." }, { "end": 439.7, "start": 436.8, "text": " They don't they don't have to do this work anymore." }, { "end": 443.90000000000003, "start": 439.7, "text": " They can just distribute to all the authors of their own papers," }, { "end": 448.90000000000003, "start": 443.90000000000003, "text": " right? Because the authors in the text by the way," }, { "end": 456.8, "start": 448.90000000000003, "text": " the text is structures is kind of an NLP approach to reviewing kind of NLP mixed with game theory." }, { "end": 461.1, "start": 456.8, "text": " Right. So the other authors themselves if they have state of the art," }, { "end": 466.7, "start": 461.1, "text": " you have to do some stemming and stuff, but they will put that into the text a lot." }, { "end": 468.8, "start": 466.7, "text": " So it's a bit controversial," }, { "end": 478.7, "start": 468.8, "text": " but the the authors here propose to simply count the number of word occurrences of state of the art case" }, { "end": 483, "start": 478.7, "text": " and sensitive very important in the text, right?" }, { "end": 485.9, "start": 483, "text": " It stands to reason that a higher state of the art count is preferable." }, { "end": 489.09999999999997, "start": 485.9, "text": " Of course." }, { "end": 489.8, "start": 489.09999999999997, "text": " All right." }, { "end": 492.59999999999997, "start": 489.8, "text": " So the second thing so this might be a bit controversial." }, { "end": 500.6, "start": 492.6, "text": " The second thing significance and they now make the claim significance is measured by efficacy." }, { "end": 503.70000000000005, "start": 500.6, "text": " So they simply the efficacy term." 
}, { "end": 506.70000000000005, "start": 503.70000000000005, "text": " So if your paper is effective at achieving its goal," }, { "end": 510.90000000000003, "start": 506.70000000000005, "text": " you can also say it's significant for the community because again," }, { "end": 516.8000000000001, "start": 510.90000000000003, "text": " significance should like if you have state of the art," }, { "end": 519.3000000000001, "start": 516.8000000000001, "text": " then your paper is significant." }, { "end": 523.5999999999999, "start": 519.3, "text": " If you do not have state of the art, then your paper is obviously not significant" }, { "end": 530.4, "start": 523.5999999999999, "text": " because why should it matter if you don't have state of the art in a given task?" }, { "end": 532.1999999999999, "start": 530.4, "text": " It's useless." }, { "end": 532.5, "start": 532.1999999999999, "text": " All right." }, { "end": 534.3, "start": 532.5, "text": " So we weigh it twice." }, { "end": 535.5, "start": 534.3, "text": " That's pretty good." }, { "end": 541.5, "start": 535.5, "text": " And then novelty now here they take much of the same approach." }, { "end": 543.5999999999999, "start": 541.5, "text": " They say the authors probably state this." }, { "end": 549.0999999999999, "start": 543.5999999999999, "text": " So how much they use the word novel in their manuscript will dictate." }, { "end": 554.6, "start": 549.1, "text": " So here they say, okay, they novel." }, { "end": 557, "start": 554.6, "text": " Wow, this is failing me." }, { "end": 557.9, "start": 557, "text": " Hello." }, { "end": 563.4, "start": 557.9, "text": " How much they use the word novel in the text will probably be an indication." }, { "end": 565.2, "start": 563.4, "text": " I don't think so though." }, { "end": 574.9, "start": 565.2, "text": " They do do the smart thing of they include the works." }, { "end": 579.6, "start": 574.9, "text": " They include the related work section from this." }, { "end": 584.1999999999999, "start": 579.6, "text": " Sorry, they exclude the related work section." }, { "end": 587.5, "start": 584.1999999999999, "text": " They say we make the key observation that individuals best play to make the judgment" }, { "end": 591.3, "start": 587.5, "text": " are the authors themselves since they have likely read at least one of the works" }, { "end": 593.9, "start": 591.3, "text": " cited in the bibliography." }, { "end": 595, "start": 593.9, "text": " I don't agree here." }, { "end": 601.5, "start": 595, "text": " I think a better method would be to simply count the number of references" }, { "end": 607.7, "start": 601.5, "text": " and the lower the amount of references to related work, the higher the novelty." }, { "end": 617.7, "start": 607.7, "text": " Because if you think, if these are current papers and your paper is here," }, { "end": 620.2, "start": 617.7, "text": " you'll have a lot of related work." }, { "end": 622.3, "start": 620.2, "text": " So it's not as novel." }, { "end": 627.2, "start": 622.3, "text": " If you're way out here, you'll have maybe one or two related works." }, { "end": 631.1, "start": 627.2, "text": " So it's way more novel if you have less references." }, { "end": 634, "start": 631.1, "text": " So this would be my criticism of this." }, { "end": 640, "start": 634, "text": " So this novelty thing here, I think this term should be replaced by a graph" }, { "end": 646.6, "start": 640, "text": " centrality measure or simply a count of how many references you have would be enough." 
}, { "end": 649.4, "start": 646.6, "text": " All right, so they define their score." }, { "end": 653.6, "start": 649.4, "text": " Their score, as we saw, is the SOTA term weighted twice." }, { "end": 661, "start": 653.6, "text": " A geometric mean between that and the novelty term, which I've criticized." }, { "end": 669.2, "start": 661, "text": " They add the suffix out of 10 because out of 10 score is pretty interpretable." }, { "end": 673.6, "start": 669.2, "text": " So they divide by 10 here." }, { "end": 677, "start": 673.6, "text": " So yeah, they say that here." }, { "end": 682.8, "start": 677, "text": " We attach a suffix out of 10 because that's easy to interpret." }, { "end": 689.3, "start": 682.8, "text": " And as you saw in the kind of archive implementation right here," }, { "end": 694.5999999999999, "start": 689.3, "text": " sorry, this will be then easy to integrate right here." }, { "end": 700.0999999999999, "start": 694.5999999999999, "text": " So they even give code, right?" }, { "end": 705, "start": 700.0999999999999, "text": " They give code in the paper themselves of how to implement this." }, { "end": 708.8, "start": 705, "text": " It's pretty easy." }, { "end": 714.0999999999999, "start": 708.8, "text": " And I think, yeah, even though it's quite a short paper," }, { "end": 719.1999999999999, "start": 714.0999999999999, "text": " it's thorough and it's a good new method." }, { "end": 722.6, "start": 719.2, "text": " And I think this could revolutionize publishing." }, { "end": 725, "start": 722.6, "text": " And they even, so as a bit of a bonus," }, { "end": 728.9000000000001, "start": 725, "text": " they give the official pronunciation of state of the art reviewing," }, { "end": 734.8000000000001, "start": 728.9000000000001, "text": " which is something like state of the art reviewing pretty smooth." }, { "end": 739.8000000000001, "start": 734.8000000000001, "text": " And yeah, with that, I hope you enjoyed this." }, { "end": 744.4000000000001, "start": 739.8000000000001, "text": " And if the authors could just be a little more subtle next time," }, { "end": 747.2, "start": 744.4000000000001, "text": " that would be great." }, { "end": 756.6, "start": 747.2, "text": " And I guess you'd have to go." }, { "end": 758.4000000000001, "start": 756.6, "text": " Yeah, nothing more." }, { "end": 784.4, "start": 758.4, "text": " Bye." } ]
U3zmekzQ8WQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Agent57: Outperforming the Atari Human Benchmark
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "google", "rnn", "recurrent", "deepmind", "r2d2", "ngu", "reinforcement learning", "deep q learning", "replay buffer", "exploration", "exploitation", "tradeoff", "policy", "lstm", "atari" ]
DeepMind's Agent57 is the first RL agent to outperform humans in all 57 Atari benchmark games. It extends previous algorithms like Never Give Up and R2D2 by meta-learning the exploration-exploitation tradeoff controls. https://arxiv.org/abs/2003.13350 https://deepmind.com/blog/article/Agent57-Outperforming-the-human-Atari-benchmark Abstract: Atari games have been a long-standing benchmark in the reinforcement learning (RL) community for the past decade. This benchmark was proposed to test general competency of RL algorithms. Previous work has achieved good average performance by doing outstandingly well on many games of the set, but very poorly in several of the most challenging games. We propose Agent57, the first deep RL agent that outperforms the standard human benchmark on all 57 Atari games. To achieve this result, we train a neural network which parameterizes a family of policies ranging from very exploratory to purely exploitative. We propose an adaptive mechanism to choose which policy to prioritize throughout the training process. Additionally, we utilize a novel parameterization of the architecture that allows for more consistent and stable learning. Authors: Adrià Puigdomènech Badia, Bilal Piot, Steven Kapturowski, Pablo Sprechmann, Alex Vitvitskyi, Daniel Guo, Charles Blundell Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there, you're looking at Solaris, which is a game in the Atari benchmark, and it has been one of the hardest games for reinforcement learning agents to solve. What you're seeing is Agent57, a new agent by DeepMind that is the first one to beat all of the 57 games in the Atari suite at a human or superhuman level. Some of these games have been pretty easy for RL agents, but some of them, look at this one here, have been pretty hard, mainly because of the reward structure. You can see at the top edge that the reward doesn't go up for a long time, and games where the reward doesn't go up for a long time are very hard for RL agents. Agent57 builds on a number of previous improvements to DeepMind's original deep Q-networks, and today we'll look into this. It's called Agent57, as I said, because it beats all of these 57 games; they're quite diverse, and it's a cool thing that a single system can beat them all. So they go into this; this is a printout of the website, so I can scribble on it. This here has been cut off, but it should say DQN, DQN from 2015. This DQN paper of 2015 was kind of the original paper that popularized the Atari benchmark and introduced neural networks to reinforcement learning in a way that basically made it work. Since then, there have been a number of improvements. So maybe we'll just go into what deep Q-learning is. In reinforcement learning, you usually have an agent here, and you have an environment over here, right? And the environment will give you an observation. The observation in our case would be something like the frame of a game: you're here, you're a little rocket, and there is a bunch of meteors. Then the agent needs to somehow give back an action. The actions in the Atari benchmark are always defined: in Atari you used to have this kind of joystick thing, and you can put it up, down, left, right, or diagonally up-right, up-left, and so on. And you also have a button; I think one or two buttons, I don't actually remember, but you can press at least one button. So these are the actions; let's say there is something like 20 different actions: all of the directions, and then you can always press or not press a button with each. So you send back this action: you say, I want to put the joystick up, and I want to press the button at the same time. And then the environment will say, okay, we'll give you back a new observation, which would be the next frame of the game: you've pressed up, so your little rocket is a bit further forward; you've pressed the button, so you fired a shot, and the meteors are still here. And it will also give you back a reward. Different games give different rewards. For example, in Pac-Man, every time your Pac-Man eats one of these little dots, you get a reward. But in other games, most famously games like Montezuma's Revenge, you're in this room, and there are these platforms and ladders and stuff, and you're here, and there are opponents rolling around, and there's a door over here. You need to go down, jump over here, get up, get some key, and then go to the door, and only then will you get a reward. So games vary a lot in how you get this reward; that's kind of the intrinsic problem here. So deep Q-learning is the following: we have a neural network taking in the observation; let's designate it like this.
The observation goes in here, right? This is O, the observation at step t, and the action goes in here too; let's call it a_i, because we have different actions. And the network will give you a Q-value: the Q-value for the observation at time t and action i. Now you do this for every single action. You put the same observation with action a_j into the same network and get an output that is the Q-value for the same observation with this different action. You do this for every action, and wherever the Q-value is the highest, that's the action you go with. So what you have to do is train this neural network to predict the Q-value as accurately as possible, and the Q-value is basically the reward that you expect from now until the end of the episode by performing this action in this situation. That's Q-learning: simply predicting, if I do action i right now, how much reward am I going to get from now until the end of the episode? That's basically it; it's deep Q-learning simply because you have a neural network doing the learning. So that was deep Q-networks, and they work pretty well, but they don't work for these long time horizons, because you only ever see one observation, you learn one step at a time, and you rely on these Q-values propagating through from your experience. It doesn't work very well for long credit assignment. A significant improvement upon that is the R2D2 algorithm, which incorporated LSTMs or GRUs, which are recurrent neural networks. So not only does your current observation go into the neural network, but also your history of observations: what happened before, not only the current game state. You have the observation from step one and the action you did at that step, then the observation at time two and the action you did at time two, and so on. These are each encoded, and then you have a recurrent neural network that incorporates all of these things that happened previously into your current representation. So now not only does the agent see what is happening right now, it also gets the information of what happened previously and what it did previously, and it can back-propagate through these things and learn longer-range credit assignment. Credit assignment means it gets to figure out which actions actually had an influence on the final reward; if you incorporate the history, you can have a direct gradient flow across that history. Notably, you can let these LSTMs or GRUs compute over maybe 10 or 100 steps, and then you get a pretty good idea of which of the actions within those 100 steps led to which rewards. The other thing about R2D2 is, of course, that it is now more distributed. These were already improvements to DQN, but the R2D2 agent is also distributed, meaning you have a central instance; this is now engineering, right? You have a central instance that is called the learner, and the learner has the main weights, which I'm going to designate with theta here, and it just takes in experience from all of these workers. So there's worker one, worker two, worker three, four, and so on, and they all just run episodes.
They all do their work independently of each other and then send their experience back to the learner, and every now and then the learner syncs out the weights of the neural networks to the workers. That's distributed RL in this sense: you have a central learner, and then you have many, many workers doing the actual interaction with the environment. Now, one of the main pitfalls of R2D2 is that it still has a poor exploration-exploitation strategy, which I believe is still just epsilon-greedy. What does that mean? To understand this, maybe consider our screen again. Let's say you're here with your spaceship, and there's a meteor coming right here and one right here, and there is a gold coin right here; let's make this one gold. Say you get a reward for collecting the coin, but you also get a reward for shooting the meteors. So what happens if you shoot right now? If you shoot, this meteor explodes and you get one reward, but then the meteor right behind it will hit you; it's coming toward you and you'll have no time to get out of the way. So: one reward and then death; let's make a little arrow here. In total you get one reward. What happens instead if you move to the right? In the next frame, the meteors will fly past you; you are over here, and the gold coin is here. So far this has given you zero reward, but in the next frame the meteors have passed and you are going to get that gold coin, which gives you one reward and no death. And you can technically go on from here and maybe get five more rewards down the line. Here's the exploration-exploitation dilemma: say an agent has for some reason learned that the shooting action in this situation will give it one reward and the move action will give it zero reward, but has not learned to look past this; this part is kind of nebulous to it, because it has only experienced one frame of experience. It will say, wait a minute, shoot appears to be really good here: it gives me one reward, and move gives me zero reward. So from now on I'll just always shoot: shoot, shoot, shoot. This is called exploitation: it has learned something that gives it a reward, so it will just do that over and over again. Whereas here you could say, ah, I might go this way even though it's zero reward, because I can hope (I don't know yet, but I can hope) that I will get more reward down here. This is exploration. And the question in reinforcement learning is always how to trade off these two. Ideally you would want your agent to collect maximum reward, which speaks for exploitation of what it has already learned; but you also never want to discard the possibility that down the line, in the things you don't yet know, there might be even more reward, and that speaks for exploration. (I've abbreviated both the same way in my drawing; that was dumb.) So in the original DQN formulation, and I believe also in R2D2, this is done with epsilon-greedy, which performs surprisingly well. In epsilon-greedy, you simply say: I'm going to have a constant epsilon, maybe 5% or something; with that probability I simply do something at random, and the other 1 minus epsilon of the time I just go with the thing I have already learned. This performs pretty well, but you might imagine that there is something smarter to do.
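In code, the greedy Q-value selection plus the epsilon-greedy rule described here looks roughly like the sketch below. This is only an illustration, not DeepMind's implementation, and `q_network` is a dummy stand-in for a trained network.

```python
# Toy epsilon-greedy action selection over Q-values; `q_network` stands in for a trained net.
import random


def q_network(observation, action):
    # Dummy stand-in: a real agent would evaluate a trained neural network Q(o_t, a_i) here.
    return hash((observation, action)) % 1000 / 1000.0


def epsilon_greedy_action(observation, num_actions=18, epsilon=0.05):
    """With probability epsilon act at random (explore); otherwise take the argmax-Q action (exploit)."""
    if random.random() < epsilon:
        return random.randrange(num_actions)
    q_values = [q_network(observation, a) for a in range(num_actions)]
    return max(range(num_actions), key=q_values.__getitem__)


print(epsilon_greedy_action(observation="frame_0042"))  # index of the chosen discrete Atari action
```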
So, Never Give Up: this algorithm goes into an exploration mode where it tries to get to smarter ways of doing exploration, and the keywords here are things like intrinsic motivation. Intrinsic motivation and curiosity refer to the fact that, in addition to the reward you get from the environment, this reward right here, you can also interject at this point and say: ah, I'm going to give some r prime, some reward of my own, to encourage some behavior in the agent. This we call the intrinsic reward. That means on top of the reward of the environment you add some reward of your own that has nothing (or not much) to do with the environment, but just encourages certain behavior in the agent, which is now also trying to maximize this intrinsic reward. In curiosity and intrinsic-motivation formulations, you are usually rewarded for novelty, which means the agent is rewarded for finding things that it has not yet seen. In the situation over here you might see why this encourages the agent to go this route, because it says: wait a minute, over here I just die, but there is a bunch of stuff I haven't seen yet down here, so I might want to go explore that; and we give it extra intrinsic reward, r prime, for seeing things it hasn't seen yet. So it will learn: if I do things that I have never done, I will get this sweet intrinsic reward, and then it will go explore. Now of course it's a big engineering question how exactly to set this intrinsic reward, and there are many, many different formulations of that which fall under this term of, let's say, curiosity. Nevertheless, Never Give Up improved over R2D2 using ideas like that. And now Agent57 improves again. How does Agent57 improve again? It is mainly in exactly what I just said: how exactly do you apply this intrinsic reward, and how exactly do you navigate the exploration-exploitation trade-off? That's where Agent57 comes in, because what they've realized is that across these different Atari games, some are very easy and don't need much exploration, some need a lot, and some need it over a long time scale; simply one Never Give Up agent, with the same settings of this curiosity and of how far it looks into the future, is not going to solve all the games. So Agent57 learns how to modulate this exploration-exploitation trade-off.
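Here is an equally stripped-down sketch of the intrinsic-reward idea just described: a novelty bonus, added to the environment reward, that shrinks the more often a state has been seen. It only illustrates the principle; Never Give Up's actual novelty machinery is far more involved, and the count-based bonus and the beta weighting are assumptions made for the sketch (beta being the trade-off parameter that comes up right below).

```python
# Toy novelty bonus: reward the agent extra for states it has rarely seen.
# This is only an illustration of the idea, not NGU's actual intrinsic reward.
from collections import defaultdict

visit_counts = defaultdict(int)


def augmented_reward(state, extrinsic_reward, beta=0.3):
    """Return r_extrinsic + beta * r_intrinsic, where the intrinsic part decays with repeat visits."""
    visit_counts[state] += 1
    intrinsic_reward = 1.0 / visit_counts[state] ** 0.5   # rarely seen states get a bigger bonus
    return extrinsic_reward + beta * intrinsic_reward


print(augmented_reward("room_1", extrinsic_reward=0.0))   # 0.3   (first visit, pure novelty bonus)
print(augmented_reward("room_1", extrinsic_reward=0.0))   # ~0.21 (bonus shrinks on revisits)
```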
Let's jump into the paper a bit more. I encourage you to read the blog post, which is quite thorough; the paper is a bit more technical. Let me switch over. This is the paper "Agent57: Outperforming the Atari Human Benchmark" by Google DeepMind, and here they describe improvements to NGU, to Never Give Up. The first improvement: we've already talked about how this is classic Q-learning, so you're trying to learn a function that gives you the Q-value of an action in a state. Now, since we're going to deal with intrinsic reward in addition to extrinsic reward, it makes sense (that's what they argue) to split the Q-function into two parts: one part that learns the extrinsic reward and one part that learns the intrinsic reward, with a parameter beta in front of it. Beta in this case is the trade-off: how much do you want to value this intrinsic reward? And here we see our first lever on the exploration-exploitation trade-off. If an agent gets lots of reward for exploring, it might never exploit, and exploiting might actually be a good option in the game you're in; so you might want to set beta small, but in other games you might want to encourage exploration to the max and therefore set beta very high. All right, another constant they modulate along with that is the discount factor, which is called gamma here. So this beta we've already seen, and they also modulate this gamma. What does gamma do? If I have my states and actions, as we already said: here is observation one and I do action one, that gives me observation two and I do action two, that gives me observation three and I do action three, and each time I get a reward, an extrinsic reward and an intrinsic reward: reward one, reward two, reward three and so on. Now usually an RL agent will look at these rewards; let's say you are at observation one and you're trying to estimate your future rewards. What will be most important is the reward you're getting right now, because that's the most certain. For the reward you might get two steps from now, a lot of things could happen: you are pretty sure that if you do action one you're going to get to this state, but you're not entirely sure; you could also get to another state, and then you would do another action, and therefore this later reward could be something different. So these algorithms have what's known as a discount factor. That means the value of a state s_t is the sum from k = t, the current time step, up to some horizon (I think they call it H in the paper; you could also think of it as infinity) of the reward at step k, discounted by this factor raised to the power of k minus t. So basically it's the reward at this time step, plus, say gamma is 0.99, 0.99 times the reward at the next time step, plus 0.99 squared times the reward after that. You see that the further into the future you look, the less value these rewards have; the little bars here indicate that you're going to value future rewards less and less. This is called the discount factor, and how you set it is very important. If you set it very low, say 0.1, all you want to do is maximize the rewards you're getting in the next step or two; you're not really looking into the future. This is very good for games that give you immediate reward for good actions.
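Written out (my transcription of the spoken formulas in standard notation; the paper's exact definitions may differ in detail), the two quantities being discussed are:

```latex
% Q-function split into extrinsic and intrinsic parts, weighted by beta:
Q(s, a) = Q_{\text{extrinsic}}(s, a) + \beta \, Q_{\text{intrinsic}}(s, a)

% Discounted value of a state (horizon H, which you can also think of as infinity):
V(s_t) = \sum_{k=t}^{H} \gamma^{\,k-t} \, r_k
       = r_t + \gamma \, r_{t+1} + \gamma^2 \, r_{t+2} + \dots, \qquad 0 < \gamma < 1
```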
But if you set it very high, let's say 0.999, then a reward a hundred steps from now is almost the same to you as a reward one step from now. This is very valuable for games that don't give you a reward immediately, or that kind of try to trick you, as we saw before: if you shoot the meteor now you get one reward, but if you pass on the opportunity you might get much more later. So the discount factor is also very important to set, and it really depends on the game. We therefore have two quantities here that really depend on what kind of game it is. And, they argue, it also depends on where in the learning process you are. If you're at the very beginning of learning, you might want a very high beta, that is, a high intrinsic reward, to go explore, and you might want a very low discount factor in order to learn a good immediate value function. But as time goes on, you might want to bring down the intrinsic reward, because your end goal is to maximize the extrinsic reward, and you want to raise this discount factor to look more into the future, now that you have already learned the immediate values very well. So if I had to summarize and simplify what Agent57 does: it builds a neural network that adjusts these two quantities across training, so it adjusts the beta and gamma, and it does this in a so-called bandit setting. There is no really good picture in the paper that I can show you, so I'm just going to have to draw. You have an agent, it interacts with this environment here, and it always gets these rewards. What you have now is a meta-controller. The agent has two parameters, this beta and this gamma, and the meta-controller observes this interaction and outputs values for these two constants, and it does this dynamically as the training progresses, so the agent will change its behavior over time. Now, this is actually implemented in a slightly different way, in that the meta-controller doesn't control the values directly; it has a set of options. You define a bunch of possibilities for beta and gamma: you say I have strategy one with beta at 0.1 and gamma at 0.9, strategy two with beta at 0.2 and gamma at 0.8, and so on. And now the meta-controller has to choose between one of, in this case, six different strategies across training. It might start off, as we said, with a high beta, which might be over here at 0.9 and 0.1, and then transition to the lower end, and it can do so depending on the game and depending on the progress in the game. So this is dynamic, and this is the improvement over Never Give Up, because that agent simply had these strategies and trained them all at the same time; now this meta-controller controls which strategy is currently trained and which one is used to generate the experience.
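For intuition, here is a toy version of that meta-controller: each (beta, gamma) pair is one arm of a bandit, and the arm used for the next episodes is picked based on recent returns. Agent57's actual controller is a sliding-window bandit (mentioned just below); the simplified epsilon-greedy arm selection, the window length and the particular (beta, gamma) values here are all made up for the sketch.

```python
# Toy meta-controller: a bandit over hand-picked (beta, gamma) strategies.
# Everything below (arm values, window length, epsilon) is made up purely for illustration;
# Agent57's real controller is a sliding-window bandit, as discussed in the video.
import random
from collections import deque

strategies = [            # (beta, gamma) pairs, from "explore hard" to "exploit and look far ahead"
    (0.9, 0.90), (0.7, 0.95), (0.5, 0.97), (0.3, 0.99), (0.1, 0.995), (0.0, 0.999),
]
recent_returns = [deque(maxlen=50) for _ in strategies]   # sliding window of returns per arm


def pick_strategy(epsilon=0.2):
    """Choose which (beta, gamma) setting the next episode is generated with."""
    if random.random() < epsilon or any(len(w) == 0 for w in recent_returns):
        return random.randrange(len(strategies))           # keep sampling all arms occasionally
    mean_returns = [sum(w) / len(w) for w in recent_returns]
    return max(range(len(strategies)), key=mean_returns.__getitem__)


def report_return(arm, episode_return):
    """Feed the episode's return back to the bandit as its reward signal."""
    recent_returns[arm].append(episode_return)


arm = pick_strategy()
beta, gamma = strategies[arm]          # use these settings for the next episode(s)
report_return(arm, episode_return=120.0)
```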
They also mention, by the way, going back to these LSTMs I showed you, the parts that incorporate experience over time, that they increase the time window of how much experience is incorporated. And they do a bunch of other things too, which I always find kind of annoying, because it's always really hard to see where the improvements they claim actually come from. But barring that, basically they built this meta-controller to choose the strategies for the agent over time. Of course, this meta-controller is again trained by the rewards that come back from the environment: the meta-controller's action is the choice of strategy, and its reward is what comes back from the agent-environment interaction, so it is in itself a reinforcement learning problem. To me it seems this just shifts the exploration-exploitation problem one level higher. They use a sliding-window bandit algorithm to do this, but again you have hyperparameters there, like how long the sliding window is and how the bandit algorithm handles its own exploration-exploitation trade-off. So it seems to me you're just shifting it one level higher. It also seems like we're getting into the region where we are meta-over-engineering our approaches to the specifics of this Atari benchmark, because we're observing, oh, okay, these agents do this wrong and those agents do that wrong, so let's just build an agent that can sort of do both. And then the kind of audacious thing, I find, is that they open with how to measure artificial general intelligence, which, I mean, come on: this is kind of the MNIST of RL by now, you're just overfitting on this one benchmark over and over, and there's not really a need to make this into a story about artificial general intelligence. Alright, so this was my two cents. I hope you enjoyed this, and bye bye.
[ { "end": 9.120000000000001, "start": 0, "text": " Hi there, you're looking at Solaris, which is a game in the Atari benchmark, and it has" }, { "end": 14.74, "start": 9.120000000000001, "text": " been one of the hardest games for reinforcement learning agents to solve." }, { "end": 22, "start": 14.74, "text": " What you're seeing is Agent 57, which is a new agent by DeepMind that is the first one" }, { "end": 31.92, "start": 22, "text": " to beat all of the 57 games in the Atari suite to a human or superhuman performance." }, { "end": 38.2, "start": 31.92, "text": " So some of these games have been pretty easy for RL agents, but some of them, look at this" }, { "end": 42.8, "start": 38.2, "text": " one here, have been pretty hard, mainly because of the reward structure." }, { "end": 52.72, "start": 42.8, "text": " Now you see on the top edge, the reward, it's not going up for a long time, and this kind" }, { "end": 58.8, "start": 52.72, "text": " of games where the reward doesn't go up for a long time is very hard for RL agents." }, { "end": 66.4, "start": 58.8, "text": " So Agent 57 builds on a number of previous improvements to the original DeepQ networks" }, { "end": 71.24, "start": 66.4, "text": " of DeepMind, and today we'll look into this." }, { "end": 76.36, "start": 71.24, "text": " So it's called Agent 57, as I said, because it beats all of these 57 games." }, { "end": 84.28, "start": 76.36, "text": " They're quite diverse, and it's a cool thing that a single system can beat it." }, { "end": 86.16, "start": 84.28, "text": " So they go into this." }, { "end": 91.06, "start": 86.16, "text": " This is a printout of the website, so I can scribble on it." }, { "end": 95.6, "start": 91.06, "text": " And this here, it's been cut off, but it should say DQN here." }, { "end": 98.28, "start": 95.6, "text": " DQN from 2015." }, { "end": 108.32000000000001, "start": 98.28, "text": " All right, so this DQN paper of 2015 was kind of the original paper that popularized this" }, { "end": 116.28, "start": 108.32000000000001, "text": " Atari benchmark and introduced neural networks to reinforcement learning, basically, that" }, { "end": 119.24000000000001, "start": 116.28, "text": " made it work." }, { "end": 121.18, "start": 119.24000000000001, "text": " Since then, there have been a number of improvements." }, { "end": 125.78, "start": 121.18, "text": " So maybe we'll just go into what DeepQ learning is." }, { "end": 134.24, "start": 125.78, "text": " So in reinforcement learning, usually, you have an agent here, and you have an environment" }, { "end": 136.1, "start": 134.24, "text": " over here, right?" }, { "end": 139.44, "start": 136.1, "text": " And the environment will give you an observation." }, { "end": 146, "start": 139.44, "text": " Now, the observation in our case would be something like the frame of a game, right?" }, { "end": 150.52, "start": 146, "text": " And you're here, you're a little rocket, and there is a bunch of meteors, right?" }, { "end": 156.16, "start": 150.52, "text": " And then the agent needs to somehow give back an action." }, { "end": 163.02, "start": 156.16, "text": " So an action, and the actions in the Atari benchmark are always defined." }, { "end": 169.16000000000003, "start": 163.02, "text": " So you can, in Atari, you used to have this kind of joystick thing." }, { "end": 177.12, "start": 169.16000000000003, "text": " You can put it up, down, left, right, or you can put it upright, up, left, and so on." 
}, { "end": 181.88, "start": 177.12, "text": " And also you have a button, I think, one or two buttons." }, { "end": 187.28, "start": 181.88, "text": " I don't actually remember, but you can press at least one button." }, { "end": 188.56, "start": 187.28, "text": " So these are the actions." }, { "end": 192.76, "start": 188.56, "text": " Let's say there is something like 20 different actions." }, { "end": 198.28, "start": 192.76, "text": " So all of the directions here, and then you can always press or not press a button with" }, { "end": 201.12, "start": 198.28, "text": " it." }, { "end": 204.64000000000001, "start": 201.12, "text": " So you have to give, you send back this action here." }, { "end": 210.44, "start": 204.64, "text": " You say, I want to put the joystick up, and I want to press the button at the same time." }, { "end": 215.95999999999998, "start": 210.44, "text": " And then the environment will give you back a, it will say, okay, we'll give you back" }, { "end": 220.35999999999999, "start": 215.95999999999998, "text": " a new observation, which would be the next frame of the game." }, { "end": 224, "start": 220.35999999999999, "text": " You've pressed up, so your little rocket is a bit more forward." }, { "end": 229.17999999999998, "start": 224, "text": " You've pressed the button, so you fired a shot, and the meteors are still here." }, { "end": 232.5, "start": 229.17999999999998, "text": " And it will also give you back a reward." }, { "end": 238.52, "start": 232.5, "text": " So the reward, different games give different rewards." }, { "end": 247.16, "start": 238.52, "text": " For example, in Pac-Man, every time your Pac-Man eats a little one of these dots, you get a" }, { "end": 248.16, "start": 247.16, "text": " reward." }, { "end": 253.8, "start": 248.16, "text": " But in other games, most famously games like Montezuma's Revenge, you're in this room," }, { "end": 258.6, "start": 253.8, "text": " and there are these platforms and ladders and stuff, and you're here, and there are" }, { "end": 262, "start": 258.6, "text": " opponents rolling around, and there's a door over here." }, { "end": 267.24, "start": 262, "text": " You need to go down, jump over here, get up, get some key, and then go to the door." }, { "end": 271.12, "start": 267.24, "text": " And only then will you get a reward." }, { "end": 277.52, "start": 271.12, "text": " So games vary in many ways in how you get this reward." }, { "end": 280.96, "start": 277.52, "text": " So that's kind of the intrinsic problem here." }, { "end": 284.52, "start": 280.96, "text": " So deep queue learning is the following." }, { "end": 288.28, "start": 284.52, "text": " We have a neural network taking in the observation." }, { "end": 291.4, "start": 288.28, "text": " So we have a neural network, let's designate it as this." }, { "end": 293.91999999999996, "start": 291.4, "text": " And the observation goes in here, right?" }, { "end": 296.59999999999997, "start": 293.91999999999996, "text": " This is O, the observation goes in here." }, { "end": 299.4, "start": 296.59999999999997, "text": " And also, the action goes in here." }, { "end": 302.2, "start": 299.4, "text": " Now let's call this AI, because we have different actions." }, { "end": 306.2, "start": 302.2, "text": " And O, observation at step T." }, { "end": 309.91999999999996, "start": 306.2, "text": " And it will give you a queue value." 
}, { "end": 315.23999999999995, "start": 309.91999999999996, "text": " So the queue value for the observation at time T and action I." }, { "end": 318.15999999999997, "start": 315.23999999999995, "text": " Now you do this for every single action." }, { "end": 325.64000000000004, "start": 318.16, "text": " So you put observation with action A, J in the same network, right?" }, { "end": 331.04, "start": 325.64000000000004, "text": " You get an output that is the queue value for observation, the same observation with" }, { "end": 332.28000000000003, "start": 331.04, "text": " this different action." }, { "end": 334.32000000000005, "start": 332.28000000000003, "text": " You do this for every action." }, { "end": 338.36, "start": 334.32000000000005, "text": " And wherever the queue value is the highest, right?" }, { "end": 341.90000000000003, "start": 338.36, "text": " Wherever that's the highest, that's the action you go with." }, { "end": 347.96000000000004, "start": 341.90000000000003, "text": " So what you have to do is you have to train this neural network to predict the queue value" }, { "end": 348.96, "start": 347.96, "text": " as accurate as possible." }, { "end": 356.68, "start": 348.96, "text": " And the queue value is basically the reward that you expect from now until the end of" }, { "end": 361.64, "start": 356.68, "text": " the episode by performing this action in this situation, right?" }, { "end": 364.32, "start": 361.64, "text": " That's queue learning." }, { "end": 372.67999999999995, "start": 364.32, "text": " Simply predicting if I do action I right now, how much reward am I going to get from now" }, { "end": 376.08, "start": 372.67999999999995, "text": " until the end of the episode, right?" }, { "end": 379.76, "start": 376.08, "text": " That's basically it." }, { "end": 384.03999999999996, "start": 379.76, "text": " That's deep queue and deep queue learning simply because you have a neural network doing" }, { "end": 385.7, "start": 384.03999999999996, "text": " the learning." }, { "end": 389.97999999999996, "start": 385.7, "text": " So that was deep queue networks and they work pretty well, but they don't work for these" }, { "end": 393.56, "start": 389.97999999999996, "text": " long time horizons because you always just learn." }, { "end": 395.96, "start": 393.56, "text": " You just see one observation, right?" }, { "end": 402.88, "start": 395.96, "text": " And you kind of learn one step at a time and you rely on these queue values propagating" }, { "end": 404.84, "start": 402.88, "text": " through from your experience." }, { "end": 408.4, "start": 404.84, "text": " It doesn't work very well for these long credit assignments." }, { "end": 417.28, "start": 408.4, "text": " Now a significant improvement upon that is this R2D2 algorithm that incorporated LSTMs" }, { "end": 420.84, "start": 417.28, "text": " or GRUs, which are recurrent neural networks." }, { "end": 427.23999999999995, "start": 420.84, "text": " So not only does your observation go into the neural network, right?" }, { "end": 433.94, "start": 427.23999999999995, "text": " Now your history of observations, so what happened before, so not only the current game" }, { "end": 435.08, "start": 433.94, "text": " state, right?" }, { "end": 441.16, "start": 435.08, "text": " But here you have the observation from step one, the action you did at that step, then" }, { "end": 446.84, "start": 441.16, "text": " the observation time two, the action you did at time two and so on." 
}, { "end": 454.36, "start": 446.84, "text": " They now all, so this is encoded and this is encoded and then you have a recurrent neural" }, { "end": 461.8, "start": 454.36, "text": " network that incorporates all of these things that happened previously, right?" }, { "end": 464.32, "start": 461.8, "text": " To your current representation." }, { "end": 472.92, "start": 464.32, "text": " So now not only does the agent see what is happening right now, it also gets the information" }, { "end": 481.28000000000003, "start": 472.92, "text": " of what happened previously, what did it do previously and it can also now back propagate" }, { "end": 486.28000000000003, "start": 481.28000000000003, "text": " through these things and kind of learn a longer range credit assignment." }, { "end": 493.28, "start": 486.28, "text": " Credit assignment means it gets to figure out which actions had actually an influence" }, { "end": 496.76, "start": 493.28, "text": " on the final reward." }, { "end": 503.59999999999997, "start": 496.76, "text": " If you incorporate the history, right, you can have a direct gradient flow across that" }, { "end": 504.59999999999997, "start": 503.59999999999997, "text": " history." }, { "end": 513.0799999999999, "start": 504.59999999999997, "text": " So notably these LSTMs or GRUs, you can let them, you know, compute over maybe 10 or 100" }, { "end": 514.0799999999999, "start": 513.0799999999999, "text": " steps, right?" }, { "end": 520.12, "start": 514.08, "text": " And then you get a pretty good idea of which of these actions within those 100 steps led" }, { "end": 523.72, "start": 520.12, "text": " to which rewards." }, { "end": 531.0600000000001, "start": 523.72, "text": " And the other thing on R2D2 is of course it is now more distributed." }, { "end": 537.88, "start": 531.0600000000001, "text": " So this was already here improvements to DQN, but the R2D2 agent is also distributed, meaning" }, { "end": 540.1400000000001, "start": 537.88, "text": " that you have like a central instance." }, { "end": 541.32, "start": 540.1400000000001, "text": " So this is now engineering, right?" }, { "end": 546, "start": 541.32, "text": " You have a central instance that is called the learner." }, { "end": 551.6400000000001, "start": 546, "text": " And the learner has the main weights, which I'm going to designate with theta here." }, { "end": 557.0600000000001, "start": 551.6400000000001, "text": " And it just takes in experience from all of these workers." }, { "end": 561.7600000000001, "start": 557.0600000000001, "text": " So there's worker one, worker two, worker three, four, and so on." }, { "end": 565.5200000000001, "start": 561.7600000000001, "text": " And these, they will all just run episodes." }, { "end": 569.6800000000001, "start": 565.5200000000001, "text": " They will all do work, work, work, work, work, work, work, work, work independently" }, { "end": 573.3199999999999, "start": 569.68, "text": " of each other and then send back their experience to the learner." }, { "end": 579.4399999999999, "start": 573.3199999999999, "text": " And every now and then the learner sinks out the weights of the neural networks to the" }, { "end": 580.4399999999999, "start": 579.4399999999999, "text": " workers." }, { "end": 583.4399999999999, "start": 580.4399999999999, "text": " So that's kind of distributed RL in this sense." 
}, { "end": 590.3199999999999, "start": 583.4399999999999, "text": " You have a central learner, then you have many, many workers doing the actual interaction" }, { "end": 593.8599999999999, "start": 590.3199999999999, "text": " with the environment." }, { "end": 609, "start": 593.86, "text": " So one of the main pitfalls of R2D2 is still it has a poor exploration, exploitation strategy," }, { "end": 613.08, "start": 609, "text": " which I believe it is still just kind of epsilon greedy." }, { "end": 614.24, "start": 613.08, "text": " What does it mean?" }, { "end": 623.84, "start": 614.24, "text": " So in order to understand this, maybe consider again our screen here, right?" }, { "end": 628.6, "start": 623.84, "text": " So let's say you're here with your space ship, right?" }, { "end": 635.28, "start": 628.6, "text": " And there are, there's a meteor coming right here and one right here." }, { "end": 638.16, "start": 635.28, "text": " And there is a gold coin right here." }, { "end": 641.28, "start": 638.16, "text": " Let's make this gold, right?" }, { "end": 646, "start": 641.28, "text": " So let's say you get a reward for collecting the coin, but you also get a reward for shooting" }, { "end": 648.8399999999999, "start": 646, "text": " the meteors, right?" }, { "end": 652.24, "start": 648.8399999999999, "text": " So what happens if you shoot right now?" }, { "end": 664.6999999999999, "start": 652.24, "text": " So if you shoot, then let's say you shoot and this meteor explodes, right?" }, { "end": 665.8399999999999, "start": 664.6999999999999, "text": " So you get a reward." }, { "end": 666.8399999999999, "start": 665.8399999999999, "text": " Yeah." }, { "end": 672.5600000000001, "start": 666.84, "text": " So you get one reward, but then the meteor right behind it will hit you, right?" }, { "end": 673.72, "start": 672.5600000000001, "text": " It's coming toward you." }, { "end": 676.2800000000001, "start": 673.72, "text": " You'll have no way, no time to get out of the way." }, { "end": 680.12, "start": 676.2800000000001, "text": " So one reward and then death, right?" }, { "end": 683.5, "start": 680.12, "text": " Let's make a little arrow here, right?" }, { "end": 687.34, "start": 683.5, "text": " So in total you get one reward." }, { "end": 692.26, "start": 687.34, "text": " Now what happens instead if you move to the right?" }, { "end": 694.4000000000001, "start": 692.26, "text": " So move, right?" }, { "end": 699.84, "start": 694.4, "text": " So the next, in the next frame, the meteor will fly past you." }, { "end": 701.8, "start": 699.84, "text": " You are over here, right?" }, { "end": 704, "start": 701.8, "text": " But the gold coin is here." }, { "end": 707.56, "start": 704, "text": " Now this has given you so far zero reward, right?" }, { "end": 708.56, "start": 707.56, "text": " Oops." }, { "end": 717.92, "start": 708.56, "text": " This has given you zero reward, but then in the next frame, you know, these meteors have" }, { "end": 723.04, "start": 717.92, "text": " passed now and you are going to get that gold coin." }, { "end": 728.0799999999999, "start": 723.04, "text": " And that gives you one reward and no death, right?" }, { "end": 734.4399999999999, "start": 728.0799999999999, "text": " So you can technically go on here and maybe you'll get five more rewards down the line." }, { "end": 740.04, "start": 734.4399999999999, "text": " So the, the, this is, here's the exploration exploitation dilemma." 
}, { "end": 746.52, "start": 740.04, "text": " If an agent has for some reason learned that the shooting action in this situation will" }, { "end": 753.76, "start": 746.52, "text": " give it a one reward and the move action will give it zero reward, but has not learned to" }, { "end": 754.96, "start": 753.76, "text": " look past this." }, { "end": 757.36, "start": 754.96, "text": " So this is kind of nebulous here." }, { "end": 764.84, "start": 757.36, "text": " It has only experienced, it has only experienced one frame of here." }, { "end": 765.84, "start": 764.84, "text": " Yeah." }, { "end": 768.6999999999999, "start": 765.84, "text": " It has only experienced one frame of experience." }, { "end": 773.6, "start": 768.6999999999999, "text": " It will say, wait a minute, shoot here appears to be like really good." }, { "end": 777.08, "start": 773.6, "text": " It gives me one reward and move gives me zero reward." }, { "end": 781.08, "start": 777.08, "text": " So from now on I'll just always do shoot, right?" }, { "end": 783.44, "start": 781.08, "text": " Shoot, shoot, shoot." }, { "end": 785.76, "start": 783.44, "text": " Now what you would like to do." }, { "end": 789, "start": 785.76, "text": " So this is called exploitation, right?" }, { "end": 790.28, "start": 789, "text": " Exploitation." }, { "end": 794.24, "start": 790.28, "text": " It has learned something that gives it a reward." }, { "end": 798.0400000000001, "start": 794.24, "text": " So it will just do that over and over again." }, { "end": 806.16, "start": 798.04, "text": " Whereas here you could say, ah, I, I might go this way, even though it's zero word, because" }, { "end": 807.8, "start": 806.16, "text": " I can hope, right?" }, { "end": 813.28, "start": 807.8, "text": " I don't know yet, but I can hope that I will get a more reward down here." }, { "end": 815.92, "start": 813.28, "text": " This is exploration." }, { "end": 821.76, "start": 815.92, "text": " And the question in, in reinforcement learning is always how to trade off these two, right?" }, { "end": 829.36, "start": 821.76, "text": " And ideally you would want your agent to collect maximum reward that speaks for exploitation" }, { "end": 831.56, "start": 829.36, "text": " of what it has already learned." }, { "end": 838.6, "start": 831.56, "text": " But also you never want to discard the possibility that, um, down the line of things that you" }, { "end": 843.52, "start": 838.6, "text": " don't yet know, there might be even more reward." }, { "end": 844.96, "start": 843.52, "text": " And that speaks for exploration." }, { "end": 852.88, "start": 844.96, "text": " I'm just, this both are abbreviated, same exploit, explore." }, { "end": 854.76, "start": 852.88, "text": " This was dumb." }, { "end": 862.88, "start": 854.76, "text": " Um, so in the original deep QN formulation, and I believe also in R2D2, this is done with" }, { "end": 869.08, "start": 862.88, "text": " Epsilon greedy, um, which is surprisingly performing well." }, { "end": 875.48, "start": 869.08, "text": " Uh, so in Epsilon greedy, you simply say, I'm going to have a constant Epsilon." }, { "end": 877.96, "start": 875.48, "text": " This is E Epsilon." }, { "end": 882.84, "start": 877.96, "text": " Um, this is maybe 5% or something." }, { "end": 889.2800000000001, "start": 882.84, "text": " I'm going to simply do something at random and the other one minus Epsilon." }, { "end": 895.2, "start": 889.2800000000001, "text": " I'm just going to go with the, um, with the thing I have already learned." 
}, { "end": 901.6, "start": 895.2, "text": " And this performs pretty well, but you might imagine that there is something smarter to" }, { "end": 902.88, "start": 901.6, "text": " do." }, { "end": 904.96, "start": 902.88, "text": " So never give up." }, { "end": 913.48, "start": 904.96, "text": " Um, these, this algorithm, it kind of goes into this, um, exploration, uh, mode where" }, { "end": 918.08, "start": 913.48, "text": " it tries to get, get to smarter ways to do exploration." }, { "end": 923.96, "start": 918.08, "text": " And the keywords here are things like intrinsic motivation." }, { "end": 933.76, "start": 923.96, "text": " So intrinsic motivation and curiosity refer to the fact that, um, it is so in addition" }, { "end": 938.24, "start": 933.76, "text": " to the reward you get from the environment here, right?" }, { "end": 945.1800000000001, "start": 938.24, "text": " This, this reward right here, you can also interject at this point and say, ah, I'm going" }, { "end": 951.48, "start": 945.1800000000001, "text": " to give some R prime, some reward of myself, right?" }, { "end": 955.08, "start": 951.48, "text": " To to kind of encourage some behavior in the agent." }, { "end": 960.52, "start": 955.08, "text": " And this here we call intrinsic intrinsic." }, { "end": 967.6800000000001, "start": 960.52, "text": " Um, so that means you add to the reward of the environment, you add some reward of your" }, { "end": 974.2, "start": 967.6800000000001, "text": " own that has nothing to do with the environment or not much, um, but just encourages certain" }, { "end": 980.44, "start": 974.2, "text": " behavior in the agent that is now also trying to maximize this intrinsic reward." }, { "end": 988.96, "start": 980.44, "text": " Um, and in curiosity and intrinsic motivation formulations, usually you are rewarded for" }, { "end": 990.6400000000001, "start": 988.96, "text": " novelty." }, { "end": 998.0400000000001, "start": 990.6400000000001, "text": " Novelty, which means the agent is rewarded for finding things that it has not yet seen." }, { "end": 1004.24, "start": 998.0400000000001, "text": " Um, so you, in this situation over here, you might see why this encourages the agent to" }, { "end": 1009.12, "start": 1004.24, "text": " go this route here because it says, wait a minute, there's a bunch of stuff like here." }, { "end": 1010.84, "start": 1009.12, "text": " I just die, right?" }, { "end": 1014.24, "start": 1010.84, "text": " But there is a bunch of stuff I haven't seen yet down here." }, { "end": 1022.12, "start": 1014.24, "text": " So I might want to go explore that and we give it extra intrinsic reward or prime for" }, { "end": 1024.78, "start": 1022.12, "text": " seeing things it hasn't seen yet." }, { "end": 1030.64, "start": 1024.78, "text": " So it will learn if I do things that I have never done, I will get this sweet intrinsic" }, { "end": 1033.72, "start": 1030.64, "text": " reward and then it will go explore." }, { "end": 1040.8, "start": 1033.72, "text": " Now, of course it's a, it's a big engineering question of how exactly to set this intrinsic" }, { "end": 1042.08, "start": 1040.8, "text": " reward." }, { "end": 1048.04, "start": 1042.08, "text": " And there are many, many different formulations of that, um, that fall under this term of," }, { "end": 1051.04, "start": 1048.04, "text": " let's say curiosity or something like this." 
}, { "end": 1059.6000000000001, "start": 1051.04, "text": " Um, nevertheless, this never give up has, has, um, improved over R2D2, uh, using ideas" }, { "end": 1061.2, "start": 1059.6000000000001, "text": " like that." }, { "end": 1065.0800000000002, "start": 1061.2, "text": " And now agent 57 improves again." }, { "end": 1069.64, "start": 1065.0800000000002, "text": " Now how does agent 57 improve again?" }, { "end": 1079.1200000000001, "start": 1069.64, "text": " And it is mainly, um, it is mainly in, in the, in the, in this, what I just said." }, { "end": 1082.56, "start": 1079.1200000000001, "text": " So how exactly do you apply this intrinsic reward?" }, { "end": 1087.68, "start": 1082.56, "text": " How exactly do you navigate the exploration, exploitation trade off?" }, { "end": 1093.1200000000001, "start": 1087.68, "text": " That's where agent 57 comes in because what they've realized this, that for these different" }, { "end": 1097.76, "start": 1093.1200000000001, "text": " Atari games right here, uh, some are very easy." }, { "end": 1099.8, "start": 1097.76, "text": " Some you don't need much exploration." }, { "end": 1101.64, "start": 1099.8, "text": " Some you need a lot." }, { "end": 1109.04, "start": 1101.64, "text": " Some you need it over a large time scale and simply one agent, um, one never give up agent" }, { "end": 1115.1200000000001, "start": 1109.04, "text": " with the same settings of this curiosity of how long it looks into the future is not going" }, { "end": 1116.96, "start": 1115.1200000000001, "text": " to solve all the games." }, { "end": 1127.88, "start": 1116.96, "text": " So agent 57 learns, um, how to, to modulate this exploration, exploitation trade off." }, { "end": 1130.88, "start": 1127.88, "text": " So let's jump into the paper a bit more." }, { "end": 1138.6000000000001, "start": 1130.88, "text": " I encourage you to read the blog post that is quite thorough and, um, the paper is a" }, { "end": 1139.6000000000001, "start": 1138.6000000000001, "text": " bit more technical." }, { "end": 1140.6000000000001, "start": 1139.6000000000001, "text": " Sorry." }, { "end": 1143.4, "start": 1140.6000000000001, "text": " Let me switch over." }, { "end": 1150.6000000000001, "start": 1143.4, "text": " This is the paper agent 57 up forming the Atari human benchmark by Google deep mind." }, { "end": 1160.88, "start": 1150.6000000000001, "text": " And um, here they say improvements to end you to never give up." }, { "end": 1166.6000000000001, "start": 1160.88, "text": " So the first improvement they do is, um, so we've, we've already talked about how this" }, { "end": 1169.4, "start": 1166.6000000000001, "text": " is classic Q learning, right?" }, { "end": 1176.1200000000001, "start": 1169.4, "text": " So you're trying to learn this function, uh, that gives you the Q value of an action and" }, { "end": 1177.1200000000001, "start": 1176.1200000000001, "text": " the state." }, { "end": 1185.92, "start": 1177.1200000000001, "text": " Um, now since we're going to deal with intrinsic reward in addition to extrinsic reward, uh," }, { "end": 1187.64, "start": 1185.92, "text": " it makes sense." }, { "end": 1193.3200000000002, "start": 1187.64, "text": " That's what they argue to split the Q learning function into two different parts." }, { "end": 1199.9199999999998, "start": 1193.32, "text": " One part that learns the extrinsic reward and one part that learns the intrinsic reward." }, { "end": 1200.9199999999998, "start": 1199.9199999999998, "text": " Right." 
}, { "end": 1206.6799999999998, "start": 1200.9199999999998, "text": " And then you have a parameter beta, um, in front of it." }, { "end": 1211, "start": 1206.6799999999998, "text": " Now beta in this case is the trade off." }, { "end": 1215.8, "start": 1211, "text": " How much do you want to value this intrinsic reward?" }, { "end": 1216.8, "start": 1215.8, "text": " Right." }, { "end": 1221.3999999999999, "start": 1216.8, "text": " Um, and here we see our first lever on the exploitation, exploration trade off." }, { "end": 1229.44, "start": 1221.4, "text": " If an agent gets lots of reward for, uh, for exploring, right, it might never exploit and" }, { "end": 1233.88, "start": 1229.44, "text": " exploiting might actually be a good, a good option in the game that you're in." }, { "end": 1241.46, "start": 1233.88, "text": " So you might want to set beta small, but in other games you might want to encourage exploration" }, { "end": 1245.68, "start": 1241.46, "text": " to the max and therefore set beta very high." }, { "end": 1258.2, "start": 1245.68, "text": " Um, all right, another, uh, constant along with that, that they modulate is the, is the," }, { "end": 1260.64, "start": 1258.2, "text": " um, the discount factor." }, { "end": 1265.5600000000002, "start": 1260.64, "text": " So which is called this gamma here." }, { "end": 1271.8400000000001, "start": 1265.5600000000002, "text": " So you already see here this beta we've already seen and they also modulate this gamma." }, { "end": 1280.52, "start": 1271.84, "text": " Now what does gamma do, um, if I have my state and action, we already said, so here is an" }, { "end": 1289, "start": 1280.52, "text": " observation one and I do action one and that gives me observation two and I do action two" }, { "end": 1295.8799999999999, "start": 1289, "text": " and that gives me observation three and I do action three and each time I get a reward," }, { "end": 1296.8799999999999, "start": 1295.8799999999999, "text": " right?" }, { "end": 1299.6599999999999, "start": 1296.8799999999999, "text": " An extrinsic reward and an intrinsic reward." }, { "end": 1306.72, "start": 1299.66, "text": " So reward one, reward two, reward three and so on." }, { "end": 1316.38, "start": 1306.72, "text": " Now usually, um, an RL agent will look at these rewards and let's say you are here," }, { "end": 1321.92, "start": 1316.38, "text": " you are at observation one and you're trying to estimate your future rewards." }, { "end": 1327.66, "start": 1321.92, "text": " Um, what will be most important will be the reward that you're getting right now, right?" }, { "end": 1333.76, "start": 1327.66, "text": " Because that's the most sure because, um, this reward here that you might get two steps" }, { "end": 1337.2, "start": 1333.76, "text": " from now, you know, a lot of things could happen, right?" }, { "end": 1341.28, "start": 1337.2, "text": " You are pretty sure that if you do action one, you're going to get to this state, but" }, { "end": 1342.4, "start": 1341.28, "text": " you're not entirely sure." }, { "end": 1347.8000000000002, "start": 1342.4, "text": " You could also get to another state and therefore you had to do another action and therefore" }, { "end": 1350.8400000000001, "start": 1347.8000000000002, "text": " this reward here could be something different." }, { "end": 1358.1599999999999, "start": 1350.84, "text": " Um, so these algorithms are, are having what's known as a discount factor." 
}, { "end": 1366.48, "start": 1358.1599999999999, "text": " That means the value of a state, uh, of a state S is going to be the sum from time," }, { "end": 1374.4399999999998, "start": 1366.48, "text": " uh, zero, let's say K equals T that's stated time T up until some horizon." }, { "end": 1377.86, "start": 1374.4399999999998, "text": " I think they call it H in the paper." }, { "end": 1385.6999999999998, "start": 1377.86, "text": " You could also think of this as infinity of the reward at step K, but discounted by this" }, { "end": 1387.12, "start": 1385.6999999999998, "text": " factor." }, { "end": 1398.36, "start": 1387.12, "text": " Um, and you raise it to the, to the power of K usually or T T minus, uh, yeah, K minus" }, { "end": 1407.84, "start": 1398.36, "text": " T. So basically means that you, this is if T is one, so it's the reward at the" }, { "end": 1416.6799999999998, "start": 1407.84, "text": " at this time step plus let's say gamma here is 0.99, right?" }, { "end": 1428.76, "start": 1416.6799999999998, "text": " Plus 0.99 the reward at the next time step plus 0.99 squared, uh, the reward of that" }, { "end": 1429.76, "start": 1428.76, "text": " after that." }, { "end": 1436.6, "start": 1429.76, "text": " And you see that the more, the more into the future you look, the less, um, value these" }, { "end": 1442.8799999999999, "start": 1436.6, "text": " rewards have. So little bars here indicate that you're going to value future rewards" }, { "end": 1444.76, "start": 1442.8799999999999, "text": " less and less." }, { "end": 1448.28, "start": 1444.76, "text": " This is called a discount factor right here." }, { "end": 1454.08, "start": 1448.28, "text": " And it's, um, how to set it is very important because if you set it very low, let's say" }, { "end": 1461.6799999999998, "start": 1454.08, "text": " you set it to 0.1, that means all that you want to do is maximize the rewards that you're" }, { "end": 1465.9199999999998, "start": 1461.6799999999998, "text": " getting in the likely the next and next, next step." }, { "end": 1469.16, "start": 1465.92, "text": " Uh, you're not really looking into the future." }, { "end": 1475.8400000000001, "start": 1469.16, "text": " Um, this is very good for games that give you immediate reward for good actions." }, { "end": 1483.5600000000002, "start": 1475.8400000000001, "text": " But if you, uh, if you set it very high, let's say 0.999, right?" }, { "end": 1490.16, "start": 1483.5600000000002, "text": " That means a reward a hundred steps from now doesn't, you know, is, is almost the same" }, { "end": 1492.76, "start": 1490.16, "text": " to you as a reward one step from now." }, { "end": 1500.08, "start": 1492.76, "text": " And this is very valuable for games that don't give you a reward immediately or that kind" }, { "end": 1502.64, "start": 1500.08, "text": " of trying to trick you as we saw before." }, { "end": 1508.92, "start": 1502.64, "text": " Like if you shoot the meteor now, then you get one reward, but if you don't and pass" }, { "end": 1512.4, "start": 1508.92, "text": " on the opportunity, you might get much more later." }, { "end": 1519.06, "start": 1512.4, "text": " So the modulation of the discount factor is also very important, uh, to set and really" }, { "end": 1520.2, "start": 1519.06, "text": " depends on the game." }, { "end": 1526.6000000000001, "start": 1520.2, "text": " So we have two quantities here that really depend on what kind of game it is." 
}, { "end": 1532.7, "start": 1526.6000000000001, "text": " And also they argue, um, it, it also depends where in the learning process you are." }, { "end": 1538.6000000000001, "start": 1532.7, "text": " So if you're at the very beginning of the learning process, you might want to have a" }, { "end": 1545, "start": 1538.6000000000001, "text": " very high goal, the high intrinsic reward to go explore." }, { "end": 1551.56, "start": 1545, "text": " And you want, might want to get, have a very low discount factor in order to learn a good" }, { "end": 1553.92, "start": 1551.56, "text": " immediate value function." }, { "end": 1559.78, "start": 1553.92, "text": " But then as time goes on, you might want to bring down the intrinsic reward because now" }, { "end": 1565.96, "start": 1559.78, "text": " you really want actually, because your end goal is to maximize the extrinsic reward and" }, { "end": 1570.16, "start": 1565.96, "text": " you want to up this discount factor to look more into the future." }, { "end": 1575.72, "start": 1570.16, "text": " Now that you have already learned the immediate values very well." }, { "end": 1588.72, "start": 1575.72, "text": " So if I had to summarize and simplify what agent 57 does is it builds a neural network" }, { "end": 1596.3600000000001, "start": 1588.72, "text": " that adjusts these two quantities across the training, right?" }, { "end": 1605.36, "start": 1596.36, "text": " Um, so, so it adjusts the beta and gamma across the training and it does this in a so-called" }, { "end": 1608.3, "start": 1605.36, "text": " bandit setting." }, { "end": 1614.1799999999998, "start": 1608.3, "text": " Now there is no real good picture in this paper that I can show you." }, { "end": 1616.4799999999998, "start": 1614.1799999999998, "text": " So I'm just going to have to, to draw." }, { "end": 1618.8, "start": 1616.4799999999998, "text": " So you have an agent, right?" }, { "end": 1626.02, "start": 1618.8, "text": " It interacts with this environment here and it always gets these rewards." }, { "end": 1630.44, "start": 1626.02, "text": " Now what you have here is a meta controller, right?" }, { "end": 1633.76, "start": 1630.44, "text": " So the agents, it has two parameters." }, { "end": 1640.48, "start": 1633.76, "text": " It has this beta and this gamma and the meta controller now observes this." }, { "end": 1648.52, "start": 1640.48, "text": " It observes this interaction and it outputs values for these two constants and the does" }, { "end": 1652.84, "start": 1648.52, "text": " this dynamically as the training progresses, right?" }, { "end": 1662.36, "start": 1652.84, "text": " So the agent, the agent will, will kind of learn, the agent will change its behavior" }, { "end": 1663.36, "start": 1662.36, "text": " over time." }, { "end": 1668.9599999999998, "start": 1663.36, "text": " Now this is actually implemented in a slightly different way in that the meta controller" }, { "end": 1673.62, "start": 1668.9599999999998, "text": " doesn't control the values directly, but it, it has kind of options." }, { "end": 1680.56, "start": 1673.62, "text": " So what you do is you define a bunch of possibilities for beta and gamma." }, { "end": 1687.1399999999999, "start": 1680.56, "text": " So you say I have strategy one, strategy one has beta at 0.1 and gamma at 0.9." }, { "end": 1691.76, "start": 1687.1399999999999, "text": " Strategy two has beta at 0.2 and gamma at 0.8 and so on." }, { "end": 1692.76, "start": 1691.76, "text": " Right?" 
}, { "end": 1700.6399999999999, "start": 1692.76, "text": " And now the meta controller has to choose between one of these, in this case, six different" }, { "end": 1702.82, "start": 1700.6399999999999, "text": " strategies across training." }, { "end": 1708.12, "start": 1702.82, "text": " So it might start off, as we said, with a high beta, which might be over here, 0.9," }, { "end": 1709.12, "start": 1708.12, "text": " 0.1." }, { "end": 1717.1599999999999, "start": 1709.12, "text": " It might start off with a high beta and then transition to the lower ends." }, { "end": 1723.6399999999999, "start": 1717.1599999999999, "text": " And it can do so depending on the game and depending on the progress in the game." }, { "end": 1729.8, "start": 1723.6399999999999, "text": " So this is, this is dynamic and this is the improvement over never give up over this other" }, { "end": 1734.84, "start": 1729.8, "text": " agent, because this other agent simply had these strategies and trained them at the same" }, { "end": 1736.56, "start": 1734.84, "text": " time." }, { "end": 1743.56, "start": 1736.56, "text": " And now this meta controller here controls which strategy is currently trained and which" }, { "end": 1748.32, "start": 1743.56, "text": " one is used to generate the experience." }, { "end": 1757.52, "start": 1748.32, "text": " So this is, this is basically, I mean, there's a, they also, of course, they also say, well," }, { "end": 1764.84, "start": 1757.52, "text": " we also increase the window of, let me go back." }, { "end": 1771.9599999999998, "start": 1764.84, "text": " So this LSTM, these, I've shown you these things here that incorporate experience over" }, { "end": 1772.9599999999998, "start": 1771.9599999999998, "text": " time." }, { "end": 1779.48, "start": 1772.9599999999998, "text": " They also say, well, we increase the window of how long the LSTM, the time window of how" }, { "end": 1783.28, "start": 1779.48, "text": " much experience is incorporated." }, { "end": 1787.72, "start": 1783.28, "text": " And they do a bunch of other things, which I always find kind of annoying because it's" }, { "end": 1793.9199999999998, "start": 1787.72, "text": " always really, really hard to see where the improvements come from that they claim they" }, { "end": 1794.92, "start": 1793.92, "text": " made." }, { "end": 1802.4, "start": 1794.92, "text": " So, but, you know, barring that, basically they built this meta controller to choose" }, { "end": 1807.0800000000002, "start": 1802.4, "text": " the strategies for the agent over time." }, { "end": 1816.02, "start": 1807.0800000000002, "text": " Now of course, this meta controller again is trained by the rewards that you get back" }, { "end": 1817.6000000000001, "start": 1816.02, "text": " from the environment." }, { "end": 1825.3999999999999, "start": 1817.6, "text": " So the meta controller as an action has the choice of strategy, right?" }, { "end": 1831.56, "start": 1825.3999999999999, "text": " And the reward, it gets back from the agent environment interaction, right?" }, { "end": 1835.6, "start": 1831.56, "text": " So in itself, it is a reinforcement learning problem." }, { "end": 1847.6799999999998, "start": 1835.6, "text": " Now why, like, to me it seems just shifts the, it just shifts the problem of exploration" }, { "end": 1850.6, "start": 1847.6799999999998, "text": " exploitation one level higher." }, { "end": 1854.08, "start": 1850.6, "text": " They use a sliding window bandit algorithm to do this." 
}, { "end": 1859.98, "start": 1854.08, "text": " But again, you have hyper parameters there, like how long is the sliding window and how" }, { "end": 1863.8799999999999, "start": 1859.98, "text": " does the bandit algorithm do the exploration exploitation tradeoff." }, { "end": 1867.5400000000002, "start": 1863.88, "text": " So it seems to me you're just shifting it one level higher." }, { "end": 1876.22, "start": 1867.5400000000002, "text": " And it also seems like we're getting into the region of where we are meta over engineering" }, { "end": 1883.14, "start": 1876.22, "text": " our approaches to the specifics of this Atari benchmark." }, { "end": 1887.88, "start": 1883.14, "text": " Because we're kind of observing, oh, okay, these agents do this wrong, these agents do" }, { "end": 1888.88, "start": 1887.88, "text": " this wrong." }, { "end": 1893.8200000000002, "start": 1888.88, "text": " So let's just build an agent that can do both sort of." }, { "end": 1901.32, "start": 1893.82, "text": " And then the kind of audastic thing I find that they open with how to measure artificial" }, { "end": 1907.04, "start": 1901.32, "text": " general intelligence, which, I mean, come on, you're just it's kind of amnest right" }, { "end": 1913.08, "start": 1907.04, "text": " now you're just kind of over and over and overfitting on this one benchmark, there's" }, { "end": 1922.1599999999999, "start": 1913.08, "text": " not really a need to, to make this into a story on artificial general intelligence." }, { "end": 1924.68, "start": 1922.16, "text": " Alright, so this was my two cents to this." }, { "end": 1952.4, "start": 1924.68, "text": " I hope you enjoyed this and bye bye." } ]
lmAj0SU_bW0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Axial Attention & MetNet: A Neural Weather Model for Precipitation Forecasting
[ "Science & Technology" ]
[ "deep learning", "machine learning", "arxiv", "google", "attention mechanism", "attention", "transformer", "rnn", "recurrent", "weather", "long-range", "layers", "convolutions", "cnns", "rain", "physics" ]
MetNet is a predictive neural network model for weather prediction. It uses axial attention to capture long-range dependencies. Axial attention decomposes attention layers over images into row-attention and column-attention in order to save memory and computation. https://ai.googleblog.com/2020/03/a-neural-weather-model-for-eight-hour.html https://arxiv.org/abs/1912.12180 Abstract: Weather forecasting is a long standing scientific challenge with direct social and economic impact. The task is suitable for deep neural networks due to vast amounts of continuously collected data and a rich spatial and temporal structure that presents long range dependencies. We introduce MetNet, a neural network that forecasts precipitation up to 8 hours into the future at the high spatial resolution of 1 km2 and at the temporal resolution of 2 minutes with a latency in the order of seconds. MetNet takes as input radar and satellite data and forecast lead time and produces a probabilistic precipitation map. The architecture uses axial self-attention to aggregate the global context from a large input patch corresponding to a million square kilometers. We evaluate the performance of MetNet at various precipitation thresholds and find that MetNet outperforms Numerical Weather Prediction at forecasts of up to 7 to 8 hours on the scale of the continental United States. Authors: Casper Kaae Sønderby, Lasse Espeholt, Jonathan Heek, Mostafa Dehghani, Avital Oliver,Tim Salimans, Shreya Agrawal, Jason Hickey, Nal Kalchbrenner Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there. So what you're looking at here is a weather forecast model. Specifically the very top row is a new weather forecast model called MetNet by Google Research. So the goal of weather prediction is pretty simple. You want to know what the weather is going to be in the future. Specifically here you want to know precipitation rates. And so this is a new work that uses a neural network instead of physical models in order to predict precipitation. So in the middle here you see this is the ground truth of what really happened at that particular time. You see precipitation rates in red here moving across the country. Now the bottom there is a physical model and as far as I understand it, physical models have been used so far to make weather predictions. Which basically means that you simulate these rain clouds and the movement of them across the country. And you do a physical simulation like a particle simulation type of thing and then that allows you to predict, and then you run that maybe multiple times and you get an idea of the kind of distribution you're going to get. Now what MetNet does is it simply uses a neural network to predict the outcome directly. So there's no physical simulation involved. There is just a neural network that takes as input what's the situation now and maybe over a stretch of time. And then you ask it please make a prediction in eight hours or something. And then MetNet will make that prediction and it will just output it like snap. No physical simulation needed. And you also see here that MetNet outputs things in kind of a cloud way, in a probabilistic way. In one forward pass you don't need to run it multiple times. But we'll get to that. On the bottom here you see the measurement. So the axis is F1. F1 is kind of the overlap of how well you're able to predict the precipitation. And you see here that MetNet is above the HRRR baseline for most of this time. Up to 480 minutes into the future. Which is eight hours I believe. All right. So the paper is the following. It's called MetNet, a Neural Weather Model for Precipitation Forecasting. And I'm not going to read all the names here. The main corresponding authors are Casper Kaae Sønderby and Nal Kalchbrenner. And it's a team at Google Research. So specifically they use the input of these two things here. So one is this GOES-16, which is what you see here on the left. And the precipitation rates are here depicted on the right. So you want to take these things as input into your model. Now how do you do that? Of course we want to build a neural network. And this is the architecture they come up with. So on the bottom here they feed in the data. And they feed in the data in 15-minute intervals from 90 minutes into the past. So you have to imagine it like this. So there's a timeline. I'm going to use a little bit of a finer thing. So there's a timeline. And you, let's say, are here. This is now. And then here in the future, this is maybe one hour into the future. This is your target, right? This is you are here and you're looking out. You would like to know what's the precipitation going to be in one hour from now. What MetNet does is it takes an input. And specifically it takes the last 90 minutes before now as an input. And it samples it at a frequency of 15-minute intervals. So each one of these is going to be 15 minutes. And every 15 minutes you get like a snapshot of this entire input region. Now the input region, if I can jump back here to the website for a second, they show what the input region is.
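Before getting to the input region, here is a minimal sketch of the temporal sampling just described: the last 90 minutes, sampled every 15 minutes, plus the lead time the model is asked to predict for. This is not the authors' code; the frame loader and its signature are made up for illustration.

import numpy as np

HISTORY_MINUTES = 90
STEP_MINUTES = 15

def build_input_window(load_frame, now, lead_time_minutes):
    # load_frame(t) is a hypothetical loader returning an (H, W, C) array
    # of radar / satellite channels at absolute time t, given in minutes.
    offsets = range(-HISTORY_MINUTES, 1, STEP_MINUTES)     # -90, -75, ..., 0
    frames = [load_frame(now + o) for o in offsets]        # 7 snapshots
    window = np.stack(frames, axis=0)                      # (T=7, H, W, C)
    # The lead time (e.g. 60 for "one hour from now") is carried along and
    # later encoded as an input feature, see the next sketch.
    return window, lead_time_minutes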
The input region, if you want to predict in the middle of this small square, the input region is actually the entire 1024 by 1024 kilometer patch around it. So it's a very big input. Though the actual region you consider is the inner 64 by 64 kilometers. But you take in information from the big region. And the main point of the paper, I believe, is how to do that. All right, so every 15 minutes you take in a snapshot. And these are these snapshots here on the bottom. So these are, and you have to imagine in here, every 15 minutes there's a stack of these inputs. So what are these inputs? These inputs are some kind of features that you have. So there is the target time, which in this case would be this one hour here. There is the month, day and hour, which is important for weather prediction, right? So the time of year, time of day and so on. Longitude and latitude are probably pretty important. An elevation map is probably pretty important. So these, you can see, these are all maps. Now, this is how you encode things in these. Since it's a neural network, you know, all of these things must be of the same dimensions here. So if you have 256 dimensions here and probably 256 dimensions here, then all of these things must be of the same dimension. And if you want to give a feature such as the target time, which in this case, let's say it's one hour, you just put here one hour, what's one hour? Let's say 60 minutes. So you just put the number 60 here, 60, 60, 60, 60, 60, 256 times, and that again 256 times, so 256 by 256 times. So this is how you encode features. It's pretty primitive, but it turns out it works the best if you do it this way. All right. So you have these planes and some, as I said, are just features such as the target time, month, day and hour and so on. Elevation, I guess, is a map, like an elevation map of the region you consider. And this corresponds now to these 64 kilometers times 64 kilometers here. And that's exactly what these center crops are here. So this center crop thing, this plane, sorry, is this 64 by 64 region. That's this plane here. And also, that's the precipitation, and the GOES, that's this thing here. Now we also have these down sampled things, which are the 1024 kilometer ones. So this here and this here, these are the 1024 by 1024 kilometer patches, but they are down sampled. So everything is down sampled, I guess, to 256 by 256 pixels. So you don't really take into account every nuance of that very big input, but you do down sample it. So you kind of get the big picture of the outer frame, and in the inner frame you take it in a much higher resolution in order to get the details. All right. So you stack all of this up into a big tensor and then you feed it into a spatial down sampler, which, as I have read, is just a convolutional neural network, right? So this is your typical image processing pipeline. So you do this for each of these stacks, right? And then what you get out of it is a lower size representation right here. So you get these representations and then you let a temporal encoder run over it. What does a temporal encoder do? This in particular is a convolutional LSTM. And if you already know what an LSTM is, a convolutional LSTM is nothing more than an LSTM that has convolutional layers as its intermediate layers. So it's pretty suited for, for example, videos or any sort of image processing that goes over time, like this one.
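Before continuing with the temporal encoder, here is a small sketch of the constant-plane feature encoding described above: scalar features such as the target lead time are simply broadcast to a full plane and stacked together with the image channels. The sizes and the exact set of features are illustrative assumptions, not the precise MetNet configuration.

import numpy as np

H = W = 256

def scalar_plane(value):
    # broadcast a scalar feature (e.g. lead time = 60 minutes) to a 256 x 256 plane
    return np.full((H, W), float(value), dtype=np.float32)

def build_feature_stack(precip_crop, goes_crop, precip_context, goes_context,
                        elevation, lon_grid, lat_grid,
                        lead_time_minutes, month, day, hour):
    planes = [
        scalar_plane(lead_time_minutes),   # the number 60 repeated 256 x 256 times
        scalar_plane(month),
        scalar_plane(day),
        scalar_plane(hour),
        lon_grid,                          # per-pixel longitude
        lat_grid,                          # per-pixel latitude
        elevation,                         # elevation map of the region
        precip_crop,                       # 64 km center crop at full resolution
        goes_crop,
        precip_context,                    # 1024 km patch, down sampled to 256 x 256
        goes_context,
    ]
    return np.stack(planes, axis=-1)       # (256, 256, num_channels)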
So the temporal encoder simply starts out here with an initial state. My pens are screwing me today. So it starts out here with an initial state and then it simply inputs each of these representations, takes them one by one, runs across time, right? And each time it produces a new intermediate representation of the input until it finally reaches this final representation here. So this thing here is a single final representation of all of this input, right? Of this entire time span of all of these stacks here. Yeah, so you can compress this into a single representation with first a convolutional network to downsample each time point individually and then a recurrent neural network, an LSTM, to integrate the information over time. You end up with this single piece here. And then what you do, so you still, here you still retain kind of an image sort of thing. So this representation here, you can see it in the background. Maybe I'll get down my scribbles here. This here is still sort of an image tensor, though I guess it's a hidden representation, so you couldn't really look at it. But it still has the dimensions of an image. So this here is still, I think, the same as or corresponding to these dimensions here. So this still has some spatial information, where this axis might be north-south, and this one might be east and west, right? And then these are just the hidden channels, the channels of the hidden representations, right? So what you would like to do now is to basically encode information from the space around you. If you look at, let's look at one of these, one of the big pictures. What you would like to do in weather prediction, let's say you are right here. What's a good example, you are right here, right? Now if you want to know if this particular cloud over here is going to move in your direction, what you want to know is, for example, is there a mountain range here, right? Because then it's more probable that this cloud is maybe going to move up there. You would also want to know how this cloud here moves, right? If this cloud here moves somewhere around here, then probably this cloud down here might be pulled with it or something like this. So you're there, and you very much kind of want to look out into each of the directions here, and you want to incorporate kind of what's happening across the space. We're already used to convolutional networks being able to do this, but in here the authors use attention to do that. So if you don't know what attention is, my most popular video is about attention, and you can do attention for images. So the way that works is that you have a series of stacked blocks of a neural network. Let me draw this here. So you have an image here and let's say it has just four pixels, right? So you have the next layer of these four pixels, right? So you have layers of this. So the next layer's four pixels, they all emit what are called queries, and queries are just vectors. So each pixel emits a single vector. Let's say this, that, that, this, right? And each of the pixels of the lower layer emits what is called a key. This, this, this, this. And now the keys and the queries are routed together based on their inner product. So these two would be routed together. This would probably be routed here. This as well. This would probably be routed here. So in effect, each of the pixels of the higher layer can look at specific pixels of the lower layer.
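As a concrete illustration of that routing, here is a minimal single-head self-attention sketch over a flattened set of pixels, with queries and keys compared by inner product and the result normalized with a softmax. This is a generic textbook formulation, not code from the paper.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(pixels, Wq, Wk, Wv):
    # pixels: (N, C) array of N pixel features (the image flattened to a list).
    # Wq, Wk, Wv: (C, D) projection matrices.
    Q = pixels @ Wq                            # every pixel emits a query ...
    K = pixels @ Wk                            # ... and a key
    V = pixels @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])    # (N, N) inner products
    weights = softmax(scores, axis=-1)         # queries routed to matching keys
    return weights @ V                         # each output looks at specific pixels

# tiny usage example: a 2 x 2 image with 8 feature channels
pixels = np.random.randn(4, 8)
Wq, Wk, Wv = (np.random.randn(8, 8) * 0.1 for _ in range(3))
out = self_attention(pixels, Wq, Wk, Wv)       # (4, 8)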
Now you can imagine this is exactly what we want here, in that if there is a mountain range here, we might be interested in that. So we'd be able, from our point here, to specifically attend to that location using attention, right? So the authors here build basically a stacked model of attention layers. And that's what's happening in the third part here. And this attention is there in order to incorporate long range dependencies. As in the example with the mountain range, this might be far away, but it might actually influence your weather very much. So the attention is to incorporate these long range dependencies. But the problem with attention is, as you saw in the example, each of these pixels can attend to each of the pixels in the lower layer. So what you'd end up with, so each can attend to that. This can attend to each. This can attend to each. You see, you'll end up with 16 connections. Can't even draw them. So you end up with 16 connections. In general, if you have D here, you will end up with a D squared number of things you need to calculate, right? And now of course we have images. So generally we'll think of D by D pixels. Now we have D by D pixels, and that thing squared is the number of things we need to calculate. This quickly gets too much. So in, for example, MNIST, you have 28 by 28 pixel images. This is 784 pixels. But you'll have to calculate this squared many connections between things. This is simply impossible pretty quickly, especially if you scale up the images and then have some channels in here as well. So attention for image processing has been a bit lagging compared to natural language processing. In natural language processing, you usually have maybe 500 tokens or something. In images you have much more. So attention is much more expensive. So you can't really do it on current hardware. Now this paper uses something called axial attention. Axial attention is kind of the trick of how to make this attention happen for images. And for that I want to switch over to this paper. It's called Axial Attention in Multidimensional Transformers, by some of the same authors. So Jonathan Ho and Nal Kalchbrenner, also of Google Brain and UC Berkeley, proposed this axial transformer. Now they originally proposed axial attention for autoregressive models. If you know transformers, they also started by making autoregressive models, so language modeling and so on. But we can decouple the axial attention from the autoregressivity of these models. So I'm not going to talk about autoregressive models, it's just axial attention. So what is axial attention? It's pretty simple actually. And I want to start by talking about convolutions. So what does a convolution do? Let's just take a one-dimensional image, which is pretty boring, but let's say it has these eight pixels here. So this is an image, it just has one row of eight pixels. What do I do when I run a convolutional filter across that? This is the lower layer, and now this is the next layer that is produced by a convolution. So for each of the pixels in the next layer, what I can do with the convolutional layer, I can look at its neighbors in the lower layer. So these three would be part of that. And then I go on to this, and again I look at its neighbors, at these three. I might have done this in a different color. And then I look at this, and it can look at itself and its neighbors. So a convolution is pretty smart.
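To make that quadratic-cost argument concrete before the convolution example continues below, here is a back-of-the-envelope comparison of how many pixel-to-pixel comparisons one full attention layer needs versus one row-plus-column axial pass. The numbers are simple arithmetic, not measurements from the paper.

def full_attention_pairs(h, w):
    # every pixel attends to every pixel
    n = h * w
    return n * n

def axial_attention_pairs(h, w):
    # one pass of row attention plus one pass of column attention
    return h * w * w + w * h * h

for h, w in [(28, 28), (256, 256)]:
    print((h, w), full_attention_pairs(h, w), axial_attention_pairs(h, w))

# (28, 28):   614,656 pairs for full attention vs 43,904 for axial
# (256, 256): 4,294,967,296 pairs for full attention vs 33,554,432 for axial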
And then of course in the next layer, that repeats. Now if you think, what's the difference between doing this and a fully connected layer? So if I have a fully connected layer, a classic neural network, a fully connected layer, then this pixel here would incorporate information from all of the pixels here. And this pixel here would incorporate information from all the pixels. Now why might this be better? Because the information that I want here for this pixel might depend on this pixel over here. So I might benefit from a connection over there, or it might benefit from this pixel here, which it can't reach. And with a convolutional network, I can't do that. Why are convolutional networks then preferable? Because the convolutional network can do the same thing across multiple layers. So let's assume again that this pixel here needs information from this pixel right here. And as you can see, in just one layer it can only get information from those, right? But now take the next layer, so the same pixel here, it can attend to these three, right? Now these three can each in turn attend to their neighbors, right? And I'm not going to draw everything, but the receptive field for this pixel here will end up being all of this, right? Now we still don't have our desired pixel in here, but if we just go one layer more, then this pixel right here, a different color, this pixel right here, right? The receptive field increases across the layers, because it's always incorporating information from downstream, and the downstream again incorporates information from its downstream, so eventually you can aggregate the same information. So instead of having a single layer with all of these connections, we have convolutional layers, which seem like a worse idea, because they can only attend to fewer things, but across the layers they actually can do the same thing, right? And that turns out to be a huge advantage of these convolutional layers, and that's why convolutional layers are used for image processing, and not multi-layer perceptrons. So the exact same thing happens with axial attention, just in a different form. It is a bit poorly drawn here, I believe, but this is how you have to imagine it. As before, this pixel, the red pixel here, if I just have a normal transformer layer, the red pixel can attend to all of the other pixels in the image, right? That's basically it, and each of the pixels can do that, so that's your D squared computation right here. Now, what we want to do is, in a convolutional layer, what we would do is, okay, you can only attend to your neighbors, and then in the next layer the neighbors can attend to their neighbors, and thereby you go out and out. In axial attention, you say, okay, this thing can only attend to its row and its column, right? That's it. You can only do attention to your row and your column, and I believe they don't even do it at the same time. So in one layer you can attend to the row you're in, and in the other you can attend to the column you're in. Now, let's see how the same thing happens as for a convolutional layer. So basically, how then, if the red pixel needs access to information in this green pixel, how does it do that? So in the first layer it can attend to its row and its column, right? And so can every other pixel, including, of course, let's say this square here, which can also attend to its row and its column, and its row happens to include the green one, right?
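Here is a small sketch of that row-then-column scheme: ordinary self-attention is run independently over every row in one layer and over every column in the next, so two such layers give any pixel a path to any other pixel. This is a simplified NumPy illustration, not the implementation from either paper.

import numpy as np

def _softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def _attend(seq, Wq, Wk, Wv):
    # plain single-head self-attention along one axis; seq: (N, C) -> (N, C)
    Q, K, V = seq @ Wq, seq @ Wk, seq @ Wv
    return _softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

def row_attention(x, Wq, Wk, Wv):
    # x: (H, W, C); every row only attends within itself
    return np.stack([_attend(row, Wq, Wk, Wv) for row in x], axis=0)

def column_attention(x, Wq, Wk, Wv):
    # columns: swap H and W, reuse row attention, swap back
    xt = np.transpose(x, (1, 0, 2))
    return np.transpose(row_attention(xt, Wq, Wk, Wv), (1, 0, 2))

# two layers connect any two pixels: along the column first, then along the row
H, W, C = 8, 8, 16
x = np.random.randn(H, W, C)
Wq, Wk, Wv = (np.random.randn(C, C) * 0.1 for _ in range(3))
out = row_attention(column_attention(x, Wq, Wk, Wv), Wq, Wk, Wv)   # (8, 8, 16)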
So in layer one, this red square here gets information from the green square via row attention, right? And then in layer two, our red square of interest can now column attend to this other red square here, so they get connected in layer two. I'm sorry, I don't want that. So you see that within just two layers we've transferred information from the green square via this red square to that red square. So, in the same way as with a convolution, you can replace the long-range arbitrary dependencies between pixels by simply having multiple layers of restricted dependence. The same goes for this axial attention. So you can replace the arbitrary attention in layers, right? You can replace that by a two-step process where you first transfer information via the row and then transfer it via the column, or the other way around. It's a bit like, you know, in chess you can have a queen that can move in any direction, also diagonally, and then if you just have a rook you kind of need to do two moves. So the queen is like the full attention and the rook is the multi-layer axial attention. They can achieve the same thing, you just need more layers. But as a trade-off you get a super, super big saving in the required memory and computation, right? So they stress that, you know, you can represent the same distributions with the axial attention. And, you know, the trade-off is you just have to do multiple layers of it. Right, so this is axial attention, and they are now able to incorporate this into their model right here. So they have, I believe, eight blocks, so four row attention blocks, you see this right here, and four column attention blocks in their model. And finally they output this distribution here across their region of interest. Now this again is, I believe, this 64 by 64 resolution. So you can see how they kind of aggregated information across the 64 using this axial attention. And then that makes their prediction in this one hour. So this is this. Alright, so this was a long way. So recap, they have 15-minute snapshots of this input data, along with some features. They use a spatial down sampler, which is a CNN, on each of them individually. Then they use a convolutional LSTM to encode this across time to end up with a single representation here at the end. Then they use axial attention in order to aggregate information across the spatial dimensions. They do this in multiple stages, and at the end they make a precipitation prediction, which is a distribution, as you can see here. So as an output you directly get a distribution of results, which is also cool, because for the physical simulation you have to let it run many, many times in order to get a distribution of results. And this neural network can simply give you a distribution right away. That's what they say right here. So they go a bit into the architecture compared to the baseline. I want to get back to what I showed you at the beginning. This here is just the picture, kind of the picture book example. So left is the ground truth, in the middle is MetNet, and on the right is a baseline method. This here is, as you can see, in two hours, in four, six and eight. So you can see that MetNet gives you this distribution as an output. What I find interesting, for example, is this sample two right here. So in this sample one you can see there is a consistent difference, and this is the forecast time, so how much in advance you want to get it? No, this would be one hour, but it can go up to eight hours.
Here is a consistent gap in F1, which means MetNet does better across this span of time, and that is for the top sample right here. For the bottom sample though, you can see there is a big gap at the beginning, and then this gap gets smaller and smaller and smaller. And this, I think, might give you an indication of, let's say, the weakness of this approach of doing it with neural networks. With neural networks you kind of rely on regularities, you rely on broad-scale patterns that you can learn from the data, and this might work well as long as things are regular, and of course across shorter time spans things tend to be more regular, right? But if you go for longer time spans, I believe there is more of a chaos element to it; weather can depend on very subtle things, and the physics simulation that is really taking into account the actual physics might be able to account for that much, much better. And that's why I believe that across time here you'll see the two models get closer together. That being said, MetNet of course is still on top here. But it would be interesting to forecast even further ahead, though I haven't actually dug through their numerical results, but you can do that if you want. Alright, so this was it for MetNet and axial attention. I hope you liked this, and bye bye.
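To make the row-and-column attention described above a bit more concrete, here is a minimal sketch of one axial attention block in PyTorch. This is my own illustrative reimplementation, not the MetNet code: the channel count, head count, normalization and residual details are assumptions, and the real model interleaves four row blocks and four column blocks rather than the single pair shown here. The point is just the factorization: standard multi-head self-attention is run along the width axis and then along the height axis, so that two passes connect any pair of pixels.

```python
import torch
import torch.nn as nn

class AxialAttentionBlock(nn.Module):
    """Row attention followed by column attention over a (B, C, H, W) map.

    Illustrative sketch only: the actual MetNet blocks, masking and
    normalization details differ; this just shows the row/column split.
    """

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(channels)
        self.norm2 = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Attend along each row: B*H independent sequences of length W.
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)        # (B*H, W, C)
        rows = self.norm1(rows + self.row_attn(rows, rows, rows)[0])
        x = rows.reshape(b, h, w, c)
        # Attend along each column: B*W independent sequences of length H.
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)        # (B*W, H, C)
        cols = self.norm2(cols + self.col_attn(cols, cols, cols)[0])
        return cols.reshape(b, w, h, c).permute(0, 3, 2, 1)      # (B, C, H, W)

# Tiny smoke test on a fake 64 by 64 hidden representation.
if __name__ == "__main__":
    feats = torch.randn(2, 32, 64, 64)
    out = AxialAttentionBlock(channels=32)(feats)
    print(out.shape)  # torch.Size([2, 32, 64, 64])
```

Note that a full-attention layer over the same 64 by 64 map would have to score sequences of length 4096 against themselves, whereas each attention call here only ever sees sequences of length 64.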
[ { "end": 8.52, "start": 0, "text": " Hi there. So what you're looking at here is a weather forecast model. Specifically the" }, { "end": 15.76, "start": 8.52, "text": " very top row is a new weather forecast model called NetNet by Google Research. So the goal" }, { "end": 19.96, "start": 15.76, "text": " of weather prediction is pretty simple. You want to know what the weather is going to" }, { "end": 27.68, "start": 19.96, "text": " be in the future. Specifically here you want to know precipitation rates. And so this is" }, { "end": 36.04, "start": 27.68, "text": " a new work that uses neural network instead of physical models in order to predict precipitation." }, { "end": 41.519999999999996, "start": 36.04, "text": " So in the middle here you see this is the ground truth of what really happened at that" }, { "end": 48.64, "start": 41.519999999999996, "text": " particular time. You see precipitation rates in red here moving across the country. Now" }, { "end": 55.22, "start": 48.64, "text": " the bottom there is a physical model and as far as I understand it, physical models have" }, { "end": 62.92, "start": 55.22, "text": " been used so far to make weather predictions. Which basically means that you simulate these" }, { "end": 68.96, "start": 62.92, "text": " rain clouds and the movement of them across the country. And you do a physical simulation" }, { "end": 74.72, "start": 68.96, "text": " like a particle simulation type of thing and then that allows you to predict and then you" }, { "end": 80.6, "start": 74.72, "text": " run that maybe multiple times and you get an idea of the kind of distribution you're" }, { "end": 88.24, "start": 80.6, "text": " going to get. Now what MetNet does is it simply uses a neural network to predict the outcome" }, { "end": 95.83999999999999, "start": 88.24, "text": " directly. So there's no physical simulation involved. There is just a neural network that" }, { "end": 101.91999999999999, "start": 95.83999999999999, "text": " takes as input what's the situation now and maybe over a stretch of time. And then you" }, { "end": 111.4, "start": 101.92, "text": " ask it please make a prediction in eight hours or something. And then the MetNet will make" }, { "end": 119.24000000000001, "start": 111.4, "text": " that prediction and it will just output it like snap. No physical simulation needed." }, { "end": 124.8, "start": 119.24000000000001, "text": " And you also see here that MetNet outputs things in kind of a cloud way, in a probabilistic" }, { "end": 133.04, "start": 124.8, "text": " way. In one forward pass you don't need to run it multiple times. But we'll get to that." }, { "end": 142.68, "start": 133.04, "text": " On the bottom here you see the measurement. So the axis is F1. F1 is kind of the overlap" }, { "end": 151.44, "start": 142.68, "text": " of how well you're able to predict the precipitation. And you see here the MetNet is above the HRR" }, { "end": 161.6, "start": 151.44, "text": " baseline for most of this time. Up to 480 minutes into the future. Which is eight hours" }, { "end": 169.32, "start": 161.6, "text": " I believe. All right. So the paper is the following. It's called MetNet, a Neural Weather" }, { "end": 176.72, "start": 169.32, "text": " Model for Precipitation Forecasting. And I'm not going to read all the names here. The" }, { "end": 182.8, "start": 176.72, "text": " main corresponding authors are Caspar K. Sonderby and Nalkalke Brenner. And it's a" }, { "end": 194.28, "start": 182.8, "text": " team of Google research. 
So specifically they use the input of these two things here. So" }, { "end": 203.96, "start": 194.28, "text": " one is this GOS16, which is what you see here on the left. And the precipitation rates are" }, { "end": 213.4, "start": 203.96, "text": " here depicted on the right. So you want to take these things as input into your model." }, { "end": 218.64000000000001, "start": 213.4, "text": " Now how do you do that? Of course we want to build a neural network. And this is the" }, { "end": 226.52, "start": 218.64000000000001, "text": " architecture they come up with. So on the bottom here they feed in the data. And they" }, { "end": 232.44, "start": 226.52, "text": " feed in the data in 15 minute interval from 90 minutes into the pass. So you have to imagine" }, { "end": 238.8, "start": 232.44, "text": " it like this. So there's a timeline. I'm going to use a little bit of a finer thing. So there's" }, { "end": 246.52, "start": 238.8, "text": " a timeline. And you let's say are here. This is now. And then here in the future, this" }, { "end": 252.56, "start": 246.52, "text": " is maybe one hour into the future. This is your target, right? This is you are here and" }, { "end": 258.68, "start": 252.56, "text": " you're looking out. You would like to know what's the precipitation going to be in one" }, { "end": 268.88, "start": 258.68, "text": " hour from now. What Metnet does is it takes an input. And specifically it takes the last" }, { "end": 278.04, "start": 268.88, "text": " 90 minutes before now as an input. And it samples it in frequencies of 15 minute intervals." }, { "end": 287.4, "start": 278.04, "text": " So each one of these is going to be 15 minutes. And each 15 minutes you get like a snapshot" }, { "end": 296.4, "start": 287.4, "text": " of this entire of the input region. Now the input region, if I can jump back here to the" }, { "end": 304.47999999999996, "start": 296.4, "text": " website for a second, they show it what the input region is. The input region, if you" }, { "end": 310.12, "start": 304.47999999999996, "text": " want to predict in the middle of this small square, the input region is actually the entire" }, { "end": 318.32, "start": 310.12, "text": " 1024 square kilometers around it. So it's very big input. Though the actual region you" }, { "end": 327.2, "start": 318.32, "text": " consider is the inside 64 square kilometers. But you take in information from the big region." }, { "end": 335.28000000000003, "start": 327.2, "text": " And the main point of the paper, I believe, is how to do that. All right, so each 15 minutes" }, { "end": 340.15999999999997, "start": 335.28, "text": " you take in a snapshot. And these are these snapshots here on the bottom. So these are," }, { "end": 345.91999999999996, "start": 340.15999999999997, "text": " and you have to imagine in here, every 15 minutes there's a stack of these inputs. So" }, { "end": 352.2, "start": 345.91999999999996, "text": " what are these inputs? These inputs are some kind of features that you have. So there is" }, { "end": 359.59999999999997, "start": 352.2, "text": " the target time, which in this case would be this one hour here. There is the month," }, { "end": 365.03999999999996, "start": 359.59999999999997, "text": " day and hour, which is important for weather prediction, right? So the time of year, time" }, { "end": 371.66, "start": 365.04, "text": " of day and so on. Longitude latitude is probably pretty important. 
Elevation map is probably" }, { "end": 379.64000000000004, "start": 371.66, "text": " pretty important. So these you can see, these are all maps. Now sometimes, and this is how" }, { "end": 384.20000000000005, "start": 379.64000000000004, "text": " you encode things in these. Since it's a neural network, you know, all of these things must" }, { "end": 389.96000000000004, "start": 384.20000000000005, "text": " be of the same dimensions here. So if you have 256 dimensions here and probably 256" }, { "end": 396.84, "start": 389.96, "text": " dimensions here, then all of these things must be of the same dimension. And if you" }, { "end": 401.56, "start": 396.84, "text": " want to give a feature such as the target time, which in this case, let's say it's one" }, { "end": 408.35999999999996, "start": 401.56, "text": " hour, you just put here one hour, what's one hour? Let's say 60 minutes. So you just put" }, { "end": 421.48, "start": 408.36, "text": " the number 60 here, 60, 60, 60, 60, 60, 256 times and 256 times 250, 65, sorry, 265 times," }, { "end": 429.8, "start": 421.48, "text": " no, 56. I'm confusing with German. So this is how you encode features. It's pretty primitive," }, { "end": 435.48, "start": 429.8, "text": " but it turns out it works the best if you do it this way. All right. So you have these" }, { "end": 440.84000000000003, "start": 435.48, "text": " planes and some, as I said, are just features such as the target time, month, day and hour" }, { "end": 449.66, "start": 440.84000000000003, "text": " and so on. Elevation, I guess is a map, is like an elevation map of the region you consider." }, { "end": 458.88, "start": 449.66, "text": " And this corresponds now to this, these 64 kilometers times 64 kilometers here. And that's" }, { "end": 466.04, "start": 458.88, "text": " exactly what these center crops are here. So this center crop thing, that now, this thing" }, { "end": 476.96, "start": 466.04, "text": " here, this plane, sorry, is these 64 by 64 region. That's this plane here. And also the," }, { "end": 484.96, "start": 476.96, "text": " that's the precipitation and the GOES, that's this thing here. Now we also have these down" }, { "end": 499.2, "start": 484.96, "text": " sampled things, which these are the 1024 kilometers. So this here and this here, these are the" }, { "end": 507.32, "start": 499.2, "text": " 1024 square kilometer patches, but they are down sampled. So everything is down sampled," }, { "end": 516.24, "start": 507.32, "text": " I guess, to 256 by 256 pixels. So you don't really take into account every nuance of that" }, { "end": 522.04, "start": 516.24, "text": " very big, of that very big input, but you do down sample it. So you kind of get the" }, { "end": 527.88, "start": 522.04, "text": " big picture of the outer frame and in the inner frame, you take it in a much higher" }, { "end": 534.88, "start": 527.88, "text": " resolution in order to get the details. All right. So you stack all of this up into a" }, { "end": 542.16, "start": 534.88, "text": " big tensor and then you feed it into here into a spatial down sampler, which I guess," }, { "end": 550.36, "start": 542.16, "text": " no, I have read is a, some just a convolutional neural network, right? So this is your typical" }, { "end": 557.08, "start": 550.36, "text": " image processing pipeline. So you do this for each of these stacks, right? And then" }, { "end": 566.24, "start": 557.08, "text": " what you get out of it is a lower size representation right here. 
So you get these representations" }, { "end": 571.72, "start": 566.24, "text": " and then you let a temporal encoder run over it. What does a temporal encoder do? This" }, { "end": 579.0400000000001, "start": 571.72, "text": " in particular is a convolutional LSTM. And if you already know what an LSTM is, a convolutional" }, { "end": 587, "start": 579.0400000000001, "text": " LSTM is nothing more than an LSTM that has as intermediate layers, convolutional layers." }, { "end": 593.64, "start": 587, "text": " So it's pretty suited to do, for example, videos or any sort of image processing that" }, { "end": 601.44, "start": 593.64, "text": " goes over time like this one. So the temporal encoder simply starts out here with an initial" }, { "end": 608.12, "start": 601.44, "text": " state. My pens are screwing me today. So it starts out here with an initial state and" }, { "end": 616.92, "start": 608.12, "text": " then it simply inputs each of these representations, takes them one by one, runs across time, right?" }, { "end": 625.04, "start": 616.92, "text": " And each time producing a new intermediate representation of the input until it finally" }, { "end": 634.1999999999999, "start": 625.04, "text": " reaches this here, final representation. So this thing here is a single final representation" }, { "end": 645.9599999999999, "start": 634.1999999999999, "text": " of all of this input, right? Of this entire time span of all of these stacks here. Yeah," }, { "end": 652.5600000000001, "start": 645.96, "text": " so you can press this into a single input with first a convolutional network to downsample" }, { "end": 659.96, "start": 652.5600000000001, "text": " each time point individually and then with a recurrent neural network, an LSTM, to integrate" }, { "end": 666.2800000000001, "start": 659.96, "text": " the information over time. You end up with this single piece here. And then what you" }, { "end": 673.6, "start": 666.2800000000001, "text": " do, so you still, here you still retain kind of an image sort of thing. So this representation" }, { "end": 682, "start": 673.6, "text": " here, you can see it in the background. Maybe I'll get down my scribbles here. This here" }, { "end": 688.12, "start": 682, "text": " is still sort of an image tensor, though I guess it's a hidden representation, so you" }, { "end": 698.4, "start": 688.12, "text": " couldn't really look at it. But it still has dimensions of images. So this here is still," }, { "end": 705.76, "start": 698.4, "text": " I think, the same or corresponding to these dimensions here. So this still has some spatial" }, { "end": 713.76, "start": 705.76, "text": " information where this might be north-south here in this axis, might be east and west," }, { "end": 720.88, "start": 713.76, "text": " right? And then these are just the hidden channels, the channels of the hidden representations," }, { "end": 733.72, "start": 720.88, "text": " right? So what you would like to do now is to basically encode information from the space" }, { "end": 742.56, "start": 733.72, "text": " around you. If you look at, let's look at one of these, one of the big pictures. What" }, { "end": 750, "start": 742.56, "text": " you would like to do in weather prediction, let's say you are right here. What's a good" }, { "end": 756.2, "start": 750, "text": " example, you are right here, right? 
Now if you want to know if this particular cloud" }, { "end": 765.4, "start": 756.2, "text": " over here is going to move to your direction, what you want to know is, for example, is" }, { "end": 770.32, "start": 765.4, "text": " there a mountain range here, right? Because then it's more probable that this cloud is" }, { "end": 778.62, "start": 770.32, "text": " going maybe to move up there. You would also want to know how this cloud here moves, right?" }, { "end": 786.76, "start": 778.62, "text": " If this cloud here moves somewhere here around, then it's probably this cloud down here might" }, { "end": 793.28, "start": 786.76, "text": " be pulled with it or something like this. So you're very much, sorry, you're there. You're" }, { "end": 802.36, "start": 793.28, "text": " very much kind of want to look out into each of the directions here and you want to incorporate" }, { "end": 809.16, "start": 802.36, "text": " kind of what's happening across the space. We're already used to kind of convolutional" }, { "end": 817.44, "start": 809.16, "text": " networks being able to do this, but in here the authors use attention to do that. So if" }, { "end": 823, "start": 817.44, "text": " you don't know what attention is, my most popular video is about attention and you can" }, { "end": 832.12, "start": 823, "text": " do attention for images. So the way that works is that you have a series of images of stacked" }, { "end": 838.32, "start": 832.12, "text": " blocks of a neural network. Let me draw this here. So you have an image here and let's" }, { "end": 845.16, "start": 838.32, "text": " say it has just four pixels, right? So you have the next layer of these four pixels," }, { "end": 851.32, "start": 845.16, "text": " right? So you have layers of this. So the next layers of the four pixels, they all emit" }, { "end": 860.38, "start": 851.32, "text": " what are called queries and queries are just vectors. So each pixel emits a single vector." }, { "end": 868, "start": 860.38, "text": " Let's say this, that, that, this, right? And each of the lower layers emits what is called" }, { "end": 877.52, "start": 868, "text": " a key. This, this, this, this. And now the keys and the queries are routed together based" }, { "end": 881.14, "start": 877.52, "text": " on their inner product. So these two would be routed together. This would probably be" }, { "end": 888.16, "start": 881.14, "text": " routed here. This as well. This would probably routed here. So what in effect each of the" }, { "end": 896.76, "start": 888.16, "text": " pixels of the higher layer can look at specific pixels of the lower layer. Now you can imagine" }, { "end": 904.68, "start": 896.76, "text": " this is exactly what we want here in that if there is a mountain range here and we might" }, { "end": 911.64, "start": 904.68, "text": " be interested in that. So we'd be able from our, from our point here to specifically attend" }, { "end": 920.84, "start": 911.64, "text": " to that location using, using attention, right? So the authors here build basically a stacked" }, { "end": 927.48, "start": 920.84, "text": " model of attention layers. And that's what's happening in the third part here. And this" }, { "end": 935.76, "start": 927.48, "text": " is the attention is in order to incorporate long range dependencies. As I made the example" }, { "end": 940.64, "start": 935.76, "text": " with the mountain range, this might be far away, but it might actually influence your" }, { "end": 946.88, "start": 940.64, "text": " weather very much. 
So the attention is to incorporate these long range dependencies." }, { "end": 955.92, "start": 946.88, "text": " But the problem with attention is, is as you saw in the example, each of these pixels can" }, { "end": 963.3199999999999, "start": 955.92, "text": " attend to each of the pixels in the lower layer. So what you'd end up with, so each" }, { "end": 968.58, "start": 963.3199999999999, "text": " can attend to that. This can attend to each. This can attend to each. You'll see you'll" }, { "end": 973.88, "start": 968.58, "text": " end up with 16 connections. Can't even draw them. So you end up with 16 connections. In" }, { "end": 983.76, "start": 973.88, "text": " general, if you have D here, you will end up with a D squared number of things you need" }, { "end": 990.5200000000001, "start": 983.76, "text": " to calculate, right? So if this here, and now of course we have images. So generally" }, { "end": 1001.04, "start": 990.52, "text": " we'll think of D by D pixels. Now we have D by D pixels and that thing squared number" }, { "end": 1008.72, "start": 1001.04, "text": " of things we need to calculate. This quickly gets too much. So in, for example, MNIST," }, { "end": 1025.3600000000001, "start": 1008.72, "text": " you have 28 by 28 pixel images. This is 780 or 2 or something. I don't quite remember." }, { "end": 1034.8, "start": 1025.3600000000001, "text": " But you'll have to calculate this squared many connections between things. This is simply" }, { "end": 1040.2, "start": 1034.8, "text": " impossible pretty quickly, especially if you scale up the images and then have some channels" }, { "end": 1049.32, "start": 1040.2, "text": " in here as well. So attention for image processing has been a bit lagging compared to natural" }, { "end": 1056, "start": 1049.32, "text": " language processing. In natural language processing, you usually have maybe 500 tokens or something." }, { "end": 1059.8, "start": 1056, "text": " Images you have much more. So attention is much more expensive. So you can't really do" }, { "end": 1067.2, "start": 1059.8, "text": " it on current hardware. Now this paper uses something called axial attention. Axial attention" }, { "end": 1074.1599999999999, "start": 1067.2, "text": " is kind of the trick of how to make this tension happen for images. And for that I want to" }, { "end": 1080.48, "start": 1074.1599999999999, "text": " switch over to this paper. It's called Axial Attention in Multidimensional Transformers" }, { "end": 1088.56, "start": 1080.48, "text": " by some of the same authors. So Jonathan Ho and Nell Coutt Brenner, also of Google Brain" }, { "end": 1097.8, "start": 1088.56, "text": " and UC Berkeley, proposed this axial transformer. Now they originally proposed axial attention" }, { "end": 1105.6399999999999, "start": 1097.8, "text": " for autoregressive models. If you know transformers, they also started by making autoregressive" }, { "end": 1113.9199999999998, "start": 1105.6399999999999, "text": " models, so language modeling and so on. But we can decouple the axial attention from the" }, { "end": 1118.72, "start": 1113.92, "text": " autoregressivity of these models. So I'm not going to talk about autoregressive models," }, { "end": 1126.6000000000001, "start": 1118.72, "text": " it's just axial attention. So what is axial attention? It's pretty simple actually. And" }, { "end": 1132.64, "start": 1126.6000000000001, "text": " I want to start by talking about convolutions. So what does a convolution do? 
Let's just" }, { "end": 1142.4, "start": 1132.64, "text": " take a one-dimensional image, which is pretty boring, but let's say it has these eight pixels" }, { "end": 1149.88, "start": 1142.4, "text": " here. So this is an image, it just has one row of eight pixels. What do I do when I run" }, { "end": 1157.0800000000002, "start": 1149.88, "text": " a convolutional filter across that? This is the lower layer, and now this is the next" }, { "end": 1165.24, "start": 1157.0800000000002, "text": " layer that is produced by a convolution. So for each of the pixels in the next layer," }, { "end": 1173.16, "start": 1165.24, "text": " what I can do with the convolutional layer, I can look at its neighbors in the lower layer." }, { "end": 1180.04, "start": 1173.16, "text": " So these three would be part of that. And then I go on to this, and again I look at" }, { "end": 1187.64, "start": 1180.04, "text": " its neighbors at these three. I might have done this in a different color. And then I" }, { "end": 1197.5600000000002, "start": 1187.64, "text": " look at this, and it can look at itself and its neighbors. So a convolution is pretty" }, { "end": 1208.6000000000001, "start": 1197.5600000000002, "text": " smart. And then of course in the next layer, that repeats. Now if you think, what's the" }, { "end": 1213.4, "start": 1208.6000000000001, "text": " difference between doing this and a fully connected layer? So if I have a fully connected" }, { "end": 1223.8400000000001, "start": 1213.4, "text": " layer, a classic neural network, a fully connected layer, then this pixel here would incorporate" }, { "end": 1233.4, "start": 1223.8400000000001, "text": " information from all of the pixels here. And this pixel here would incorporate information" }, { "end": 1240.24, "start": 1233.4, "text": " from all the pixels. Now why might this be better? Because the information that I want" }, { "end": 1248.6, "start": 1240.24, "text": " here for this pixel might depend on this pixel over here. So I might benefit from a connection" }, { "end": 1256.08, "start": 1248.6, "text": " over there, or it might benefit from this pixel here, which it can't reach. And with" }, { "end": 1262.56, "start": 1256.08, "text": " a convolutional network, I can't do that. Why are then convolutional networks preferable?" }, { "end": 1269.4, "start": 1262.56, "text": " Because the convolutional network can do the same thing across multiple layers. So let's" }, { "end": 1279.3200000000002, "start": 1269.4, "text": " assume again that this pixel here needs information from this pixel right here. And as you can" }, { "end": 1291.3200000000002, "start": 1279.3200000000002, "text": " see in just one layer, it can only get information from those, right? But now take the next layer," }, { "end": 1300.6399999999999, "start": 1291.32, "text": " so the same pixel here, it can attend to these three, right? Now these three can each in" }, { "end": 1309.04, "start": 1300.6399999999999, "text": " turn attend to their neighbors, right? And I'm not going to draw everything, but the" }, { "end": 1317.08, "start": 1309.04, "text": " resolution field for this pixel here will end up being all of this, right? Now we still" }, { "end": 1327.8, "start": 1317.08, "text": " don't have our desired pixel in here, but if we just go one layer more, then this pixel" }, { "end": 1336.72, "start": 1327.8, "text": " right here, a different color, this pixel right here, right? 
The resolution field across" }, { "end": 1347.04, "start": 1336.72, "text": " the layers increases, because it's always incorporating information from downstream," }, { "end": 1351.96, "start": 1347.04, "text": " and the downstream again incorporates information from the downstream, so eventually you can" }, { "end": 1357.24, "start": 1351.96, "text": " aggregate the same information. So instead of having a single layer with all of these" }, { "end": 1362.64, "start": 1357.24, "text": " connections, we have convolutional layers, which seem like a worse idea, because they" }, { "end": 1370.0600000000002, "start": 1362.64, "text": " can only do less things, attend to less things, but across the layers they actually can do" }, { "end": 1378, "start": 1370.0600000000002, "text": " the same thing, right? And that turns out to be a huge advantage of these convolutional" }, { "end": 1383.1200000000001, "start": 1378, "text": " layers, and that's why convolutional layers are used for image processing, and not the" }, { "end": 1391.18, "start": 1383.1200000000001, "text": " multi-layer perceptrons. So the same exact thing happens with axial attention, just in" }, { "end": 1398.04, "start": 1391.18, "text": " a different form. It is a bit poorly drawn here, I believe, but this is how you have" }, { "end": 1412, "start": 1398.04, "text": " to imagine it. As before, this pixel, the red pixel here, if I just have a normal transformer" }, { "end": 1420.2, "start": 1412, "text": " layer, the red pixel can attend to all of the other pixels in the image, right? That's" }, { "end": 1425.76, "start": 1420.2, "text": " the, that's basically, and each of the pixels can do that, so that's your d squared computation" }, { "end": 1433, "start": 1425.76, "text": " right here. Now, what we want to do is, in a convolutional layer, what we would do is," }, { "end": 1437.88, "start": 1433, "text": " okay, you can only attend to your neighbors, and then in the next layer the neighbors can" }, { "end": 1443.8400000000001, "start": 1437.88, "text": " attend to their neighbors, and thereby you go out and out. In axial attention, you say," }, { "end": 1455.1999999999998, "start": 1443.84, "text": " okay, this thing can only attend to its row and its column, right? That's it. You can" }, { "end": 1460.9199999999998, "start": 1455.1999999999998, "text": " only do attention to your row and your column, and I believe they don't even do it at the" }, { "end": 1466.1599999999999, "start": 1460.9199999999998, "text": " same time. So in one layer you can attend to the row you're in, and in the other you" }, { "end": 1472.8799999999999, "start": 1466.1599999999999, "text": " can attend to the column you're in. Now, let's see how the same thing happens as for a convolutional" }, { "end": 1479.96, "start": 1472.88, "text": " layer. So in the, basically, how then, if the red pixel needs access to information" }, { "end": 1486.4, "start": 1479.96, "text": " in this green pixel, how does it do that? So in the first layer it can attend to its" }, { "end": 1499.2800000000002, "start": 1486.4, "text": " row and its column, right? And so can every other pixel, including, sorry, including," }, { "end": 1509.56, "start": 1499.28, "text": " of course, the pixel where that, so let's say this square here can also attend to its" }, { "end": 1517.6, "start": 1509.56, "text": " row and its column, and its row happens to be including the green one, right? 
So in layer" }, { "end": 1531.6799999999998, "start": 1517.6, "text": " one, this red square here gets information from the green square via row attention, right?" }, { "end": 1543.3799999999999, "start": 1531.6799999999998, "text": " And then in layer two now, this, our red square of interest now can row attend to this other" }, { "end": 1555.0600000000002, "start": 1543.38, "text": " red square here, so they get connected in layer two. I'm sorry, I don't want that. So" }, { "end": 1561.1200000000001, "start": 1555.0600000000002, "text": " you see that within just two layers we've transferred information from the green square" }, { "end": 1569.2600000000002, "start": 1561.1200000000001, "text": " via this red square to that red square. So we can, in the same way as a convolution," }, { "end": 1578.72, "start": 1569.26, "text": " you can replace the long-range arbitrary dependencies between pixels by simply having multiple layers" }, { "end": 1587.84, "start": 1578.72, "text": " of restricted dependence. The same goes for this axial attention. So you can replace the" }, { "end": 1598.56, "start": 1587.84, "text": " arbitrary attention in layers, right? You can replace that by a two-step process where" }, { "end": 1608.08, "start": 1598.56, "text": " you first transfer information via the column and then transfer it via the row. It's a bit" }, { "end": 1617.12, "start": 1608.08, "text": " like, you know, in chess you can have a queen that can move any direction, especially diagonally," }, { "end": 1623.12, "start": 1617.12, "text": " and then if you just have a rook you kind of need to do two moves. So in the queen is" }, { "end": 1630.7199999999998, "start": 1623.12, "text": " like the full attention and the rook is the multi-layer axial attention. They can achieve" }, { "end": 1639.7199999999998, "start": 1630.7199999999998, "text": " the same thing, you just need more layers. But as a trade-off you get a super, super" }, { "end": 1647.4799999999998, "start": 1639.7199999999998, "text": " saving in requirement of memory and computation, right? So they stress that, you know, kind" }, { "end": 1653.56, "start": 1647.48, "text": " of you can represent the same distributions with the axial attention. And you know, the" }, { "end": 1660.3600000000001, "start": 1653.56, "text": " trade-off is you just have to do multiple layers of it. Right, so this is axial attention" }, { "end": 1667.28, "start": 1660.3600000000001, "text": " and they are now able to incorporate this into their model right here. So they have," }, { "end": 1674.4, "start": 1667.28, "text": " I believe, eight blocks, so four row attention, you see this right here, and four column attention" }, { "end": 1685.68, "start": 1674.4, "text": " blocks in their model. And finally they output this distribution here across their region" }, { "end": 1695.68, "start": 1685.68, "text": " of interest. Now this again is your, I believe, this 64 by 64 resolution. So you can see how" }, { "end": 1703.3200000000002, "start": 1695.68, "text": " they kind of aggregated information across the 64 using this axial attention. And then" }, { "end": 1712.28, "start": 1703.32, "text": " that makes their prediction in this one hour. So this is this. Alright, so this was a long" }, { "end": 1719.6399999999999, "start": 1712.28, "text": " way. So recap, they have 15-minute snapshots of this input data across along with some" }, { "end": 1726.8, "start": 1719.6399999999999, "text": " features. 
They use a spatial down sampler, which is a CNN, on each of them individually." }, { "end": 1735.24, "start": 1726.8, "text": " Then they use a convolutional LSTM to encode this across time to end up with a single representation" }, { "end": 1743.8, "start": 1735.24, "text": " here at the end. Then they use axial attention in order to aggregate information across the" }, { "end": 1750.08, "start": 1743.8, "text": " spatial dimensions. They do this in multiple stages and at the end they make a participation" }, { "end": 1760.24, "start": 1750.08, "text": " prediction, which is a distribution, as you can see here. So as an output you directly" }, { "end": 1766.6, "start": 1760.24, "text": " get a distribution of results, which is also cool because the physical simulation, you" }, { "end": 1772.04, "start": 1766.6, "text": " have to let it run many, many times in order to get a distribution of results. And this" }, { "end": 1780.04, "start": 1772.04, "text": " neural network can simply give you a distribution right away. That's what they say right here." }, { "end": 1787.92, "start": 1780.04, "text": " So they go a bit into the architecture compared to baseline. I want to get back to what I" }, { "end": 1792.8799999999999, "start": 1787.92, "text": " showed you at the beginning. This here is just the picture, kind of the picture book" }, { "end": 1798.6, "start": 1792.8799999999999, "text": " example. So left is the ground truth, in the middle is MatNet, and on the right is a baseline" }, { "end": 1810.24, "start": 1798.6, "text": " method. This here is in, as you can see, in two hours, in four, six and eight. So you" }, { "end": 1815.04, "start": 1810.24, "text": " can see the MatNet gives you as an output this distribution. What I find interesting," }, { "end": 1823.3999999999999, "start": 1815.04, "text": " for example, is this sample two right here. So in this sample one you can see there is" }, { "end": 1828.56, "start": 1823.3999999999999, "text": " a consistent difference and this is the forecast time, so how much in advance you want to get" }, { "end": 1834.6399999999999, "start": 1828.56, "text": " it? No, this would be a one hour, but it can go up to eight hours. Here is a consistent" }, { "end": 1844.32, "start": 1834.6399999999999, "text": " gap in F1, which means the MatNet does it better across this span of time, which is" }, { "end": 1851.28, "start": 1844.32, "text": " for the top sample right here. For the bottom sample though, you can see here, there is" }, { "end": 1857.6799999999998, "start": 1851.28, "text": " a big gap at the beginning, again, there is a big gap at the beginning, and then this" }, { "end": 1865.1200000000001, "start": 1857.68, "text": " gap gets smaller and smaller and smaller. And this, I think, might give you an indication" }, { "end": 1870.6000000000001, "start": 1865.1200000000001, "text": " of, let's say, the weakness of this approach, doing it with neural networks. So with neural" }, { "end": 1878.88, "start": 1870.6000000000001, "text": " networks you kind of rely on regularities, you kind of rely on broad scale, correct things" }, { "end": 1885.8400000000001, "start": 1878.88, "text": " that you can learn from the data, and this might work well as long as things are regular," }, { "end": 1892.28, "start": 1885.84, "text": " which of course across shorter time spans things tend to be more regular, right? 
But" }, { "end": 1898.36, "start": 1892.28, "text": " if you go for longer time spans, I believe there is more of a chaos element to it, like" }, { "end": 1906.12, "start": 1898.36, "text": " weather can be very dependent on very subtle things, and the physics simulation that is" }, { "end": 1912, "start": 1906.12, "text": " really taking into account the actual physics might be able to much, much better account" }, { "end": 1920.56, "start": 1912, "text": " for that. And that's why I believe across time here you'll see that the two models get" }, { "end": 1927.96, "start": 1920.56, "text": " closer together. That being said, MetNet of course is still on top here. But it will be" }, { "end": 1939.04, "start": 1927.96, "text": " interesting to forecast for longer even, though I haven't actually dig through their results," }, { "end": 1945.76, "start": 1939.04, "text": " through their numerical results, but you can do that if you want. Alright, so this was" }, { "end": 1969.68, "start": 1945.76, "text": " it for MetNet and axial attention. I hope you liked this, and bye bye." } ]
wAgO2WZzjn4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Rant] coronavirus
[ "Science & Technology" ]
[ "corona", "covid", "covid19", "lockdown", "social distancing" ]
A rant about toilet paper and lockdowns. Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
This video is going to be a rant. There is not really a script and I have not really thought this through. But I would like to talk about some things that are on my mind and that I don't see discussed very often with respect to coronavirus. I'm not a medical expert. I don't play a doctor on the internet. And there is absolutely no need to follow any of my advice or take anything I say as advice. I just want to talk, and maybe someone else will have a good idea based on what I say. So it is a crazy world we live in. I would have never thought at the beginning of this year that this would be the year where everyone stays at their house and works from home. I had always thought that in a time like this, when the economy is going down, the thing of value would be something like Bitcoin or Ethereum or alternative things. But no, everything is going down, and the actual new currency of choice is toilet paper. Everyone is going to grab the toilet paper. What a crazy world, where the most trusted news source is someone like Tucker Carlson. Yeah, didn't see that one coming. Thanks, Tucker, for saving us. So I don't know what to make of this. And I do know that this is a serious situation, and you should definitely do everything you can to take care of yourself and to take care of your community. What I want to talk about is the question of what this is going to do long term. When we think about this, we often think about right now. We have an exponential increase in the number of cases. You've probably seen this, and you've probably seen graphics like these where the goal is to flatten the curve. The sense behind this is that if this rises exponentially, of course, at some point it will affect the entire population, so it's going to flatten out. And if you look at the number of new cases daily, it might be some curve like this. The problem is that we only have a finite health care capacity. So all of these people are basically going to be screwed once we get to this point. Now, the goal of flattening the curve is that we take some measures to keep this curve under or at the capacity of our health care system. These measures vary wildly, and it is these measures that I want to talk about a bit. They range from something like social distancing, where you basically say, all right, no big events, no large groups of people, and just kind of avoid contact with other people. Now, of course, all the CS departments of the world go like, well, this is business as usual. Like, yay, we've practiced for this our entire lives. So it is mildly inconvenient, but we can keep it up. And the measures go all the way to lockdown. Lockdown also comes in various forms, but in the most drastic sense it is: stay home or you'll get shot or locked up or something like this. And it is this range of measures that matters. Of course, the more drastic the measures you take, the more you're theoretically going to flatten this out, and the less you do, the higher your peak is going to be. But it's not that easy, I find. If you look at the cases here, of course, they're rising exponentially. But if you look at where the outbreak started in China, the orange curve here, you actually see the number of cases flattening out. Now, you see it flattening out at something like 100K. And last I checked, China has more people than 100K. So that means not everyone is infected. Now, with a disease that infects this easily and spreads this easily from person to person, as appears to be the case, there are two possibilities.
Either the rest of China is asymptomatic, and China is over a billion people while this is 100K, so that would mean basically almost all of China is asymptomatic, whereas the latest numbers I hear are that maybe 50 percent of cases are asymptomatic. Or the other possibility is that most of China has yet to be infected. Now, with a virus like this, if you look at the distribution, it has basically arrived everywhere in the world. So there is very, very little hope of snuffing this thing out, of actually making it stop, because what you'd have to do is lock every single person down for two to three weeks, and then a single person that doesn't keep to that can start a new outbreak. So what I fully expect to happen, if these numbers are correct and if China actually has done this successfully, so flattened this curve successfully, is the following. Let's say the green thing here is China. They get to a point where they feel they have no new cases for a while, so they lift the restrictions, right, they remove the restrictions. There's going to be some person somewhere in some CS department that now goes outside and meets another person. And in that particular person, the virus happens to have an incubation period of 21 days instead of 14. And they're going to transmit that to two, three, four, five people. After these measures, everyone's going to be longing for social contacts and large groups. And we might gradually loosen the restrictions, but still a new outbreak seems inevitable. So what you'll have again is a spike. And then a country might enact measures again, and so on. But I believe the world we're going to live in, if we really lock down people, if we really enforce these measures, is a world of multiple repeated seasonal peaks of this disease. And that means we are in it for the long term. I don't ever want to say that we shouldn't do that, because it of course effectively reduces the number of deaths, which should be our ultimate goal here. But just know that flattening the curve once, like these graphics here, is a bit misleading, I believe. We need to be thinking about a long-term plan here. And since we're going long term, and with long term I mean months, I mean multiple years, the problem here is the people. And I want to elaborate on that. So the largest problem is the people. People aren't just machines that you can command around. People are individuals. They have their own ideas. They have their own goals that they want to fulfill, right? At some point, you want to go on a vacation. This is an island with a tree. So let's talk about lockdown. Lockdown appears to be a thing that is necessary in some places, if you ask some people. Again, I don't want to give advice on this. I just want to give some thoughts. So what do you get with lockdown? With lockdown, you get OMG, it's happening, and so on. That's day one. Day three, you get funny YouTube videos. Everyone that is in lockdown will be like, oh, I'm stuck at home, it's so boring, already forgetting that other people have major issues with being locked down. A lot of people sitting on top of each other is going to create a lot of problems. And eventually, more and more people are going to long for this to end. And I'm not saying that the response to a virus should be fun. But what I'm saying is that people are going to break this. It is inevitable. First some are going to break, then more. You have a very delicate balance going on here. Right now, there is a lot of support.
A lot of people are on the side of locking things down. A lot of people are conscientious, staying home, avoiding social contact as much as possible. But some are going to be the first ones to go over there. Some are going to break. Some are going to find excuses not to keep to it. And the problem is, the harder the measures are, the harder you push down here, the stronger the pull is going to be for people to go over to this other side. And I guarantee you, the people on social media that are shaming others the most, that are yelling out the loudest for others to not break the lockdown, either have an extremely comfortable life at their own homes, which is an extreme privilege, or they themselves are the worst at keeping to it, finding every excuse they can for why they are exempt from it. And people are going to see this. More and more people are going to be over here. And with more people over here, look, they have the sunshine, they are out and about, they are doing their things more like normal. The people over here, they are going to see this. And more and more people will think, hey, why am I keeping to this? Why am I not over there? Why can these people do that? And they will go. And at some point, the scale is going to tip, and any lockdown, barring martial law and the threat of being shot if you go outside, will be ineffective. And at that point, wherever you are, the cases are going to spike. And it will be even worse than when you did nothing, or just as bad. So I believe that it is a very delicate balance that you have to strike here. With a total lockdown, people aren't going to take it for a long time, and you need to think about a long time here. I don't know what the answer is. I don't know where exactly on the scale from just keeping apart, to staying home, to whatever it takes, the right point is. I just think that too harsh measures can also be counterproductive. I'm very fortunate to live in Switzerland. Most of our neighbors have instituted total lockdowns, and the Swiss government has recently decided not to do so at this time, with, I believe, much of the same reasoning as I'm laying out here. We need to think about this long term, and people are not going to keep to a lockdown long term, and it will be worse if they don't. Now, I believe the best response to something like this is a distributed one. I believe the best response is to go to people in their networks. People usually care about the people around them, enough so that they will take responsibility into their own hands. I believe you should give the people the responsibility, as much responsibility as you can, and I believe the network of people, each one arranging themselves in the most pro-social way, can be the best response, better than anything a government could do. Governments can do things such as prohibit large gatherings. Sometimes, if you don't do that, even the individual people can't do anything about it. But to actually believe in your citizens, and believe in the fundamental goodness of humans, and the fundamental care for other humans, is a strong suit here. On the other hand, you see other governments. I have read that a city in Norway is thinking about employing a monitoring system where they track everyone's phone, and if more than a certain number of people are in the same place, they will basically send everyone a text message saying you should disperse. While this may be an effective measure, and I believe it can definitely help, it is something that you need to be very careful about.
As we saw with 9/11, as soon as governments get power, they rarely let it go, as Edward Snowden demonstrated. If you enact something like this, you must definitely make sure that there is a time limit on it. Any government measure right now, be it spending to help the economy, which is certainly a good thing, be it measures to increase social distancing or to prohibit public gatherings: support it, but it must be time limited. Otherwise, governments aren't going to let this go. Finally, I would like to come to a more global scale of long-term thinking: countries versus other countries. As you go on, you need to think about your economy. Our economies were growing at a fairly good pace until this hit, and now they're plunging. At every point, there are going to be opportunists. There are going to be personal opportunists, hoarding toilet paper and hand sanitizer and trying to sell them for marked-up prices. And there are going to be country opportunists. When everything's falling down, if you're the country that locks things down now, your economy is going to fall. Eventually, though, you'll have to get back. Countries that get back sooner will be in an upswing sooner. Basically, the question is, where is the ideal point to stop reacting, to let people do their thing, to get back on track? I don't know where that is, but I believe you're going to see a Cold War-like situation in the world, where countries are going to accuse other countries of not doing enough or doing too much, of not playing fairly, of helping to spread the virus. And I believe that will be the case for years to come. Because what happens over the long term? Of course, right now you can afford to not fix that broken pipe under your house. You can afford to not get the person to clean the chimney. You can afford to not get dental work done. I don't even know how to draw a tooth. Let's say this is a tooth. It probably has some peaks here. Over the long term, though, all of these things are going to break. And we need to get back to normal. And the longer a state keeps up these measures, the worse it's going to get. Finally, we need to talk about people at risk. People at risk tend to be older, tend to be ones with health issues. Think about this. If you're an old person with health issues, you're looking at the long term. Once you realize this is not going to be over in a few weeks, what do you do? You're old, and the next year or so in lockdown mode is going to be hard for you, and for everyone. But a year, if you're that old and sick, probably represents more of the quality life you have left than the time after it. So you need to be thinking: either I'm going to survive this because I bunker in my house and don't get the virus, but what is that worth, because my other diseases will get me afterwards. Or I could be spending the quality time I have with my family, with my children, with my grandchildren. I could be spending it with my friends. And if I die, I die. It is not an easy question, but I'm absolutely sure there are people right now who are asking themselves this. If you're a government and you're thinking about mandatory lockdowns, I do see that this is in order to save people, in order to not have people walking around spreading the virus to vulnerable populations. But you need to be thinking about the people you're trying to help. Some of them would actually be on this side. I don't know what the best response is to everything here.
I think we're just going to have to see, and I don't want to give advice. These are just some of the things I think. I wish everyone the absolute healthiest season they can have right now. Take care. Please think about others. Please do not make the problem worse yourself. You're part of a network, and you can be a powerful force for good during this time. Think about the long term. If you're asking your government to do things, think about what the best situation is and how we are going to get there. Thanks, and stay healthy.
[ { "end": 7, "start": 0, "text": " This video is going to be a rant. There is not really a script and I have not really thought this through." }, { "end": 18, "start": 7, "text": " But I would like to talk about some things that are on my mind and that I don't see discussed very often with respect to coronavirus." }, { "end": 22, "start": 18, "text": " I'm not a medical expert. I don't play a doctor on the internet." }, { "end": 30, "start": 22, "text": " And there absolutely is no need to follow any of my advice or take anything as advice that I say." }, { "end": 37, "start": 30, "text": " I just want to talk and maybe someone else will have a good idea of what I talk." }, { "end": 41, "start": 37, "text": " So it is a crazy world we live in." }, { "end": 50, "start": 41, "text": " I would have never thought at the beginning of this year that this would be the year where everyone stays at their house and works from home." }, { "end": 65, "start": 50, "text": " I've always thought that in a time like this, when the economy is going down, basically the thing of value would be something like Bitcoin or Ethereum or alternative things." }, { "end": 71, "start": 65, "text": " But no, everything is going down and the actual new currency of choice is toilet paper." }, { "end": 74, "start": 71, "text": " Everyone is going to grab the toilet paper." }, { "end": 84, "start": 74, "text": " What a crazy world where the most trusted news source is someone like Tucker Carlson." }, { "end": 88, "start": 84, "text": " Yeah, didn't see that one coming." }, { "end": 91, "start": 88, "text": " Thanks, Tucker, for saving us." }, { "end": 95, "start": 91, "text": " So I don't know what to make of this." }, { "end": 99, "start": 95, "text": " And I do know that this is a serious situation." }, { "end": 108, "start": 99, "text": " And you should definitely do everything you can to take care of yourself and to take care of your community." }, { "end": 116, "start": 108, "text": " What I want to talk about is the question of what is it going to do long term?" }, { "end": 122, "start": 116, "text": " So if we think about this, we often think about this right now." }, { "end": 125, "start": 122, "text": " We have an exponential increase in number of cases." }, { "end": 134, "start": 125, "text": " You've probably seen this and you've probably seen graphics like these where the goal is to flatten the curve." }, { "end": 145, "start": 134, "text": " The sense behind this being that if this rises exponentially, of course, at some point it will affect the entire population." }, { "end": 147, "start": 145, "text": " So it's going to flatten out." }, { "end": 152, "start": 147, "text": " And if you look at the number of new cases daily, it might be some curve like this." }, { "end": 157, "start": 152, "text": " The problem is that we only have a finite capacity of health care systems." }, { "end": 162, "start": 157, "text": " So all of these people are basically going to be screwed once we get to this point." }, { "end": 172, "start": 162, "text": " Now, the goal is to flatten the curve, that we can take some measures to keep this curve under or at the capacity of our health care system." }, { "end": 175, "start": 172, "text": " These measures are varying wildly." }, { "end": 179, "start": 175, "text": " So it is these measures that I want to talk about a bit." 
}, { "end": 196, "start": 179, "text": " Now, these measures range from something like social distancing, where you basically say, all right, no big events, no groups of large people, social distancing." }, { "end": 202, "start": 196, "text": " And just kind of avoid contact with other people." }, { "end": 209, "start": 202, "text": " Now, of course, all the CS departments of the world go like, well, this is business as usual." }, { "end": 214, "start": 209, "text": " Like, yay, we've practiced for this our entire lives." }, { "end": 217, "start": 214, "text": " So it is mildly inconvenient." }, { "end": 223, "start": 217, "text": " But we can keep it up all the way to lockdown." }, { "end": 226, "start": 223, "text": " Lockdown comes also in various forms." }, { "end": 233, "start": 226, "text": " But the most drastic sense is stay home or you'll get shot or locked up or something like this." }, { "end": 235, "start": 233, "text": " And it is this discrepancy." }, { "end": 243, "start": 235, "text": " Of course, the more down on the curve you go, the more you're going to theoretically flatten this out." }, { "end": 248, "start": 243, "text": " The more the less you do, the higher your peak is going to be." }, { "end": 252, "start": 248, "text": " But it's not that easy, I find." }, { "end": 257, "start": 252, "text": " If you look at the cases here, of course, they're exponentially rising." }, { "end": 267, "start": 257, "text": " But if you look at where the outbreak started in China, the orange curve here, you actually see the number of cases flattening out." }, { "end": 271, "start": 267, "text": " Now, you see it flattening out at something like 100 K." }, { "end": 276, "start": 271, "text": " And last I know China has more people than 100 K." }, { "end": 279, "start": 276, "text": " So that means not everyone's infected." }, { "end": 288, "start": 279, "text": " Now, with a disease that infects this easily and spreads this easily from person to person, as it appears to be the case, there are two possibilities." }, { "end": 296, "start": 288, "text": " Either the rest of China, which China is over a billion people, and this is 100 K." }, { "end": 308, "start": 296, "text": " So the entire rest of China, basically almost all of China is asymptomatic, which the latest numbers I hear are that maybe 50 percent of cases are asymptomatic." }, { "end": 316, "start": 308, "text": " Or the other possibility is that most of China has yet to be infected." }, { "end": 322, "start": 316, "text": " Now, with a virus like this, if you look at the distribution, it's basically arrived everywhere in the world." }, { "end": 335, "start": 322, "text": " So there is very, very little hope of snuffing this thing out, actually making it stop, which what you'd have to do is you'd have to lock every single person down for two to three weeks." }, { "end": 341, "start": 335, "text": " And now only a single person that doesn't keep to that can start a new outbreak." }, { "end": 356, "start": 341, "text": " So what I fully expect to happen if these numbers are correct and if China actually has done this successfully, so flattened this curve successfully, is that let's say the green thing here is China, is that, okay," }, { "end": 365, "start": 356, "text": " they get to a point where they feel they have no new cases for a while, so they let the restriction up, right, they remove the restriction." 
}, { "end": 374, "start": 365, "text": " There's going to be some person somewhere in some CS department that now goes outside and meets another person." }, { "end": 382, "start": 374, "text": " And in that particular person here, the virus happens to have an incubation period of 21 days instead of 14." }, { "end": 388, "start": 382, "text": " And they're going to transmit that to two, three, four, five people." }, { "end": 393, "start": 388, "text": " After these measures, everyone's going to be longing for social contacts and large groups." }, { "end": 400, "start": 393, "text": " And we might gradually loosen the restrictions, but still a new outbreak is inevitable, it seems." }, { "end": 403, "start": 400, "text": " So what you'll have again is a spike." }, { "end": 407, "start": 403, "text": " And then a country might enact measures again and so on." }, { "end": 417, "start": 407, "text": " But I believe the world we're going to live in, if we really lock down people, if we really enforce these measures," }, { "end": 423, "start": 417, "text": " is a world of multiple repeated seasonal peaks of this disease." }, { "end": 427, "start": 423, "text": " And that means we are in for the long term." }, { "end": 439, "start": 427, "text": " I don't want to say ever that we shouldn't do that, because it of course effectively reduces the number of deaths, which should be our ultimate goal here." }, { "end": 449, "start": 439, "text": " But just know that flattening the curve once, like these graphics here, is a bit misleading, I believe." }, { "end": 452, "start": 449, "text": " We need to be thinking about a long term plan here." }, { "end": 459, "start": 452, "text": " And since we're going long term, and with long term, I mean months, I mean multiple years," }, { "end": 464, "start": 459, "text": " with long term, the problem here is the people." }, { "end": 467, "start": 464, "text": " And I want to elaborate on that." }, { "end": 471, "start": 467, "text": " So the largest problem are the people." }, { "end": 475, "start": 471, "text": " People aren't just machines that you can command around." }, { "end": 478, "start": 475, "text": " People are individuals. They have their own ideas." }, { "end": 482, "start": 478, "text": " They have their own goals that they want to fulfill, right?" }, { "end": 485, "start": 482, "text": " At some point, you want to go on a vacation." }, { "end": 490, "start": 485, "text": " This is an island with a tree." }, { "end": 492, "start": 490, "text": " So let's talk about lockdown." }, { "end": 501, "start": 492, "text": " Lockdown, it appears to be a thing that is necessary in some parts if you ask some people." }, { "end": 503, "start": 501, "text": " Again, I don't want to give advice on this." }, { "end": 507, "start": 503, "text": " I just want to give some thoughts." }, { "end": 509, "start": 507, "text": " So what do you get with lockdown?" }, { "end": 514, "start": 509, "text": " With lockdown, you get OMG, it's happening, and so on." }, { "end": 516, "start": 514, "text": " That's day one." }, { "end": 520, "start": 516, "text": " Day three, you get funny YouTube videos." }, { "end": 527, "start": 520, "text": " Everyone that is in lockdown will be like, oh, I'm stuck at home." }, { "end": 529, "start": 527, "text": " It's so boring." }, { "end": 534, "start": 529, "text": " Already forgetting that other people have major issues with being locked down." 
}, { "end": 540, "start": 534, "text": " A lot of people sitting on top of each other is going to create a lot of problems." }, { "end": 546, "start": 540, "text": " And eventually, more and more people are going to long for this to end." }, { "end": 553, "start": 546, "text": " And I'm not saying that, you know, that response to a virus should be fun." }, { "end": 557, "start": 553, "text": " But what I'm saying is that people are going to break this." }, { "end": 558, "start": 557, "text": " It is inevitable." }, { "end": 560, "start": 558, "text": " First some are going to break, then more." }, { "end": 564, "start": 560, "text": " You have a very delicate balance going on here." }, { "end": 566, "start": 564, "text": " Right now, there is a lot of support." }, { "end": 571, "start": 566, "text": " A lot of people are on the side of locking things down in a lockdown." }, { "end": 577, "start": 571, "text": " A lot of people are conscientious, staying home, avoiding social contact as much as possible." }, { "end": 581, "start": 577, "text": " But some are going to be the first ones to go over there." }, { "end": 583, "start": 581, "text": " Some are going to break." }, { "end": 588, "start": 583, "text": " Some are going to find excuses not to keep to it." }, { "end": 593, "start": 588, "text": " And the problem is, the harder the measures are, the harder you are down here." }, { "end": 597, "start": 593, "text": " The stronger the pull is going to be for people to go on this other side." }, { "end": 603, "start": 597, "text": " And I guarantee you, the people on social media that are shaming others the most," }, { "end": 609, "start": 603, "text": " that are yelling out the loudest for others to not break the lockdown," }, { "end": 613, "start": 609, "text": " either they have an extremely comfortable living at their own homes," }, { "end": 615, "start": 613, "text": " which is an extreme privilege," }, { "end": 620, "start": 615, "text": " or they are the worst ones to break it themselves," }, { "end": 624, "start": 620, "text": " to find every excuse they can, why they are exempt from it." }, { "end": 626, "start": 624, "text": " And people are going to see this." }, { "end": 630, "start": 626, "text": " More and more people are going to be over here, and with more people over here." }, { "end": 632, "start": 630, "text": " Look, they have the sunshine." }, { "end": 634, "start": 632, "text": " They are out and about." }, { "end": 637, "start": 634, "text": " They are doing their things more like normal." }, { "end": 641, "start": 637, "text": " The people over here, they are going to see this." }, { "end": 646, "start": 641, "text": " And more and more people will be, hey, why am I keeping to this?" }, { "end": 648, "start": 646, "text": " Why am I not over there?" }, { "end": 650, "start": 648, "text": " Why can these people do that?" }, { "end": 651, "start": 650, "text": " And they will go." }, { "end": 654, "start": 651, "text": " And at some point, the scale is going to tip," }, { "end": 660, "start": 654, "text": " and any lockdown, barring martial law and the threat of being shot," }, { "end": 663, "start": 660, "text": " if you go outside, will be ineffective." }, { "end": 668, "start": 663, "text": " And at that point, wherever you are, the cases are going to spike." }, { "end": 673, "start": 668, "text": " And it will be even worse than when you did nothing, or as bad." 
}, { "end": 678, "start": 673, "text": " So I believe that it is a very delicate balance that you have to strike here." }, { "end": 682, "start": 678, "text": " Total lockdown, people aren't going to take this for a long time," }, { "end": 685, "start": 682, "text": " and you need to think about a long time here." }, { "end": 687, "start": 685, "text": " I don't know what the answer is." }, { "end": 693, "start": 687, "text": " I don't know where exactly the scale of just keep apart," }, { "end": 698, "start": 693, "text": " to stay home, whatever it takes, is." }, { "end": 704, "start": 698, "text": " I just think that two harsh measures can also be counterproductive." }, { "end": 708, "start": 704, "text": " I'm very fortunate to live in Switzerland." }, { "end": 711, "start": 708, "text": " Most of our neighbors have instituted total lockdowns," }, { "end": 716, "start": 711, "text": " and the Swiss government has recently decided not to do so at this time," }, { "end": 721, "start": 716, "text": " with, I believe, much of the same reasoning as I'm just laying out." }, { "end": 723, "start": 721, "text": " We need to think about this long term," }, { "end": 726, "start": 723, "text": " and people are not going to keep to a lockdown long term," }, { "end": 730, "start": 726, "text": " and it will be worse if they don't." }, { "end": 734, "start": 730, "text": " Now, I believe the best response to something like this is a distributed one." }, { "end": 738, "start": 734, "text": " I believe the best response is to go to people in their networks." }, { "end": 741, "start": 738, "text": " People usually care about the people around them," }, { "end": 746, "start": 741, "text": " enough so that they will take responsibility into the hand." }, { "end": 751, "start": 746, "text": " I believe you should give the people the responsibility," }, { "end": 754, "start": 751, "text": " as much responsibility as you can," }, { "end": 756, "start": 754, "text": " and I believe the network of people," }, { "end": 760, "start": 756, "text": " each one arranging themselves in the most pro-social way," }, { "end": 765, "start": 760, "text": " can be the best response, better than any government could do." }, { "end": 770, "start": 765, "text": " Governments can do things such as prohibit large gatherings." }, { "end": 773, "start": 770, "text": " Sometimes, if you don't do that," }, { "end": 777, "start": 773, "text": " even the individual people can't do anything against that." }, { "end": 781, "start": 777, "text": " But to actually believe in your citizens," }, { "end": 784, "start": 781, "text": " and believe in the fundamental goodness of humans," }, { "end": 788, "start": 784, "text": " and the fundamental care for other humans," }, { "end": 791, "start": 788, "text": " is a strong suit here." }, { "end": 795, "start": 791, "text": " On the other hand, you see other governments." }, { "end": 799, "start": 795, "text": " I have read that a city in Norway" }, { "end": 804, "start": 799, "text": " is thinking about employing a monitoring system," }, { "end": 807, "start": 804, "text": " where they track everyone's phone," }, { "end": 811, "start": 807, "text": " and if more than a certain amount of people are in the same place," }, { "end": 815, "start": 811, "text": " they will basically send everyone a text message," }, { "end": 818, "start": 815, "text": " saying you should disperse." 
}, { "end": 820, "start": 818, "text": " While this is an effective measure," }, { "end": 823, "start": 820, "text": " and I believe can definitely help," }, { "end": 827, "start": 823, "text": " and it is something that you need to be very careful about." }, { "end": 831, "start": 827, "text": " As we saw with 9-11, as soon as governments get power," }, { "end": 836, "start": 831, "text": " they rarely let it go, as Edward Snowden finally demonstrated." }, { "end": 838, "start": 836, "text": " If you enact something like this," }, { "end": 842, "start": 838, "text": " you must definitely make sure that there is a time limit on it." }, { "end": 844, "start": 842, "text": " Any government measure right now," }, { "end": 848, "start": 844, "text": " be that spending to help the economy, which is certainly a good thing," }, { "end": 852, "start": 848, "text": " be this measures to increase social distancing," }, { "end": 855, "start": 852, "text": " to prohibit public gatherings." }, { "end": 859, "start": 855, "text": " Support this, but it must be time limited." }, { "end": 863, "start": 859, "text": " Otherwise, governments aren't going to let this go." }, { "end": 867, "start": 863, "text": " Finally, I would like to come to a more global scale" }, { "end": 872, "start": 867, "text": " of long-term thinking, countries and other countries." }, { "end": 878, "start": 872, "text": " As you go on, you need to think about your economy." }, { "end": 883, "start": 878, "text": " Our economies were growing at a fairly good pace until this hit," }, { "end": 885, "start": 883, "text": " and now they're plunging." }, { "end": 887, "start": 885, "text": " At any point, they're going to be opportunists." }, { "end": 889, "start": 887, "text": " They're going to be personal opportunists," }, { "end": 891, "start": 889, "text": " hoarding toilet paper and hand sanitizer," }, { "end": 895, "start": 891, "text": " and trying to sell them for marked-up prices." }, { "end": 898, "start": 895, "text": " They're going to be country opportunists." }, { "end": 901, "start": 898, "text": " When everything's falling down," }, { "end": 905, "start": 901, "text": " if you're the country that locks things down now," }, { "end": 907, "start": 905, "text": " your economy is going to fall." }, { "end": 910, "start": 907, "text": " Eventually, though, you'll have to get back." }, { "end": 914, "start": 910, "text": " Countries that get back sooner will be in an upswing sooner." }, { "end": 919, "start": 914, "text": " Basically, the question is, where is the ideal point here?" }, { "end": 921, "start": 919, "text": " To leave the..." }, { "end": 925, "start": 921, "text": " To not react anymore, to let people do their thing," }, { "end": 927, "start": 925, "text": " to get back on track." }, { "end": 929, "start": 927, "text": " I don't know where that is," }, { "end": 935, "start": 929, "text": " but I believe you're going to see a Cold War-like situation in the world" }, { "end": 938, "start": 935, "text": " where countries are going to accuse other countries" }, { "end": 940, "start": 938, "text": " of not doing enough or doing too much," }, { "end": 944, "start": 940, "text": " of not playing fairly, of helping to spread the virus." }, { "end": 949, "start": 944, "text": " And I believe that will be the case for the years to come." }, { "end": 951, "start": 949, "text": " Because what happens over the long time?" 
}, { "end": 955, "start": 951, "text": " Of course, right now, you can afford to not fix that pipe" }, { "end": 958, "start": 955, "text": " under your house that's broken." }, { "end": 961, "start": 958, "text": " You can afford to not clean the..." }, { "end": 964, "start": 961, "text": " To not get the person to clean the chimney." }, { "end": 967, "start": 964, "text": " You can afford to not get dental work done." }, { "end": 970, "start": 967, "text": " I don't even know how to draw a tooth." }, { "end": 973, "start": 970, "text": " Let's say this is a tooth." }, { "end": 976, "start": 973, "text": " It probably has some peaks here." }, { "end": 980, "start": 976, "text": " Over the long term, though, all of these things are going to break." }, { "end": 982, "start": 980, "text": " And we need to get back to normal." }, { "end": 987, "start": 982, "text": " And the longer a state keeps up these measures," }, { "end": 991, "start": 987, "text": " the worse it's going to get." }, { "end": 996, "start": 991, "text": " Finally, we need to talk about risk people." }, { "end": 1002, "start": 996, "text": " People at risk tend to be older, tend to be ones with health issues." }, { "end": 1003, "start": 1002, "text": " Think about this." }, { "end": 1008, "start": 1003, "text": " If you're an old person having health issues," }, { "end": 1010, "start": 1008, "text": " you're looking at long term." }, { "end": 1014, "start": 1010, "text": " Once you realize this is not going to be over in a few weeks," }, { "end": 1015, "start": 1014, "text": " what do you do?" }, { "end": 1016, "start": 1015, "text": " You're old." }, { "end": 1021, "start": 1016, "text": " And the next year or so in lockdown mode" }, { "end": 1024, "start": 1021, "text": " is going to be hard for you." }, { "end": 1025, "start": 1024, "text": " And for everyone." }, { "end": 1029, "start": 1025, "text": " But a year, if you're that old and sick," }, { "end": 1036, "start": 1029, "text": " is probably more quality life you have left than after it." }, { "end": 1038, "start": 1036, "text": " So you need to be thinking either," }, { "end": 1043, "start": 1038, "text": " I'm going to survive this because I bunker in my house," }, { "end": 1045, "start": 1043, "text": " don't get the virus." }, { "end": 1046, "start": 1045, "text": " But what is it worth?" }, { "end": 1050, "start": 1046, "text": " Because my other diseases will get me afterwards." }, { "end": 1053, "start": 1050, "text": " Otherwise, I could be spending the quality time I have" }, { "end": 1056, "start": 1053, "text": " with my family, with my children, with my grandchildren." }, { "end": 1059, "start": 1056, "text": " I could be spending it with my friends." }, { "end": 1061, "start": 1059, "text": " And if I die, I die." }, { "end": 1063, "start": 1061, "text": " It is not an easy question," }, { "end": 1067, "start": 1063, "text": " but I'm absolutely sure there are people right now" }, { "end": 1069, "start": 1067, "text": " who are asking themselves this." }, { "end": 1073, "start": 1069, "text": " If you're a government and you're thinking about mandatory lockdowns," }, { "end": 1078, "start": 1073, "text": " I do see that this is in order to save people," }, { "end": 1084, "start": 1078, "text": " in order to not have people walking around that spread the virus" }, { "end": 1087, "start": 1084, "text": " to vulnerable populations." }, { "end": 1090, "start": 1087, "text": " But you need to be thinking about the people you're trying to help." 
}, { "end": 1098, "start": 1090, "text": " Some of them would actually be on this side." }, { "end": 1104, "start": 1098, "text": " I don't know what the best response is to everything here." }, { "end": 1108, "start": 1104, "text": " I think we're just going to see and I don't want to give advice." }, { "end": 1112, "start": 1108, "text": " This is just some of the things I think." }, { "end": 1119, "start": 1112, "text": " I wish everyone the absolute healthiest season they can have right now." }, { "end": 1120, "start": 1119, "text": " Take care." }, { "end": 1122, "start": 1120, "text": " Please think about others." }, { "end": 1125, "start": 1122, "text": " Please do not make the problem worse yourself." }, { "end": 1132, "start": 1125, "text": " You're part of a network and you can be a powerful force for good during this time." }, { "end": 1139, "start": 1132, "text": " Think about long-term, if you're asking your government to do things," }, { "end": 1144, "start": 1139, "text": " think about what's the best situation and how we are going to get there." }, { "end": 1166, "start": 1144, "text": " Thanks and stay healthy." } ]
H3Bhlan0mE0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Online Education - How I Make My Videos
[ "Science & Technology" ]
[ "deep learning", "machine learning", "online video", "university", "online", "create", "lecture" ]
Just a short overview of tools I use to make my videos. OneNote - https://www.onenote.com iSpring Free Cam - https://www.ispringsolutions.com/ispring-cam Shotcut - https://shotcut.org Slack - https://slack.com RocketChat - https://rocket.chat Zoom - https://zoom.us Jitsi - https://jitsi.org GDocs - https://www.google.com/docs/about Piazza - https://piazza.com CMT - https://cmt3.research.microsoft.com/About Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! So a lot of people have been asking me how I make these videos. And this is of course relevant now that everyone's work from home and all the schools are converted into online schools. All of a sudden a lot of people have to make these online education happen. And I think this style of video lends itself to online education. So I'll quickly go over the process of how to do this and maybe also how to run a university class online. Alright, so the process is pretty simple of how I make my videos. This might not work for everyone, but it works for me. I use the Microsoft OneNote in order to scribble on papers basically. So the thing is, in OneNote you have this insert thing here and you can print out a PDF onto your notebook here. So the way this looks then is you'll get the PDF in your notebook and you can scribble on it with this using this draw tab here. You can choose a pen and scribble on it. You can highlight things and so on. And I do this while I record the screen, so that's pretty much all there is to it. You can then print out again this notebook and you can distribute those annotated PDF if you want. Now I'm pretty sure this is here inserted as some sort of an image. So I don't know about the copy paste ability of the resulting product. But here you see this is a paper I actually made a video about. And that's basically all there is to it. It's OneNote. It's a free program from Microsoft. In order to do the annotating I use a last generation Microsoft Surface tablet that I got for cheap. At some point it comes with a nice pen and touch screen so you can basically zoom around and zip around while you do these things. In order to record the screen I use this iSpring FreeCam software. It might not be the best but it does work for me well and they have a cool Pro edition if you need more features. But it works really well for recording your screen. You can record parts of your screen or the full screen. You can record with sound. So I use a microphone and then I just record the sound from that with the same tool. And at the end you get a video file that you can upload to YouTube. Easy as that. If I need to do some editing, which is rarely because I am lazy, I use either iMovie from Apple which comes with an Apple operating system. So I have a MacBook that I run iMovie on. iMovie is really easy to edit movies on. I don't know if there's anything on Windows where it's that easy that comes pre-packaged. But if I need to do more complicated things I use Shotcut which is an open source editor. I believe that's available for all the platforms. You can do fairly complicated things with Shotcut if you ever need to do that. But if I just need to stitch like two or three things together I use iMovie. And that's pretty much it for making and recording videos, I believe. One note is that in order to do a class online not all people will just be able to record a video and then upload. Some of the things you need to do are actually live. A lot of people right now use Zoom for live teleconferencing. But you can also do this sort of presenter mode where you present and people can do questions. Of course you can do this via YouTube streaming as well. But then it's of course it's kind of public on YouTube or link accessible with Zoom. I believe you have more control. But of course Zoom is a proprietary solution and with the free account you can only get so far. So they limit your meetings in length if you have more than I believe three or four people. 
An alternative is Jitsi which is open source video conferencing. And the cool thing here is you can actually run your own server such that you can truly have control over everything. In order to communicate with lots of people, of course people use Slack. But again Slack is a proprietary service and an alternative to that would be Rocket Chat. Again where you can run your own server and it is fairly similar to Slack. In order to collaborate or share just general notes, of course Google's suite of docs and sheets and so on is excellent. And for classes especially, Piazza is a good place. You can sign up as a class. You can have TAs sign up as TAs. You can have your students sign up as students and then the students can ask questions and then other students or the TAs can answer those questions. Basically a bit of a forum. But you can also announce things there for your classes. It's pretty cool and it's really geared towards online classes and it's free. So I know a lot of universities are using that right now. So if you're looking for some sort of announcement or discussion board for your class, Piazza is definitely a good place to be. And lastly we sometimes have classes where students have to submit projects. And we actually use CMT for this because it's really neat where you can set deadlines and everything. Students can upload and then you can assign reviewers to those projects, which in our case are us, the TAs. And you know you can have meta reviews and so on. So CMT is actually very good. Maybe a bit of an overkill if you just run a single class. But it has lots and lots of features. And of course the big conferences also use CMT. So it's definitely stress tested. Alright, so that was it for my videos. Or at least how I make them. I just print out the PDF, sit down for half an hour and rant about it. And that's pretty much it. And then you throw it on YouTube or distribute the file however you want. And with that I hope I answered a little bit of these questions. And I all wish you a healthy rest of the Corona season. Bye.
[ { "end": 5, "start": 0, "text": " Hi there! So a lot of people have been asking me how I make these videos." }, { "end": 13, "start": 5, "text": " And this is of course relevant now that everyone's work from home and all the schools are converted into online schools." }, { "end": 19, "start": 13, "text": " All of a sudden a lot of people have to make these online education happen." }, { "end": 24, "start": 19, "text": " And I think this style of video lends itself to online education." }, { "end": 31, "start": 24, "text": " So I'll quickly go over the process of how to do this and maybe also how to run a university class online." }, { "end": 35, "start": 31, "text": " Alright, so the process is pretty simple of how I make my videos." }, { "end": 38, "start": 35, "text": " This might not work for everyone, but it works for me." }, { "end": 44, "start": 38, "text": " I use the Microsoft OneNote in order to scribble on papers basically." }, { "end": 56, "start": 44, "text": " So the thing is, in OneNote you have this insert thing here and you can print out a PDF onto your notebook here." }, { "end": 67, "start": 56, "text": " So the way this looks then is you'll get the PDF in your notebook and you can scribble on it with this using this draw tab here." }, { "end": 72, "start": 67, "text": " You can choose a pen and scribble on it. You can highlight things and so on." }, { "end": 77, "start": 72, "text": " And I do this while I record the screen, so that's pretty much all there is to it." }, { "end": 86, "start": 77, "text": " You can then print out again this notebook and you can distribute those annotated PDF if you want." }, { "end": 93, "start": 86, "text": " Now I'm pretty sure this is here inserted as some sort of an image." }, { "end": 97, "start": 93, "text": " So I don't know about the copy paste ability of the resulting product." }, { "end": 102, "start": 97, "text": " But here you see this is a paper I actually made a video about." }, { "end": 108, "start": 102, "text": " And that's basically all there is to it. It's OneNote. It's a free program from Microsoft." }, { "end": 119, "start": 108, "text": " In order to do the annotating I use a last generation Microsoft Surface tablet that I got for cheap." }, { "end": 128, "start": 119, "text": " At some point it comes with a nice pen and touch screen so you can basically zoom around and zip around while you do these things." }, { "end": 136, "start": 128, "text": " In order to record the screen I use this iSpring FreeCam software." }, { "end": 144, "start": 136, "text": " It might not be the best but it does work for me well and they have a cool Pro edition if you need more features." }, { "end": 151, "start": 144, "text": " But it works really well for recording your screen. You can record parts of your screen or the full screen." }, { "end": 159, "start": 151, "text": " You can record with sound. So I use a microphone and then I just record the sound from that with the same tool." }, { "end": 164, "start": 159, "text": " And at the end you get a video file that you can upload to YouTube. Easy as that." }, { "end": 178, "start": 164, "text": " If I need to do some editing, which is rarely because I am lazy, I use either iMovie from Apple which comes with an Apple operating system." }, { "end": 189, "start": 178, "text": " So I have a MacBook that I run iMovie on. iMovie is really easy to edit movies on." 
}, { "end": 194, "start": 189, "text": " I don't know if there's anything on Windows where it's that easy that comes pre-packaged." }, { "end": 201, "start": 194, "text": " But if I need to do more complicated things I use Shotcut which is an open source editor." }, { "end": 205, "start": 201, "text": " I believe that's available for all the platforms." }, { "end": 211, "start": 205, "text": " You can do fairly complicated things with Shotcut if you ever need to do that." }, { "end": 217, "start": 211, "text": " But if I just need to stitch like two or three things together I use iMovie." }, { "end": 226, "start": 217, "text": " And that's pretty much it for making and recording videos, I believe." }, { "end": 240, "start": 226, "text": " One note is that in order to do a class online not all people will just be able to record a video and then upload." }, { "end": 244, "start": 240, "text": " Some of the things you need to do are actually live." }, { "end": 249, "start": 244, "text": " A lot of people right now use Zoom for live teleconferencing." }, { "end": 255, "start": 249, "text": " But you can also do this sort of presenter mode where you present and people can do questions." }, { "end": 259, "start": 255, "text": " Of course you can do this via YouTube streaming as well." }, { "end": 266, "start": 259, "text": " But then it's of course it's kind of public on YouTube or link accessible with Zoom." }, { "end": 269, "start": 266, "text": " I believe you have more control." }, { "end": 276, "start": 269, "text": " But of course Zoom is a proprietary solution and with the free account you can only get so far." }, { "end": 281, "start": 276, "text": " So they limit your meetings in length if you have more than I believe three or four people." }, { "end": 287, "start": 281, "text": " An alternative is Jitsi which is open source video conferencing." }, { "end": 296, "start": 287, "text": " And the cool thing here is you can actually run your own server such that you can truly have control over everything." }, { "end": 303, "start": 296, "text": " In order to communicate with lots of people, of course people use Slack." }, { "end": 310, "start": 303, "text": " But again Slack is a proprietary service and an alternative to that would be Rocket Chat." }, { "end": 318, "start": 310, "text": " Again where you can run your own server and it is fairly similar to Slack." }, { "end": 331, "start": 318, "text": " In order to collaborate or share just general notes, of course Google's suite of docs and sheets and so on is excellent." }, { "end": 337, "start": 331, "text": " And for classes especially, Piazza is a good place." }, { "end": 342, "start": 337, "text": " You can sign up as a class. You can have TAs sign up as TAs." }, { "end": 351, "start": 342, "text": " You can have your students sign up as students and then the students can ask questions and then other students or the TAs can answer those questions." }, { "end": 356, "start": 351, "text": " Basically a bit of a forum. But you can also announce things there for your classes." }, { "end": 361, "start": 356, "text": " It's pretty cool and it's really geared towards online classes and it's free." }, { "end": 366, "start": 361, "text": " So I know a lot of universities are using that right now." }, { "end": 375, "start": 366, "text": " So if you're looking for some sort of announcement or discussion board for your class, Piazza is definitely a good place to be." 
}, { "end": 382, "start": 375, "text": " And lastly we sometimes have classes where students have to submit projects." }, { "end": 389, "start": 382, "text": " And we actually use CMT for this because it's really neat where you can set deadlines and everything." }, { "end": 396, "start": 389, "text": " Students can upload and then you can assign reviewers to those projects, which in our case are us, the TAs." }, { "end": 400, "start": 396, "text": " And you know you can have meta reviews and so on." }, { "end": 408, "start": 400, "text": " So CMT is actually very good. Maybe a bit of an overkill if you just run a single class." }, { "end": 412, "start": 408, "text": " But it has lots and lots of features." }, { "end": 416, "start": 412, "text": " And of course the big conferences also use CMT." }, { "end": 419, "start": 416, "text": " So it's definitely stress tested." }, { "end": 423, "start": 419, "text": " Alright, so that was it for my videos." }, { "end": 430, "start": 423, "text": " Or at least how I make them. I just print out the PDF, sit down for half an hour and rant about it." }, { "end": 435, "start": 430, "text": " And that's pretty much it. And then you throw it on YouTube or distribute the file however you want." }, { "end": 442, "start": 435, "text": " And with that I hope I answered a little bit of these questions." }, { "end": 451, "start": 442, "text": " And I all wish you a healthy rest of the Corona season. Bye." } ]
p3sAF3gVMMA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Deep Learning for Symbolic Mathematics
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "machine translation", "arxiv", "attention mechanism", "attention", "transformer", "rnn", "recurrent", "seq2seq", "facebook", "fair", "research", "math", "integral", "ode" ]
This model solves integrals and ODEs by doing seq2seq! https://arxiv.org/abs/1912.01412 https://ai.facebook.com/blog/using-neural-networks-to-solve-advanced-mathematics-equations/ Abstract: Neural networks have a reputation for being better at solving statistical or approximate problems than at performing calculations or working with symbolic data. In this paper, we show that they can be surprisingly good at more elaborated tasks in mathematics, such as symbolic integration and solving differential equations. We propose a syntax for representing mathematical problems, and methods for generating large datasets that can be used to train sequence-to-sequence models. We achieve results that outperform commercial Computer Algebra Systems such as Matlab or Mathematica. Authors: Guillaume Lample, François Charton Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Can you solve this? Neither can I. But Wolfram Alpha can. So this is the thing that I probably have most to thank for passing university, especially the math classes in it. If you don't know Wolfram Alpha, it is an engine from the creators of Mathematica, but it is online. It can do symbolic math. So it can integrate this expression, for example, and you'll get the solution here. And if you have the pro version, it can even give you a step-by-step solution of how to get there. So this part of math is an entirely different part than what we usually do with computers. Usually we do numeric math, working with actual values. But here it's about symbolic math. It's about manipulating expressions, in this case integrating them. So here is a paper by Facebook AI Research called Deep Learning for Symbolic Mathematics by Guillaume Lample and François Charton. These people have basically tackled the task of doing this mathematical reasoning, solving these mathematical problems in a symbolic way using neural networks. So they start out by saying here, neural networks have a reputation for being better at solving statistical or approximate problems, that's what I meant by numeric, than at performing calculations or working with symbolic data. And in this case, they go about this differently than other people have. So let's look at how they did it. We can express symbolic mathematics in these kinds of trees. So an expression like this one up here would be expressed into this tree. So you would have a plus, this 2 plus 3. Sorry, of course there's an implicit bracket here. So you'd have this plus right here, the 2 here and the entire right hand side here. So you can basically decompose it into trees like this or this or this. Here you also can have the differentiation operator as a symbol in there, just like any other operator. Moreover, you can basically decompose everything they have here into binary and unary nodes in a tree. What that means is either like a plus sign, it has two components, so a left and a right hand side that should be added together. Or like the cosine, it has one argument, namely the thing that it should take the cosine of. So a lot of people have tried going about this problem by working with these trees and basically training neural networks on them. So first they use kind of a parser to decompose such a thing into a tree like this. And then use neural networks, let's say tree recursive neural networks or so on, to kind of make sense of the tree and solve it in a recursive manner or something like this. But that has its limitations. So what these people from Facebook AI did is they viewed it as a natural language expression problem. So they say, no, no, let's actually go with trees as sequences. So you can see that this mathematical expression, for example, is already a sequence. It's simply a sequence of tokens. But there are many different ways of expressing this. So you can say 2 plus 3 times the parentheses, you can say 3 times parentheses plus 2. You can turn many things around, and these parentheses always make it harder and so on. So what they do is they say, OK, let's actually go from this thing to a tree. So let's go to the tree representation, and then let's take the tree representation, because the tree representation can be normalized. And then let's put that again into a sequence representation such as this one. And this is called Polish notation, or prefix notation. And it has multiple advantages over the old expression.
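To make the tree-to-sequence idea concrete, here is a minimal sketch, not the paper's actual tokenizer: it serializes a small expression tree, written as nested tuples, into a prefix token list, and evaluates such a list with a stack by scanning it from the right. The tuple encoding and the two helper functions are assumptions made for illustration.

```python
# Minimal sketch of the tree -> prefix-sequence idea (not the paper's code).
# An expression tree is written as nested tuples: (operator, child, child) or a leaf.

def to_prefix(tree):
    """Serialize an expression tree into a list of prefix-notation tokens."""
    if isinstance(tree, tuple):
        op, *children = tree
        tokens = [op]
        for child in children:
            tokens += to_prefix(child)
        return tokens
    return [str(tree)]

def eval_prefix(tokens):
    """Evaluate a prefix token sequence with a stack, scanning right to left."""
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b,
           "-": lambda a, b: a - b, "/": lambda a, b: a / b}
    stack = []
    for tok in reversed(tokens):
        if tok in ops:
            a, b = stack.pop(), stack.pop()  # a is the left operand
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# 2 + 3 * (5 + 2) as a tree, then as a prefix sequence, then evaluated.
tree = ("+", 2, ("*", 3, ("+", 5, 2)))
tokens = to_prefix(tree)
print(tokens)               # ['+', '2', '*', '3', '+', '5', '2']
print(eval_prefix(tokens))  # 23.0
```

Running this prints the token sequence ['+', '2', '*', '3', '+', '5', '2'] and the value 23.0 for 2 + 3 * (5 + 2), the same example that is worked through by hand below.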
So let's keep that on the right hand side here. This is the same thing, except it's what is called a prefix notation, whereas the thing on the right here is called an infix notation. Infix because the operators such as the plus is always between its arguments. So it's left hand argument and it's right hand argument. In prefix notation, the operator is always in front of its arguments. So this operator here is has a first argument. This end as a second argument. This right now, the cool thing is if you express a tree like this, you can simply go and use a stack machine to solve it. So you can basically go. I would say you can go from the from the right here and you see you select two and five plus. And let's do it by hand. Actually, this is fun. So we have plus two times three. If you're a boomer like me, you remember you have to use calculators like this and couldn't use the infix notation. So you go from the right, right? You say two, five plus. Cool. That's seven. So scratch that. Put seven here, right? So your new stack is three, two times. This right. Then you go again from the right and you go seven, three times. OK, that's twenty one. Cool. Twenty one. Scratch this. Now it's twenty one. Two plus twenty one is twenty three. I'm fairly sure that's the solution. Well, correct me if I'm wrong. But this is how you would would go about solving like this. So it is the same expression as the original one, but it doesn't use any parentheses. And it is it is derived from the from the tree, basically. So it is you can you can normalize it much more in order to find unique expressions. So what this system does is it it transforms any expression into a prefix notation such as this one. Oops. And then it uses a sequence to sequence model. In order to derive the solution. Now, just how crazy is this? Right. So we come we go from this thing here, right? From this thing. And the solution is twenty one. Right. And the neural network is simply trained to do sequence to sequence from this to that sequence to sequence. That means it basically parses this as a token level. Right. And then it outputs these tokens without. So during training, you simply give it the you give it the input here and you give it the output. And it's supposed to learn how to transform one into the other without you giving it any sort of mathematical ability. Right. Without you telling it what does a plus sign mean without you telling it this algorithm that I just told you. Now, this by itself is already pretty astounding that you would try such a thing. It really transforms the string. So this is not the mathematical equation, but the string of this into the string of that. Now, they don't do it on numbers. Like, I don't think that would work as well if you were to to make it kind of calculate numerical things like this. As we said, this is symbolic. So what it can do is it can, for example, integrate. So if you have an expression like. Let's see some on the bottom here. So if you had an expression such as a polynomial. Here, an expression like this. Right. You would like to find its integral. That is a problem. That's one of the problems we had at the beginning. Right. This integral right here. You can write this in a string like we said. And then derive its solution right here. And have the neural network learn to map one to the other, right, to map this to that. So the way it goes is it would map this into map this into its tree representation. It would map this into its prefix notation. Right. It would also map this to. 
Let's take another color here. This into its tree. Then it will map this into its prefix notation. And then that's the training data. The training data is: take this, derive that. Right. And at inference time, of course, you won't have this here. You'll simply be asked to output a sequence, as in a normal natural language task. Like you can think of machine translation. This thing translates problems into solutions. It's crazy. I mean, it's not technically super challenging, but it's crazy that it works or that it could work. Right. So we'll see how this actually works. They use a transformer model, which is just a classic model. If you don't know what a transformer is, I have a video called Attention is All You Need about transformers. You can basically use them to do these kinds of tasks, to map one string into another string. So, yeah, they go into detail here of how they construct the data set and how big the problem space is and so on. Ultimately, they compare their system to Mathematica, I think, and Maple and Matlab, which do the same thing. So Mathematica, which is the kind of desktop version of Wolfram Alpha that I've shown you, you have it here. So integration is the task of integrating, let's say, these symbolic expressions. ODE order one and order two are slightly different tasks where you're asked to solve an ordinary differential equation, which is also a task in symbolic mathematics. If you compare it to Mathematica here, and they give Mathematica a limit of 30 seconds, what Mathematica will do is it will kind of search the manipulations that it knows. So the advantage of this is it can always give you, let's say, a step by step solution if it finds a solution. Right. It will just start and it will do a tree search, manipulating the expression you give it until it reaches a satisfactory solution. But then once it has that, it can give you a path through the tree which leads to the solution, which will give you a step by step solution. So you can understand it. The system that Facebook designs here doesn't do that. It simply takes the input tokens as its input and just gives you an output that is learned, so the network per se doesn't understand math. It simply learns from many, many examples how to transform one into the other, to come up with good hypotheses. So if you compare here, Mathematica, for example, it can integrate 84 percent of the things that they put into it. It's not said whether it gets it wrong or simply times out in the rest. I would say it times out, because probably Mathematica never gets it wrong, because it's an actual symbolic manipulator with defined rules. So I guess for the remaining 16 percent, it simply times out, doesn't find a solution. Whereas this Facebook system, which they say usually finds the solution in less than a second, finds these solutions 98.4 percent of the time with a beam size of one. Now, what does the beam size mean? It means how many hypotheses you keep around at each step while you generate the output. So if you have a sequence of input, you can always choose to do a beam search. So when you have a sequence of input, let's actually give an example: a cat jumps. The task is simply to continue the sentence, right, to continue the sentence so you can generate an output sequence. The output sequence could be: over the dog. If you just generate the single most likely continuation, this would be called beam size one, or no beam search at all.
You can do what's called a beam search in that each step you actually generate multiple hypotheses and then keep the best ones in memory. So you in a beam size of 10, you would always consider the 10 most probable solutions and you would kind of evaluate all 10 and then always keep the 10 best. Let's see how this goes. Let's do a beam size of three in our case. So a cat jumps and then you could come up with three different things. This sentence could continue cat jumps over a cat jumps between and a cat jumps swiftly. Right. So these are your three hypotheses. Then we go to the next step. We have to evaluate each of those, each of them. So a cat jumps over the over a over me. A cat jumps between the between two and a cat jumps between many. The cat jumps swiftly end of sentence, that jumps swiftly over cat jumps swiftly. And right, these are all valid. So of these nine, you would now select again the three that overall have the highest likelihood. Maybe that's the following cat jumps over the cat jumps over a and a cat jumps between two. These three. Right. So you just keep these three. And then in the next step, you again from these three, you would want for each three hypotheses and so on. So this is what's called a beam search. And if you give it a beam size of 10 or 50, this system tends to improve even more. The way this system works is quite different from Mathematica in that Mathematica, as I said, is a symbolic solver that never makes mistakes, but can fail to give you a solution. This system simply generates an output sequence that is not guaranteed to be actually a solution to the problem. It's just a hypothesis. But then you can quickly check whether the hypothesis is correct. So the nature of these math problems with integration, you can simply differentiate. And with ODE, you can simply plug them in to see if there is solution. It's kind of like your classic, let's say, NP-hard problems or like a SAT solving where you can quickly check whether something is a solution. So if you have a system that generates 50 hypotheses, you could quickly check which one is actually correct. So these numbers here mean that one of these 50 that the system came up with was a correct solution. And if you allow for such many hypotheses, you can see it goes up quite a bit. For example, the ODE solving is almost the same. And here it's even worse if you take ODE's of order 2. It's even worse than Mathematica. But if you allow for larger beam sizes, you see it dramatically goes up. And so it's a different approach. I wouldn't be surprised if Mathematica would actually implement something like this very soon or just buy this off of Facebook or something, or Facebook by Mathematica in whatever way. This clearly is a different approach and it appears to work better. But there is a caveat. So here's the caveat that I see with this kind of thing. These evaluations are done on data sets, of course. And this paper goes into big detail on how to generate these data sets. So they have to pay attention to many things like many solutions are equivalent. For example, here, you know, that this solution and this solution to this equation, to this differential equation are the same. So they have to use a symbolic framework to check whether the solutions are the same and so on. This it is very good work, but they do evaluate on expressions that fit into their data set. 
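Here is a small sketch of the "generate hypotheses, then verify" step described above, using sympy. The candidate list is hard-coded as a stand-in for whatever a beam search of size k would return, so everything except sympy's diff and simplify calls is made up for illustration.

```python
# Sketch of the "generate hypotheses, then verify" idea for integration.
# The candidate list is hard-coded here; in the real system the candidates
# would be the hypotheses coming out of the model's beam search.
import sympy as sp

x = sp.Symbol("x")

def is_antiderivative(candidate, integrand):
    """A candidate F is accepted if F' - f simplifies to zero."""
    return sp.simplify(sp.diff(candidate, x) - integrand) == 0

integrand = x * sp.cos(x)

# Pretend these are the top beam-search hypotheses, best first.
candidates = [
    sp.sin(x),                   # wrong
    x * sp.sin(x),               # wrong
    x * sp.sin(x) + sp.cos(x),   # correct antiderivative
]

for cand in candidates:
    if is_antiderivative(cand, integrand):
        print("verified solution:", cand)
        break
else:
    print("no candidate verified")
```

The point is that verification is cheap: differentiating a candidate and checking that the difference to the integrand simplifies to zero immediately tells you whether any of the, say, 50 beam hypotheses is an actual solution.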
So here in their data set, they say, OK, we evaluate, you know, expressions with up to 15 internal nodes, 11 leaf values, these four binary operators, and these 15 unary operators. So the expressions that they train on fall into this data set. Right. Also, just numbers from negative five to five. So it is kind of to be expected that a system that is trained on these things would perform very well on these things, as opposed to Mathematica, which is, you know, a general purpose tool. Moreover, if you look at... sorry, I think this is further down. For example, for the integration task, they have three different ways of generating data. They have the forward way, where they simply use a symbolic integrator to generate expressions. They have the backward way, where they start from the integral and then differentiate it in order to obtain a training pair. And they have an integration by parts method. These are three different methods to come up with problems for this system to be trained on. And they have very different properties, to the effect that if you train with just one, it won't work well on the others. So if you train with the forward method, it will work very well on data that has been generated with the forward method. So this is down here. This is what it's trained on. And this is what it's evaluated on. Right. You can see the diagonal is very, very strong. But if you train with the backward method and you evaluate on data generated with the forward method, it is actually very poor. That's because in one case, generally, the solutions are longer than the input. In the other case, the solutions are shorter. So not only does this system only work on the particular task here, it is actually very attuned to the way that this data was generated. Right. So in fact, I would postulate that this training data is probably only a very small subset of all of the things that we would like to integrate. And again, the problem is made kind of worse because their evaluation set also comes from the same distribution. So what they've ultimately shown is that they can do this on a very skewed, probably very biased subset of this mathematical problem space. And on that biased subset, they can outperform something like Mathematica. Right. They kind of defeat themselves. Yeah. If you look here, even across the different integration data generating methods, if you only train on one of them, it doesn't generalize. If you only train on forward data and then evaluate on backward generated data, it doesn't work. So even the integration model can't really generalize, so they have to kind of combine the different methods. And even now, we can probably easily find examples that this integrator can't solve. So, I mean, there are a lot of cool things here, and they show a number of properties that the model learns just from the data, without them telling it to. And it's cool that it works anyway. As I said, this model has no programmed-in notion of how math works. But it also kind of shows the problems if you do this via a training data set: if your training data set is very skewed and your evaluation set follows the same generation process, the claims you can make at the end are limited. And to be fair, I don't know what claims they made in the press generally. So I think this is pretty cool work. Check it out. And that was it. Thanks.
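As a footnote to the data generation discussion above, here is a rough sketch of the backward method: sample an expression, differentiate it, and keep the pair (derivative, expression) as an (integration problem, solution) training example. The building-block list, the random combination scheme, and the function names are invented; the paper's actual generator samples random trees over its own operator set and serializes them to prefix tokens.

```python
# Sketch of the "backward" data generation idea: sample an expression F,
# differentiate it, and use (F', F) as an (input, target) training pair.
# Illustration only; not the paper's generator.
import random
import sympy as sp

x = sp.Symbol("x")
building_blocks = [x, x**2, sp.sin(x), sp.cos(x), sp.exp(x), sp.log(x + 2)]

def random_expression(n_terms=3):
    """Build a small random expression by combining a few building blocks."""
    terms = [random.choice([-3, -1, 1, 2]) * random.choice(building_blocks)
             for _ in range(n_terms)]
    expr = sum(terms)
    if random.random() < 0.5:
        expr = expr * random.choice(building_blocks)
    return sp.simplify(expr)

def backward_pair():
    """Return (integrand, antiderivative) obtained by differentiating F."""
    F = random_expression()
    f = sp.simplify(sp.diff(F, x))
    return f, F

if __name__ == "__main__":
    random.seed(0)
    for _ in range(3):
        f, F = backward_pair()
        print("integrate:", f, " ->  solution:", F)
```

Pairs produced this way tend to have integrands that are longer than their solutions, which is exactly the kind of distribution skew pointed out above when models trained on backward data are evaluated on forward-generated data.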
[ { "end": 16, "start": 0, "text": " Hi there! Can you solve this? Neither can I. But Wolfram Alpha can. So this is the thing that probably I have most to thank for for passing university, especially the math classes in it." }, { "end": 33, "start": 16, "text": " If you don't know Wolfram Alpha, it is an engine from the creators of Mathematica, but it is online. It can do symbolic math. So it can integrate this expression, for example, and you'll get the solution here." }, { "end": 48, "start": 33, "text": " And if you have the pro version, it can even give you a step-by-step solution of how to get there. So this part of math is an entirely different part than we usually do with computers." }, { "end": 60, "start": 48, "text": " Usually we do numeric math, working with actual values. But here it's about symbolic math. It's about manipulating expression, in this case integrating them." }, { "end": 75, "start": 60, "text": " So here is a paper by Facebook AI Research called Deep Learning for Symbolic Mathematics by Guillaume Lampe and François Gertot." }, { "end": 86, "start": 75, "text": " These people have basically tackled the task of doing these mathematical reasoning, solving these mathematical problems in a symbolic way using neural networks." }, { "end": 95, "start": 86, "text": " So they start out by saying here, neural networks have a reputation for being better at solving statistical or proximate problems." }, { "end": 101, "start": 95, "text": " That's what I meant by numeric, then at performing calculations or working with symbolic data." }, { "end": 111, "start": 101, "text": " And in this case, they go about this other than other people have. So let's look at how they did it." }, { "end": 124, "start": 111, "text": " We can express symbolic mathematics in these kind of trees. So an expression like these up here would be expressed into this tree." }, { "end": 132, "start": 124, "text": " So you would have a plus, this 2 plus 3. Sorry, of course there's an implicit bracket here." }, { "end": 138, "start": 132, "text": " So you'd have this plus right here, the 2 here and the entire right hand side here." }, { "end": 146, "start": 138, "text": " So you can basically decompose it into trees like this or this or this." }, { "end": 156, "start": 146, "text": " Here you also can have the differentiation operator as a symbol in there, just like any other operator." }, { "end": 165, "start": 156, "text": " Moreover, you can basically decompose everything into everything they have here, into binary and unary nodes in a tree." }, { "end": 173, "start": 165, "text": " What that means is either like a plus sign, it has two components, so a left and a right hand side that should be added together." }, { "end": 181, "start": 173, "text": " Or like the cosine, it has one argument, namely the thing that it should take the cosine of." }, { "end": 191, "start": 181, "text": " So a lot of people have tried going about this problem by working with these trees and basically training neural networks to..." }, { "end": 196, "start": 191, "text": " So first they use kind of a parser to decompose such a thing into a tree like this." }, { "end": 208, "start": 196, "text": " And then use neural networks, let's say tree recursive neural networks or so on, to kind of make sense of the tree and solve it in a recursive manner or something like this." }, { "end": 212, "start": 208, "text": " But that has its limitations." 
}, { "end": 218, "start": 212, "text": " So what these people from Facebook AI did is they viewed it as a natural language expression problem." }, { "end": 226, "start": 218, "text": " So they say, no, no, let's actually go with trees as sequences." }, { "end": 233, "start": 226, "text": " So you can see that this mathematical expression, for example, is already a sequence." }, { "end": 237, "start": 233, "text": " It's simply a sequence of tokens." }, { "end": 242, "start": 237, "text": " But there are many different ways of expressing this." }, { "end": 247, "start": 242, "text": " So you can say 2 plus 3 times the parentheses, you can say 3 times parentheses plus 2." }, { "end": 254, "start": 247, "text": " You can turn many things around and there's always these parentheses make it harder and so on." }, { "end": 259, "start": 254, "text": " So what they do is they say, OK, let's actually go from this thing to a tree." }, { "end": 272, "start": 259, "text": " So let's go to the tree representation and then let's take the tree representation because the tree representation can be normalized." }, { "end": 278, "start": 272, "text": " And then let's put that again into a sequence representation such as this one." }, { "end": 281, "start": 278, "text": " And this is called reverse polish notation." }, { "end": 286, "start": 281, "text": " And it has multiple advantages over the old expression." }, { "end": 291, "start": 286, "text": " So let's keep that on the right hand side here." }, { "end": 300, "start": 291, "text": " This is the same thing, except it's what is called a prefix notation, whereas the thing on the right here is called an infix notation." }, { "end": 306, "start": 300, "text": " Infix because the operators such as the plus is always between its arguments." }, { "end": 310, "start": 306, "text": " So it's left hand argument and it's right hand argument." }, { "end": 316, "start": 310, "text": " In prefix notation, the operator is always in front of its arguments." }, { "end": 321, "start": 316, "text": " So this operator here is has a first argument." }, { "end": 324, "start": 321, "text": " This end as a second argument." }, { "end": 329, "start": 324, "text": " This right now, the cool thing is if you express a tree like this," }, { "end": 334, "start": 329, "text": " you can simply go and use a stack machine to solve it." }, { "end": 337, "start": 334, "text": " So you can basically go." }, { "end": 343, "start": 337, "text": " I would say you can go from the from the right here and you see you select two and five plus." }, { "end": 346, "start": 343, "text": " And let's do it by hand." }, { "end": 352, "start": 346, "text": " Actually, this is fun. So we have plus two times three." }, { "end": 359, "start": 352, "text": " If you're a boomer like me, you remember you have to use calculators like this and couldn't use the infix notation." }, { "end": 361, "start": 359, "text": " So you go from the right, right?" }, { "end": 364, "start": 361, "text": " You say two, five plus. Cool." }, { "end": 366, "start": 364, "text": " That's seven. So scratch that." }, { "end": 368, "start": 366, "text": " Put seven here, right?" }, { "end": 373, "start": 368, "text": " So your new stack is three, two times." }, { "end": 374, "start": 373, "text": " This right." }, { "end": 380, "start": 374, "text": " Then you go again from the right and you go seven, three times." }, { "end": 382, "start": 380, "text": " OK, that's twenty one. Cool." 
}, { "end": 384, "start": 382, "text": " Twenty one. Scratch this." }, { "end": 389, "start": 384, "text": " Now it's twenty one. Two plus twenty one is twenty three." }, { "end": 392, "start": 389, "text": " I'm fairly sure that's the solution." }, { "end": 394, "start": 392, "text": " Well, correct me if I'm wrong." }, { "end": 397, "start": 394, "text": " But this is how you would would go about solving like this." }, { "end": 404, "start": 397, "text": " So it is the same expression as the original one, but it doesn't use any parentheses." }, { "end": 411, "start": 404, "text": " And it is it is derived from the from the tree, basically." }, { "end": 420, "start": 411, "text": " So it is you can you can normalize it much more in order to find unique expressions." }, { "end": 430, "start": 420, "text": " So what this system does is it it transforms any expression into a prefix notation such as this one." }, { "end": 435, "start": 430, "text": " Oops. And then it uses a sequence to sequence model." }, { "end": 437, "start": 435, "text": " In order to derive the solution." }, { "end": 441, "start": 437, "text": " Now, just how crazy is this? Right." }, { "end": 446, "start": 441, "text": " So we come we go from this thing here, right?" }, { "end": 450, "start": 446, "text": " From this thing. And the solution is twenty one." }, { "end": 459, "start": 450, "text": " Right. And the neural network is simply trained to do sequence to sequence from this to that sequence to sequence." }, { "end": 467, "start": 459, "text": " That means it basically parses this as a token level. Right." }, { "end": 471, "start": 467, "text": " And then it outputs these tokens without." }, { "end": 480, "start": 471, "text": " So during training, you simply give it the you give it the input here and you give it the output." }, { "end": 488, "start": 480, "text": " And it's supposed to learn how to transform one into the other without you giving it any sort of" }, { "end": 490, "start": 488, "text": " mathematical ability. Right." }, { "end": 496, "start": 490, "text": " Without you telling it what does a plus sign mean without you telling it this algorithm that I just told you." }, { "end": 503, "start": 496, "text": " Now, this by itself is already pretty astounding that you would try such a thing." }, { "end": 506, "start": 503, "text": " It really transforms the string." }, { "end": 512, "start": 506, "text": " So this is not the mathematical equation, but the string of this into the string of that." }, { "end": 515, "start": 512, "text": " Now, they don't do it on numbers." }, { "end": 524, "start": 515, "text": " Like, I don't think that would work as well if you were to to make it kind of calculate numerical things like this." }, { "end": 526, "start": 524, "text": " As we said, this is symbolic." }, { "end": 530, "start": 526, "text": " So what it can do is it can, for example, integrate." }, { "end": 539, "start": 530, "text": " So if you have an expression like." }, { "end": 541, "start": 539, "text": " Let's see some on the bottom here." }, { "end": 548, "start": 541, "text": " So if you had an expression such as a polynomial." }, { "end": 552, "start": 548, "text": " Here, an expression like this." }, { "end": 556, "start": 552, "text": " Right. You would like to find its integral." }, { "end": 559, "start": 556, "text": " That is a problem. That's one of the problems we had at the beginning." }, { "end": 561, "start": 559, "text": " Right. This integral right here." 
}, { "end": 567, "start": 561, "text": " You can write this in a string like we said." }, { "end": 575, "start": 567, "text": " And then derive its solution right here." }, { "end": 582, "start": 575, "text": " And have the neural network learn to map one to the other, right, to map this to that." }, { "end": 591, "start": 582, "text": " So the way it goes is it would map this into map this into its tree representation." }, { "end": 598, "start": 591, "text": " It would map this into its prefix notation." }, { "end": 602, "start": 598, "text": " Right. It would also map this to." }, { "end": 604, "start": 602, "text": " Let's take another color here." }, { "end": 608, "start": 604, "text": " This into its tree." }, { "end": 612, "start": 608, "text": " Then it will map this into its prefix notation." }, { "end": 614, "start": 612, "text": " And then that's the training data." }, { "end": 619, "start": 614, "text": " The training data is take this, derive that." }, { "end": 624, "start": 619, "text": " Right. And at inference time, of course, you won't have this here." }, { "end": 630, "start": 624, "text": " You'll simply be asked to output a sequence as a normal natural language." }, { "end": 632, "start": 630, "text": " Like you can think of machine translation." }, { "end": 638, "start": 632, "text": " This thing translates problems into solutions." }, { "end": 640, "start": 638, "text": " It's crazy." }, { "end": 646, "start": 640, "text": " I mean, it's not it's not technically super challenging, but it's crazy that it works or that it could work." }, { "end": 647, "start": 646, "text": " Right." }, { "end": 650, "start": 647, "text": " So we'll see how this actually works." }, { "end": 654, "start": 650, "text": " They use a transformer model, which is just which is a classic model." }, { "end": 661, "start": 654, "text": " If you don't know what a transformer is, I have a video called Attention is All You Need about transformers." }, { "end": 667, "start": 661, "text": " You can basically use them to do these kinds of tasks, to map one string into another string." }, { "end": 671, "start": 667, "text": " So." }, { "end": 683, "start": 671, "text": " Yeah, so they go into detail here of how they construct the data set and how big the problem space is and so on." }, { "end": 688, "start": 683, "text": " Ultimately, they compare their system to." }, { "end": 696, "start": 688, "text": " Mathematica, I think, and Maple and MathLab, which do the same thing." }, { "end": 704, "start": 696, "text": " So Mathematica, which is the kind of desktop version of Wolfram Alpha that I've shown you, you have it here." }, { "end": 707, "start": 704, "text": " So integration." }, { "end": 712, "start": 707, "text": " Is the task of integrating, let's say, these these symbolic expressions." }, { "end": 725, "start": 712, "text": " ODE order one and order two are slightly different tasks where you're asked to solve an ordinary differential equation, which is also a task in symbolic mathematics." }, { "end": 737, "start": 725, "text": " If you compare it to Mathematica here and they give it Mathematica a limit of 30 seconds, what Mathematica will do is it will kind of search the manipulations that it knows." }, { "end": 745, "start": 737, "text": " So the advantage of this is it can always give you, let's say, a step by step solution if it finds a solution." }, { "end": 755, "start": 745, "text": " Right. 
It will just start and it will do a tree search, manipulating the expression you give in until it reaches a satisfactory solution." }, { "end": 764, "start": 755, "text": " But then once it has that, it can give you a path through the tree, which leads to the solution, which will give you a step by step solution." }, { "end": 766, "start": 764, "text": " So you can understand it." }, { "end": 768, "start": 766, "text": " The system that Facebook designs here doesn't do that." }, { "end": 770, "start": 768, "text": " It simply takes right." }, { "end": 780, "start": 770, "text": " It simply takes the input tokens like this is the input and it just gives you an output that is learned so that the network per se doesn't understand math." }, { "end": 790, "start": 780, "text": " It simply learns from many, many examples that to transform to to come up with good hypotheses." }, { "end": 800, "start": 790, "text": " So if you compare here, Mathematica, for example, it can integrate 84 percent of the things that they put into it." }, { "end": 805, "start": 800, "text": " It's not said whether it gets it wrong or simply times out in the rest." }, { "end": 815, "start": 805, "text": " I would say it times out because probably Mathematica never gets it wrong because it's an actual symbolic manipulator with defined rules." }, { "end": 822, "start": 815, "text": " So I guess the rest of the rest 16 percent, it simply times out, doesn't find a solution." }, { "end": 838, "start": 822, "text": " Whereas this Facebook system and they say it usually finds the solution in less than a second, finds these solutions in 98.4 percent of the time with a beam size of one." }, { "end": 840, "start": 838, "text": " Now, what does the beam size mean?" }, { "end": 846, "start": 840, "text": " It means that the time that you have to generate the output is the time that you generate the output." }, { "end": 852, "start": 846, "text": " So if you have a sequence of input, you can always choose to do a beam search." }, { "end": 865, "start": 852, "text": " So when you have a sequence of input, let's actually give an example, a cat jumps." }, { "end": 871, "start": 865, "text": " The task is simply to continue the sentence, right, to continue the sentence so you can generate an output sequence." }, { "end": 876, "start": 871, "text": " The output sequence could be over the dog." }, { "end": 884, "start": 876, "text": " What you can do is you can this is beam size, would be called beam size one or no beam search at all." }, { "end": 893, "start": 884, "text": " You can do what's called a beam search in that each step you actually generate multiple hypotheses and then keep the best ones in memory." }, { "end": 908, "start": 893, "text": " So you in a beam size of 10, you would always consider the 10 most probable solutions and you would kind of evaluate all 10 and then always keep the 10 best." }, { "end": 913, "start": 908, "text": " Let's see how this goes. Let's do a beam size of three in our case." }, { "end": 917, "start": 913, "text": " So a cat jumps and then you could come up with three different things." }, { "end": 930, "start": 917, "text": " This sentence could continue cat jumps over a cat jumps between and a cat jumps swiftly." }, { "end": 932, "start": 930, "text": " Right. So these are your three hypotheses." }, { "end": 937, "start": 932, "text": " Then we go to the next step. We have to evaluate each of those, each of them." }, { "end": 944, "start": 937, "text": " So a cat jumps over the over a over me." 
}, { "end": 956, "start": 944, "text": " A cat jumps between the between two and a cat jumps between many." }, { "end": 967, "start": 956, "text": " The cat jumps swiftly end of sentence, that jumps swiftly over cat jumps swiftly." }, { "end": 970, "start": 967, "text": " And right, these are all valid." }, { "end": 978, "start": 970, "text": " So of these nine, you would now select again the three that overall have the highest likelihood." }, { "end": 990, "start": 978, "text": " Maybe that's the following cat jumps over the cat jumps over a and a cat jumps between two." }, { "end": 992, "start": 990, "text": " These three. Right. So you just keep these three." }, { "end": 1000, "start": 992, "text": " And then in the next step, you again from these three, you would want for each three hypotheses and so on." }, { "end": 1002, "start": 1000, "text": " So this is what's called a beam search." }, { "end": 1010, "start": 1002, "text": " And if you give it a beam size of 10 or 50, this system tends to improve even more." }, { "end": 1019, "start": 1010, "text": " The way this system works is quite different from Mathematica in that Mathematica, as I said, is a symbolic solver that never makes mistakes," }, { "end": 1022, "start": 1019, "text": " but can fail to give you a solution." }, { "end": 1029, "start": 1022, "text": " This system simply generates an output sequence that is not guaranteed to be actually a solution to the problem." }, { "end": 1031, "start": 1029, "text": " It's just a hypothesis." }, { "end": 1035, "start": 1031, "text": " But then you can quickly check whether the hypothesis is correct." }, { "end": 1041, "start": 1035, "text": " So the nature of these math problems with integration, you can simply differentiate." }, { "end": 1046, "start": 1041, "text": " And with ODE, you can simply plug them in to see if there is solution." }, { "end": 1057, "start": 1046, "text": " It's kind of like your classic, let's say, NP-hard problems or like a SAT solving where you can quickly check whether something is a solution." }, { "end": 1065, "start": 1057, "text": " So if you have a system that generates 50 hypotheses, you could quickly check which one is actually correct." }, { "end": 1073, "start": 1065, "text": " So these numbers here mean that one of these 50 that the system came up with was a correct solution." }, { "end": 1079, "start": 1073, "text": " And if you allow for such many hypotheses, you can see it goes up quite a bit." }, { "end": 1083, "start": 1079, "text": " For example, the ODE solving is almost the same." }, { "end": 1087, "start": 1083, "text": " And here it's even worse if you take ODE's of order 2." }, { "end": 1089, "start": 1087, "text": " It's even worse than Mathematica." }, { "end": 1096, "start": 1089, "text": " But if you allow for larger beam sizes, you see it dramatically goes up." }, { "end": 1099, "start": 1096, "text": " And so it's a different approach." }, { "end": 1113, "start": 1099, "text": " I wouldn't be surprised if Mathematica would actually implement something like this very soon or just buy this off of Facebook or something, or Facebook by Mathematica in whatever way." }, { "end": 1117, "start": 1113, "text": " This clearly is a different approach and it appears to work better." }, { "end": 1119, "start": 1117, "text": " But there is a caveat." }, { "end": 1123, "start": 1119, "text": " So here's the caveat that I see with this kind of thing." 
}, { "end": 1130, "start": 1123, "text": " These evaluations are done on data sets, of course." }, { "end": 1136, "start": 1130, "text": " And this paper goes into big detail on how to generate these data sets." }, { "end": 1142, "start": 1136, "text": " So they have to pay attention to many things like many solutions are equivalent." }, { "end": 1156, "start": 1142, "text": " For example, here, you know, that this solution and this solution to this equation, to this differential equation are the same." }, { "end": 1164, "start": 1156, "text": " So they have to use a symbolic framework to check whether the solutions are the same and so on." }, { "end": 1176, "start": 1164, "text": " This it is very good work, but they do evaluate on expressions that fit into their data set." }, { "end": 1191, "start": 1176, "text": " So here in their data set, they say, OK, we evaluate, you know, expressions with up to 15 internal nodes, 11 leave values for these four binary operators, then these 15 unary operators." }, { "end": 1197, "start": 1191, "text": " So the expressions that they train on fall into this data set." }, { "end": 1205, "start": 1197, "text": " Right. Also, just numbers from negative five to five." }, { "end": 1217, "start": 1205, "text": " So it is it is kind of to be expected that a system that is trained on these things would meet would perform very well on these things as opposed to opposed to Mathematica." }, { "end": 1222, "start": 1217, "text": " That is, you know, a general purpose tool." }, { "end": 1226, "start": 1222, "text": " Moreover, if you look at." }, { "end": 1228, "start": 1226, "text": " Sorry, I think this is further down." }, { "end": 1239, "start": 1228, "text": " For example, in integration for the integration task, they have three different ways of solving of generating data." }, { "end": 1245, "start": 1239, "text": " They have the forward way where they simply use a symbolic integrator to generate expressions." }, { "end": 1253, "start": 1245, "text": " They have the backward way where they start from the integral and then differentiate it in order to obtain a training pair." }, { "end": 1256, "start": 1253, "text": " And they have an integration by parts method." }, { "end": 1261, "start": 1256, "text": " These are three different methods to come up with problems for this system to be trained on." }, { "end": 1271, "start": 1261, "text": " And they have very different properties to the effect that if you train with one just one, it won't work well on the other." }, { "end": 1282, "start": 1271, "text": " So if you train with the forward method, it will work very well on data that has been generated with the forward method." }, { "end": 1285, "start": 1282, "text": " So this is down here. This is what it's trained on." }, { "end": 1287, "start": 1285, "text": " And this is what it's evaluated on." }, { "end": 1291, "start": 1287, "text": " Right. You can see the diagonal is very, very strong." }, { "end": 1303, "start": 1291, "text": " But if you train with the backward method, but you evaluate on data generated with the forward method, it is actually very poor." }, { "end": 1309, "start": 1303, "text": " That's because in one case, generally, the solutions are longer than the input." }, { "end": 1311, "start": 1309, "text": " In the other case, the solutions are shorter." 
}, { "end": 1317, "start": 1311, "text": " So not only does this system only work on the particular task here," }, { "end": 1325, "start": 1317, "text": " it is actually very attuned to the way that this data was generated." }, { "end": 1338, "start": 1325, "text": " Right. So in fact, I would postulate that this training data is only probably a very small subset of all of the things that we would like to integrate." }, { "end": 1349, "start": 1338, "text": " And again, the problem the problem is made kind of worse because they their evaluation set would also come from their distribution." }, { "end": 1361, "start": 1349, "text": " So what they've ultimately shown is that they can do this on a very skewed, probably very biased subset of this mathematical problem." }, { "end": 1366, "start": 1361, "text": " And on that biased subset, they can outperform something like Mathematica." }, { "end": 1369, "start": 1366, "text": " Right. They kind of defeat themselves." }, { "end": 1378, "start": 1369, "text": " Yeah. If you look here, they even the different integration data generating methods, if you only train on one of them, it doesn't generalize." }, { "end": 1388, "start": 1378, "text": " If you only train on forward data, then if you evaluate on backward generated data, it doesn't work." }, { "end": 1392, "start": 1388, "text": " So even the integrator can't really generalize." }, { "end": 1403, "start": 1392, "text": " So they have to kind of combine different method. And even now, we can probably easily find examples that this integrator can't solve." }, { "end": 1415, "start": 1403, "text": " So, I mean, there is a lot of cool things here and they show a number of properties that the model learns just from without them telling it to." }, { "end": 1421, "start": 1415, "text": " And it's cool that it works anyway. As I said, this model has no programmed in notion of how math works." }, { "end": 1443, "start": 1421, "text": " But also it kind of shows the problems if you do this via a training data set in that if your training data set is very skewed and then your evaluation set follows the same generation process, the claims you can make at the end are limited." }, { "end": 1447, "start": 1443, "text": " And to be fair, I don't know what claims they made in the press generally." }, { "end": 1455, "start": 1447, "text": " So I think there is a pretty cool work. Check it out. And that was it. Thanks." } ]
JPX_jSZtszY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
NeurIPS 2020 Changes to Paper Submission Process
[ "Science & Technology" ]
[ "machine learning", "deep learning", "phd", "papers", "neurips", "nips", "conference", "submission", "society", "ethics" ]
My thoughts on the changes to the paper submission process for NeurIPS 2020. The main new changes are: 1. ACs can desk reject papers 2. All authors have to be able to review if asked 3. Resubmissions from other conferences must be marked and a summary of changes since the last submission must be provided 4. Broader societal / ethical impact must be discussed 5. Upon acceptance, all papers must link to an explanatory video and the PDFs for slides and poster https://neurips.cc/Conferences/2020/CallForPapers https://youtu.be/361h6lHZGDg Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there. So I just wanted to give a few quick thoughts about the changes to the NeurIPS submission process this year as opposed to last year. They've announced this on the website, on Twitter, with the video and so on, and I thought I might share some thoughts on that. And maybe some of you haven't heard yet in case you're planning to submit or thinking about it. So desk rejections. ACs, area chairs, have the ability to desk reject papers that they feel strongly are not going to be passable to the reviewers. They did an experiment last year where the ACs were simply supposed to mark submissions that they would desk reject, and it turned out that ACs aren't very good at estimating which submissions are going to be rejected by the reviewers. That might be because there wasn't really anything at stake because it was just kind of a let's see how this works. But it is definitely a move to reduce the number of submissions because the field is exploding and we lack reviewing power, reviewing people. So this is a move to reduce the number of people that have to review something because there will be fewer papers. I don't know if this increases the quality overall. If your paper gets desk rejected, there's usually some obvious reason for it why an AC decided it's not worth it. They probably haven't read it in depth, but there might be some kind of overall structural issue that, or like the introduction has many typos, or you know, look for the obvious things even though your work might be good. Second, all authors of a paper have to be able to review if asked to do so. And again, this is a stab at this kind of reviewing crisis, but I have mixed feelings about this. I really think this is a move in the wrong direction. It will increase the number of authors because a lot of people have been kind of free riding in that they're submitting papers, but they aren't reviewing other papers even though they would be competent researchers simply because reviewing doesn't get you anything. So there's no incentive to do reviews. Maybe you can say you're a reviewer, but then there's every incentive to do bad reviews, like two line reviews where the first line says you should have compared to my paper, reject like, fuck you if you're a reviewer like this. In any case, like a lot of times, and this hits, for example, like universities where you maybe work with a master student and the master student does some of the pre-processing of the data and they don't really have a clue about the machine learning, but they still contribute it. So why shouldn't they be an author on the paper? They might even have written that section about the data pre-processing. And now they're asked to review entire papers about topics where they're not really familiar with or you have some outside collaborators or, you know, there are so many things wrong. I think this attracts the wrong kind of people and by forcing people to do it, you encourage even more, like all these reviewers that would not have reviewed, what will happen is they will give shitty reviews and you will have even worse quality of reviews as a result. I think this is the wrong move to reduce the number of load per reviewer. I'd rather see abolish peer review completely in computer science, in machine learning at least. That's my opinion, but that might be a video for another time. I have plans how to replace it another time. Resubmissions have to be clearly marked. 
So if your paper is a resubmission of, like if you had already submitted it in the last 12 months, it's been rejected, you have to say it is a resubmission and the changes you made to the paper. Again with a peer review process that actually works, this would make a lot of sense. You can say, well, it got rejected last time and here is how I corrected for what the reviewers criticized, but with the review quality right now, I mean most of the papers, what are they going to say? It got rejected for nefarious reasons because the reviewer had a bad bowel movement that morning and I didn't really change much. So you encourage people to kind of blow out of proportion the changes they made and put a lot of additional unnecessary work on two papers that would actually be already fine. So all of these things, they are forcing people to do things and then the incentives of what we want aren't aligned with what we give. So what you'll end up with is lower quality reviews and lower quality work. So the next two points are of a different nature. The first one though, that will probably, I mean even if the ACs aren't perfect, that's a good move. I like that. The fourth point and the fifth point are a bit different. The fourth point is there is a new section in CMT apparently where you have to describe the broader societal impact and ethics around your work. How will your work influence society? What are positives and negatives? Ethical outcomes? How can it be used? And this is targeted towards things like let's say facial recognition. If you develop a new facial recognition algorithm, you may be able to argue, well this could be better used to identify victims in a big crowd. There's a mass riot or something and then you don't know who is there. Is my relative one of the people in the mass that gets stomped on? Or you can also say this potentially helps a dictatorial state to govern their people because they can now recognize everyone. For most papers it will be a bit shaky. Like if your third order optimization algorithm achieves a slightly better convergence rate, I'm not sure what's here. But what I feel is that this is dumb in a way because this just means more work. Basically now you have to demonstrate and yeah it says you should discuss positive and negative aspects but in essence everyone will be demonstrating virtue signaling how good their work will be for society and what good can be done and maybe a bit of bad. But that can be mitigated and it just pushes into a more PR world. So it goes from the science world into a more PR world. It means extra work and who are the people that can afford to do extra work? It's mostly the big companies. They can just put an additional team member on that, maybe even do additional experiments to show the societal impact of the work and who will lose out are probably small universities, independent researchers. And so on that don't have that capacity that simply do their research because it's an interesting research question. And for almost every single thing in the world that has an application it will have good and bad applications. So yeah mixed feelings. So the fifth is you are now supposed if your paper gets accepted to make a video about it and upload the poster basically link to the poster that you would use and also link to slides that you would give your talk with. This is to make it more accessible to people that are not at the conference which again I have mixed feelings about. Again it pushes it into this more PR realm. 
Talks are already live streamed. Most of them are for most of the large conferences and I feel it just gets people one step more away from the actual paper. So it allows people to grandstand and PR up even more of their work because even people who don't attend the conference now they're not going to read the paper, they're just going to watch the video. And in the video you can always leave away those things that you would have to like that a reviewer makes you put in the paper right and in the video you can overbought. It's camera ready. No one reviews the video. You can say whatever you want. So it's just where before if you didn't attend the conference I think many people actually did read the paper, watched talks where people could ask questions and now it's just one more PR thing. And again who has time, energy and money to really invest a lot into this? It's mainly large companies right if you're small and you're time bound and so on you might not have equipment or time to do that. I am not for hire to do your NURBS videos just saying. I don't have time to make these videos really. As you can see stellar quality I think there's a bright glare right here. So that was it for my opinions on this and I wish you a nice day. Bye bye.
[ { "end": 4.5200000000000005, "start": 0, "text": " Hi there." }, { "end": 11.120000000000001, "start": 4.5200000000000005, "text": " So I just wanted to give a few quick thoughts about the changes to the NeurIPS submission" }, { "end": 14.6, "start": 11.120000000000001, "text": " process this year as opposed to last year." }, { "end": 20.2, "start": 14.6, "text": " They've announced this on the website, on Twitter, with the video and so on, and I thought" }, { "end": 22.68, "start": 20.2, "text": " I might share some thoughts on that." }, { "end": 27.52, "start": 22.68, "text": " And maybe some of you haven't heard yet in case you're planning to submit or thinking" }, { "end": 28.6, "start": 27.52, "text": " about it." }, { "end": 31.360000000000003, "start": 28.6, "text": " So desk rejections." }, { "end": 39.760000000000005, "start": 31.360000000000003, "text": " ACs, area chairs, have the ability to desk reject papers that they feel strongly are" }, { "end": 45.400000000000006, "start": 39.760000000000005, "text": " not going to be passable to the reviewers." }, { "end": 50.88, "start": 45.400000000000006, "text": " They did an experiment last year where the ACs were simply supposed to mark submissions" }, { "end": 57.120000000000005, "start": 50.88, "text": " that they would desk reject, and it turned out that ACs aren't very good at estimating" }, { "end": 61.64, "start": 57.12, "text": " which submissions are going to be rejected by the reviewers." }, { "end": 65.42, "start": 61.64, "text": " That might be because there wasn't really anything at stake because it was just kind" }, { "end": 68.24, "start": 65.42, "text": " of a let's see how this works." }, { "end": 73.84, "start": 68.24, "text": " But it is definitely a move to reduce the number of submissions because the field is" }, { "end": 80.56, "start": 73.84, "text": " exploding and we lack reviewing power, reviewing people." }, { "end": 87.44, "start": 80.56, "text": " So this is a move to reduce the number of people that have to review something because" }, { "end": 91.16, "start": 87.44, "text": " there will be fewer papers." }, { "end": 94.68, "start": 91.16, "text": " I don't know if this increases the quality overall." }, { "end": 101.04, "start": 94.68, "text": " If your paper gets desk rejected, there's usually some obvious reason for it why an" }, { "end": 104.24000000000001, "start": 101.04, "text": " AC decided it's not worth it." }, { "end": 109.56, "start": 104.24000000000001, "text": " They probably haven't read it in depth, but there might be some kind of overall structural" }, { "end": 117.98, "start": 109.56, "text": " issue that, or like the introduction has many typos, or you know, look for the obvious things" }, { "end": 121.16, "start": 117.98, "text": " even though your work might be good." }, { "end": 129.56, "start": 121.16, "text": " Second, all authors of a paper have to be able to review if asked to do so." }, { "end": 134.72, "start": 129.56, "text": " And again, this is a stab at this kind of reviewing crisis, but I have mixed feelings" }, { "end": 135.72, "start": 134.72, "text": " about this." }, { "end": 139.6, "start": 135.72, "text": " I really think this is a move in the wrong direction." 
}, { "end": 144.24, "start": 139.6, "text": " It will increase the number of authors because a lot of people have been kind of free riding" }, { "end": 150.32, "start": 144.24, "text": " in that they're submitting papers, but they aren't reviewing other papers even though" }, { "end": 155.04, "start": 150.32, "text": " they would be competent researchers simply because reviewing doesn't get you anything." }, { "end": 158.14, "start": 155.04, "text": " So there's no incentive to do reviews." }, { "end": 162.4, "start": 158.14, "text": " Maybe you can say you're a reviewer, but then there's every incentive to do bad reviews," }, { "end": 166.6, "start": 162.4, "text": " like two line reviews where the first line says you should have compared to my paper," }, { "end": 172, "start": 166.6, "text": " reject like, fuck you if you're a reviewer like this." }, { "end": 179.32, "start": 172, "text": " In any case, like a lot of times, and this hits, for example, like universities where" }, { "end": 184, "start": 179.32, "text": " you maybe work with a master student and the master student does some of the pre-processing" }, { "end": 189.56, "start": 184, "text": " of the data and they don't really have a clue about the machine learning, but they still" }, { "end": 190.56, "start": 189.56, "text": " contribute it." }, { "end": 192.26, "start": 190.56, "text": " So why shouldn't they be an author on the paper?" }, { "end": 196.48, "start": 192.26, "text": " They might even have written that section about the data pre-processing." }, { "end": 202.64, "start": 196.48, "text": " And now they're asked to review entire papers about topics where they're not really familiar" }, { "end": 208.84, "start": 202.64, "text": " with or you have some outside collaborators or, you know, there are so many things wrong." }, { "end": 214.92, "start": 208.84, "text": " I think this attracts the wrong kind of people and by forcing people to do it, you encourage" }, { "end": 220.32, "start": 214.92, "text": " even more, like all these reviewers that would not have reviewed, what will happen is they" }, { "end": 227.07999999999998, "start": 220.32, "text": " will give shitty reviews and you will have even worse quality of reviews as a result." }, { "end": 233.2, "start": 227.07999999999998, "text": " I think this is the wrong move to reduce the number of load per reviewer." }, { "end": 239.12, "start": 233.2, "text": " I'd rather see abolish peer review completely in computer science, in machine learning at" }, { "end": 240.24, "start": 239.12, "text": " least." }, { "end": 245.48, "start": 240.24, "text": " That's my opinion, but that might be a video for another time." }, { "end": 250.07999999999998, "start": 245.48, "text": " I have plans how to replace it another time." }, { "end": 252.48000000000002, "start": 250.08, "text": " Resubmissions have to be clearly marked." }, { "end": 257.96000000000004, "start": 252.48000000000002, "text": " So if your paper is a resubmission of, like if you had already submitted it in the last" }, { "end": 263.92, "start": 257.96000000000004, "text": " 12 months, it's been rejected, you have to say it is a resubmission and the changes" }, { "end": 265.68, "start": 263.92, "text": " you made to the paper." }, { "end": 271.34000000000003, "start": 265.68, "text": " Again with a peer review process that actually works, this would make a lot of sense." 
}, { "end": 276.8, "start": 271.34000000000003, "text": " You can say, well, it got rejected last time and here is how I corrected for what the reviewers" }, { "end": 282.96000000000004, "start": 276.8, "text": " criticized, but with the review quality right now, I mean most of the papers, what are they" }, { "end": 284.88, "start": 282.96000000000004, "text": " going to say?" }, { "end": 291.52000000000004, "start": 284.88, "text": " It got rejected for nefarious reasons because the reviewer had a bad bowel movement that" }, { "end": 293.92, "start": 291.52000000000004, "text": " morning and I didn't really change much." }, { "end": 299.56, "start": 293.92, "text": " So you encourage people to kind of blow out of proportion the changes they made and put" }, { "end": 305.44, "start": 299.56, "text": " a lot of additional unnecessary work on two papers that would actually be already fine." }, { "end": 315.8, "start": 305.44, "text": " So all of these things, they are forcing people to do things and then the incentives of what" }, { "end": 320.22, "start": 315.8, "text": " we want aren't aligned with what we give." }, { "end": 326.15999999999997, "start": 320.22, "text": " So what you'll end up with is lower quality reviews and lower quality work." }, { "end": 330.56, "start": 326.15999999999997, "text": " So the next two points are of a different nature." }, { "end": 337.56, "start": 330.56, "text": " The first one though, that will probably, I mean even if the ACs aren't perfect, that's" }, { "end": 338.56, "start": 337.56, "text": " a good move." }, { "end": 340.72, "start": 338.56, "text": " I like that." }, { "end": 344.16, "start": 340.72, "text": " The fourth point and the fifth point are a bit different." }, { "end": 348.88, "start": 344.16, "text": " The fourth point is there is a new section in CMT apparently where you have to describe" }, { "end": 354.2, "start": 348.88, "text": " the broader societal impact and ethics around your work." }, { "end": 356.68, "start": 354.2, "text": " How will your work influence society?" }, { "end": 358.92, "start": 356.68, "text": " What are positives and negatives?" }, { "end": 360.32, "start": 358.92, "text": " Ethical outcomes?" }, { "end": 361.32, "start": 360.32, "text": " How can it be used?" }, { "end": 366.44, "start": 361.32, "text": " And this is targeted towards things like let's say facial recognition." }, { "end": 371.68, "start": 366.44, "text": " If you develop a new facial recognition algorithm, you may be able to argue, well this could" }, { "end": 378.8, "start": 371.68, "text": " be better used to identify victims in a big crowd." }, { "end": 382.56, "start": 378.8, "text": " There's a mass riot or something and then you don't know who is there." }, { "end": 390.03999999999996, "start": 382.56, "text": " Is my relative one of the people in the mass that gets stomped on?" }, { "end": 396.64000000000004, "start": 390.04, "text": " Or you can also say this potentially helps a dictatorial state to govern their people" }, { "end": 399.8, "start": 396.64000000000004, "text": " because they can now recognize everyone." }, { "end": 402.92, "start": 399.8, "text": " For most papers it will be a bit shaky." }, { "end": 409.76, "start": 402.92, "text": " Like if your third order optimization algorithm achieves a slightly better convergence rate," }, { "end": 412.28000000000003, "start": 409.76, "text": " I'm not sure what's here." 
}, { "end": 423.2, "start": 412.28, "text": " But what I feel is that this is dumb in a way because this just means more work." }, { "end": 427.84, "start": 423.2, "text": " Basically now you have to demonstrate and yeah it says you should discuss positive and" }, { "end": 433.78, "start": 427.84, "text": " negative aspects but in essence everyone will be demonstrating virtue signaling how good" }, { "end": 439.84, "start": 433.78, "text": " their work will be for society and what good can be done and maybe a bit of bad." }, { "end": 446.2, "start": 439.84, "text": " But that can be mitigated and it just pushes into a more PR world." }, { "end": 449.03999999999996, "start": 446.2, "text": " So it goes from the science world into a more PR world." }, { "end": 453.79999999999995, "start": 449.03999999999996, "text": " It means extra work and who are the people that can afford to do extra work?" }, { "end": 455.64, "start": 453.79999999999995, "text": " It's mostly the big companies." }, { "end": 460.67999999999995, "start": 455.64, "text": " They can just put an additional team member on that, maybe even do additional experiments" }, { "end": 467.79999999999995, "start": 460.67999999999995, "text": " to show the societal impact of the work and who will lose out are probably small universities," }, { "end": 469.55999999999995, "start": 467.79999999999995, "text": " independent researchers." }, { "end": 476.28000000000003, "start": 469.56, "text": " And so on that don't have that capacity that simply do their research because it's an interesting" }, { "end": 478, "start": 476.28000000000003, "text": " research question." }, { "end": 483.76, "start": 478, "text": " And for almost every single thing in the world that has an application it will have good" }, { "end": 485.68, "start": 483.76, "text": " and bad applications." }, { "end": 488.56, "start": 485.68, "text": " So yeah mixed feelings." }, { "end": 494.2, "start": 488.56, "text": " So the fifth is you are now supposed if your paper gets accepted to make a video about" }, { "end": 502.12, "start": 494.2, "text": " it and upload the poster basically link to the poster that you would use and also link" }, { "end": 504.76, "start": 502.12, "text": " to slides that you would give your talk with." }, { "end": 510.82, "start": 504.76, "text": " This is to make it more accessible to people that are not at the conference which again" }, { "end": 513.28, "start": 510.82, "text": " I have mixed feelings about." }, { "end": 517.56, "start": 513.28, "text": " Again it pushes it into this more PR realm." }, { "end": 521.16, "start": 517.56, "text": " Talks are already live streamed." }, { "end": 526.68, "start": 521.16, "text": " Most of them are for most of the large conferences and I feel it just gets people one step more" }, { "end": 530.68, "start": 526.68, "text": " away from the actual paper." }, { "end": 537.8399999999999, "start": 530.68, "text": " So it allows people to grandstand and PR up even more of their work because even people" }, { "end": 540.8399999999999, "start": 537.8399999999999, "text": " who don't attend the conference now they're not going to read the paper, they're just" }, { "end": 542.52, "start": 540.8399999999999, "text": " going to watch the video." }, { "end": 548.24, "start": 542.52, "text": " And in the video you can always leave away those things that you would have to like that" }, { "end": 553, "start": 548.24, "text": " a reviewer makes you put in the paper right and in the video you can overbought." 
}, { "end": 554.24, "start": 553, "text": " It's camera ready." }, { "end": 555.72, "start": 554.24, "text": " No one reviews the video." }, { "end": 556.84, "start": 555.72, "text": " You can say whatever you want." }, { "end": 562.2, "start": 556.84, "text": " So it's just where before if you didn't attend the conference I think many people actually" }, { "end": 569.84, "start": 562.2, "text": " did read the paper, watched talks where people could ask questions and now it's just one" }, { "end": 571.2, "start": 569.84, "text": " more PR thing." }, { "end": 578.4000000000001, "start": 571.2, "text": " And again who has time, energy and money to really invest a lot into this?" }, { "end": 584.4000000000001, "start": 578.4000000000001, "text": " It's mainly large companies right if you're small and you're time bound and so on you" }, { "end": 588.12, "start": 584.4000000000001, "text": " might not have equipment or time to do that." }, { "end": 593.36, "start": 588.12, "text": " I am not for hire to do your NURBS videos just saying." }, { "end": 597.7, "start": 593.36, "text": " I don't have time to make these videos really." }, { "end": 602.62, "start": 597.7, "text": " As you can see stellar quality I think there's a bright glare right here." }, { "end": 607.84, "start": 602.62, "text": " So that was it for my opinions on this and I wish you a nice day." }, { "end": 628.24, "start": 607.84, "text": " Bye bye." } ]
9Kec_7WFyp0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Growing Neural Cellular Automata
[ "Science & Technology" ]
[ "machine learning", "deep learning", "cellular automata", "game of life", "conway", "google", "distill", "interactive", "colab", "local", "global", "update" ]
The Game of Life on steroids! This model learns to grow complex patterns in an entirely local way. Each cell is trained to listen to its neighbors and update itself in a way such that, collectively, an overall goal is reached. Fascinating and interactive! https://distill.pub/2020/growing-ca/ https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life Abstract: Most multicellular organisms begin their life as a single egg cell - a single cell whose progeny reliably self-assemble into highly complex anatomies with many organs and tissues in precisely the same arrangement each time. The ability to build their own bodies is probably the most fundamental skill every living creature possesses. Morphogenesis (the process of an organism’s shape development) is one of the most striking examples of a phenomenon called self-organisation. Cells, the tiny building blocks of bodies, communicate with their neighbors to decide the shape of organs and body plans, where to grow each organ, how to interconnect them, and when to eventually stop. Understanding the interplay of the emergence of complex outcomes from simple rules and homeostatic 1 feedback loops is an active area of research. What is clear is that evolution has learned to exploit the laws of physics and computation to implement the highly robust morphogenetic software that runs on genome-encoded cellular hardware. This process is extremely robust to perturbations. Even when the organism is fully developed, some species still have the capability to repair damage - a process known as regeneration. Some creatures, such as salamanders, can fully regenerate vital organs, limbs, eyes, or even parts of the brain! Morphogenesis is a surprisingly adaptive process. Sometimes even a very atypical development process can result in a viable organism - for example, when an early mammalian embryo is cut in two, each half will form a complete individual - monozygotic twins! The biggest puzzle in this field is the question of how the cell collective knows what to build and when to stop. The sciences of genomics and stem cell biology are only part of the puzzle, as they explain the distribution of specific components in each cell, and the establishment of different types of cells. While we know of many genes that are required for the process of regeneration, we still do not know the algorithm that is sufficient for cells to know how to build or remodel complex organs to a very specific anatomical end-goal. Thus, one major lynch-pin of future work in biomedicine is the discovery of the process by which large-scale anatomy is specified within cell collectives, and how we can rewrite this information to have rational control of growth and form. It is also becoming clear that the software of life possesses numerous modules or subroutines, such as “build an eye here”, which can be activated with simple signal triggers. Discovery of such subroutines and a mapping out of the developmental logic is a new field at the intersection of developmental biology and computer science. An important next step is to try to formulate computational models of this process, both to enrich the conceptual toolkit of biologists and to help translate the discoveries of biology into better robotics and computational technology. Imagine if we could design systems of the same plasticity and robustness as biological life: structures and machines that could grow and repair themselves. 
Such technology would transform the current efforts in regenerative medicine, where scientists and clinicians seek to discover the inputs or stimuli that could cause cells in the body to build structures on demand as needed. To help crack the puzzle of the morphogenetic code, and also exploit the insights of biology to create self-repairing systems in real life, we try to replicate some of the desired properties in an in silico experiment. Authors: Alexander Mordvintsev, Ettore Randazzo, Eyvind Niklasson, Michael Levin Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there. Today I thought we would be looking at growing neural cellular automata, which is an article on distill.pub, which I found pretty neat. So this is kind of an interactive article. If you don't know distill.pub, check it out. It is a cool new concept as an alternative to the classical journals or the conference system. So what it allows you to do is to kind of write articles that are a bit more interactive, a bit more engaging and don't have the... There's no PDFs, there's no pages, there are animations and so on. So I thought we'd be looking at this article today, which is kind of a growing neural cellular automata. So if you don't know what cellular automata are, this is a very kind of old concept. The most famous one is called the game of life, where you have these cells. Here you can see every pixel is a cell and they follow some kind of update rule. And usually it's the update rule, something like if my neighbor is alive, I'm going to be alive as well in the next time step. Or if enough neighbors are alive and if only very few neighbors are alive, I'm going to die. So this gives rise to these kinds of patterns. And here the same is done with color. And the update rules are a bit more complicated. So basically, ah, traveler. Oh, nice. Okay. So in the game of life, if you play it, the most prestigious thing to get is are these kind of travelers. I've not... This is the first time I've managed to do this in this thing. So what does it do? So each pixel here is kind of an autonomous thing that is only allowed to look at its neighbors in order to decide whether or not in the next time step it is going to be alive. Look, it's like incorporating again. So each cell looks at its neighbors and then decides what its next state will be. And here it's not only alive or dead. Dead would be white and alive would be anything else. But it is also, I guess this white is... It is also the color. So each cell decides on what color it should have. And then this is a live thing. So it kind of reproduces, right? You can see if I start it new. If you double click here, it grows from somewhere else. And this is completely local. So these cells really only look at their neighbors. That's the special part, right? They don't look at the global structure. It's not like a GAN that can look at the entire picture and decide what's still missing. What these can also do if you destroy part of it, they can kind of grow back just, again, just out of local update rules at the level of the individual cells and their neighbors. They're trained to do these big structures. So let's look at how they do it. So basically, here's how they model a cell. And let's go over here. So each cell, as I said, is made up of 16 channels. And here it's modeled as three by three, but I think each cell is really one pixel. And each cell is allowed to look at its eight neighbors, right? So each cell is allowed to look at its eight neighbors across 16 different channels. And the 16 channels here mean the first three are RGB. So this is the actual color that is seen. Then there is an alive or dead channel. So what they call an alpha channel. So if this channel is high, the cell is considered alive. Otherwise, it is considered dead and not part of the pattern. So a cell can come alive or die, depending on its neighbors. And then the rest, the rest 12 channels are what they call hidden channels. So the cell is allowed to encode some hidden state there. 
So there's each cell is represented by the 16 dimensional vector, which is not much right. And then each cell is allowed to look at three things. So from the bottom here, it's allowed to look at its own state, so at its own 16 dimensional vectors, and it is allowed to look at its neighbors. And it does this by doing a convolution with a sobel filter. And the sobel filter is simply a fixed filter that you do a three by three convolution with, as you can see here, is basically a gradient filter. So basically measures the difference between what's to the left of the cell and what's to the right of the cell. And here in the sobel y direction, the same in the y direction. So it's basically allowed to look at gradients in states of its neighbors. This is modeled after real cells kind of looking at chemical gradients in their neighborhoods. So this is all this, this is all that the cell has to decide what it's supposed to do next, right. And what we want is we want that each individual cell only looking at its neighbors produces in total, they will produce these kind of very complex pattern. So the update rule is the following, you convolute with the sobel filters and you take the cell identity, you put this all into a vector, you put it through a very, very small neural network. So this is one dense layer, one relu, and then another dense layer to get the next 16 dimensional vector, which is the next state. And that defines your update rules. That doesn't really define the next state that defines the Delta to the next state, kind of like a residual neural network. So basically, which cells need to come alive in the next time step, which cells need to die and how are they to change their colors, right. And then you get the output of the next step, right. So that's, that's basically the entire thing. So all that is learned here is the the update rule of the neural network, right. So basically, the neural network decides, it looks at a cell and its neighbors and decides what the information in the cell in the next step should be, right. And you do this for multiple time steps. That's I actually want to go down here, you do this for multiple time steps, the initial state is simply one cell that is alive here in the middle, everything else is dead, this cell is alive and black, you do this for many steps, right. And then at some point, you get an output. And you compare the output to your desired output, you compute a loss that is differentiable. And because your update rule is differentiable, and your loss is differentiable, you can backprop through time to the original pattern here. And you can basically learn this update rule by backproping through time. This is a bit like an LSTM. And if you see in the architecture here, I think this residual connection is really the key to making this work over time. Because usually, I would not expect something like this to easily emerge over time because you have the problem of vanishing and exploding gradients. And you have no way of mitigating this problem here, this problem here in this simple neural network. But in any case, they backprop through time here. So each of these update steps, which again, this isn't one neural network with many layers, this is the same neural network applied over and over and over and over again, and then there is a loss computed. So basically, the gradients will accumulate over these steps, and they tell the network what it needs to adjust to go from this one single black pixel to this final desired state. 
If you do this over and over again, you learn things, you learn a update rule that will give rise to that pattern, hopefully. Now, here is a kind of an illustration of this alive and dead thing. So what they do is they consider cells that have an alpha channel, one of these channels called alpha, they have an alpha channel above 0.1, it's considered alive, right, and part of the loss. Then the neighbors, the neighbors of these cells that are below 0.1, but are neighboring a cell that is mature, alive, they're called growing, they're also part of the loss, right. So simply by being close to something, someone that is alive, a cell that is alive, you are considered alive as well, but your neighbors aren't, right, only the neighbors of really alive. So there's really alive, kind of alive, and then there is dead. And dead, the meaning of dead here, the gray ones, is they're not, they won't become part of the pattern, part of the loss, right, they're dead. All right, so what will this get you initially? So here is an animation, if they train this just like that, just back up through time with a target pattern, and then they let it run, you see these patterns actually emerge. So that's pretty cool. But then if you let them run for longer than they've been trained, you basically have no guarantees on what's going to happen. Like these update rules are simply trained to achieve the pattern within a certain number of steps, right. If you run for more than that, and apply the update rules for longer than that, you you have like there's little like you have no guarantee what's going to happen, these update rules will simply continue, as you can see here and produce some weird stuff. So they are trying to fix this. So what they do is basically they train for longer, but they do it in a in a kind of different way. So at each at each step of training, and as a step, I mean, a batch over these number of time steps. So so they sample a batch, initially, it's just all black pixels, right, as we see above. And then they optimize for these number of time steps. And then they're at the end. So what they do is they don't always start from the black pixel. But sometimes they also start from a previously seen end state. So basically, they take the end state of a previous training run, and then they just continue from that instead of starting from the initial point. And you see after some training, they get better and better. So initially, you see the thing on the left here. The thing on the left here being a starting state. And then it progressively gets better. So basically, by starting from end states of other things, you learn to. So if the end state of the other thing isn't very good, you basically learn to go to the good pattern to the pattern you want. But of course, over time, there's going to be more and more of these end states that you train from that are already pretty close to the pattern you want. And so then what that means is you learn to reproduce the pattern. So you are already at a good point, you learn to stay at that good point. And then that enables you to basically learn update rules that if you're not at the pattern you want, they go towards the pattern you want. But also if you run for longer, if you are already are at the pattern you want, then you stay at the pattern you want. So that's what we basically saw in the very initial demonstration where you could, this is a live demonstration like this thing up here, this is a live, this is running, right. 
And you see the update rules are continuously applied, and they basically stay at the pattern where they are. That is also learned, because of this protocol where you train from end states as well as from beginning states. So the next thing: what I'm doing here is destroying part of the pattern, and it will kind of regrow, right, you see that here. So far we've only learned to go from a single black pixel to the pattern, but now we also want to learn to regrow when destroyed, because, as you can see, this is modeled after living tissue. So here you can see parts are cut away, and then the cells try to regrow. Initially, when you just train them as before, they exhibit some of that property, but not very satisfyingly in some cases. So what they do is train not only from end states, like we saw before, but also on some training samples that are simply the pattern with a part destroyed. As you can see in some of these samples, like these here, in each such sample they cut out part of the pattern and train the update rules to regrow that part. That now gives you the ability, if you damage the pattern, to pretty consistently regrow it, as you can see here. They also train for rotation, which is non-trivial if you have these kinds of pixel-based models, but I want to skip that because I want to keep it kind of short here. So the entire goal of this is to model the behavior of natural cells, because natural cells don't have an overarching view, they only have the view of their neighbors, right, and they are able to grow into very complex structures. I invite you to give this a try. The distill.pub journal is very cool. It's very interactive, you can play around with it, you can reproduce things in a Colab. And yeah, shout out to the authors here, Alexander Mordvintsev, Ettore Randazzo, Eyvind Niklasson and Michael Levin. Yeah, that was it from me. Thanks for watching and bye bye.
[ { "end": 8.16, "start": 0, "text": " Hi there. Today I thought we would be looking at growing neural cellular automata, which" }, { "end": 16.48, "start": 8.16, "text": " is an article on distill.pub, which I found pretty neat. So this is kind of an interactive" }, { "end": 23, "start": 16.48, "text": " article. If you don't know distill.pub, check it out. It is a cool new concept as an alternative" }, { "end": 31.32, "start": 23, "text": " to the classical journals or the conference system. So what it allows you to do is to" }, { "end": 41.519999999999996, "start": 31.32, "text": " kind of write articles that are a bit more interactive, a bit more engaging and don't" }, { "end": 48, "start": 41.519999999999996, "text": " have the... There's no PDFs, there's no pages, there are animations and so on. So I thought" }, { "end": 55.12, "start": 48, "text": " we'd be looking at this article today, which is kind of a growing neural cellular automata." }, { "end": 61.76, "start": 55.12, "text": " So if you don't know what cellular automata are, this is a very kind of old concept. The" }, { "end": 66.84, "start": 61.76, "text": " most famous one is called the game of life, where you have these cells. Here you can see" }, { "end": 73.2, "start": 66.84, "text": " every pixel is a cell and they follow some kind of update rule. And usually it's the" }, { "end": 78.2, "start": 73.2, "text": " update rule, something like if my neighbor is alive, I'm going to be alive as well in" }, { "end": 84.28, "start": 78.2, "text": " the next time step. Or if enough neighbors are alive and if only very few neighbors are" }, { "end": 88.88, "start": 84.28, "text": " alive, I'm going to die. So this gives rise to these kinds of patterns. And here the same" }, { "end": 96.96000000000001, "start": 88.88, "text": " is done with color. And the update rules are a bit more complicated. So basically, ah," }, { "end": 105.67999999999999, "start": 96.96, "text": " traveler. Oh, nice. Okay. So in the game of life, if you play it, the most prestigious" }, { "end": 112.6, "start": 105.67999999999999, "text": " thing to get is are these kind of travelers. I've not... This is the first time I've managed" }, { "end": 120.28, "start": 112.6, "text": " to do this in this thing. So what does it do? So each pixel here is kind of an autonomous" }, { "end": 125.75999999999999, "start": 120.28, "text": " thing that is only allowed to look at its neighbors in order to decide whether or not" }, { "end": 133.52, "start": 125.76, "text": " in the next time step it is going to be alive. Look, it's like incorporating again. So each" }, { "end": 139.28, "start": 133.52, "text": " cell looks at its neighbors and then decides what its next state will be. And here it's" }, { "end": 146.96, "start": 139.28, "text": " not only alive or dead. Dead would be white and alive would be anything else. But it is" }, { "end": 153.08, "start": 146.96, "text": " also, I guess this white is... It is also the color. So each cell decides on what color" }, { "end": 161.32000000000002, "start": 153.08, "text": " it should have. And then this is a live thing. So it kind of reproduces, right? You can see" }, { "end": 167.16000000000003, "start": 161.32000000000002, "text": " if I start it new. If you double click here, it grows from somewhere else. And this is" }, { "end": 172.08, "start": 167.16000000000003, "text": " completely local. So these cells really only look at their neighbors. That's the special" }, { "end": 176.8, "start": 172.08, "text": " part, right? 
They don't look at the global structure. It's not like a GAN that can look" }, { "end": 183.48000000000002, "start": 176.8, "text": " at the entire picture and decide what's still missing. What these can also do if you destroy" }, { "end": 190.56, "start": 183.48000000000002, "text": " part of it, they can kind of grow back just, again, just out of local update rules at the" }, { "end": 196.76000000000002, "start": 190.56, "text": " level of the individual cells and their neighbors. They're trained to do these big structures." }, { "end": 205.78, "start": 196.76000000000002, "text": " So let's look at how they do it. So basically, here's how they model a cell. And let's go" }, { "end": 213.56, "start": 205.78, "text": " over here. So each cell, as I said, is made up of 16 channels. And here it's modeled as" }, { "end": 220.84, "start": 213.56, "text": " three by three, but I think each cell is really one pixel. And each cell is allowed to look" }, { "end": 226.8, "start": 220.84, "text": " at its eight neighbors, right? So each cell is allowed to look at its eight neighbors" }, { "end": 235.32, "start": 226.8, "text": " across 16 different channels. And the 16 channels here mean the first three are RGB. So this" }, { "end": 240.95999999999998, "start": 235.32, "text": " is the actual color that is seen. Then there is an alive or dead channel. So what they" }, { "end": 248.35999999999999, "start": 240.95999999999998, "text": " call an alpha channel. So if this channel is high, the cell is considered alive. Otherwise," }, { "end": 254.51999999999998, "start": 248.35999999999999, "text": " it is considered dead and not part of the pattern. So a cell can come alive or die," }, { "end": 259.68, "start": 254.51999999999998, "text": " depending on its neighbors. And then the rest, the rest 12 channels are what they call hidden" }, { "end": 267.44, "start": 259.68, "text": " channels. So the cell is allowed to encode some hidden state there. So there's each cell" }, { "end": 272.6, "start": 267.44, "text": " is represented by the 16 dimensional vector, which is not much right. And then each cell" }, { "end": 278.78000000000003, "start": 272.6, "text": " is allowed to look at three things. So from the bottom here, it's allowed to look at its" }, { "end": 285.52, "start": 278.78000000000003, "text": " own state, so at its own 16 dimensional vectors, and it is allowed to look at its neighbors." }, { "end": 291.03999999999996, "start": 285.52, "text": " And it does this by doing a convolution with a sobel filter. And the sobel filter is simply" }, { "end": 298.2, "start": 291.03999999999996, "text": " a fixed filter that you do a three by three convolution with, as you can see here, is" }, { "end": 305.56, "start": 298.2, "text": " basically a gradient filter. So basically measures the difference between what's to" }, { "end": 309.96, "start": 305.56, "text": " the left of the cell and what's to the right of the cell. And here in the sobel y direction," }, { "end": 316.56, "start": 309.96, "text": " the same in the y direction. So it's basically allowed to look at gradients in states of" }, { "end": 324.44, "start": 316.56, "text": " its neighbors. This is modeled after real cells kind of looking at chemical gradients" }, { "end": 330.64, "start": 324.44, "text": " in their neighborhoods. So this is all this, this is all that the cell has to decide what" }, { "end": 337.06, "start": 330.64, "text": " it's supposed to do next, right. 
And what we want is we want that each individual cell" }, { "end": 342.44, "start": 337.06, "text": " only looking at its neighbors produces in total, they will produce these kind of very" }, { "end": 348.9, "start": 342.44, "text": " complex pattern. So the update rule is the following, you convolute with the sobel filters" }, { "end": 354.28, "start": 348.9, "text": " and you take the cell identity, you put this all into a vector, you put it through a very," }, { "end": 359.84000000000003, "start": 354.28, "text": " very small neural network. So this is one dense layer, one relu, and then another dense" }, { "end": 365.44, "start": 359.84000000000003, "text": " layer to get the next 16 dimensional vector, which is the next state. And that defines" }, { "end": 370.28, "start": 365.44, "text": " your update rules. That doesn't really define the next state that defines the Delta to the" }, { "end": 376.6, "start": 370.28, "text": " next state, kind of like a residual neural network. So basically, which cells need to" }, { "end": 381.24, "start": 376.6, "text": " come alive in the next time step, which cells need to die and how are they to change their" }, { "end": 389.1, "start": 381.24, "text": " colors, right. And then you get the output of the next step, right. So that's, that's" }, { "end": 395.64000000000004, "start": 389.1, "text": " basically the entire thing. So all that is learned here is the the update rule of the" }, { "end": 400.48, "start": 395.64000000000004, "text": " neural network, right. So basically, the neural network decides, it looks at a cell and its" }, { "end": 406.88, "start": 400.48, "text": " neighbors and decides what the information in the cell in the next step should be, right." }, { "end": 411.96000000000004, "start": 406.88, "text": " And you do this for multiple time steps. That's I actually want to go down here, you do this" }, { "end": 417.28000000000003, "start": 411.96000000000004, "text": " for multiple time steps, the initial state is simply one cell that is alive here in the" }, { "end": 422.59999999999997, "start": 417.28, "text": " middle, everything else is dead, this cell is alive and black, you do this for many steps," }, { "end": 428.84, "start": 422.59999999999997, "text": " right. And then at some point, you get an output. And you compare the output to your" }, { "end": 434.28, "start": 428.84, "text": " desired output, you compute a loss that is differentiable. And because your update rule" }, { "end": 442.88, "start": 434.28, "text": " is differentiable, and your loss is differentiable, you can backprop through time to the original" }, { "end": 447.76, "start": 442.88, "text": " pattern here. And you can basically learn this update rule by backproping through time." }, { "end": 453.64, "start": 447.76, "text": " This is a bit like an LSTM. And if you see in the architecture here, I think this residual" }, { "end": 459.96, "start": 453.64, "text": " connection is really the key to making this work over time. Because usually, I would not" }, { "end": 465.04, "start": 459.96, "text": " expect something like this to easily emerge over time because you have the problem of" }, { "end": 470.68, "start": 465.04, "text": " vanishing and exploding gradients. And you have no way of mitigating this problem here," }, { "end": 480.84000000000003, "start": 470.68, "text": " this problem here in this simple neural network. But in any case, they backprop through time" }, { "end": 487.6, "start": 480.84000000000003, "text": " here. 
So each of these update steps, which again, this isn't one neural network with" }, { "end": 493.24, "start": 487.6, "text": " many layers, this is the same neural network applied over and over and over and over again," }, { "end": 498.88, "start": 493.24, "text": " and then there is a loss computed. So basically, the gradients will accumulate over these steps," }, { "end": 504.32, "start": 498.88, "text": " and they tell the network what it needs to adjust to go from this one single black pixel" }, { "end": 511.24, "start": 504.32, "text": " to this final desired state. If you do this over and over again, you learn things, you" }, { "end": 518.96, "start": 511.24, "text": " learn a update rule that will give rise to that pattern, hopefully. Now, here is a kind" }, { "end": 525.4, "start": 518.96, "text": " of an illustration of this alive and dead thing. So what they do is they consider cells" }, { "end": 531.28, "start": 525.4, "text": " that have an alpha channel, one of these channels called alpha, they have an alpha channel above" }, { "end": 541.6, "start": 531.28, "text": " 0.1, it's considered alive, right, and part of the loss. Then the neighbors, the neighbors" }, { "end": 550.28, "start": 541.6, "text": " of these cells that are below 0.1, but are neighboring a cell that is mature, alive," }, { "end": 554.68, "start": 550.28, "text": " they're called growing, they're also part of the loss, right. So simply by being close" }, { "end": 560.64, "start": 554.68, "text": " to something, someone that is alive, a cell that is alive, you are considered alive as" }, { "end": 565.88, "start": 560.64, "text": " well, but your neighbors aren't, right, only the neighbors of really alive. So there's" }, { "end": 572.3599999999999, "start": 565.88, "text": " really alive, kind of alive, and then there is dead. And dead, the meaning of dead here," }, { "end": 578.12, "start": 572.3599999999999, "text": " the gray ones, is they're not, they won't become part of the pattern, part of the loss," }, { "end": 590.36, "start": 578.12, "text": " right, they're dead. All right, so what will this get you initially? So here is an animation," }, { "end": 595.68, "start": 590.36, "text": " if they train this just like that, just back up through time with a target pattern, and" }, { "end": 600.6, "start": 595.68, "text": " then they let it run, you see these patterns actually emerge. So that's pretty cool. But" }, { "end": 606.28, "start": 600.6, "text": " then if you let them run for longer than they've been trained, you basically have no guarantees" }, { "end": 612.68, "start": 606.28, "text": " on what's going to happen. Like these update rules are simply trained to achieve the pattern" }, { "end": 617.4399999999999, "start": 612.68, "text": " within a certain number of steps, right. If you run for more than that, and apply the" }, { "end": 624.0799999999999, "start": 617.4399999999999, "text": " update rules for longer than that, you you have like there's little like you have no" }, { "end": 629.06, "start": 624.0799999999999, "text": " guarantee what's going to happen, these update rules will simply continue, as you can see" }, { "end": 635.4399999999999, "start": 629.06, "text": " here and produce some weird stuff. So they are trying to fix this. So what they do is" }, { "end": 639.7600000000001, "start": 635.44, "text": " basically they train for longer, but they do it in a in a kind of different way. 
So" }, { "end": 649.7600000000001, "start": 639.7600000000001, "text": " at each at each step of training, and as a step, I mean, a batch over these number of" }, { "end": 656.5200000000001, "start": 649.7600000000001, "text": " time steps. So so they sample a batch, initially, it's just all black pixels, right, as we see" }, { "end": 663.44, "start": 656.5200000000001, "text": " above. And then they optimize for these number of time steps. And then they're at the end." }, { "end": 668.7600000000001, "start": 663.44, "text": " So what they do is they don't always start from the black pixel. But sometimes they also" }, { "end": 678.72, "start": 668.7600000000001, "text": " start from a previously seen end state. So basically, they take the end state of a previous" }, { "end": 684.7600000000001, "start": 678.72, "text": " training run, and then they just continue from that instead of starting from the initial" }, { "end": 693.32, "start": 684.7600000000001, "text": " point. And you see after some training, they get better and better. So initially, you see" }, { "end": 701.12, "start": 693.32, "text": " the thing on the left here. The thing on the left here being a starting state. And then" }, { "end": 708.1600000000001, "start": 701.12, "text": " it progressively gets better. So basically, by starting from end states of other things," }, { "end": 715.6800000000001, "start": 708.1600000000001, "text": " you learn to. So if the end state of the other thing isn't very good, you basically learn" }, { "end": 722.34, "start": 715.6800000000001, "text": " to go to the good pattern to the pattern you want. But of course, over time, there's going" }, { "end": 726.94, "start": 722.34, "text": " to be more and more of these end states that you train from that are already pretty close" }, { "end": 734.8000000000001, "start": 726.94, "text": " to the pattern you want. And so then what that means is you learn to reproduce the pattern." }, { "end": 740.44, "start": 734.8000000000001, "text": " So you are already at a good point, you learn to stay at that good point. And then that" }, { "end": 747.6, "start": 740.44, "text": " enables you to basically learn update rules that if you're not at the pattern you want," }, { "end": 753, "start": 747.6, "text": " they go towards the pattern you want. But also if you run for longer, if you are already" }, { "end": 759.5, "start": 753, "text": " are at the pattern you want, then you stay at the pattern you want. So that's what we" }, { "end": 765.36, "start": 759.5, "text": " basically saw in the very initial demonstration where you could, this is a live demonstration" }, { "end": 771.6, "start": 765.36, "text": " like this thing up here, this is a live, this is running, right. And you see the update" }, { "end": 776.6800000000001, "start": 771.6, "text": " rules data, they are continuously applied, they basically stay at the pattern where they" }, { "end": 782.4, "start": 776.68, "text": " are. And that is also that is learned because of this protocol that you train from end states" }, { "end": 791.88, "start": 782.4, "text": " as well as from beginning states. So the next thing is what I'm doing here is I can destroy" }, { "end": 799, "start": 791.88, "text": " part of the pattern, and it will kind of regrow right you see that here. 
So this is also a" }, { "end": 804.0799999999999, "start": 799, "text": " part so for now we've also only learned to go from a single pixel like here from a black" }, { "end": 811.8000000000001, "start": 804.08, "text": " pixel to the pattern. But now we also want to learn to go to regrow when destroyed because" }, { "end": 823.12, "start": 811.8000000000001, "text": " that is, you can see this is modeled after kind of live tissue. So here you can see the" }, { "end": 834, "start": 823.12, "text": " parts are cut away and then the cells try to regrow. So this is I think initially, this" }, { "end": 840.04, "start": 834, "text": " is initially when you just train them, they exhibit some of that property, but not like" }, { "end": 847.32, "start": 840.04, "text": " very satisfying in some cases. So what they do is they train not only do they use end" }, { "end": 854.36, "start": 847.32, "text": " states, like we saw before, but also some of their training samples are simply the pattern" }, { "end": 861.04, "start": 854.36, "text": " destroyed a bit. So as you can see in some of these samples, like these here, they in" }, { "end": 867.92, "start": 861.04, "text": " each sample, they kind of cut out part of the sample and they train the update rules" }, { "end": 875.76, "start": 867.92, "text": " to regrow that part that gives you that now gives you the ability to if you damage to" }, { "end": 884.52, "start": 875.76, "text": " pretty consistently regrow the pattern, as you can see here. And they also train for" }, { "end": 891.72, "start": 884.52, "text": " rotation, which is non trivial if you have these kind of pixel based, pixel based models." }, { "end": 898.64, "start": 891.72, "text": " But I want to jump that because I want to keep it kind of short here. So the entire" }, { "end": 905.76, "start": 898.64, "text": " goal of this is to kind of model the behavior of natural cells, because the natural cells," }, { "end": 910.92, "start": 905.76, "text": " they don't have an overarching view, they only have the view of their neighbors, right," }, { "end": 918.4399999999999, "start": 910.92, "text": " and they are able to grow into very complex structures. I invite you to give this a try." }, { "end": 923.5999999999999, "start": 918.4399999999999, "text": " The distilled out pop journal is very cool. It's very interactive, you can play around" }, { "end": 931.1999999999999, "start": 923.5999999999999, "text": " with it, you can reproduce things in a collab. And yeah, shout out to the authors here," }, { "end": 944.12, "start": 931.2, "text": " Alexander Morbintsev, Ettore Randazzo, Evan Nicholson and Michael Levin. Yeah, that was" }, { "end": 961.48, "start": 944.12, "text": " it from me. Thanks for watching and bye bye." } ]
tC01FRB0M7w
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Turing-NLG, DeepSpeed and the ZeRO optimizer
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "machine translation", "arxiv", "attention mechanism", "attention", "transformer", "seq2seq", "bert", "long sequence", "memory", "gpt-2", "Megatron", "Microsoft", "distributed", "parallelism" ]
Microsoft has trained a 17-billion parameter language model that achieves state-of-the-art perplexity. This video takes a look at the ZeRO optimizer that enabled this breakthrough. ZeRO allows you to do model- and data-parallelism without having huge cuts in training speed. https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/ https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/ https://github.com/microsoft/DeepSpeed https://arxiv.org/abs/1910.02054 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi everyone, today we're going to look at Turing-NLG, a 17 billion parameter language model by Microsoft. The latest and greatest of language modeling by Microsoft. What is this? It is a language model. A language model is basically a model that learns to produce language, given language. So if you start a sentence, it's supposed to finish the sentence. If you start a paragraph, it's supposed to finish the paragraph. That's a language model. Ultimately you can make it do different things, like answer questions, have a conversation with you, anything to do with understanding language. The special thing about this one is that it's ginormous. So if you look at the scale of language models, BERT was quite a large thing back in its day. Ye Olde BERT, you can see here, has about 340 million parameters. Now I have to say, all of these language models are transformers; this is kind of the state of the art today, so all of these are transformer-based models. Then GPT-2 here, you can see, was the model that was deemed too dangerous to be released into the world. That stands at 1.5 billion parameters. Megatron-LM by Nvidia is at 8.3 billion, and now we are at 17 billion parameters for this language model. And it is a bit better. People just throw more and more and more resources at this language problem. So what can you do with it? You can of course do language modeling. What happens is you take a bunch of text, like all of Wikipedia and all of the internet and all of Reddit and so on, and you let the model train on it to learn to produce that sort of language. And then you can measure it, for example its perplexity on a validation set, and Turing-NLG is currently state-of-the-art on that. It can also do, for example, question answering. So you can ask it a question and give it a passage relevant to that question, and it will then tell you the answer it deduced from that passage given the question, as you can see here. What is more interesting is that a usual QA system will point to the passage: it will point to the words "Tristan Prettyman". Whereas with a generative model like this one, you can make it actually output the answer as a sentence, so it will generate the text "Jason Mraz was engaged to Tristan Prettyman". If you ask a question without giving it a context and just ask it to generate an answer, it will do so as well. I don't know if these answers are cherry-picked, but they call this zero-shot question answering. So if you ask "When did World War II end?", it can output "World War II ended in 1945", simply out of regularities it detected in the training data. So that's what I'm kind of wondering: at what point do these models have so many parameters that they simply reproduce the training data? I mean, clearly some article from the training data is about World War II, or many are, and it simply learned that following a question like "when did World War II end" it needs to answer with the appropriate passage. I'm not sure that is a proper measure of language understanding if you simply can bake more and more of the training data into these many, many parameters, but I'm not the judge of that here. It can do it very well. So yeah, what I'm actually more interested in is this thing called the ZeRO optimizer that they use to train the model. The model itself is just a transformer, just a big, big transformer model. There is nothing really special about the model except that it is larger than the last model and therefore a bit better.
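As a quick aside on the perplexity metric mentioned above: perplexity is just the exponential of the average per-token cross-entropy, so a minimal sketch of computing it could look like this (model, tokenization and batching are omitted and assumed to exist).

```python
import math
import torch.nn.functional as F

def perplexity(logits, targets):
    """logits: (num_tokens, vocab_size); targets: (num_tokens,) of token ids."""
    nll = F.cross_entropy(logits, targets, reduction="mean")  # average negative log-likelihood
    return math.exp(nll.item())
```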
What is interesting is that this would have been pretty much impossible to train if it weren't for the ZeRO optimizer of the DeepSpeed library, and Microsoft has released this DeepSpeed library. It's compatible for now with PyTorch. You can check this out; I'll put a link into the description. And I want to dive into this a bit. There's a paper, it's by Samyam Rajbhandari et al. from Microsoft. The paper describes the optimizer in detail, but it's not very visual; that's why we're going to the blog post. You can see it gives many speed-ups over the previous Megatron-LM model that Nvidia trained just using what Nvidia has. Nvidia has machines that are interconnected, within the machine, with very fast buses between GPUs. But this ZeRO optimizer can now also go over the network and still make it pretty fast. Let's explore that a bit. I have copied this over here; we'll look at how the ZeRO optimizer works. Usually, what you do if you have multiple GPUs is something like this. This is called data parallelism. You have a model, and the model in this case fits on your GPU, it fits on a single GPU. The blue thing here is the model. I'll actually draw this. The model is a neural network, so it has a bunch of layers: layer, layer, layer, layer. What you want to do is pass data forward, into some loss function at the end here, and then backward again. That's basically what you need to do: you need to pass the data forward and backward in order to do backpropagation training. If this all fits into one box, that's completely fine. If this fits into one machine, cool. We can just put many batches of data through, batch one, batch two, batch three and so on, and train the model. If you want to speed this up and you have lots of data, you can do what's called data parallelism. What you can do is take a second machine, or many of those, and replicate the model. These two models here are exactly the same. Then you take your data and split it up: you take double the amount of data and you put one batch of data through the top part and the other through the bottom part. You do your forward passes on the machines and you do your backward passes. Then what you want to do is sync between the machines what they learned from the data. Each machine has a different set of data points, so each machine calculates its own parameter updates; it learns from the data it has, and then they communicate to stay in sync, because this here and this here should be the same; it's the same model.
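A hedged sketch of that data-parallel sync, written by hand rather than with PyTorch's built-in wrappers: every replica computes gradients on its own batch, the gradients are averaged across replicas, and then every replica applies the identical update. The process-group setup, model, and loss function are assumed to exist.

```python
import torch.distributed as dist

def data_parallel_step(model, optimizer, batch, loss_fn, world_size):
    loss = loss_fn(model(batch["x"]), batch["y"])  # each rank sees a different batch
    optimizer.zero_grad()
    loss.backward()
    for p in model.parameters():                   # average gradients across replicas
        dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
        p.grad /= world_size
    optimizer.step()                               # identical update keeps replicas in sync
    return loss.item()
```

In practice you would use torch.nn.parallel.DistributedDataParallel, which does this gradient averaging for you and overlaps the communication with the backward pass.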
What you want to do is you want to pack some of the model onto your first machine and then take the other part of the model and pack it onto another machine. You separate the model and put it on different machines. If you have a batch of data what you have to do is you pass it pass it pass it forward propagate as you regularly would but then you have an intermediate result. You send that to the next machine and you forward propagate that. At the end here you have a loss. You want to back propagate regularly through this machine. You have an intermediate result of back propagation. Send it over the network and back prop all the way through the model. That's how you can train a model that is too large for one machine if you have multiple machines. The problem here of course is this part. Just as you had to keep in sync the model before, now your communication problem becomes one of... You have to send the intermediate stages to that model and you have to send the intermediate stage of the back propagation back to that part of the model. While this part is working this part is idling away. The network overhead is just very costly. Especially if your model is so large it can't even fit into one of these single boxes. This is very problematic here. It's still doable. But what the zero optimizer does is it does both data and model parallelism. It can train models that are too large for a single machine. It can do data parallelism at the same time. Basically everything is working all the time. There is not much wasted computation. The communication is efficient and so on. It's really a technical achievement. It's not so much a scientific advance. It's really a technical achievement this optimizer. We'll shortly go through. There is a kind of an animation on the website but it's super slow. I think this might be the first time that I will be faster at explaining something than a video. Let's see here. What you do is... Let's just consider these three GPUs. Before that it would all fit on one machine. But now let's say you don't actually have that much memory. You don't have these giant empty blocks here. You just have a bit of that. So you have to split your model. The blue parts here are your model. These are model parameters. The orange part here is memory you need to store gradients. You need as many gradients as you have model parameters. Because you do gradient descent. The green stuff here are what's called optimizer parameters. Now if you just have SGD these would be non-existent. But if you have something like AdaGrad or Atom they have additional parameters for each model parameter that they need to keep track of. So these are stored here. There can be significant overhead. There's also like a floating point 3216 conversion going on here. Don't want to go into that. So you split your model onto these three machines. Let's say that's your entire model. Your model is six blocks wide. You need to forward propagate now through everything. So here is what Xero does. I think it's pretty cool. What we need to do is we have these three different batches of data and we want to forward propagate them all through the model. Through the same model at the same time. As if the model were actually stored on all these machines. Like if all of these machines had the entire model. And we can do a bit of communication. So what we do first is... This one's easy. Data zero through the first two layers here is easy. Because we have them. 
So bang, you go through the first layers and you get an intermediate result here and here. But how do we propagate data one through the first layer? We can't send data one over here; that would be too expensive, and the whole point would be lost. We want to actually compute data one on this GPU at the same time. So what we do is, before we start, we actually communicate these two blocks here to GPU one: we send these parameters around and fill them in here, we send them here, and we also send them here. We send the parameters to all the machines. Then we can actually forward prop data one through this and data three through this. So we can do the forward prop, and after we've communicated, all the GPUs can be working. Same with layer two: the machine holding layer two can simply send these two blocks here, as you can see, to the other machines. Now, while it's doing that, we've already propagated through the first layer, we've already propagated here and here through the first layer, so we can actually delete these again; we can delete the first-layer parameters that we sent around. So here you see how we save memory: we don't keep the whole model in sync on all the machines, we send whatever is needed to the other machines, and once the computation is done, they can delete it again. Because there's always one machine, this one here for the middle parameters, that keeps track of those parameters and can at any point, if they're needed, send them again. So that's the big catch. You can now forward prop through these two, they're already present, and then you can delete them again on the machines where they're not natively stored. From here you can send those two, and also up here you can send those two, and forward prop your model through to the end. Then each machine calculates its own loss. The backward propagation happens in much the same way; if you've followed so far, you can already imagine it. Now the loss is different, because there's a different batch of data going through each machine, but each machine has computed with the same model, due to the communication of the ZeRO optimizer. That's pretty cool. You get the benefits of data parallelism, lots of data on the different machines, and you also split up the model across the machines. You don't actually store the whole model on any single one of these machines; you only send the parts as they are needed and then delete them again. For the backward propagation, same thing: you calculate gradients, you calculate gradients here, and you send the gradients as needed to the other machines. You calculate gradients here and here and you send them to the machine where they're actually needed. That machine will aggregate all the gradients from all the machines, and then locally it can compute, using these optimizer parameters and so on; it can do all kinds of optimization locally, because it has gathered gradients from all the other data. What you end up with is that, for example, GPU 2 here, for these two layers, has effectively broadcast the layers such that much, much more data than it just had itself could run through those layers. It has aggregated gradients from all of that data, and now it can use all of these gradients together to make a good update, using the optimizer parameters, to these model parameters. And then in the next iteration it can go ahead and broadcast the new model parameters again.
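Here is a very rough, conceptual sketch of the communication pattern just walked through: each rank owns the weights of some layers, the owner broadcasts a layer's weights right before that layer is needed, every rank runs its own batch through it, and non-owners drop the borrowed copy again. This only illustrates the idea; it is not how DeepSpeed actually implements it, and all names here are placeholders.

```python
import torch
import torch.distributed as dist

def zero_style_forward(layer_fns, shapes, owned, owner_of, x, rank):
    """layer_fns[i](x, w): layer i as a pure function of input and weights.
    shapes[i]: weight shape of layer i.
    owned[i]: the weight tensor if this rank owns layer i, else None.
    owner_of[i]: which rank permanently stores layer i."""
    for i, layer in enumerate(layer_fns):
        if owner_of[i] == rank:
            w = owned[i]                                  # we keep these weights for good
        else:
            w = torch.empty(shapes[i], device=x.device)   # temporary receive buffer
        dist.broadcast(w, src=owner_of[i])                # owner fills everyone's buffer
        x = layer(x, w)                                   # every rank computes on its own batch
        if owner_of[i] != rank:
            del w                                         # free the borrowed copy right away
    return x
```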
So each machine is able to compute with much more data than it could fit by itself, while only permanently storing its own part. So, ZeRO and DeepSpeed: ZeRO is the protocol and DeepSpeed is the actual library. They will do all of this communication and splitting and so on for you, over the network, in a way that is efficient, in a way that everything runs at the same time and the communication overhead is minimal. You can actually choose which stage you want, so what your trade-off between communication and memory saving will be. This is extremely cool. They say this goes up to 100-billion-parameter models. This isn't something for your average Colab user; this is really something for the big players. But that being said, I don't think language is solved by simply throwing more parameters at it. I think there's still a bit of a breakthrough yet to come in language understanding, with newer model architectures. Alright, that was it for me. Thanks.
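For completeness, here is roughly what using the released DeepSpeed library looks like. The config keys and calls follow the documented pattern as far as I remember it, but exact names can vary between versions, so treat the details as assumptions; model, dataloader and loss_fn are placeholders.

```python
import deepspeed

ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},           # the FP16/FP32 handling mentioned above
    "zero_optimization": {"stage": 2},   # pick the memory vs. communication trade-off
}

model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

for batch in dataloader:
    loss = loss_fn(model_engine(batch["x"]), batch["y"])
    model_engine.backward(loss)          # DeepSpeed partitions gradients and optimizer state
    model_engine.step()
```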
[ { "end": 6.34, "start": 0, "text": " Hi everyone, today we're going to look at Turing NLGA 17 billion parameter" }, { "end": 11.78, "start": 6.34, "text": " language model by Microsoft. The latest and greatest of language modeling by" }, { "end": 18.5, "start": 11.78, "text": " Microsoft. What is this? It is a language model. A language model is basically a" }, { "end": 25.580000000000002, "start": 18.5, "text": " model that learns to produce language, given language. So if you start a" }, { "end": 28.66, "start": 25.580000000000002, "text": " sentence it's supposed to finish a sentence. If you start a paragraph it's" }, { "end": 33.6, "start": 28.66, "text": " supposed to finish the paragraph. That's a language model. Ultimately you can make" }, { "end": 37.44, "start": 33.6, "text": " it do different things like answer questions, have a conversation with you," }, { "end": 41.08, "start": 37.44, "text": " anything to do with understanding language. The special thing about this" }, { "end": 47.760000000000005, "start": 41.08, "text": " one is that it's ginormous. So if you look at the scale of language" }, { "end": 55.68, "start": 47.760000000000005, "text": " models, so BERT was quite a large thing back in its day. Ye Olde BERT, you" }, { "end": 62.2, "start": 55.68, "text": " can see here it has about 340 million parameters. Now I have to say all of" }, { "end": 66.08, "start": 62.2, "text": " these language models are transformers. This is kind of the state of the art" }, { "end": 74.32, "start": 66.08, "text": " today. So all of these are kind of our transformer based models. Then GPT-2" }, { "end": 79.92, "start": 74.32, "text": " here, you can see that was the model that was so large it was too dangerous to be" }, { "end": 87.12, "start": 79.92, "text": " released into the world. That stands at 1.5 billion parameters. Megatron LM by" }, { "end": 94.64, "start": 87.12, "text": " Nvidia 8.3 billion and now we are at 17 billion parameters for this language" }, { "end": 103.76, "start": 94.64, "text": " model. And it is a bit better. People just throw more and more and more" }, { "end": 111.96000000000001, "start": 103.76, "text": " resources at this language problem. So what you can do with it, you can" }, { "end": 116.4, "start": 111.96000000000001, "text": " of course do language modeling. So what happens is you take a bunch of text like" }, { "end": 122.36000000000001, "start": 116.4, "text": " all of Wikipedia and all of the internet and all of Reddit and so on and you let" }, { "end": 128.76, "start": 122.36000000000001, "text": " the model train on it to understand, to basically produce that sort of language." }, { "end": 134.07999999999998, "start": 128.76, "text": " And then you can measure it, for example it's a perplexity on a validation set." }, { "end": 142.92, "start": 134.07999999999998, "text": " And the Turing NLG is currently state-of-the-art on that. It can also do" }, { "end": 146.56, "start": 142.92, "text": " for example question answering. So you can ask the question and give it a" }, { "end": 152.76, "start": 146.56, "text": " passage about that question and it will then tell you the answer that it deduced" }, { "end": 158.6, "start": 152.76, "text": " from that passage given the question as you can see here. What is more interesting" }, { "end": 165.64, "start": 158.6, "text": " is that a usual QA system will point to the passage. So it will point to the" }, { "end": 174.68, "start": 165.64, "text": " words Tristan Prediman. 
Whereas with a generative model like this one what you" }, { "end": 180.51999999999998, "start": 174.68, "text": " can do is you can make it actually output an answer as a sentence. So it" }, { "end": 186.95999999999998, "start": 180.51999999999998, "text": " will generate the text Jason Bras was engaged to Tristan Prediman." }, { "end": 197.92000000000002, "start": 186.96, "text": " If you ask a question without giving it a context and just ask it to generate an" }, { "end": 202.52, "start": 197.92000000000002, "text": " answer it will do so as well. I don't know if these answers are cherry-picked" }, { "end": 206.76000000000002, "start": 202.52, "text": " but they call this zero-shot question answering. So if you ask when did World" }, { "end": 214.84, "start": 206.76000000000002, "text": " War II end and it can output World War II ended in 1945. Simply out of regularities" }, { "end": 220.72, "start": 214.84, "text": " it detected in the training data. So I mean that's what I'm kind of wondering." }, { "end": 227, "start": 220.72, "text": " At what point are these models, do they have so many parameters that they" }, { "end": 234.24, "start": 227, "text": " simply reproduce the training data? I mean this clearly some article" }, { "end": 240.32, "start": 234.24, "text": " from the training data is about World War II or many are and it simply learned" }, { "end": 247.79999999999998, "start": 240.32, "text": " that following a question when did World War II end it needs to answer with the" }, { "end": 254.68, "start": 247.79999999999998, "text": " appropriate passage. I'm not sure that is a proper measure of language" }, { "end": 260.44, "start": 254.68, "text": " understanding if you simply can bake more and more of the training data into" }, { "end": 269.44, "start": 260.44, "text": " these many many parameters but I'm not the judge of that here. It can do it very" }, { "end": 276.28, "start": 269.44, "text": " well. So yeah what I'm actually more interested in is this thing is called the" }, { "end": 281.76, "start": 276.28, "text": " zero optimizer that they use to train the model. So the model is just a" }, { "end": 285.8, "start": 281.76, "text": " transformer, it's just a big big transformer model. There is nothing really" }, { "end": 291.52, "start": 285.8, "text": " special about the model except that it is larger than the last model and" }, { "end": 296.88, "start": 291.52, "text": " therefore a bit better. What is interesting is that this would have been" }, { "end": 303.12, "start": 296.88, "text": " pretty impossible to train if it weren't for this zero optimizer of this deep" }, { "end": 307.88, "start": 303.12, "text": " speed library and Microsoft has released this deep speed library. It's compatible" }, { "end": 313.32, "start": 307.88, "text": " for now with PyTorch. You can check this out. I'll put a link into the description" }, { "end": 320.4, "start": 313.32, "text": " and I want to dive into this a bit. There's a paper, it's by Samyam Raj" }, { "end": 331.84, "start": 320.4, "text": " Bandari and all by Microsoft. The paper describes in detail the optimizer" }, { "end": 338.91999999999996, "start": 331.84, "text": " but it's not very visual. That's why we're going to the blog post. You can see" }, { "end": 348.28, "start": 338.91999999999996, "text": " it gives many speed ups over the previous Megatron LM model that" }, { "end": 355.84, "start": 348.28, "text": " Nvidia just trained using what Nvidia has. 
Nvidia has machines that" }, { "end": 361.91999999999996, "start": 355.84, "text": " are interconnected within the machine with very fast buses" }, { "end": 371.67999999999995, "start": 361.91999999999996, "text": " between GPUs. But this zero optimizer can now also go over the network and make it" }, { "end": 378.88, "start": 371.68, "text": " pretty fast. Let's explore that a bit. I have the copy this here. We'll" }, { "end": 383.52, "start": 378.88, "text": " look how the zero optimizer works. Usually what you do is if you have" }, { "end": 391.52, "start": 383.52, "text": " multiple GPUs you can do something like this. This is called data parallelism." }, { "end": 398.6, "start": 391.52, "text": " What you have is a model and the model in this case fits on your GPU." }, { "end": 403.76000000000005, "start": 398.6, "text": " It fits on a single GPU. The blue thing here is the model. I'll actually" }, { "end": 410.64000000000004, "start": 403.76000000000005, "text": " draw this. The model is a neural network so it has a bunch of" }, { "end": 415.76000000000005, "start": 410.64000000000004, "text": " layers. Layer, layer, layer, layer. What you want to do is you pass data" }, { "end": 423.72, "start": 415.76000000000005, "text": " forward. Here is some loss and then right into the loss function and then backward" }, { "end": 428.28000000000003, "start": 423.72, "text": " again. That's basically what you need to do. You need to pass it forward and" }, { "end": 433.47999999999996, "start": 428.28, "text": " backward in order to do back propagation training. If this all fits" }, { "end": 440.44, "start": 433.47999999999996, "text": " into one box that's completely fine. If this fits into one machine, cool." }, { "end": 445.21999999999997, "start": 440.44, "text": " We can just put many batches of data through batch one, batch two, batch three" }, { "end": 451.15999999999997, "start": 445.21999999999997, "text": " and so on. Train the model. If you want to do a speed up using this you can do so." }, { "end": 456.4, "start": 451.15999999999997, "text": " If you have lots of data you can do what's called, and I'm always confused, I" }, { "end": 462, "start": 456.4, "text": " think this is called data parallelism or is it called model parallelism." }, { "end": 466.91999999999996, "start": 462, "text": " In any case what you can do is you can take a second machine or many of those," }, { "end": 475.2, "start": 466.91999999999996, "text": " replicate the model. These two models here are exactly the same." }, { "end": 480.88, "start": 475.2, "text": " What you do is you take your data and you split it up. You take double" }, { "end": 486.32, "start": 480.88, "text": " the amount of data and you put one batch of data through the top part and you" }, { "end": 490.59999999999997, "start": 486.32, "text": " put the other through the bottom part. You do your forward passes on the" }, { "end": 496.24, "start": 490.59999999999997, "text": " machines and you do your backward passes. Then what you want to do is you want" }, { "end": 500.88, "start": 496.24, "text": " to sync between the machines what they learned from the data. Each machine" }, { "end": 506.92, "start": 500.88, "text": " has a different set of data points. Each machine calculates its own parameter" }, { "end": 513.08, "start": 506.92, "text": " updates. It learns from the data it has and then they communicate to keep" }, { "end": 518.6, "start": 513.08, "text": " because this here and this here should be the same. It's the same model." 
}, { "end": 524.6, "start": 518.6, "text": " They have to keep in sync. This can be usually can be done fairly efficiently" }, { "end": 529.96, "start": 524.6, "text": " especially if these aren't actually two machines but just two GPUs inside of one" }, { "end": 536.8000000000001, "start": 529.96, "text": " large machine. If this is a large machine this is GPU 0 and this is GPU 1." }, { "end": 541.9200000000001, "start": 536.8000000000001, "text": " This is pretty standard because especially on Nvidia machines they have" }, { "end": 548.52, "start": 541.92, "text": " these whatever I think they call them InfiniBand or so." }, { "end": 554.16, "start": 548.52, "text": " Nvidia has these connectors that connects the GPUs together really fast." }, { "end": 561.4399999999999, "start": 554.16, "text": " You can keep these in sync but now the problem becomes what if you want to" }, { "end": 567.24, "start": 561.4399999999999, "text": " train a model that is larger than this. Let's forget about the data parallelism" }, { "end": 572.36, "start": 567.24, "text": " for now if that is what it's called and just consider a model that is too large." }, { "end": 582, "start": 572.36, "text": " A model that is too large will not fit into a machine. This is a model as a" }, { "end": 589.48, "start": 582, "text": " large model. What you want to do is you want to pack some of the model onto" }, { "end": 597.36, "start": 589.48, "text": " your first machine and then take the other part of the model and pack" }, { "end": 602.44, "start": 597.36, "text": " it onto another machine. You separate the model and put it on different" }, { "end": 606.8000000000001, "start": 602.44, "text": " machines. If you have a batch of data what you have to do is you pass it" }, { "end": 611.08, "start": 606.8000000000001, "text": " pass it pass it forward propagate as you regularly would but then you have an" }, { "end": 615.9200000000001, "start": 611.08, "text": " intermediate result. You send that to the next machine and you forward" }, { "end": 622.0799999999999, "start": 615.92, "text": " propagate that. At the end here you have a loss. You want to back propagate" }, { "end": 625.68, "start": 622.0799999999999, "text": " regularly through this machine. You have an intermediate result of back" }, { "end": 631.9399999999999, "start": 625.68, "text": " propagation. Send it over the network and back prop all the way through the model." }, { "end": 637.88, "start": 631.9399999999999, "text": " That's how you can train a model that is too large for one machine if you" }, { "end": 645.1999999999999, "start": 637.88, "text": " have multiple machines. The problem here of course is this part. Just as you had" }, { "end": 650.0400000000001, "start": 645.2, "text": " to keep in sync the model before, now your communication problem" }, { "end": 660.24, "start": 650.0400000000001, "text": " becomes one of... You have to send the intermediate stages to that model and" }, { "end": 664.76, "start": 660.24, "text": " you have to send the intermediate stage of the back propagation back to that" }, { "end": 672.84, "start": 664.76, "text": " part of the model. While this part is working this part is idling away." }, { "end": 681.8000000000001, "start": 672.84, "text": " The network overhead is just very costly. Especially if your model is so" }, { "end": 690.12, "start": 681.8000000000001, "text": " large it can't even fit into one of these single boxes. This is very" }, { "end": 701.0400000000001, "start": 690.12, "text": " problematic here. 
It's still doable. But what the zero optimizer does is it does" }, { "end": 707.52, "start": 701.04, "text": " both data and model parallelism. It can train models that are too large" }, { "end": 718, "start": 707.52, "text": " for a single machine. It can do data parallelism at the same time." }, { "end": 724.8, "start": 718, "text": " Basically everything is working all the time. There is not much wasted" }, { "end": 728.8, "start": 724.8, "text": " computation. The communication is efficient and so on. It's really a" }, { "end": 733.4, "start": 728.8, "text": " technical achievement. It's not so much a scientific advance. It's really a" }, { "end": 739.28, "start": 733.4, "text": " technical achievement this optimizer. We'll shortly go through. There is a" }, { "end": 744.0799999999999, "start": 739.28, "text": " kind of an animation on the website but it's super slow. I think" }, { "end": 748.7199999999999, "start": 744.0799999999999, "text": " this might be the first time that I will be faster at explaining something than a" }, { "end": 755.4799999999999, "start": 748.7199999999999, "text": " video. Let's see here. What you do is... Let's just consider these" }, { "end": 759.28, "start": 755.48, "text": " three GPUs. Before that it would all fit on one machine. But now let's say you" }, { "end": 764.72, "start": 759.28, "text": " don't actually have that much memory. You don't have these giant" }, { "end": 769.84, "start": 764.72, "text": " empty blocks here. You just have a bit of that. So you have to split your model." }, { "end": 776.36, "start": 769.84, "text": " The blue parts here are your model. These are model parameters." }, { "end": 784.08, "start": 776.36, "text": " The orange part here is memory you need to store gradients. You need as" }, { "end": 789.6800000000001, "start": 784.08, "text": " many gradients as you have model parameters. Because you do gradient" }, { "end": 795.6800000000001, "start": 789.6800000000001, "text": " descent. The green stuff here are what's called optimizer parameters. Now if you" }, { "end": 801.96, "start": 795.6800000000001, "text": " just have SGD these would be non-existent. But if you have something" }, { "end": 806, "start": 801.96, "text": " like AdaGrad or Atom they have additional parameters for each model" }, { "end": 811.8000000000001, "start": 806, "text": " parameter that they need to keep track of. So these are stored here. There" }, { "end": 818.28, "start": 811.8, "text": " can be significant overhead. There's also like a floating point 3216" }, { "end": 822.3199999999999, "start": 818.28, "text": " conversion going on here. Don't want to go into that. So you split your" }, { "end": 825.9599999999999, "start": 822.3199999999999, "text": " model onto these three machines. Let's say that's your entire model. Your model" }, { "end": 832.76, "start": 825.9599999999999, "text": " is six blocks wide. You need to forward propagate now through everything." }, { "end": 838.68, "start": 832.76, "text": " So here is what Xero does. I think it's pretty cool. What we need to do" }, { "end": 843.68, "start": 838.68, "text": " is we have these three different batches of data and we want to forward" }, { "end": 850.0799999999999, "start": 843.68, "text": " propagate them all through the model. Through the same model at the same time." }, { "end": 856, "start": 850.0799999999999, "text": " As if the model were actually stored on all these machines. 
Like if all of these" }, { "end": 862.9599999999999, "start": 856, "text": " machines had the entire model. And we can do a bit of communication. So what" }, { "end": 870.2, "start": 862.96, "text": " we do first is... This one's easy. Data zero through the first two layers" }, { "end": 875.48, "start": 870.2, "text": " here is easy. Because we have them. So bang you go through the first" }, { "end": 886.24, "start": 875.48, "text": " you get an intermediate result here and here. How do we propagate data one" }, { "end": 892.1600000000001, "start": 886.24, "text": " through the first layer? We can't send data one here. That would be" }, { "end": 897.16, "start": 892.16, "text": " too expensive. And that's the whole point would be lost. We want to" }, { "end": 903.68, "start": 897.16, "text": " actually compute data one on this GPU at the same time. What we do is before we" }, { "end": 911.4399999999999, "start": 903.68, "text": " start we actually communicate these two blocks here to GPU one. We send" }, { "end": 919.4, "start": 911.4399999999999, "text": " these parameters around and fill them in here. We send them here and we" }, { "end": 925.12, "start": 919.4, "text": " also send them here. We send the parameters to all the machines." }, { "end": 931.48, "start": 925.12, "text": " Then we can actually forward prop data one through this and data three through" }, { "end": 937.84, "start": 931.48, "text": " this. So we can do forward prop. After we've communicated all the GPUs can be" }, { "end": 946.84, "start": 937.84, "text": " working. Same with layer two. Layer two simply can send these" }, { "end": 954.32, "start": 946.84, "text": " two here. You can see that these two here to the other machines. Now while" }, { "end": 958.48, "start": 954.32, "text": " it's doing that we've already propagated through the first layer." }, { "end": 964.64, "start": 958.48, "text": " We've already propagated here and here through the first layer. So we can" }, { "end": 970.8000000000001, "start": 964.64, "text": " actually delete these again. We can delete these first layer" }, { "end": 976.64, "start": 970.8000000000001, "text": " parameters that we sent around again. So here you see how we can save memory." }, { "end": 982.52, "start": 976.64, "text": " We don't keep all the model in sync and all the machines. We send whatever we" }, { "end": 989, "start": 982.52, "text": " need on the other machines and then once the computation is done they can delete" }, { "end": 993.84, "start": 989, "text": " it again. Because there's always one machine, this one here for the" }, { "end": 998.08, "start": 993.84, "text": " middle parameters, that keeps track of the parameters and that can at any point" }, { "end": 1003.6, "start": 998.08, "text": " if they're needed send them again. So that's the big kind of catch. You can" }, { "end": 1008.08, "start": 1003.6, "text": " forward prop now through these two. They're already present." }, { "end": 1012.96, "start": 1008.08, "text": " Then you can delete those again on the machines where they're not natively" }, { "end": 1021.24, "start": 1012.96, "text": " stored. From here you can send those two. Also up here you can send" }, { "end": 1030.64, "start": 1021.24, "text": " those two and forward prop your model through to the end." }, { "end": 1039.3200000000002, "start": 1030.64, "text": " That was a mistake. Then each machine calculates its own loss." 
}, { "end": 1045.8000000000002, "start": 1039.3200000000002, "text": " The backward propagation happens in much the same way." }, { "end": 1053.0800000000002, "start": 1045.8000000000002, "text": " If you follow so far you can already imagine." }, { "end": 1057.8400000000001, "start": 1053.0800000000002, "text": " Now the loss is different because there's a different batch of data" }, { "end": 1061.76, "start": 1057.84, "text": " going through each machine. There's a different batch of data going" }, { "end": 1067.28, "start": 1061.76, "text": " through each machine but each machine has computed with the same model due to" }, { "end": 1074.1599999999999, "start": 1067.28, "text": " the communication of the zero optimizer. That's pretty cool. You get the" }, { "end": 1079.74, "start": 1074.1599999999999, "text": " benefits of data parallelism, lots of data on the different machines and you" }, { "end": 1086.84, "start": 1079.74, "text": " also split up the model across the machines. You don't actually store" }, { "end": 1092.24, "start": 1086.84, "text": " the model on any of these machines. You only send." }, { "end": 1100.12, "start": 1092.24, "text": " From here you send as you need and then you delete again. For the backward" }, { "end": 1106.52, "start": 1100.12, "text": " propagation, same thing. You calculate gradients." }, { "end": 1112.3999999999999, "start": 1106.52, "text": " You calculate gradients here and you send the gradients as needed to the" }, { "end": 1120, "start": 1112.4, "text": " other machines. You calculate gradients here and here and you send them to the" }, { "end": 1124.64, "start": 1120, "text": " machine where they're actually needed. This is a weird pen. You send them to" }, { "end": 1129.44, "start": 1124.64, "text": " that machine. That machine will aggregate all the gradients of all the machines." }, { "end": 1138.3200000000002, "start": 1129.44, "text": " It will aggregate them and then locally it can compute using" }, { "end": 1142.24, "start": 1138.3200000000002, "text": " these optimizer parameters and so on. It can do all kinds of optimization" }, { "end": 1148.48, "start": 1142.24, "text": " locally because it has gathered gradients from all the other data." }, { "end": 1157.44, "start": 1148.48, "text": " What you end up with, for example, GPU 2 here, for these two layers it has" }, { "end": 1164.72, "start": 1157.44, "text": " effectively broadcast the layers such that much much more data than it just" }, { "end": 1172.72, "start": 1164.72, "text": " had itself could run through the layers. It has aggregated gradients from all of" }, { "end": 1178.08, "start": 1172.72, "text": " that data and now it can use all of these gradients together to make a good" }, { "end": 1184.68, "start": 1178.08, "text": " update using the optimizer parameters. To make a good update to these model" }, { "end": 1189.08, "start": 1184.68, "text": " parameters and then in the next iteration it can go ahead and broadcast" }, { "end": 1193.3600000000001, "start": 1189.08, "text": " the model parameters. The new model parameters again. It is able to" }, { "end": 1200, "start": 1193.36, "text": " compute with much more data than it can just fit by itself. It is just doing" }, { "end": 1207.36, "start": 1200, "text": " its part. So Zero and DeepSpeed, Zero is the protocol and DeepSpeed is the" }, { "end": 1213.04, "start": 1207.36, "text": " actual library. 
They will do all of this communication and splitting and so on" }, { "end": 1218.8799999999999, "start": 1213.04, "text": " for you over the network in a way that is efficient, in a way that everything" }, { "end": 1225.96, "start": 1218.88, "text": " runs at the same time and the communication overhead is minimal. You" }, { "end": 1232.2800000000002, "start": 1225.96, "text": " can actually choose which stage you want, so what your trade-off of communication" }, { "end": 1238.96, "start": 1232.2800000000002, "text": " and memory saving will be. This is extremely cool. They say this goes up to" }, { "end": 1248.72, "start": 1238.96, "text": " whatever 100 billion parameter models if you use... This isn't something for" }, { "end": 1254.48, "start": 1248.72, "text": " your average Colab user. This is really something for big players." }, { "end": 1261.64, "start": 1254.48, "text": " But that being said, I don't think language is solved by simply throwing" }, { "end": 1265.28, "start": 1261.64, "text": " more parameters at it. I think there's still a bit of a breakthrough" }, { "end": 1274.2, "start": 1265.28, "text": " ahead yet to come in language understanding with newer model" }, { "end": 1278.8400000000001, "start": 1274.2, "text": " architectures. Alright, that was it for me. Thanks." } ]
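The ZeRO walkthrough above reduces to three moves: every worker permanently owns a slice of the parameters, gradients and optimizer state; a layer's weights are broadcast to the other workers just before they are needed and freed again right after; and gradient contributions are reduced back onto the owning worker, which applies the optimizer step locally. A minimal single-process sketch of that flow, in plain Python with toy shapes (an illustration of the idea only, not the actual DeepSpeed API):

```python
import numpy as np

WORKERS, LAYERS, DIM, LR = 3, 6, 4, 0.01
rng = np.random.default_rng(0)

# ZeRO-style partitioning: each worker permanently owns the parameters (and would
# own the matching gradient and optimizer-state memory) of LAYERS // WORKERS layers.
owner_of = {layer: layer // (LAYERS // WORKERS) for layer in range(LAYERS)}
params = {layer: rng.normal(size=(DIM, DIM)) for layer in range(LAYERS)}

# Data parallelism: every worker has its own micro-batch and its own activations.
acts = [[rng.normal(size=(8, DIM))] for _ in range(WORKERS)]

# Forward pass: the owner broadcasts one layer's weights, every worker pushes its own
# batch through, and non-owners drop the temporary copy before moving to the next layer.
for layer in range(LAYERS):
    weights = params[layer]                  # "broadcast" from owner_of[layer] to everyone
    for wk in range(WORKERS):
        acts[wk].append(np.tanh(acts[wk][-1] @ weights))
    # non-owners free their copy of `weights` here; only owner_of[layer] keeps it

# Backward pass (schematic): each worker produces a gradient contribution from its own
# batch, the contributions are reduced (summed) onto the owner, which applies the update
# locally using its shard of optimizer state.
for layer in reversed(range(LAYERS)):
    contribs = [1e-3 * np.outer(acts[wk][layer].mean(axis=0), acts[wk][layer + 1].mean(axis=0))
                for wk in range(WORKERS)]    # stand-in for real backprop gradients
    params[layer] -= LR * sum(contribs)      # reduce to owner_of[layer], then step

print("layer 0 weight norm after one step:", float(np.linalg.norm(params[0])))
```

In the real library the broadcasts and reductions run over the GPU interconnect and are overlapped with compute, which is where the "not much wasted computation" point above comes from.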
vB_hQ5NmtPs
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[Interview] Mark Ledwich - Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization
[ "Science & Technology" ]
[ "machine learning", "youtube", "recommendation", "algorithm", "extremism", "alt right", "pipeline", "pathway", "mainstream", "radicalization" ]
Interview with one of the authors of a widely reported study on YouTube's recommendation engine and where it leads its users. https://arxiv.org/abs/1912.11211 https://www.recfluence.net/ https://github.com/markledwich2/Recfluence https://www.patreon.com/ledwich Abstract: The role that YouTube and its behind-the-scenes recommendation algorithm plays in encouraging online radicalization has been suggested by both journalists and academics alike. This study directly quantifies these claims by examining the role that YouTube's algorithm plays in suggesting radicalized content. After categorizing nearly 800 political channels, we were able to differentiate between political schemas in order to analyze the algorithm traffic flows out and between each group. After conducting a detailed analysis of recommendations received by each channel type, we refute the popular radicalization claims. To the contrary, these data suggest that YouTube's recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content. Instead, the algorithm is shown to favor mainstream media and cable news content over independent YouTube channels with slant towards left-leaning or politically neutral channels. Our study thus suggests that YouTube's recommendation algorithm fails to promote inflammatory or radicalized content, as previously claimed by several outlets. Authors: Mark Ledwich, Anna Zaitsev
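The analysis the abstract describes, labelling channels into political groups and then aggregating the recommendation traffic flowing out of and into each group, amounts to building a group-to-group flow matrix and comparing both directions. A rough pandas sketch of that aggregation; the group names, column names and numbers here are invented for illustration and are not the paper's actual schema or data:

```python
import pandas as pd

# Hypothetical aggregated scrape results: estimated recommendation impressions flowing
# from videos in one labelled group to videos in another (all numbers invented).
recs = pd.DataFrame({
    "from_group":  ["Partisan Right", "Partisan Right", "MSM", "Alt-light", "Alt-light", "IDW"],
    "to_group":    ["MSM", "Partisan Right", "MSM", "IDW", "Alt-right", "MSM"],
    "impressions": [2_700_000, 36_000_000, 50_000_000, 800_000, 90_000, 1_200_000],
})

# Group-to-group flow matrix: rows = source of an impression, columns = destination.
flow = recs.pivot_table(index="from_group", columns="to_group",
                        values="impressions", aggfunc="sum", fill_value=0)

def directed_flow(a: str, b: str) -> int:
    """Impressions recommended from group a to group b (0 if that direction was never observed)."""
    try:
        return int(flow.loc[a, b])
    except KeyError:
        return 0

# A "pipeline" claim from A to B only makes sense if A -> B traffic clearly exceeds B -> A.
a, b = "Alt-light", "Alt-right"
print(f"{a} -> {b}: {directed_flow(a, b):,}   |   {b} -> {a}: {directed_flow(b, a):,}")
```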
Alright, I'm very pleased to have Mark Ledwich here today. He's one of the authors of this paper, which is called Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization. So I've done a video about a topic like this before, actually several, and this is basically one in a line of research that examines the recommendation algorithm of YouTube specifically, but also kind of the general social media platforms. So Mark, thanks for being here. Could you maybe, for people who do not know anything about this, explain where your work fits into what's been done before, or also what comes out of the mainstream media about this topic? Because there's been quite a bit of talk. Yeah, so I'm not a researcher by trade, I'm a programmer, and the reason why I got into this was because I could see clear bias in the way the YouTube recommendation system is being reported on, and also in the research. There's some narratives. I think it might be because there's a lot of people worried about rising populism, and this is a way to explain that. They're looking for ways YouTube is radicalizing people and finding evidence for that. But that could be anecdotes, or in some of the studies actual quantitative data, but they're only looking to confirm it. So there's really obvious things, I think you covered them in your video. Some of them will just look for movement towards alt-right channels through, like, centrist or alt-light as they call it, instead of looking both ways. Just really obvious things like that. Calling it an infection, that cliche clearly shows they haven't really looked at it like a curious person would. So I thought I could easily, as a software engineer, just collect all the data and, without any complicated statistics, just look at the overall flow of recommendations between videos and what their political influence is. Yeah, this was a thing that bugged me about the paper that I made a video about: they claim there's this radicalization pipeline, right? And with pipeline everyone sort of understands a different thing, but I think the general consensus is that the recommendation algorithm itself will steer you towards more extreme content, in this case towards the alt-right, extremist content. And the paper actually analyzed and said, okay, we found evidence that there is movement in this direction, but they never showed that this is significantly more movement than in the other direction. So in order to justify a pipeline, one would need to show that the movement this way is larger than this way in some notion. I've actually spoken to the author of that paper and he agrees with that, but obviously doesn't have the energy to go and refute everything that comes at them. They've also been exposed to a lot of criticism, let's say, as have you, and I think even more when your paper came out. For four days there was just a giant storm of people attacking your paper, basically just listing every single thing that's wrong with it and why this isn't valid, and things like this. So let's actually jump into what you did specifically. If I can summarize, and you can then maybe correct me, so that we can establish what happened: you basically collected recommendations. So you scraped these videos on YouTube and you collected these recommendations, and we can see this. So in your paper you can then make such diagrams, such as this one, or these, where in the middle the white bar is a channel or a group that you're interested in, and then to the left you can see where all the impressions of that channel or group come from, so where basically the views come from through the recommendation system, and on the right you can see, of all the views the channel has received, where do they go to, so what is recommended next. Right, so it basically shows both directions for every group. And then you've also labeled these by multiple methods, so that you can kind of establish these groups. And what is pretty cool, we've built this website where you can analyze this, and my computer is a bit overloaded at the moment, but I promise it's really interactive.
All right, so during the interview my computer crashed, so I'm doing this in post-production, just to show you how this website operates. So what you have here is an overview over all the channels for which recommendations were collected, and they are grouped into groups, for example here partisan left, center left, social justice, partisan right, and so on, so you can see for each group or channel where recommendations come from and where they go to. For example the large red one here, I happen to know that is Fox News. You can see Fox News, of the daily impressions, received from itself 36 million impressions, and it gives to itself 36 million; these numbers have to agree by nature of how the data is collected, of course. But you can also see it gets 2.7 million impressions from CNN, 2.6 million from the Next News Network, and so on, and it also gives quite a bit of recommendations to CNN and so on. You can go, for example, to some individual channel. Here's The Daily Wire; The Daily Wire is mainly run by Ben Shapiro, so it's a bit more to the right of Fox News and a bit more in the direction of alternative media. You can see The Daily Wire gets most of its impressions, count-wise, from itself, from The Daily Wire, but it gives most of them to Fox News. So actually you can see here that itself is a long way down, like in whatever sixth or seventh place. So actually if you were to watch The Daily Wire, the recommendation system would most likely steer you towards something like Fox News, whereas the claim is that the YouTube algorithm would actually steer you towards more radical content. Actually, in reality, it seems like it would steer you towards more of this mainstream content. So I actually want to go to this tab; you can see different groupings here, and the radicalization pathways is the previous paper we have looked at. So they have all these channels here of this radicalization pathway, and you can see here the control group gives very, very few impressions to the IDW; the IDW gives much more impressions to the control group, right? And again, the IDW gives very few impressions to the alt-light compared to the amount of impressions the alt-light gives to the IDW and even to the control group. And if you look at the alt-right, and we're going to zoom in on that here, it's even more so: the alt-right of course receives most of its impressions from itself, which you could expect for any kind of group. This is your classic filter bubble situation. But if we analyze the question of whether there is a pipeline, you can see that next most likely you are diverted to the IDW and to the control group, much more than you come from the IDW or the control group, right? Let's look at the alt-light, this is kind of the so-called gateway to the alt-right. You can see here the alt-light gives most of its impressions, next to itself, to the control group and the IDW, so deradicalizing. If you look at its way to the alt-right, you'll see that it gets about four times as many impressions from the alt-right as it gives to the alt-right. So basically it's kind of taking the steam out of a quarter of all of these sessions and gives it to either the control group or the IDW or itself. So this is exactly the opposite of what you would expect if you were to claim that there is a pipeline: you would expect most recommendations to come from more moderate content and go towards more extreme content, but it's exactly the opposite. And again, these are the exact channels that this original paper used. Now, what does this paper find, the one that we're discussing? If you go to media type here, what you'll be able to see is the division into mainstream media, YouTube creator, and so-called missing link media, which we'll leave out for a moment. Let's focus on mainstream versus YouTube creators. You can see the mainstream media gives most recommendations to itself, while giving only very few recommendations to YouTube creators and the missing link media, while the YouTube creators actually give almost half of their impressions, look at that, almost half of their impressions to the mainstream media. Which means that there is a big, big push by the algorithm towards these mainstream media, away from YouTube creators. So in general, and I invite you to look at this website, you can pretty much see that the exact opposite of a radicalization pipeline is happening, if of course you look at these recommendations and how they are distributed. Actually most recommendation pathways are towards moderate, centrist content, and of course creating filter bubbles, which is a problem by itself, but is not a radicalization pipeline. Lastly, I want to look at white identitarians, because it's one of the groups that people are referring to when they claim that there are these radicalization pipelines. Look at that: the white identitarians get most of their impressions, of course, from other white identitarian videos, which would be your filter bubble phenomenon, but they give most, and this is a group, right, the white identitarian channels give most of their recommendations to the partisan right, to the centrist and left mainstream media, libertarians, and so on, and themselves are really, really far down. So to claim that there is this radicalization pipeline, if you look at this data, to me seems not justified, and if I look at the other paper, that really left out the important analysis of the backwards direction. It seems that, given everything, the claim is not warranted.
All right, back to the interview. Is that about, like, what you've done, is that a good summary of the data collection and analysis? Um, yeah, it's a good summary. I can go into detail. Yeah, please. Um, so YouTube doesn't make it easy. So I started this back in November 2018, and I was using the YouTube API, and to get enough quota, because they limit the amount of requests you can actually make to their API, I created multiple keys, which is against their policy. And they also ask you to delete all your data after 30 days, that's also part of their policy. So later, about, I think it was October 2019, they cut off my access because I was doing that. So I had to move to just scraping websites, and now my collection process actually just loads up the website and gets the recommendations from the actual page, like a user would. And that's difficult, because they block access after a couple of hundred requests; they'll stop that machine from actually requesting from the website. So I need to use a proxy service, which is fairly expensive, and what they do is they simulate, or they have actual residential connections through your home connection, like AT&T, and my requests get tunneled through that, through a variety of locations in the States, to get a representative kind of sample. Cool, so the data collection is, would you say that's the hardest part? I feel the labeling of channels is also not so easy, but you've managed to kind of do that half automated, also half collecting things from sources that analyze these channels. But at least for most of the things that I've inspected, I found the labeling to be pretty sane. I think this is always something you can attack, the original paper was also attacked on how they label, I find this to be kind of vicarish. Mostly I think your labels are pretty good as well; the other paper's labels are also mostly pretty okay. Yeah, so let's go to it. Sorry. Yeah, it's quite subjective. I expected the labeling to be what I get my pushback on, but it turns out it was the anonymous collection. So what you've actually found here, what would you say are your main results? And I can maybe show: so you've analyzed a bit where things come from and where things go to, and I found this part here to be, even though it's pretty simple, one of the core things to say about this. Mostly, what you found, it could be said, is that it's simply a recommendation algorithm working as a recommendation algorithm should, which means it creates, you know, your typical filter bubbles: if I watch one minute of this video, all of a sudden my site is filled with makeup tutorials and things like this. But also you found that there is quite some over-the-top push towards what could be considered mainstream media, and there is a bit of a draw away from the smaller, YouTuber-like channels. Is that something that, like, is that a fair characterization? I don't know. That's right, so yeah, that's a good way to characterize it. In that chart we're looking at now, if it was a neutral algorithm, the green bars would be the same as the gray ones, so you would receive the same amount of recommendations as you give, and that would be proportional to the views that you get organically; the recommendations you receive would be equivalent to that. But we find that it disproportionately recommends mainstream media channels. That's not even across the board, though. So it's not like, um, it doesn't look like it's consistently doing that, so you can find exceptions to that rule. Um, I believe one of the main criticisms of your paper has been that you only use data from 2019 onwards, and I have actually looked at your website, and your website a lot of times says that the data has been collected from way earlier than that. So is it that you've almost only used 2019 data in your paper, or what is in the paper? The paper is just from November and December 2019, and the reason we did that is that we only had 400 channels before that, and the collection process has changed over time. So this is a clean set of data we could look at, and I thought the most recent was the most relevant: so what is it doing now? But I've provided, I've got the same analysis over time, so I've got a GIF that I made that everyone can look at, which goes through all the months I've been collecting, and you can see that chart for where it goes to, and it has gone through a bunch of changes. So in about April 2019, that's when they really clamped down on conspiracies and other fringe channels; before that it was much closer to neutral. Okay, but it never looked like a rabbit hole, it's never favoring fringe channels. Yeah. I mean, that has been my experience also personally on YouTube. I've joined YouTube very early, or I've watched YouTube very early, when young earth creationism was still active, and then these things were kind of completely discredited by simply having people exposed to other points of view. And even now I find, even though YouTube makes it kind of easy to find, let's say, niche content, it also exposes you to a bunch of different views. And I've always found this to be very optimistic, in the sense of: this is probably deradicalizing much more people than radicalizing. But you've received, as I said, a bunch of criticism. So if you could, what was the largest criticism, irrespective of whether it was valid or not? What have you found was kind of what most people were criticizing? Most people were criticizing that we were collecting anonymous recommendations, it wasn't the personalized ones. Yeah, and it's actually, like, it is a valid limitation; it's the first limitation we talked about in this paper. And it's still an open question how personalization would affect these aggregate results that we've got, but I think it's reasonable to assume it will be quite similar once you average it out. So for any one person it might be different, but you would expect personalization based on someone's history to even out, because the algorithm is kind of like the average of all that when it's anonymous. Yeah, I feel like the notion is that, because if you're not logged in, the recommendation is like a person with only one video of history, right? So it's the same thing, but there's only one point of history instead of multiple. I find, why should the behavior be qualitatively different if you have multiple points of history? Like, this is a strong claim; you'd have to really show that there is a qualitative difference, not just more or less accuracy. And I feel, for the people making this criticism, it's really on them to show that there is a substantial difference, rather than saying that this is a giant limitation of the work. Yeah, and it's also very hypocritical for a lot of the people saying it, because some of them, like Zeynep Tufekci, who was mocking it, her original article in the New York Times used AlgoTransparency, which is anonymous as well, but she never looked into that. I think a lot of this is completely motivated reasoning; they don't care about the details. I've seen this one Twitter user, she said something to the effect of: if you've seen this article, please consult someone that works in this space. Like, please don't read the article yourself, you must get your information through someone. I've actually read the article, I find it's pretty straightforward, the limitations are clear, but also the results are pretty clear, and it's actually mostly a boring article, right? I'm sorry, like, it's not a criticism, this is good. Like, mostly you find that things work as expected. There is a bit of a push towards mainstream, which can probably be explained by the fact that YouTube wants to be advertiser friendly, right, and these mainstream channels already are advertiser friendly, so they probably get bumped a bit. Um, what would you say is maybe the most valid criticism that you've heard, maybe not the biggest, but the most, where you say: yeah, this is really something, you know? I think, um, I guess there was criticism that I'm overclaiming, not in the paper so much, but in my tweets and Medium posts. I guess that's fair. But I guess when I tweet and write on Medium, those are what I believe, in kind of a Bayesian way; I'm not couching my claims the way you would when you're writing a paper. So I guess that's valid, but I think a lot of people read into what I was saying more than what I was saying. So when I say the algorithm has a de-radicalizing influence, I'm just talking about the recommendations, whereas a lot of people consider that to be talking about all things considered. So even if it doesn't have a bias towards the fringe, maybe sociologically YouTube radicalizes people; it could be the case, I don't know. But that's what I'm talking about, I'm talking about just the influence through recommendations, and that's all we can hold Google accountable for, or at least it's what probably all could agree that Google should be held accountable for, with its recommendation system. Yeah, do you expect something to come, or have you heard something come out of YouTube themselves, like the company, any form of official statement to this? Nothing, nothing at all. Um, the only thing, I got a vague hint, a reporter was complaining that YouTube sent them this, so I think they've read it, but I have absolutely no contact with them. Okay. Cool, are you doing anything in follow-up, or do you have plans for more research? Not at this point, I've just gone back to work. I've applied for a bunch of independent grant money, but I'm not optimistic. So if I don't get that, I'll keep it puttering along. I'll probably reduce the amount of recommendations, because I'm spending about $500 a month at the moment just keeping it running, so I gotta reduce my costs. Yeah, and you do have a Patreon for people to chip into that, right? Yeah, so if you can link to that, that'd be good. I'm getting something like $22 a month, so it doesn't really cover it. Yeah, all right. Okay, this has been very pleasant. I think we've kind of looked at a lot of things. Is there anything you would like to amend to this, that people should know about the research or about this field? No, I just, um, I encourage you to have a play digging into the data yourself. If you're in this area, the data is free to use, the code's free to use; just consider this a contribution to knowledge. Cool. Well, thanks a lot, Mark. I wish you a very pleasant evening, for you I guess, and cheers. Thanks. Thanks for having me. Bye. Bye.
[ { "end": 2.94, "start": 0, "text": " Alright, I'm very pleased to have Mark Ladoitch here today" }, { "end": 4.72, "start": 3.6, "text": " in" }, { "end": 12.74, "start": 4.72, "text": " In he's the he's one of the authors of this paper. That's called algorithmic extremism examining YouTube's rabbit hole of radicalization" }, { "end": 21, "start": 13.36, "text": " So I've done a video about a topic like this before actually several and this is basically one in a line of" }, { "end": 27.66, "start": 21.64, "text": " research that examines the recommendation algorithm of YouTube specifically but also kind of the" }, { "end": 29.32, "start": 28.2, "text": " general" }, { "end": 32.84, "start": 29.32, "text": " Social media platforms. So Mark, thanks for being here" }, { "end": 40.08, "start": 34.8, "text": " Could you maybe for people who do not know anything about this could you kind of" }, { "end": 49.08, "start": 40.64, "text": " Explain where your work fits into what's been done before or kind of also what comes out of the of the mainstream" }, { "end": 54.04, "start": 49.760000000000005, "text": " Media about this topic because there's been quite a bit of of talk" }, { "end": 62.04, "start": 54.04, "text": " Yeah, so I'm not a researcher by trade I'm a programmer and the reason why I got into this was because I" }, { "end": 65.56, "start": 62.56, "text": " could see clear bias in the way" }, { "end": 69.68, "start": 65.56, "text": " The YouTube recommendation system is being reported on and also in the research" }, { "end": 71.96000000000001, "start": 70.48, "text": " a" }, { "end": 75.08, "start": 71.96000000000001, "text": " There's some narratives. I think it might be because" }, { "end": 80.68, "start": 75.96000000000001, "text": " There's a lot of people worried about rhyming populism and this is a way to explain that" }, { "end": 87, "start": 80.68, "text": " They're looking for ways YouTube are radicalizing people and finding evidence for that or" }, { "end": 91.08000000000001, "start": 87.88000000000001, "text": " But that could be anecdotes or in some of the studies at sexual" }, { "end": 93.76, "start": 92.16000000000001, "text": " quantitative data" }, { "end": 98.64000000000001, "start": 93.76, "text": " But they're only looking to confirm it. So there's really obvious things. I think you covered in your video" }, { "end": 104.36000000000001, "start": 99.76, "text": " Some of them will just look for movement towards alright channels" }, { "end": 106.88000000000001, "start": 105.08000000000001, "text": " Through like centrist" }, { "end": 112.72, "start": 106.88, "text": " Or alt-light they call it instead of looking for both ways. Just really obvious things like that" }, { "end": 118.6, "start": 113.47999999999999, "text": " Calling it calling it an infection that cliche clearly shows that really looked at it" }, { "end": 124.36, "start": 118.6, "text": " Like a curious person would so I thought I could easily as a software engineer just collect all the data" }, { "end": 130.48, "start": 125.19999999999999, "text": " And without any complicated statistics just looking at the overall" }, { "end": 133.76, "start": 131.48, "text": " Flow of recommendations between the two" }, { "end": 137.48, "start": 133.76, "text": " the overall flow of recommendations between videos" }, { "end": 140.95999999999998, "start": 138.95999999999998, "text": " What their political influences?" 
}, { "end": 149.44, "start": 141.88, "text": " Yeah, this this was a thing that that bugged me of the paper that I made a video about is that they claim there's this" }, { "end": 155.62, "start": 150.07999999999998, "text": " radicalization pipeline, right and with pipeline everyone sort of understands a different thing" }, { "end": 162.12, "start": 155.62, "text": " But I think the general consensus is that the recommendation algorithm itself will steer you" }, { "end": 168.48000000000002, "start": 162.12, "text": " towards like more extreme content and in this case towards like the" }, { "end": 172.32, "start": 169, "text": " alt-right extremist content and the paper actually" }, { "end": 174.88, "start": 173.20000000000002, "text": " analyzed and said" }, { "end": 178.52, "start": 174.88, "text": " Okay, we found evidence that there is movement in this direction" }, { "end": 186.32, "start": 179.04000000000002, "text": " But they never shown that this is significantly more movement than like in in the other direction" }, { "end": 193.84, "start": 186.32, "text": " So in order to justify a pipeline one would need to show that the movement this way is about larger than" }, { "end": 196.44, "start": 194.35999999999999, "text": " this way in some notion and" }, { "end": 205.68, "start": 197.79999999999998, "text": " So I've I've found I've actually spoken to the author of that paper and he agrees with that but" }, { "end": 209.24, "start": 207.24, "text": " Obviously doesn't" }, { "end": 213.01999999999998, "start": 209.24, "text": " Doesn't have like energy to go into every you know go" }, { "end": 220.22, "start": 213.02, "text": " Refute everything that comes at them. They've also been a bunch of like they've also been exposed to a lot of" }, { "end": 228.54000000000002, "start": 221.34, "text": " Criticism, let's say as have you and I think even more when when your paper came out, I think" }, { "end": 233.5, "start": 229.34, "text": " The four days there was just a giant storm of people" }, { "end": 237.54000000000002, "start": 235.54000000000002, "text": " Attacking your paper basically" }, { "end": 246.06, "start": 237.54, "text": " Basically just just listing every single thing that's wrong with it and why this isn't valid and" }, { "end": 251.7, "start": 246.62, "text": " Things like this. So let's actually jump into what you did specifically" }, { "end": 253.82, "start": 252.5, "text": " so" }, { "end": 259.9, "start": 253.82, "text": " if I'm if I'm can summarize and you can then maybe correct so that we can" }, { "end": 263.4, "start": 260.46, "text": " Establish what happened you basically collected?" }, { "end": 269.32, "start": 263.4, "text": " recommendations, so you scrape these videos on YouTube and you collected these recommendations and" }, { "end": 277.28, "start": 270.84, "text": " We can we can see this so in your paper you then can make such diagrams" }, { "end": 280.84, "start": 278.08, "text": " Such as this one or these" }, { "end": 285.67999999999995, "start": 281.79999999999995, "text": " where in the middle the white bar is a" }, { "end": 292.26, "start": 286.44, "text": " Channel or a group that you're interested in and then to the left you can see where all the" }, { "end": 294.26, "start": 292.26, "text": " impressions of that" }, { "end": 299.53999999999996, "start": 294.86, "text": " Channel or group come from so what's where where basically the views come from?" 
}, { "end": 305.62, "start": 300.09999999999997, "text": " Through the recommendation system and on the right you can see of all the views the channel has retrieved" }, { "end": 309.26, "start": 305.62, "text": " Where do they go to so what what what is recommended next?" }, { "end": 317.34, "start": 309.26, "text": " Right, so it basically shows both directions for for every group and then you've also labeled these by multiple" }, { "end": 321.38, "start": 317.94, "text": " methods so that you can kind of establish these groups and" }, { "end": 322.58, "start": 321.38, "text": " What is pretty cool?" }, { "end": 330.38, "start": 322.58, "text": " We've built this website where you can analyze this and my computer is a bit overloaded at the moment" }, { "end": 336.65999999999997, "start": 330.38, "text": " But I promise it's really interactive. All right, so during the interview my computer crashed" }, { "end": 338.98, "start": 336.65999999999997, "text": " So I'm doing this in post-production" }, { "end": 341.7, "start": 339.7, "text": " Just to show you how this website operates" }, { "end": 347.38, "start": 342.02, "text": " So what you have here is an overview over all the channels of what rare recommendations were collected" }, { "end": 351.98, "start": 347.38, "text": " And they are grouped into groups for example here after partisan left" }, { "end": 358.7, "start": 352.3, "text": " Center left social justice partisan right and so on so you can see for each group or channel where" }, { "end": 361.02, "start": 359.02, "text": " recommendations come from and where they go to" }, { "end": 366.42, "start": 361.65999999999997, "text": " For example the large red one here. I happen to know that is Fox News" }, { "end": 374.38, "start": 367.74, "text": " You can see Fox News of the daily impression it received from itself" }, { "end": 382.38, "start": 374.38, "text": " 36 million impressions and it gives to itself 36 million these numbers have to agree by nature of how the data is collected" }, { "end": 383.86, "start": 382.38, "text": " of course" }, { "end": 388.1, "start": 383.86, "text": " But you can also see it gets 2.7 million impressions from CNN" }, { "end": 395.76, "start": 388.5, "text": " 2.6 million from the next news network and so on and it also gives quite a bit of recommendations to CNN and so on so" }, { "end": 402.78, "start": 396.54, "text": " You can go for example at some individual channel. 
Here's the daily wire the daily wire is" }, { "end": 404.78, "start": 402.78, "text": " Mainly run by Ben Shapiro" }, { "end": 410.38, "start": 404.78, "text": " So it's a bit more to the right of Fox News and a bit more on the direction of alternative media" }, { "end": 415.82, "start": 410.78, "text": " You can see the daily wire gets some most of its impression" }, { "end": 422.38, "start": 416.29999999999995, "text": " Count wise from itself from the daily wire, but it gives most of them to Fox News" }, { "end": 429.02, "start": 422.38, "text": " So actually you can see here that itself is a long way down" }, { "end": 432.29999999999995, "start": 429.02, "text": " Like in whatever sixth or seventh place" }, { "end": 440.21999999999997, "start": 432.85999999999996, "text": " So actually if you were to watch the daily wire the recommendation system would most likely steer you towards something like Fox News" }, { "end": 448.46, "start": 440.53999999999996, "text": " Whereas the the claim is that the YouTube algorithm would actually steer you towards more radical content" }, { "end": 455.41999999999996, "start": 448.85999999999996, "text": " Actually in in reality, it seems like it would steer towards more of these mainstream content" }, { "end": 460.14000000000004, "start": 455.42, "text": " So actually want to go to this tab you can see different groupings here and" }, { "end": 465.66, "start": 461.66, "text": " The radicalization pathways is the previous paper we have looked at" }, { "end": 472.94, "start": 465.66, "text": " So they have all these channels here of this radicalization pathway and you can see here the control group" }, { "end": 476.62, "start": 473.74, "text": " gives very very very few" }, { "end": 479.34000000000003, "start": 477.34000000000003, "text": " Impressions to the IDW" }, { "end": 485.5, "start": 479.34, "text": " The IDW gives much more impressions to the control group, right?" }, { "end": 492.14, "start": 486.29999999999995, "text": " Again, and the IDW gives very few impressions to the alt light compared to the amount of" }, { "end": 497.26, "start": 492.46, "text": " Impressions the alt light gives to the IDW and even to the control group" }, { "end": 501.09999999999997, "start": 497.26, "text": " And if you look at the alt right and we're going to zoom in on that here" }, { "end": 506.29999999999995, "start": 501.09999999999997, "text": " It's even more so the alt right of course receives most of its impressions from itself" }, { "end": 510.7, "start": 506.3, "text": " Which you could expect for any kind of group. This is your classic filter bubble situation" }, { "end": 517.26, "start": 511.26, "text": " But if we analyze the question of is there a pipeline you can see that" }, { "end": 525.26, "start": 518.46, "text": " Next most likely you are diverted to the IDW and to the control group much more" }, { "end": 529.5, "start": 525.26, "text": " Than you come from the IDW or the control group, right?" 
}, { "end": 535.42, "start": 529.5, "text": " Let's look at the the alt light so called this is kind of the so called gateway to the control group" }, { "end": 542.2199999999999, "start": 535.42, "text": " So called gateway to the alt right you can see here the alt light gives most of its impressions next to itself" }, { "end": 546.38, "start": 542.2199999999999, "text": " To the control group and the IDW so deradicalizing" }, { "end": 553.66, "start": 547.02, "text": " If you look at its way to the alt right, you'll see that it gets about four times as much impressions" }, { "end": 557.9799999999999, "start": 554.14, "text": " From the alt right as it gives to the alt right. So" }, { "end": 563.98, "start": 558.62, "text": " Basically, it's kind of taking the steam out of a quarter of all of these sessions and gives it" }, { "end": 568.46, "start": 563.98, "text": " To either the control group or the IDW or itself" }, { "end": 573.1800000000001, "start": 568.46, "text": " So this is exactly the opposite of what you would expect" }, { "end": 576.46, "start": 573.98, "text": " If you were to claim that there is a pipeline" }, { "end": 584.14, "start": 577.1800000000001, "text": " You would expect their most recommendations to come from more moderate content and go towards more extreme content" }, { "end": 589.4200000000001, "start": 584.38, "text": " But it's exactly the opposite and again, these are the exact channels that this original paper used" }, { "end": 594.6999999999999, "start": 589.42, "text": " Now what this paper find that the one that we're discussing if you go to media type here" }, { "end": 601.9799999999999, "start": 595.3399999999999, "text": " What you'll be able to see is the division into mainstream media youtube creator and so-called missing link media" }, { "end": 603.9799999999999, "start": 601.9799999999999, "text": " Which we'll leave out for a moment" }, { "end": 611.5799999999999, "start": 604.3, "text": " Let's focus on mainstream versus youtube creators. You can see the mainstream media gives most recommendations to itself" }, { "end": 618.86, "start": 611.58, "text": " While giving only very little recommendations to youtube creators and the missing link media" }, { "end": 624.38, "start": 618.86, "text": " While the youtube creators actually give almost half of their impressions. Look at that" }, { "end": 629.5, "start": 624.38, "text": " They they like give almost half of their impressions to the mainstream media" }, { "end": 636.5400000000001, "start": 631.0200000000001, "text": " Which means that there is a big big push by the algorithm to" }, { "end": 644.14, "start": 636.54, "text": " Towards these mainstream media away from youtube creators. 
So in general and I invite you to look at this website" }, { "end": 650.9399999999999, "start": 645.5, "text": " In general, you can pretty much see that the exact opposite of a" }, { "end": 659.9, "start": 651.5799999999999, "text": " radicalization pipeline is happening if you of course if you look at these recommendations and how they are distributed actually" }, { "end": 669.66, "start": 659.9, "text": " Most recommendation pathways are towards moderate centrist content and of course creating creating filter bubbles" }, { "end": 673.42, "start": 669.66, "text": " Which is a problem by itself, but is not a radicalization pipeline" }, { "end": 681.98, "start": 674.22, "text": " Lastly, I want to look at white identitarians because it's a one of the groups that people are referring to when they" }, { "end": 686.14, "start": 682.54, "text": " Claim that there are these radicalization pipelines. Look at that" }, { "end": 693.42, "start": 686.14, "text": " So of the white identitarian they get most of their impressions, of course from other white identitarian" }, { "end": 697.98, "start": 694.9399999999999, "text": " Videos which would be your filter bubble phenomenon" }, { "end": 705.34, "start": 699.1, "text": " But they give most and this is a group right the white identitarian channels give most of their" }, { "end": 712.06, "start": 705.98, "text": " Recommendations to the partisan right to the central and left mass mainstream media" }, { "end": 715.66, "start": 712.06, "text": " libertarians and and so on and uh" }, { "end": 719.8199999999999, "start": 716.1999999999999, "text": " Themselves are like really really really far down" }, { "end": 727.02, "start": 720.78, "text": " So to claim that there is this radicalization pipeline if you look at this data to me" }, { "end": 732.06, "start": 727.02, "text": " Seems not justified from this data and if I look at the other paper" }, { "end": 735.42, "start": 732.54, "text": " That really left out the important analysis" }, { "end": 738.4599999999999, "start": 736.14, "text": " Of the the backwards direction" }, { "end": 743.1800000000001, "start": 738.46, "text": " It seems that given everything it seems that the claim is not warranted" }, { "end": 746.46, "start": 743.9000000000001, "text": " All right back to the interview. Um" }, { "end": 755.6600000000001, "start": 748.5400000000001, "text": " Is that about like what you've done is that a good summary of of the data collection and analysis" }, { "end": 762.94, "start": 758.3000000000001, "text": " Um, there's a yeah, it's a good summary I can go into detail. Yeah, please" }, { "end": 769.5200000000001, "start": 762.94, "text": " Um, so youtube doesn't make it easy so I started this back in november in 2018" }, { "end": 772.62, "start": 770.22, "text": " And I was using the youtube api" }, { "end": 778.7, "start": 773.2600000000001, "text": " And to get enough uh to get enough quota because they limit the amount of requests you can actually make to their api" }, { "end": 782.46, "start": 779.4200000000001, "text": " I created multiple keys, which is against their um policy" }, { "end": 787.4200000000001, "start": 783.2600000000001, "text": " Um, and they also asked you to delete all your data after 30 days" }, { "end": 793.18, "start": 787.42, "text": " That's also part of their policy. 
So um later" }, { "end": 795.42, "start": 793.42, "text": " about I think it was october" }, { "end": 799.02, "start": 795.9799999999999, "text": " 2019 they cut off my access because I was doing that" }, { "end": 803.42, "start": 799.8199999999999, "text": " So I had to move to just uh scraping websites and now" }, { "end": 809.42, "start": 804.06, "text": " My collection process actually just loads up the website and gets the recommendations from the actual page like a user would" }, { "end": 812.54, "start": 811.0999999999999, "text": " Um" }, { "end": 818.9399999999999, "start": 812.54, "text": " And that's difficult because they block access after a couple of hundred requests. They'll" }, { "end": 823.02, "start": 819.5, "text": " They'll stop you that machine from actually requesting from the website" }, { "end": 825.26, "start": 823.74, "text": " So I need to" }, { "end": 827.26, "start": 825.26, "text": " Use a proxy service that" }, { "end": 836.62, "start": 828.14, "text": " That's fairly expensive and what they do is they simulate or they have actual residential connections through your home connection like atnt" }, { "end": 838.62, "start": 837.5, "text": " and" }, { "end": 843.98, "start": 838.62, "text": " my requests get tunneled through that like a variety of locations in the states to get um" }, { "end": 846.54, "start": 844.54, "text": " A representative kind of sample" }, { "end": 852.86, "start": 849.82, "text": " Cool so so the data collection is" }, { "end": 855.82, "start": 853.58, "text": " Would you say that's that's the hardest part?" }, { "end": 860.38, "start": 857.1, "text": " I feel the labeling of channels is also not so easy" }, { "end": 863.82, "start": 860.62, "text": " But you've you've managed to kind of do that" }, { "end": 870.46, "start": 863.82, "text": " Half automated also half collecting things from kind of um sources that analyze these channels" }, { "end": 877.74, "start": 871.0200000000001, "text": " But at least for for most of the things that i've inspected I found the labeling to be pretty sane" }, { "end": 884.94, "start": 878.22, "text": " I think this is always something you can attack the the original paper was also attacked on how they label" }, { "end": 887.98, "start": 884.94, "text": " I find this to be kind of vicarish" }, { "end": 894.54, "start": 887.98, "text": " Mostly I think your labels are pretty good as well. The other papers labels are also mostly pretty okay" }, { "end": 897.5, "start": 894.54, "text": " Yeah, so let's let's go to it. 
Sorry" }, { "end": 907.02, "start": 899.02, "text": " Yeah, it's quite subjective I expected the labeling to be what I get my pushback on but it turns out it was um" }, { "end": 909.82, "start": 907.82, "text": " the anonymous" }, { "end": 911.82, "start": 909.82, "text": " collection" }, { "end": 919.74, "start": 911.82, "text": " So what you've actually found here what are what would you say are your your main results and I can maybe" }, { "end": 922.7, "start": 921.1800000000001, "text": " Show" }, { "end": 928.38, "start": 922.7, "text": " So you've analyzed a bit where do things come from where do things go to and" }, { "end": 933.2600000000001, "start": 929.74, "text": " I found this this part here to be" }, { "end": 936.3800000000001, "start": 933.98, "text": " One of the even though it's pretty simple" }, { "end": 943.66, "start": 936.38, "text": " One of the core things to say about this is that mostly what you found" }, { "end": 946.38, "start": 944.86, "text": " could be" }, { "end": 948.38, "start": 946.38, "text": " said is" }, { "end": 955.42, "start": 948.54, "text": " It's simply a recommendation algorithm working as a recommendation algorithm should which means it creates" }, { "end": 961.42, "start": 956.06, "text": " You know your typical filter bubbles if you if I watch one minute of this video" }, { "end": 965.8199999999999, "start": 961.42, "text": " All of a sudden my site is filled with makeup tutorials and things like this" }, { "end": 968.78, "start": 965.9799999999999, "text": " But also you found that there is quite some" }, { "end": 975.66, "start": 969.74, "text": " Over the top push towards what could be considered mainstream media and there is" }, { "end": 978.78, "start": 976.3, "text": " A bit of a draw away from the smaller" }, { "end": 986.2199999999999, "start": 979.5, "text": " YouTuber like channels is that is that something that like is that character? I don't know" }, { "end": 991.82, "start": 986.22, "text": " That's right. So it yeah, that's a good way to characterize it if that chart we're looking at now" }, { "end": 994.62, "start": 992.62, "text": " If it was a neutral algorithm" }, { "end": 1001.74, "start": 995.58, "text": " The green bars would be the same as the gray ones. So you you receive the same amount of recommendations as you give" }, { "end": 1007.26, "start": 1003.5, "text": " That would be proportional to the views that you get the future organically" }, { "end": 1011.98, "start": 1009.58, "text": " The recommendations that you get from the green bars" }, { "end": 1019.9200000000001, "start": 1011.98, "text": " That you get the future organically. Um the recommendations you receive be equivalent to that but we find that it disproportionately" }, { "end": 1023, "start": 1021, "text": " recommends mainstream media channels" }, { "end": 1027.98, "start": 1023.26, "text": " That's not even though. 
So it's not like um, it doesn't look like it's consistently doing that" }, { "end": 1030.8600000000001, "start": 1028.8600000000001, "text": " So you can find exceptions to that" }, { "end": 1032.6200000000001, "start": 1030.94, "text": " rule" }, { "end": 1039.26, "start": 1032.6200000000001, "text": " is um, I I believe one of the main criticisms of your paper has been that you" }, { "end": 1042.7, "start": 1039.26, "text": " Only use data from 2019 onwards" }, { "end": 1044.3799999999999, "start": 1043.42, "text": " and" }, { "end": 1051.9, "start": 1044.3799999999999, "text": " I have actually looked at your website and your website a lot of times says that the data has been collected from way earlier than that" }, { "end": 1056.06, "start": 1052.86, "text": " um, so is it that you've almost only used" }, { "end": 1059.74, "start": 1056.54, "text": " 2019 data in your paper" }, { "end": 1064.3799999999999, "start": 1060.3, "text": " or what is in in the pipe the pipe is just from" }, { "end": 1067.58, "start": 1065.34, "text": " um november and december 2019" }, { "end": 1070.1399999999999, "start": 1067.58, "text": " and the reason we did that um" }, { "end": 1074.86, "start": 1071.26, "text": " Is that we only had 400 channels before that" }, { "end": 1078.86, "start": 1075.8999999999999, "text": " And the collection process have changed over time" }, { "end": 1083.26, "start": 1078.86, "text": " So this is a clean set of data we could look at and I thought the most recent was the most relevant" }, { "end": 1084.6999999999998, "start": 1083.26, "text": " So what is it doing now?" }, { "end": 1088.22, "start": 1084.6999999999998, "text": " But um, i've provided i've got the same analysis over time" }, { "end": 1093.1, "start": 1088.22, "text": " So i've got a gift that I made that everyone can look at which goes through all the months i've been collecting" }, { "end": 1099.26, "start": 1093.1, "text": " Um, and you can see that chart for where it goes to and has gone through a bunch of changes so in about april 2019" }, { "end": 1103.82, "start": 1100.06, "text": " That's when they really clamped down on conspiracies and other fringe channels" }, { "end": 1107.02, "start": 1104.6999999999998, "text": " Before that was it was much closer to neutral" }, { "end": 1112.86, "start": 1108.6999999999998, "text": " Okay, so but it never it never looked like a a rabbit hole it's never favoring" }, { "end": 1119.34, "start": 1113.74, "text": " Fringe channels. Yeah. I mean that that has been my experience also personally on youtube. I've" }, { "end": 1123.58, "start": 1119.34, "text": " I've joined youtube very early or i've i've watched youtube very early when" }, { "end": 1130.9399999999998, "start": 1124.62, "text": " Young earth creationism was still active and then these things were kind of completely discredited by simply" }, { "end": 1133.82, "start": 1131.82, "text": " having you having" }, { "end": 1139.1799999999998, "start": 1134.22, "text": " People exposed to other points of view and even I find this now" }, { "end": 1143.4199999999998, "start": 1139.1799999999998, "text": " Even though youtube makes it kind of easy to find let's say niche content" }, { "end": 1149.5800000000002, "start": 1143.42, "text": " It also exposes you to a bunch of of different views. 
Um, and and" }, { "end": 1152.38, "start": 1150.38, "text": " I've always found this to be very" }, { "end": 1153.98, "start": 1152.7, "text": " very" }, { "end": 1155.98, "start": 1153.98, "text": " optimistic in the sense of" }, { "end": 1159.9, "start": 1156.3000000000002, "text": " This is probably deradicalizing much more people than radicalizing" }, { "end": 1166.38, "start": 1160.38, "text": " But you've you've received like as I said a bunch of criticism in so if you could" }, { "end": 1168.7, "start": 1167.26, "text": " What was the" }, { "end": 1175.66, "start": 1168.7, "text": " The largest criticism irrespective of whether it was valid or not. What do you have you found was kind of what most people?" }, { "end": 1178.3, "start": 1176.3, "text": " were criticizing" }, { "end": 1184.94, "start": 1179.42, "text": " Most people criticizing that we were collecting anonymous recommendations. It wasn't the personalized ones" }, { "end": 1191.66, "start": 1184.94, "text": " Yeah, and it's actually like it is a valid limitation. We had it. There's a first limitation we talked about in this paper" }, { "end": 1194.54, "start": 1193.18, "text": " And" }, { "end": 1196.54, "start": 1194.54, "text": " It's still an open question" }, { "end": 1201.98, "start": 1196.54, "text": " How personalization would affect these aggregate results that we've got but I think it's reasonable" }, { "end": 1208.1399999999999, "start": 1202.54, "text": " To assume it will be quite similar once you average it out. So for any one person it might be different" }, { "end": 1213.8999999999999, "start": 1209.18, "text": " But you would expect personalization based on someone's history to even out because" }, { "end": 1217.98, "start": 1214.46, "text": " It's kind of the algorithms kind of like the average of all that when it's anonymous" }, { "end": 1220.54, "start": 1218.54, "text": " Yeah, I feel like you'll get that" }, { "end": 1222.78, "start": 1221.5, "text": " the" }, { "end": 1224.78, "start": 1222.78, "text": " the the notion of" }, { "end": 1227.1, "start": 1224.78, "text": " the the the notion that" }, { "end": 1234.94, "start": 1228.06, "text": " If because if you're not logged in the recommendation is like a person with only one video of history, right?" }, { "end": 1241.18, "start": 1235.5, "text": " So it's it's the same thing, but there's only one hit point of history instead of multiple I find" }, { "end": 1248.7, "start": 1241.8999999999999, "text": " Why should why should the behavior be qualitatively different if you have multiple points of history?" 
}, { "end": 1256.3, "start": 1248.7, "text": " like this is a strong claim that you have to you'd have to really show that there is a qualitative difference not just" }, { "end": 1263.98, "start": 1256.8600000000001, "text": " a more or less accuracy and I feel the people making this criticism are it's really on them to show that there is" }, { "end": 1271.18, "start": 1264.8600000000001, "text": " a substantial difference rather than saying that this is a giant limitation of the work" }, { "end": 1277.82, "start": 1273.42, "text": " Yeah, and it's also very hypocritical for a lot of the people saying it because" }, { "end": 1279.34, "start": 1277.82, "text": " some of them like" }, { "end": 1285.82, "start": 1279.34, "text": " Zion out who was mockingly saying that her article her original article in New York Times" }, { "end": 1291.4199999999998, "start": 1286.3, "text": " Used algo transparency, which is anonymous as well, but she doesn't she never looked into that" }, { "end": 1296.46, "start": 1291.4199999999998, "text": " I think a lot of this is completely motivated reasoning. They don't they don't care about the details" }, { "end": 1301.1, "start": 1297.34, "text": " I've I've seen this one this one twitter user" }, { "end": 1308.4599999999998, "start": 1301.1, "text": " She she comment she said something to the effect of if you've seen this article, please consult" }, { "end": 1311.02, "start": 1308.62, "text": " someone that works in this space like" }, { "end": 1313.4199999999998, "start": 1312.2199999999998, "text": " it's" }, { "end": 1319.26, "start": 1313.4199999999998, "text": " It's please don't don't read the article yourself. You must you must get your information through someone" }, { "end": 1322.9399999999998, "start": 1320.9399999999998, "text": " I've actually i've read the article" }, { "end": 1327.02, "start": 1323.4199999999998, "text": " I've I find it's pretty straightforward the limitations are clear" }, { "end": 1332.86, "start": 1327.02, "text": " But also the the results are pretty clear and it's it's actually mostly a boring article, right if if" }, { "end": 1340.7, "start": 1334.06, "text": " I'm sorry, like it's not a criticism. This is good. Like it's mostly you find that things work as expected" }, { "end": 1346.46, "start": 1340.78, "text": " There is a bit of a push towards mainstream which can be probably explaining that youtube wants to be" }, { "end": 1351.42, "start": 1347.16, "text": " Advertiser friendly right and these mainstream channels are already are" }, { "end": 1358.8000000000002, "start": 1351.42, "text": " Advertiser friendly so they probably get bumped a bit. Um, if what would you say is" }, { "end": 1361.92, "start": 1359.92, "text": " Maybe the most the most" }, { "end": 1366.24, "start": 1362.24, "text": " valid criticism that you've heard maybe not the biggest but the most" }, { "end": 1368.8000000000002, "start": 1366.8000000000002, "text": " Where do you where you say? 
Yeah, this is really" }, { "end": 1371.8400000000001, "start": 1369.28, "text": " This is really something that is you know" }, { "end": 1376.3200000000002, "start": 1374.3200000000002, "text": " I think um, I guess what's" }, { "end": 1383.36, "start": 1376.32, "text": " Um, there was criticism that i'm overclaiming not in the paper so much but in my tweets and medium" }, { "end": 1385.9199999999998, "start": 1383.9199999999998, "text": " I guess that's that's fair" }, { "end": 1390.72, "start": 1386, "text": " But I guess when I tweet and write in medium, those are what I believe in kind of a vasian way" }, { "end": 1393.28, "start": 1391.12, "text": " I'm not catching my claims that you would" }, { "end": 1396.6399999999999, "start": 1394.6399999999999, "text": " When you're writing a paper" }, { "end": 1400.8, "start": 1398.8, "text": " So I guess that's valid" }, { "end": 1403.36, "start": 1401.36, "text": " But I think a lot of people read into what I was saying" }, { "end": 1406.6399999999999, "start": 1403.36, "text": " More than what I was so when I say the algorithm" }, { "end": 1413.52, "start": 1407.76, "text": " Has a de-radicalizing influence. I'm just talking about the recommendations whereas a lot of people consider that to be" }, { "end": 1416.32, "start": 1414.08, "text": " Talking about all things considered so" }, { "end": 1422.24, "start": 1417.28, "text": " Even if it isn't doesn't have a bias towards a fringe maybe sociologically youtube" }, { "end": 1426.1599999999999, "start": 1422.9399999999998, "text": " Radicalizes people it could be the case. I don't know" }, { "end": 1431.54, "start": 1426.9599999999998, "text": " Um, but that's what i'm talking about. I'm talking about just the influence through recommendations" }, { "end": 1437.46, "start": 1431.54, "text": " And that's all we can hold google accountable for or at least it's what probably all could agree that google" }, { "end": 1440.74, "start": 1437.7, "text": " Should be held accountable for with its recommendation system" }, { "end": 1449.62, "start": 1442.42, "text": " Yeah, do you um, do you expect something to come or have you heard something to come out of youtube themselves?" }, { "end": 1454.5, "start": 1449.62, "text": " Like the the company any form of official statement to this?" }, { "end": 1460.26, "start": 1456.6599999999999, "text": " Nothing nothing at all. Um, the only I got a vague" }, { "end": 1463.78, "start": 1460.26, "text": " I got a vague a reporter was complaining that youtube sent them this" }, { "end": 1466.82, "start": 1464.82, "text": " So I think they've read it" }, { "end": 1469.86, "start": 1467.78, "text": " But I have no absolutely no contact with them" }, { "end": 1473.14, "start": 1471.3799999999999, "text": " Okay" }, { "end": 1477.54, "start": 1473.14, "text": " Cool, are you doing any anything in follow-up or do you have plans for more research?" }, { "end": 1484.9, "start": 1479.62, "text": " None of this i've just gone back to work i've applied a bunch for a bunch of independent grant money" }, { "end": 1492.1200000000001, "start": 1484.9, "text": " But i'm not optimistic. So if I don't get that i'll keep i'll keep it pattering along. I'll probably reduce the amount of recommendations" }, { "end": 1494.98, "start": 1492.98, "text": " because i'm spending like" }, { "end": 1500.9, "start": 1495.6200000000001, "text": " About 500 a month at the moment just keeping it running. 
So I gotta reduce my costs" }, { "end": 1506.26, "start": 1501.6200000000001, "text": " Yeah, and you do have a patreon for people to to chip into that, right?" }, { "end": 1510.66, "start": 1507.5400000000002, "text": " Yeah, so if you can link to that that'd be good. So if i'm getting something like" }, { "end": 1515.38, "start": 1510.66, "text": " Like 22 a month, so it doesn't really cover it" }, { "end": 1517.3000000000002, "start": 1516.1000000000001, "text": " Yeah" }, { "end": 1518.66, "start": 1517.3000000000002, "text": " all right, so" }, { "end": 1523.14, "start": 1518.66, "text": " Okay, this this has been very very pleasant. I think we've we've kind of looked at" }, { "end": 1526.18, "start": 1523.94, "text": " a lot of things is there anything you would like to" }, { "end": 1528.26, "start": 1526.8200000000002, "text": " amend" }, { "end": 1532.1000000000001, "start": 1528.26, "text": " To this that people should know about the research or about this this field" }, { "end": 1538.5800000000002, "start": 1533.6200000000001, "text": " No, I just have a um, I encourage you to have a play digging into data yourself. There's" }, { "end": 1542.58, "start": 1538.58, "text": " Um, if you're in this area the data is free to use the code's free to use" }, { "end": 1546.58, "start": 1543.54, "text": " Um, just consider this a contribution to knowledge" }, { "end": 1549.62, "start": 1548.4199999999998, "text": " Cool" }, { "end": 1554.82, "start": 1549.62, "text": " Well, thanks a lot mark. Um, I wish you a very pleasant evening for you, I guess" }, { "end": 1556.8999999999999, "start": 1555.3799999999999, "text": " and" }, { "end": 1558.8999999999999, "start": 1556.8999999999999, "text": " Cheers. Thanks" }, { "end": 1560.8999999999999, "start": 1558.8999999999999, "text": " Thanks for having me. Bye" }, { "end": 1568.9, "start": 1560.9, "text": " Bye" } ]
i4H0kjxrias
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Reformer: The Efficient Transformer
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "machine translation", "arxiv", "google", "attention mechanism", "attention", "transformer", "seq2seq", "bert", "memory", "lsh", "locality sensitive hashing", "reversible", "revertible", "flow", "long sequence" ]
The Transformer for the masses! Reformer solves the biggest problem with the famous Transformer model: Its huge resource requirements. By cleverly combining Locality Sensitive Hashing and ideas from Reversible Networks, the classically huge footprint of the Transformer is drastically reduced. Not only does that mean the model uses less memory, but it can process much longer input sequences, up to 16K tokens with just 16 GB of memory! https://arxiv.org/abs/2001.04451 https://ai.googleblog.com/2020/01/reformer-efficient-transformer.html Abstract: Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L^2) to O(L log L), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences. Authors: Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we'll look at Reformer, the Efficient Transformer, by Nikita Kitaev, Łukasz Kaiser and Anselm Levskaya. This is a paper that tries to reduce the extreme resource requirements of the transformer model. Now if you haven't seen the transformer model before, that's this thing, I suggest you go watch for example my video on it, Attention is All You Need, it's called, where the transformer is introduced. The most famous transformer is called BERT, B-E-R-T, and you can also look that up, I've made a video about this. So what's the issue here? If you remember transformers, they need a lot of memory. And why? That's because in each layer they compute these attention things. Let's recap shortly. In a transformer you propagate information layer by layer. So you have a layer here with some signal, and then the next layer that you try to propagate the signal to. Now what you do is you assign queries to each node of the next layer. So each node of the next layer has a query, and queries are just vectors. This is a vector, this is a vector, this is a vector, and so on. So basically the next layer has the ability to ask the last layer what it wants. This is a kind of an intrinsic property of attention, and, as I said, I explained this in detail in the video Attention is All You Need. Basically these are what's called queries, Q. And then this layer is exposing what are called keys, and keys again are vectors. So vector, vector, vector, vector, and so on. So keys are vectors, and the way that the information is propagated to the next layer is: whenever we consider, for example, this node here, right, this node, let's make that yellow. When we consider this node here, it is going to look in the last layer for which keys match its query the most. And in this case it will probably be this key and this key, right, they match the query the most. And here we look at the inner product, so the angle between the vectors. And then information is aggregated by simply taking a weighted average of the values. So information is coming in here and here. Actually information is coming into all the nodes, but since only these keys match, the information will be propagated like this, to this unit. We could do this for another unit, for example this unit right here. What's the value of this unit? Well we have to look at its query. Which key is it going to be matched to? It's probably going to be matched to this key right here. And probably no other key really. Maybe this key a little bit. So the information of that node in the next layer will be whatever information is coming in here, routed there, and a little bit of this information. So this is not hard attention, it's what's called soft attention. So there's a little bit of information going everywhere, but the majority of the information is coming from the nodes where the keys match. So these are queries, these are keys, and technically these things coming in here are called values. But imagine the values simply as the information to be propagated, and the queries and the keys are responsible for routing that information to the next layer. All of these things are learned. So the queries, the keys, and the values. Now what's the problem? The problem is between the queries and the keys. As you can see, what you have to do is match every single query with every single key in order to find out where information goes.
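To make this routing concrete, here is a minimal sketch of soft attention in NumPy. This is only an illustration, not the authors' code; the names Q, K, V and the sizes 6, 8, 16 are made up for the example.

import numpy as np

def soft_attention(Q, K, V):
    # Q: (n_queries, d), K: (n_keys, d), V: (n_keys, d_v)
    # every query is compared against every key -> (n_queries, n_keys) score matrix
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    # softmax over the keys: most of the weight goes to the best-matching keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # the output for each query is a weighted average of the values
    return weights @ V

Q = np.random.randn(6, 8)     # 6 queries of dimension 8
K = np.random.randn(6, 8)     # 6 keys of dimension 8
V = np.random.randn(6, 16)    # 6 values of dimension 16
out = soft_attention(Q, K, V) # shape (6, 16)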
So this becomes, if you have D keys and D queries, order of D squared operations that you have to do, and of course D squared values that you have to compute. And since these are all vectors, D is not only the number of keys; each of these comparisons is itself an inner product over the dimensionality, let's call that capital D. So D squared inner products between vectors of capital D dimensions. So it's not an easy thing to do in terms of resources. You need a lot of resources to hold all of this in memory at the same time and to compute all of these things. The reformer aims to solve this problem. So this giant space problem that the transformers have: space, memory, and also a computational problem to a lesser degree. Mostly it's a memory issue. Alright, so what is happening here? And you see here that this product between two matrices clearly gives you this kind of squared thing. So what's happening in the reformer to fix this? The trick is, if we go back to this drawing, the trick is to create what's called a hashing scheme or buckets. In creating buckets what you want to do is you want to group similar things together. So let's say we create four buckets. Bucket one, bucket two, bucket three, bucket four. And each bucket we label. Bucket one we label with the up direction, this with the right direction, this with the down direction, this with the left direction, as vectors. And now we simply put each of the things into the bucket where it belongs most. So, for example, this vector here, it goes here. Sorry, that is like absolutely not the right place. It goes probably here, right? This vector here, probably this one goes here, right? And so on. So you'll end up assigning each of these to a bucket. So these all go into that bucket. Let's continue, actually let's also put the keys in the same buckets. So also the keys: this key here probably goes to this bucket. This key here probably goes to this bucket. Let's say this key here probably goes to the bucket over here. You already see, so before, right before, we cared about this particular query and this particular key. We just looked and we said those two will probably route information to each other because they're similar. And now you can see they both ended up in the same bucket. So the idea is to create a scheme where you throw these things into buckets such that if two vectors are similar they will end up in the same bucket with high probability. So you'll only have to really compare things within the same bucket and not across all of these d squared elements. That's the idea, and the technique here is called locality sensitive hashing, in short LSH. The idea is the following: if you have two vectors v1 and v2 and you have a distance measure d, what you want is, if the distance between v1 and v2 is small, then you want them in the same bucket, and if the distance is large then you want them in different buckets, with high probability. So here you want them in the same bucket with high probability p, and here you want them in different buckets with high probability, or equivalently in the same bucket with low probability. That's an equivalent form of stating it.
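Just to get a rough feeling for why that helps, here is a back-of-the-envelope sketch with made-up numbers; these are not the paper's settings, and it assumes the buckets are evenly filled.

L = 16384                  # example sequence length
full_pairs = L * L         # full attention: every query against every key
n_buckets = 128
avg_bucket = L // n_buckets
bucket_pairs = n_buckets * avg_bucket * avg_bucket  # only compare within each bucket
print(full_pairs)                  # 268435456 comparisons
print(bucket_pairs)                # 2097152 comparisons
print(full_pairs // bucket_pairs)  # about 128x fewer, under these assumptions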
This is all formalized and I can direct you to the Wikipedia page of that. It's pretty good, it gives a concise definition and it gives a number of examples. So one example I'd like to give here for locality sensitive hashing: of course the scheme of bucketing will all depend on what your distance measure is. If you consider the distance measure simply to be the Jaccard distance. So let's say we have two vectors 0 1 0 1 and here we have 1 0 1 1 0 1 and here it's 0 0 0 1. Alright so maybe you can see the first two vectors here are much closer together than the last vector. Now in terms of bit differences, one scheme to do locality sensitive hashing is to simply sub sample bits. So in this case, and this is a slightly constructed example, we will just sub sample the first two bits and then construct the buckets according to these bit values. So since we sample two bits we have four buckets. Here is 0 0, here is 0 1, here is 1 0 and here is 1 1. That's the concept of locality sensitive hashing. You have these buckets and then you can say alright, this vector has 1 0, goes into this, this goes into this, and then that goes into the 0 1 bucket. And you end up with what you want: you have the two close vectors in the same bucket and the far apart vector in a different bucket. Of course that doesn't always work. You can be unlucky in sub sampling, but that's the kind of trade-off you'll have to go for. It happens with low probability, but if things that are close together happen to end up in different buckets then basically you lose the fact that they are close to each other, and that's the trade-off. The kind of locality sensitive hashing they use in the reformer now is what are called random projections. So let's say you have a bunch of vectors, and that's really what we care about: you have a bunch of vectors, the keys and queries, and you want to create buckets such that vectors that are close together will end up in the same bucket and vectors that are far apart will end up in different buckets. A cool way to do this, and this is in the cosine distance so we care about the angle between vectors, is to use random plane projections, and the cool thing about it is it works for the cosine distance and you can basically choose how many buckets you create. Let's say we want to create four buckets here again. What we need is two hyper planes and what we'll do is, so here is the origin, we'll simply create two hyper planes through the origin at random. So I'm gonna draw a random hyper plane here like this and then a second random hyper plane like this. So you would agree those are pretty random hyper planes, as much as I can be a random generator, and then we'll simply label them, so this will be hyper plane one, this will be hyper plane two. Now we simply assign each vector bits according to which side of the hyper planes it lies on. So let's call this here the plus side and this here the minus side, and here also we call this the plus side and this the minus side. So this vector here, its signs are plus plus, right, because it's on the plus side of both hyper planes. This vector plus plus, this one plus plus, this one here is on the negative side of plane two but on the positive side of plane one so it's plus minus, this one here minus minus, minus minus, minus minus, and these are your buckets.
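Here is a small sketch of this random-hyperplane hashing in NumPy. Again this is only an illustration under my own simplifications, not the implementation from the paper, and the vector sizes are invented for the example.

import numpy as np

def lsh_bucket_ids(vectors, n_planes, seed=0):
    rng = np.random.default_rng(seed)
    # random hyperplanes through the origin, one normal vector per plane
    planes = rng.standard_normal((vectors.shape[-1], n_planes))
    # one sign bit per plane: which side of the plane does the vector fall on
    signs = (vectors @ planes) > 0
    # pack the sign bits into an integer bucket id, 2**n_planes buckets in total
    return (signs * (2 ** np.arange(n_planes))).sum(axis=-1)

vecs = np.random.randn(10, 8)              # 10 example vectors of dimension 8
print(lsh_bucket_ids(vecs, n_planes=2))    # bucket ids in {0, 1, 2, 3}
# vectors with a small angle between them tend to get the same id,
# because a random hyperplane rarely falls between two nearby directions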
So you would group these vectors together because they have the same signs. You would group that vector, you would group these vectors together. The combination of this with attention: you've seen attention uses a softmax, and the softmax is dominated usually by the largest elements, and since we compute inner products it means that this softmax thing is dominated by the vectors that have large inner products. So basically you don't have to look at all of these d squared vectors if you can find the ones that have the closest distance. You can pretty much ignore the others. And LSH allows you to do this. So build buckets of vectors with similar directions. Then you only have to care about these vectors, comparing them to each other. So that's not a lot of vectors generally and that's how you save a lot of work. So you will only have to care about these three vectors if your query vector for example is right here. You'll only have to care about the things in the same bucket and you can ignore all the rest of the space. Of course the more hyperplanes you have, the more buckets you'll have and the fewer vectors you'll have in the same bucket. That's the general idea. I find this explanation to be a bit easier. You can equivalently explain it by doing these kinds of random rotations in the space. You can think about how that will end up actually being the exact same thing as what I just explained. I just like my explanation better, I think. Alright, so the way they use this, they have an illustration right here, is the following. So they have these keys, right? A sequence of queries and keys. So they use equivalent queries and keys, which is a thing you can do in transformers. Don't worry too much about whether they're different or not. But then they do this LSH bucketing, and here the color of the cell is just the bucket, the LSH bucket it will end up in. Then they sort that, right, as you can see, and now they do an additional thing which is called chunking. As you can see, there is not the same number of vectors in each bucket and that is sometimes a problem, because even though you've reduced the memory, the memory requirements are still dominated by the largest bucket. Whatever bucket has the largest number of vectors will pretty much determine your memory requirement. Because now, if this is D, you don't have to compute all the D squared things anymore, you'll only have to compute this quantity, let's call that B, the maximum bucket size. But that could still be large, right? If you look at a distribution it's probably going to be something like this, right? Where most buckets have kind of a standard number of vectors, but some buckets, sorry, some few buckets will have a lot of vectors, and your memory requirement is still dominated by this. So they do an additional thing which is called chunking, which means they actually take fixed size chunks here, fixed size. Here they always take four and they say alright, these are our chunks and we will only compute attention within the chunks, right? So it could be that the same bucket is actually split between chunks, and that's why they do an additional thing: you can attend to things in a neighboring chunk right here. You can attend to things in your neighboring chunks, so you're restricted to either your own chunk or your neighboring chunk. Note that there aren't any arrows going over here.
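To make this sort-and-chunk step a bit more concrete, here is a toy sketch of which query-key pairs survive. The bucket ids and the chunk size are invented for the example, and the exact masking rules in the paper differ in detail, so treat this purely as an illustration.

import numpy as np

def allowed_pairs(buckets, chunk_size):
    # sort positions by bucket id so same-bucket items sit next to each other
    order = np.argsort(buckets, kind="stable")
    n_chunks = len(order) // chunk_size
    chunks = order[: n_chunks * chunk_size].reshape(n_chunks, chunk_size)
    pairs = []
    for i in range(n_chunks):
        # a position may look at its own chunk and at the previous chunk...
        candidates = chunks[i] if i == 0 else np.concatenate([chunks[i - 1], chunks[i]])
        for q in chunks[i]:
            for k in candidates:
                if buckets[q] == buckets[k]:   # ...but only at keys in its own bucket
                    pairs.append((int(q), int(k)))
    return pairs

buckets = np.array([2, 0, 1, 0, 2, 1, 0, 2])  # toy LSH bucket ids for 8 positions
print(allowed_pairs(buckets, chunk_size=4))   # far fewer pairs than 8 * 8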
So they have this diagram here of which things you can attend to. You can attend to yourself or attend to your neighboring thing, but not to any other thing, or the other way around, right? So that's basically the concept of saving memory. Now, for your memory requirements, we called the other quantity B, so let's call this chunk size C, right? Your memory requirements are pretty much C squared, plus whatever this one-directional attention to the neighboring chunk costs, and that isn't squared, so probably plus O of C, something like this. So you bring your memory requirements down quite a bit. Now that's the general idea here. The problem they face again is, so they face another problem where they say hold on, we actually do have another problem, and that is that these transformers have to back propagate. So you'll have to forward propagate these things, and now we've kind of solved this D square computation issue, but what you'll have to do is, if you go from layer to layer, right? Layer, layer, layer, layer. What you have to do is, if you propagate information forward, you still have to back propagate, and in order to back propagate, usually you'll have to remember all of these activations, right? So these activations, these activations. In order to do back prop it is often the case that you actually have to remember the activations, because in each forward propagation, in each layer here, you might lose some information. Imagine you have a layer that, so here actually let's make this blue, maps these three two-dimensional vectors to the following configuration. So a layer maps these vectors to this, this and this. So it maps two things to one thing, which can happen if, for example, a linear layer decides to map to a lower dimensional subspace. So you could actually decide to map it to in fact two points, right? This is also a possibility. You could do dimension reduction. So because of all of this, you actually have to remember these things in order to do proper back prop. This is a problem again for the transformer, because all these activations, even though we've gotten rid of the d-square computation, they will have to be remembered, and that takes a lot of memory. The way to solve this is actually to do invertible layers. What that means is that if I propagate information forward, forward, forward, forward, I can figure out what the information here was simply by looking at the activations during the backward pass. And this happens if the layer is invertible. So if this function here is invertible. So if f here technically is invertible. So I can actually write down the inverse of f and that is defined. This of course is a pretty big restriction, and the way they achieve it, I like to go to the blog here, is that they use an idea from reversible networks where they always have two sets of activations. That's what you see here, X1 and X2. And in each layer only one of them is updated in a residual fashion. You can see here layer 1 updates X2 but X1 remains the same and goes to Y1. And then in the next layer, layer 2 only updates Y1 in order to construct Z1, but Y2 remains the same to become Z2. And then you can invert the layers. You can basically figure out what the activations were from the back prop signal. Now that's extremely good if you want to save memory, but of course it clearly restricts you: you have to be restricted to this kind of architecture or something similar. This idea actually isn't new.
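In code, one common way to write this kind of reversible coupling, as used in reversible residual networks, looks like the following. The functions F and G are toy stand-ins for the real attention and feed-forward blocks, so this is a sketch of the idea, not the Reformer's actual implementation.

import numpy as np

F = lambda x: np.tanh(x)   # stand-in for the attention block
G = lambda x: 0.5 * x      # stand-in for the feed-forward block

def reversible_forward(x1, x2):
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def reversible_inverse(y1, y2):
    # recover the layer inputs from its outputs alone, no stored activations needed
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = np.random.randn(4), np.random.randn(4)
y1, y2 = reversible_forward(x1, x2)
r1, r2 = reversible_inverse(y1, y2)
print(np.allclose(x1, r1), np.allclose(x2, r2))   # True True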
This has been used many times in things like normalizing flows, and I want to highlight this paper, actually a specific one: I chose this paper because they have these nice diagrams where they show exactly this. You see they have two sets, X1 and X2, and in forward propagation they only update one of them. And then backward, in what's called inverse propagation, they can figure out what those were. And they couple these in exactly the same way. This drawing here might be even more similar, where they alternate between updating the two activations. So you can think of this as a way to simply make the function that you're representing with the neural network invertible. That is a giant constraint on your architecture, but these methods here, these normalizing flow methods, use that so they can actually define an invertible layer, because they need the Jacobian inverse in order to compute their normalizing flow. So you see, that's why they originally did it. And I'm sure that's not a particularly new idea here either. Strangely I haven't found any of the flow literature cited. They do cite the reversible residual net paper that they probably got the idea from. So with these two things, now you can save the giant computation, and you also don't have to store the forward activations. So they say they can now take giant input sizes. You may remember transformers like BERT. So BERT can use something like 512 tokens in its input sequence. That means the sequence that you can look at with BERT at a time is 512 long and not a bit longer. There have been some extensions to that. For example, I believe XLNet has pushed this to something like C times 512, where C is a smallish constant, where you can kind of carry over information between sequences. But this thing here, as you can see, they calculate could take up something like 64,000 tokens, and that would use in total 16 gigabytes of memory, which is available on a high-end GPU. So this is a giant step forward in producing transformers that can actually take large inputs. And here you see the memory and time complexity. You can look at these things yourself, but you can see maybe here that these squares here from the original transformer now vanish, and a lot of these constants are actually smaller. For example, that chunk size is in there instead of the entire sequence length. So that's basically the paper. They show that they can actually input those long sequences. They can apply this to images. You see there's ImageNet pixel by pixel, which is a lot of pixels and would have been absolutely unthinkable with one of the original transformers. And with that I invite you to check out the paper and the blog post, and I'll see you next time. Bye bye.
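Coming back to the 64,000-token figure above, here is a rough back-of-the-envelope calculation of why full attention breaks down at that length. This is my own estimate assuming float32 storage and only looks at a single attention matrix; the 16 gigabytes mentioned above is the total memory of the device, so treat these numbers purely as an illustration of scale.

L = 64_000
full_attention_floats = L * L                        # one full attention matrix
print(full_attention_floats * 4 / 1e9, "GB")         # about 16.4 GB for a single matrix in float32

chunk = 64
chunked_floats = (L // chunk) * chunk * (2 * chunk)  # each chunk attends to itself and one neighbor
print(chunked_floats * 4 / 1e6, "MB")                # about 32.8 MB instead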
[ { "end": 5.84, "start": 0, "text": " Hi there! Today we'll look at Reformer, the efficient transformer by Nikita" }, { "end": 13.72, "start": 5.84, "text": " Kitaev, Lukas Kaiser and Anselm Levskaia. This is a paper that tries to reduce the" }, { "end": 18.6, "start": 13.72, "text": " extreme resource requirements of the transformer model. Now if you haven't" }, { "end": 25.2, "start": 18.6, "text": " seen the transformer model before, that's this thing, I suggest you go watch for" }, { "end": 29.36, "start": 25.2, "text": " example my video on it, Attention is All You Need, it's called, where the" }, { "end": 36.56, "start": 29.36, "text": " transformer is introduced. The most famous transformer is called BERT, B-E-R-T," }, { "end": 43.72, "start": 36.56, "text": " and you can also look that up, I've made a video about this. So what's the issue" }, { "end": 50.480000000000004, "start": 43.72, "text": " here? If you remember transformers, they need a lot of memory. And why? That's" }, { "end": 56.92, "start": 50.480000000000004, "text": " because they compute, in each layer they compute these attention things. Let's" }, { "end": 63.64, "start": 56.92, "text": " recap shortly. In a transformer you propagate information layer by layer. So" }, { "end": 71.48, "start": 63.64, "text": " you have layer here with some signal, and then the next layer that you try to" }, { "end": 80.44, "start": 71.48, "text": " propagate the signal. Now what you do, you assign, you assign key queries to each of" }, { "end": 84.92, "start": 80.44, "text": " the next layer. So each of the next layer has queries, and queries are just" }, { "end": 90.44, "start": 84.92, "text": " vectors. This is a vector, this is a vector, this is a vector, and so on. So" }, { "end": 97.48, "start": 90.44, "text": " basically the next layer has the ability to ask, to ask the last layer what it" }, { "end": 104.2, "start": 97.48, "text": " wants. This is a kind of an intrinsic property of attention, and I, as I said, I" }, { "end": 108.92, "start": 104.2, "text": " explained this in detail in the video, Attention is All You Need. Basically" }, { "end": 115.88, "start": 108.92, "text": " these are what's called queries, Q. And then this layer is exposing what are" }, { "end": 124.28, "start": 115.88, "text": " called keys, and keys again are vectors. So vector, vector, vector, vector, and so on." }, { "end": 130.48, "start": 124.28, "text": " So keys are vectors, and the way that the information is propagated to the next" }, { "end": 138.83999999999997, "start": 130.48, "text": " layer is whenever, whatever, we consider for example this node here, right, this" }, { "end": 144.79999999999998, "start": 138.83999999999997, "text": " node, let's make that yellow. When we consider this node here, it is going to" }, { "end": 152.48, "start": 144.79999999999998, "text": " look in the last layer which, which keys match my key the most. And in this case" }, { "end": 158.2, "start": 152.48, "text": " it will probably be this key and this key, right, they match the key the most." }, { "end": 164.11999999999998, "start": 158.2, "text": " And here we look at the inner product, so the angle between the vectors. And then" }, { "end": 171.56, "start": 164.11999999999998, "text": " information is aggregated by simply having a weighted average of the values." }, { "end": 176.56, "start": 171.56, "text": " So information is coming in here and here. 
Actually information is coming into" }, { "end": 181.48, "start": 176.56, "text": " all the nodes, but since only these keys match, the information will be propagated" }, { "end": 189.79999999999998, "start": 181.48, "text": " like this, to this unit. We could do this for another unit, for example this unit" }, { "end": 195.83999999999997, "start": 189.79999999999998, "text": " right here. What's the value of this unit? Well we have to look at the key here." }, { "end": 201, "start": 195.83999999999997, "text": " Which key is it going to be matched to? It's probably going to be matched to" }, { "end": 208.23999999999998, "start": 201, "text": " this key right here. And probably no other key really. Maybe this key a little" }, { "end": 213.04000000000002, "start": 208.24, "text": " bit. So the information of that node in the next layer will be whatever's" }, { "end": 218.08, "start": 213.04000000000002, "text": " information is coming in here, routed there, and a little bit of this" }, { "end": 223, "start": 218.08, "text": " information. So this is kind of a, it's not a hard, it's called soft attention." }, { "end": 228.48000000000002, "start": 223, "text": " So there's a little bit of information going everywhere, but the majority of the" }, { "end": 232.12, "start": 228.48000000000002, "text": " information is coming from the nodes where the keys match. So these are" }, { "end": 237.60000000000002, "start": 232.12, "text": " queries, these are keys, and technically these things coming in here are called" }, { "end": 243.48, "start": 237.6, "text": " values. But imagine the values simply as the information to be propagated, and the" }, { "end": 248.56, "start": 243.48, "text": " queries and the keys are responsible for routing that information to the next" }, { "end": 254.28, "start": 248.56, "text": " layer. All of these things are learned. So the queries, the keys, and the values." }, { "end": 259.15999999999997, "start": 254.28, "text": " Now what's the problem? The problem is between the queries and the keys. As you" }, { "end": 264.88, "start": 259.15999999999997, "text": " can see, what you have to do is you have to match every single query with every" }, { "end": 270.28, "start": 264.88, "text": " single key in order to find out where information goes. So this becomes order" }, { "end": 278.6, "start": 270.28, "text": " of, if you have D keys and D queries, order of D squared operations that you" }, { "end": 283.96, "start": 278.6, "text": " have to do. And of course D squared values that you have to compute. And" }, { "end": 290.96, "start": 283.96, "text": " since these are all vectors, of course there is D will not only be the number" }, { "end": 294.91999999999996, "start": 290.96, "text": " of keys, but then again this is multiplied, so there is an inner" }, { "end": 303.52, "start": 294.91999999999996, "text": " multiplication with the dimensionality, let's call that capital D, of the... no" }, { "end": 310.35999999999996, "start": 303.52, "text": " sorry that's not an inner multiplication. Let's just remain at this. So D squared" }, { "end": 317, "start": 310.35999999999996, "text": " inner products between vectors of capital D dimensions. So it's not an" }, { "end": 324.8, "start": 317, "text": " easy thing for resources to do. You need a lot of resources to hold this, all of" }, { "end": 331.22, "start": 324.8, "text": " this in memory at the same time and to compute all of these things. 
The reformer" }, { "end": 336.64, "start": 331.22, "text": " aims to solve this problem. So this giant space problem that the" }, { "end": 343.24, "start": 336.64, "text": " transformers have, space, memory, also computational problem to a lesser degree." }, { "end": 350.44, "start": 343.24, "text": " Mostly it's a memory issue. Alright, so what is happening here? And you see" }, { "end": 356.84000000000003, "start": 350.44, "text": " here that this product between two matrices clearly gives you this" }, { "end": 365.08, "start": 356.84000000000003, "text": " kind of squared thing. So what's happening in the reformer to do this?" }, { "end": 371.96000000000004, "start": 365.08, "text": " The trick is, if we go back to this drawing, the trick is to create" }, { "end": 378.35999999999996, "start": 371.96, "text": " what's called a hashing scheme or buckets. In creating buckets what you" }, { "end": 385.4, "start": 378.35999999999996, "text": " want to do is you want to group similar things together. So let's say we create" }, { "end": 395.88, "start": 385.4, "text": " four buckets. Bucket one, bucket two, bucket three, bucket four. And each" }, { "end": 402.56, "start": 395.88, "text": " bucket we label. And bucket one we label with the up direction, this with the right" }, { "end": 408.56, "start": 402.56, "text": " direction, with the down direction, the left direction as vectors. And now we" }, { "end": 415.36, "start": 408.56, "text": " simply put each of the things into the bucket where it belongs most. So let's" }, { "end": 422.76, "start": 415.36, "text": " for example this vector here, it goes here. Sorry, that is like absolutely not" }, { "end": 432.12, "start": 422.76, "text": " the right place. It goes probably here, right? This vector here, probably this one" }, { "end": 437.8, "start": 432.12, "text": " goes here, right? And so on. So you'll end up each of these assigning a bucket. So" }, { "end": 445.4, "start": 437.8, "text": " these all go into that bucket. Let's continue, actually let's also" }, { "end": 453, "start": 445.4, "text": " put the keys in the same buckets. So also the keys, this key here probably goes" }, { "end": 462.64, "start": 453, "text": " to this bucket. This key here probably goes to this bucket. Let's say this key" }, { "end": 468.12, "start": 462.64, "text": " here probably goes to the bucket over here. You already see, so before, right" }, { "end": 476.04, "start": 468.12, "text": " before, we cared about this particular query and this particular key. We just" }, { "end": 480.8, "start": 476.04, "text": " looked and we said those two will probably route information to each other" }, { "end": 486.72, "start": 480.8, "text": " because they're similar. And now you can see they both ended up in the same" }, { "end": 493.84000000000003, "start": 486.72, "text": " bucket. So the idea is to create a scheme where you throw these things into" }, { "end": 499.56, "start": 493.84, "text": " buckets such that if two vectors are similar they will end up in the same" }, { "end": 504.76, "start": 499.56, "text": " bucket with high probability. So you'll only have to really compare things within" }, { "end": 511.96, "start": 504.76, "text": " the same bucket and not across all of these d squared elements. That's the idea" }, { "end": 520.16, "start": 511.96, "text": " and the technique here is called locality sensitive hashing. So locality" }, { "end": 531.56, "start": 520.16, "text": " sensitive hashing. And short this is called LSH. 
The idea is the following, if" }, { "end": 539.92, "start": 531.56, "text": " you have two vectors v1 and v2 and they have and you have a distance measure" }, { "end": 551.64, "start": 539.92, "text": " distance measure d. D is a distance. What you want is if the distance between v1" }, { "end": 564.8399999999999, "start": 551.64, "text": " and v2 is small, I'm getting confused with color, with small then you want them in the" }, { "end": 579.0400000000001, "start": 564.84, "text": " same bucket. And if the distance is large then you want them in a different bucket." }, { "end": 589.88, "start": 579.0400000000001, "text": " Different buckets. You know with high probability. So all of these things" }, { "end": 597.4399999999999, "start": 589.88, "text": " where you say you want them in the same bucket with probability p with" }, { "end": 602.76, "start": 597.4399999999999, "text": " probability p with high probability p and here you want them in different" }, { "end": 606.88, "start": 602.76, "text": " buckets with high probability. Or you want them in the same pocket with low" }, { "end": 612.32, "start": 606.88, "text": " probability. That's an equivalent form of stating. This is all formalized and I" }, { "end": 618.56, "start": 612.32, "text": " can direct you to the Wikipedia page of that. It's pretty good. It gives a concise" }, { "end": 625.1199999999999, "start": 618.56, "text": " definition. Here you can see that and it gives a number of examples. So one" }, { "end": 630.04, "start": 625.1199999999999, "text": " example I'd like to give here for locality sensitive hashing is of course" }, { "end": 636.4799999999999, "start": 630.04, "text": " the scheme of bucketing will all depend on what your distance measure is. If you" }, { "end": 641.3599999999999, "start": 636.4799999999999, "text": " consider the distance measure simply to be the jacquard distance. So let's say we" }, { "end": 656.12, "start": 641.36, "text": " have two vectors 0 1 0 1 and here we have 1 0 1 1 0 1 and here it's 0 0 0 1." }, { "end": 664.16, "start": 656.12, "text": " Alright so maybe you can see the first two vectors here are much more close" }, { "end": 672.9599999999999, "start": 664.16, "text": " together than the last vector. Now in terms of bit differences, one scheme" }, { "end": 680.12, "start": 672.9599999999999, "text": " to do locality sensitive hashing is to simply sub sample bits. So in this case" }, { "end": 686.52, "start": 680.12, "text": " this is a slightly constructed example. We will just sub sample the first two" }, { "end": 691.88, "start": 686.52, "text": " bits and then construct the buckets according to these bit values. So if" }, { "end": 698.24, "start": 691.88, "text": " since we sample two bits we have four buckets. Here is 0 0, here is 0 1," }, { "end": 703.76, "start": 698.24, "text": " here is 1 0 and here is 1 1. That's the concept of locality sensitive hashing." }, { "end": 708.12, "start": 703.76, "text": " You have these buckets and then you can say alright this vector has 1 0," }, { "end": 716.76, "start": 708.12, "text": " goes into this, this goes into this and then that goes into the 0 1 bucket." }, { "end": 722, "start": 716.76, "text": " And you end up with what you have. You have the two close vectors in the same" }, { "end": 726.08, "start": 722, "text": " bucket and the two far apart vectors in that bucket. Of course that doesn't" }, { "end": 730.36, "start": 726.08, "text": " always work. 
You can be unlucky in sub sampling but that's kind of" }, { "end": 735.36, "start": 730.36, "text": " trade-off you'll have to go for. If things that are close together" }, { "end": 740.96, "start": 735.36, "text": " happen with it's a low probability but if they happen to end up in the different" }, { "end": 747.44, "start": 740.96, "text": " buckets then basically you lose the fact that they are close to each other and" }, { "end": 752.6, "start": 747.44, "text": " that's the trade-off. The kind of locality sensitive hashing they use in" }, { "end": 757.84, "start": 752.6, "text": " the reformer now is what are called random projections. So let's say you have" }, { "end": 761.48, "start": 757.84, "text": " a bunch of vectors and that's really what we care about. You have a bunch" }, { "end": 770.76, "start": 761.48, "text": " of vectors and what you want, you want the keys and queries. So you have a" }, { "end": 775.8, "start": 770.76, "text": " bunch of vectors like this and you want to create buckets such that vectors that" }, { "end": 780.64, "start": 775.8, "text": " are close together will end up in the same bucket and vectors that are far" }, { "end": 787.4, "start": 780.64, "text": " apart will end up in the in different buckets. A cool way to do is," }, { "end": 791.72, "start": 787.4, "text": " and this is in the cosine distance so we care about the angle between vectors," }, { "end": 799.48, "start": 791.72, "text": " a cool way to do this is to use random plane projections and the cool" }, { "end": 803.44, "start": 799.48, "text": " thing about it is it works for the cosine distance and you can basically" }, { "end": 810.4, "start": 803.44, "text": " choose how many buckets you create. Let's say we want to create four" }, { "end": 816.16, "start": 810.4, "text": " buckets here again. What we need is two hyper planes and what we'll do is, so" }, { "end": 822.04, "start": 816.16, "text": " here is the origin, we'll simply create two hyper planes through the origin at" }, { "end": 829.44, "start": 822.04, "text": " random. So I'm gonna draw a random hyper plane here like this and then a second" }, { "end": 837.24, "start": 829.44, "text": " random hyper plane like this. So you would agree those are pretty random" }, { "end": 843.12, "start": 837.24, "text": " hyper planes as much as I can be a random generator and then we'll simply" }, { "end": 848.8000000000001, "start": 843.12, "text": " label, so this will label hyper plane one, this will label hyper plane two." }, { "end": 857, "start": 848.8000000000001, "text": " Now we simply assign each vector bits according to the, on which" }, { "end": 862, "start": 857, "text": " side of the hyper plane they lie. So let's call this here the plus side and" }, { "end": 866.88, "start": 862, "text": " this here the minus side or even yeah let's call this the plus and the minus" }, { "end": 872.24, "start": 866.88, "text": " and here also we call this the plus side and this the minus side. So this vector" }, { "end": 880.8, "start": 872.24, "text": " here is, its signs are plus plus right because it's on the plus side of both of" }, { "end": 888.64, "start": 880.8, "text": " hyper planes. 
This vector plus plus, this one plus plus, this one here is called," }, { "end": 894.12, "start": 888.64, "text": " it's on the negative side of plane two but on the positive side of plane one so" }, { "end": 902.12, "start": 894.12, "text": " it's plus minus, this one here minus minus, minus minus, minus minus and these" }, { "end": 907.12, "start": 902.12, "text": " are your buckets. So you would group these vectors together because they have" }, { "end": 911.48, "start": 907.12, "text": " they have the same signs. You would group that vector, you would group these" }, { "end": 918.64, "start": 911.48, "text": " vectors together. The combination of this with attention, since in attention you've" }, { "end": 926.44, "start": 918.64, "text": " seen attention uses a softmax and the softmax is dominated usually by the" }, { "end": 932.44, "start": 926.44, "text": " largest elements and since we compute inner products it means that this softmax" }, { "end": 938.48, "start": 932.44, "text": " thing is dominated by vectors that have small inner products. So basically" }, { "end": 944.6800000000001, "start": 938.48, "text": " you don't have to look at all of these d squared vectors if you can find the" }, { "end": 950.48, "start": 944.6800000000001, "text": " ones that have the closest distance. You can pretty much ignore the others." }, { "end": 957.8800000000001, "start": 950.48, "text": " And LSH allows you to do this. So build buckets of vectors with" }, { "end": 964.68, "start": 957.88, "text": " similar directions. Then you only have to care about these vectors comparing them" }, { "end": 971.32, "start": 964.68, "text": " to each other. So that's not a lot of vectors generally and that's how you" }, { "end": 976.32, "start": 971.32, "text": " save a lot of work. So you will only have to care about these three vectors if" }, { "end": 981.36, "start": 976.32, "text": " your key vector for example is right here. You'll only have to care about these" }, { "end": 988.4, "start": 981.36, "text": " things in the same bucket and you can ignore all of that rest of the space. Of" }, { "end": 992.72, "start": 988.4, "text": " course the more hyperplanes you have the more buckets you'll have, the less" }, { "end": 997.04, "start": 992.72, "text": " vectors you'll have in the same bucket. That's the general idea. I find this" }, { "end": 1001.36, "start": 997.04, "text": " explanation to be a bit easy. You can equivalently explain it by doing these" }, { "end": 1007.84, "start": 1001.36, "text": " kind of random rotations in the space. You can think about how that will end up" }, { "end": 1012.5600000000001, "start": 1007.84, "text": " actually being the exact same thing as what I just explained. I just like that" }, { "end": 1020.48, "start": 1012.5600000000001, "text": " my explanation better I think. Alright so the way they use this, they have an" }, { "end": 1026.88, "start": 1020.48, "text": " illustration right here, is the following. So they have these keys right?" }, { "end": 1031.68, "start": 1026.88, "text": " Sequence of queries and keys. So they do equivalent queries and keys which is a" }, { "end": 1036.48, "start": 1031.68, "text": " thing you can do in transformers. Don't worry too much about it whether they're" }, { "end": 1042.16, "start": 1036.48, "text": " different or not. But then they do this LSH bucketing and here the color of the" }, { "end": 1048.84, "start": 1042.16, "text": " cell is just the bucket, the LSH bucket which will end up. 
Then they sort that" }, { "end": 1055.3600000000001, "start": 1048.84, "text": " right as you can see and now they do an additional thing which is called the" }, { "end": 1061.4, "start": 1055.3600000000001, "text": " chunk. As you can see there are not the same amount of vectors in each bucket" }, { "end": 1068.3200000000002, "start": 1061.4, "text": " and that is sometimes a problem because even though you've reduced the" }, { "end": 1073.4, "start": 1068.3200000000002, "text": " memory, the memory requirements are still dominated by the" }, { "end": 1080.3200000000002, "start": 1073.4, "text": " largest bucket. By whatever bucket has the most number of vectors that will" }, { "end": 1085.48, "start": 1080.3200000000002, "text": " pretty much be your memory requirement. Because now you don't have to, if" }, { "end": 1091.2800000000002, "start": 1085.48, "text": " this is D, you have to compute all the D squared things anymore. But you'll" }, { "end": 1099.84, "start": 1091.28, "text": " only have to compute this quantity, let's call that B. So the maximum" }, { "end": 1105.6399999999999, "start": 1099.84, "text": " bucket size. But that could still be large right? If you look at a" }, { "end": 1110.6, "start": 1105.6399999999999, "text": " distribution it's probably going to be something like this right? Where most" }, { "end": 1116.44, "start": 1110.6, "text": " buckets have a kind of a standard number of vectors but some buckets will have a" }, { "end": 1122.64, "start": 1116.44, "text": " lot of vectors and that's, sorry, some few buckets will have a lot of vectors and" }, { "end": 1126.16, "start": 1122.64, "text": " your memory requirement is still dominated by this. So they do an" }, { "end": 1129.04, "start": 1126.16, "text": " additional thing which is called chunking which means they actually take" }, { "end": 1136.24, "start": 1129.04, "text": " fixed size chunks here, fixed size. Here they always take four and they say all" }, { "end": 1143.8, "start": 1136.24, "text": " right these are our chunks and we will only compute attention within the chunks" }, { "end": 1149, "start": 1143.8, "text": " right? So it could be that there's the same bucket is actually split" }, { "end": 1153.2, "start": 1149, "text": " between chunks and that's why they do an additional thing is that you can attend" }, { "end": 1159.84, "start": 1153.2, "text": " two things in a different chunk right here. You can attend two things" }, { "end": 1165.52, "start": 1159.84, "text": " in your neighboring chunks so you're restricted to either your own chunk or" }, { "end": 1173.48, "start": 1165.52, "text": " your neighboring chunk. Note that there aren't any any arrows going over here." }, { "end": 1180.08, "start": 1173.48, "text": " So you can attend, they have this diagram here, which things you can" }, { "end": 1185.6, "start": 1180.08, "text": " attend to. You can attend to yourself or attend to your neighboring thing but not" }, { "end": 1192.4, "start": 1185.6, "text": " to any other thing or the other way around right? So that's basically the" }, { "end": 1201.32, "start": 1192.4, "text": " the concept of saving memory. Now your memory requirements are, if we call this" }, { "end": 1208.28, "start": 1201.32, "text": " quantity now, we call the other one B, let's call this the chunk size C right?" 
}, { "end": 1213.76, "start": 1208.28, "text": " Your memory requirements are pretty much C squared plus whatever this" }, { "end": 1220.52, "start": 1213.76, "text": " unidirectional, so not this isn't squared, plus probably O of C something" }, { "end": 1230.3999999999999, "start": 1220.52, "text": " like this. So you bring your memory requirements down quite a bit. Now" }, { "end": 1240.0400000000002, "start": 1230.4, "text": " that's the general idea here. The problem they face again is, so they face" }, { "end": 1249.92, "start": 1240.0400000000002, "text": " another problem where they say hold on, I can't find it right here, they say hold on," }, { "end": 1254.72, "start": 1249.92, "text": " we do have actually another problem and that is that these transformers" }, { "end": 1260.64, "start": 1254.72, "text": " have to back propagate. So you'll have to forward propagate these things and now" }, { "end": 1264.48, "start": 1260.64, "text": " we've kind of solved this D square computation issue but what you'll have to" }, { "end": 1270.64, "start": 1264.48, "text": " do is if you go from layer to layer right? Layer, layer, layer, layer. What you" }, { "end": 1274.96, "start": 1270.64, "text": " have to do is if you propagate information forward you still have to" }, { "end": 1280.68, "start": 1274.96, "text": " back propagate and in order to back propagate usually, usually you'll have to" }, { "end": 1287.3600000000001, "start": 1280.68, "text": " remember all of these activations right? So these activations, these activations." }, { "end": 1292.4, "start": 1287.3600000000001, "text": " In order to do back prop it is often the case that you actually have to remember" }, { "end": 1296.96, "start": 1292.4, "text": " the activations because in each forward propagation, in each layer here you might" }, { "end": 1304.5600000000002, "start": 1296.96, "text": " lose some information. Imagine you have a layer that maps these" }, { "end": 1314.12, "start": 1304.56, "text": " two-dimensional vectors both to, so here actually let's make this blue, maps these" }, { "end": 1319.96, "start": 1314.12, "text": " three vectors to the following configuration. So a layer maps these" }, { "end": 1329.32, "start": 1319.96, "text": " vectors to this, this and this. So it maps two things to one thing which" }, { "end": 1335.32, "start": 1329.32, "text": " you know can be if you in a linear layer can decide to map it to a lower" }, { "end": 1340.6799999999998, "start": 1335.32, "text": " dimensional subspace. So you could actually decide to map it to in fact" }, { "end": 1346, "start": 1340.6799999999998, "text": " two points right? This is also a possibility. You could do dimension reduction." }, { "end": 1349.52, "start": 1346, "text": " So because all of this in order to do back prop you actually have to remember" }, { "end": 1357.32, "start": 1349.52, "text": " these things in order to do proper back prop. This is a problem again for the" }, { "end": 1361.4399999999998, "start": 1357.32, "text": " transformer because all these activations even though we've gotten rid" }, { "end": 1366.72, "start": 1361.4399999999998, "text": " of the d-square computation they will have to be remembered and that takes a" }, { "end": 1374.6, "start": 1366.72, "text": " lot of memory. The way to solve this is actually to do invertible layers. 
What" }, { "end": 1378.96, "start": 1374.6, "text": " that means is that if I propagate information forward, forward, forward," }, { "end": 1385.76, "start": 1378.96, "text": " forward, I can figure out what the information here was simply by looking" }, { "end": 1392.56, "start": 1385.76, "text": " at the back prop activations. And this happens if the layer is invertible." }, { "end": 1400.48, "start": 1392.56, "text": " So if this function here is invertible. So if f here technically is invertible." }, { "end": 1408.6, "start": 1400.48, "text": " So I can actually write down the inverse of f and that is defined. This of course" }, { "end": 1419.1999999999998, "start": 1408.6, "text": " is a pretty big restriction and the way they achieve it, I like to go to the blog" }, { "end": 1430.4399999999998, "start": 1419.1999999999998, "text": " here, the way they achieve it is they do what's called an idea from reversible" }, { "end": 1434.4399999999998, "start": 1430.4399999999998, "text": " networks where they always have two sets of activations. That's what you see here." }, { "end": 1441.56, "start": 1434.44, "text": " X1 and X2. And in each layer only one of them is updated in a residual fashion." }, { "end": 1449.52, "start": 1441.56, "text": " You can see here layer 1 updates X2 but X1 remains the same and goes to Y1." }, { "end": 1458.76, "start": 1449.52, "text": " And then in the next layer, layer 2 only updates Y1 in order to" }, { "end": 1466.28, "start": 1458.76, "text": " construct Z1. But Y2 remains the same to be Z2. And then you can revert the layers." }, { "end": 1471.84, "start": 1466.28, "text": " You can basically figure out what the activations were from the back prop" }, { "end": 1479.24, "start": 1471.84, "text": " signal. Now that's extremely good if you want to save memory but of course it" }, { "end": 1483.4, "start": 1479.24, "text": " restricts clearly. You have to be restricted to this kind of architecture" }, { "end": 1490.52, "start": 1483.4, "text": " similar. This idea actually isn't new. This has been used many times in things" }, { "end": 1494.8000000000002, "start": 1490.52, "text": " like normalizing flows and I want to highlight this paper. Actually want to" }, { "end": 1501.16, "start": 1494.8000000000002, "text": " highlight specific... I chose this paper because they have these nice diagrams" }, { "end": 1509.5600000000002, "start": 1501.16, "text": " where they show exactly this. You see they have two sets X1 and X2 that in" }, { "end": 1514.52, "start": 1509.56, "text": " forward propagation they only update one of them. And then in backward in what's" }, { "end": 1520.44, "start": 1514.52, "text": " called inverse propagation they can figure out what those were. And they" }, { "end": 1527.32, "start": 1520.44, "text": " couple these in exactly the same way. Like here this drawing might be even more" }, { "end": 1534.04, "start": 1527.32, "text": " similar where they alternate between updating the two activations. So you can" }, { "end": 1539.76, "start": 1534.04, "text": " think of this as a way to simply make the function that you're representing" }, { "end": 1544.68, "start": 1539.76, "text": " with the neural network invertible. 
That is a giant constraint on your" }, { "end": 1549.24, "start": 1544.68, "text": " architecture but these methods here, these normalizing flow methods, use that" }, { "end": 1554.84, "start": 1549.24, "text": " so they can actually define an invertible layer because they need the" }, { "end": 1562.8799999999999, "start": 1554.84, "text": " Jacobian inverse in order to compute their normalizing flow. So you see that's" }, { "end": 1569.3600000000001, "start": 1562.88, "text": " why they originally did it. And I'm sure that that's not a new idea or" }, { "end": 1576.3600000000001, "start": 1569.3600000000001, "text": " particularly new again. Strangely I haven't found any of the flow" }, { "end": 1585, "start": 1576.3600000000001, "text": " literature cited. They do cite the reversible residual net paper that they" }, { "end": 1592.0800000000002, "start": 1585, "text": " probably got the idea from. So with these two things now you can save the" }, { "end": 1599.84, "start": 1592.08, "text": " giant computation. And you can also not store the forward activations. So" }, { "end": 1612.1599999999999, "start": 1599.84, "text": " they say they can take now giant giant giant input sizes. You may remember" }, { "end": 1622, "start": 1612.1599999999999, "text": " transformers like BERT. So BERT it can use something like 512 tokens." }, { "end": 1628, "start": 1622, "text": " In its input sequence. That means the sequence that you can look at with BERT" }, { "end": 1634.72, "start": 1628, "text": " at a time is 512 long and not a bit longer. There have been some" }, { "end": 1644.12, "start": 1634.72, "text": " extensions to that. For example I believe in XL net. So XL net has pushed this to" }, { "end": 1655.1599999999999, "start": 1644.12, "text": " something like C times 512 where C is a smallish constant. That where you" }, { "end": 1659.6399999999999, "start": 1655.1599999999999, "text": " can kind of carry over information between sequences. But this thing here" }, { "end": 1668.04, "start": 1659.6399999999999, "text": " as you can see they calculate it could take up something like 64,000 tokens and" }, { "end": 1675.32, "start": 1668.04, "text": " that would use in total 16 gigabytes of memory. Which is available on a high-end" }, { "end": 1687, "start": 1675.32, "text": " GPU. So this is a giant this is a giant step forward in in producing" }, { "end": 1693.12, "start": 1687, "text": " transformers that can actually take large models. And here you see the memory" }, { "end": 1698.9599999999998, "start": 1693.12, "text": " and time complexity. You can look at these things yourself but you can see" }, { "end": 1704.4399999999998, "start": 1698.9599999999998, "text": " maybe here that these squares here from the original transformer they now" }, { "end": 1710.3999999999999, "start": 1704.4399999999998, "text": " vanish from this. And all of these constants are a lot of these constants" }, { "end": 1715.12, "start": 1710.3999999999999, "text": " are actually smaller. For example that chunk size is in there instead of kind" }, { "end": 1724.3999999999999, "start": 1715.12, "text": " of the entire sequence length. So that's basically the the paper. They show that" }, { "end": 1729.76, "start": 1724.3999999999999, "text": " I can actually input those long sequences. They can apply this to images." 
}, { "end": 1735.8, "start": 1729.76, "text": " You see there's image net pixel by pixel which is a lot of pixels and would have" }, { "end": 1742.6799999999998, "start": 1735.8, "text": " been absolutely unthinkable with one of the original transformers. And with that" }, { "end": 1749.04, "start": 1742.68, "text": " I invite you to check out the paper and the blog post and I'll see you next time." }, { "end": 1775.84, "start": 1749.04, "text": " Bye bye." } ]
EbFosdOi5SY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Go-Explore: a New Approach for Hard-Exploration Problems
[ "Science & Technology" ]
[ "machine learning", "ml", "reinforcement learning", "rl", "ai", "artificial intelligence", "uber", "exploration", "hard exploration", "research", "novelty", "graph", "robustify", "explore", "montezuma", "montezuma's revenge", "pitfall", "atari" ]
This algorithm solves the hardest games in the Atari suite and makes it look so easy! This modern version of Dijkstra's shortest path algorithm is outperforming everything else by orders of magnitude, and all based on random exploration. https://arxiv.org/abs/1901.10995 https://eng.uber.com/go-explore/ https://github.com/uber-research/go-explore Abstract: A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember previously visited states, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through any available means (including by introducing determinism), then robustify via imitation learning. The combined effect of these principles is a dramatic performance improvement on hard-exploration problems. On Montezuma's Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the previous state of the art. Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record, meeting even the strictest definition of "superhuman" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean score of almost 60k points exceeds expert human performance. Because Go-Explore produces high-performing demonstrations automatically and cheaply, it also outperforms imitation learning work where humans provide solution demonstrations. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in many domains, especially those that harness a simulator during training (e.g. robotics). Authors: Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, Jeff Clune
Hi there, what you're seeing here is the game Montezuma's Revenge and it has been a problem for a long time for reinforcement learning algorithms. What you can see is this little person that has to kind of jump around, collect keys, collect these coins, kind of get over enemies and so on, and all of this is super hard because the reward is so sparse, so sometimes you have to do hundreds of actions until you get the next improvement in score. You can see on the top how your score is increasing and it seems like this algorithm is pretty efficient on this, but keep in mind this algorithm has to learn from just the pixel input. It has to learn every single move of the agent. So if you see here for example jumping over the enemies, stopping when these blue bars come and going down the ladders without hitting the spider, this is a really really hard problem. So far reinforcement learning algorithms have had a very hard time doing this until this algorithm showed up. Go-Explore, which was the first one that actually surpassed, I believe, human experts or widely surpassed human experts at this game, in fact the first reinforcement learning algorithm that without human demonstration could do anything at all at this game. So let's dive in and see how this algorithm does what it does. And the paper to this is called Go-Explore: a New Approach for Hard-Exploration Problems, by Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley and Jeff Clune from Uber AI Labs. So they break down the problem into what they call two problems. So these hard exploration problems, they say they suffer from two things, detachment and derailment. You can see here detachment and derailment. So they explain those in detail. Detachment and derailment are related to each other. Detachment is when an exploration algorithm that has some sort of intrinsic motivation, right? This is how you usually do these hard exploration problems. You give intrinsic motivation to the agent to explore new things, like in absence of a reward, if there's no reward around, it should just reach some kind of new state. And you give the algorithm points for reaching states that it has never seen before. But this can come to this sort of detachment problem. They illustrate this here. So let's say your algorithm starts actually here in the middle, right? And everything that's green here is intrinsic reward. So you collect the green stuff that gives you points, right? So the goal might actually be in here or in here. But you have to teach the algorithm to go all this way around. And you do that by simply motivating it to go to new states by giving it a reward for every state it hasn't been to. So it starts exploring, goes here, and maybe the first episode reaches here right before it is reset, usually reset after, well, like it bounces kind of around, it's like, ah, there's new stuff. And then it goes here and it will explore kind of it. And it will be motivated to explore because there's always this green stuff here. So after a while here, whatever is purple has been explored, right? Recently. So with purple, they mark what has been recently explored. All of this has been recently explored, right? So it is gone until here. But usually you also have like a component that isn't purely seeking this green stuff, but is also doing some kind of random exploration. And so what happens, what can happen in these algorithms is that if you at one of these times you start the episode here, by chance, it actually goes into the other direction. All right.
And then it's like, wow, there's all this green stuff over here, right? And then it's like, woo, so much green stuff. Right. And then what usually happens is it kind of forgets that there's green stuff over here. So it explores all of this stuff around here. It explores, explores, explores, but there's no more stuff. And then it's stuck, right? It's stuck here. And it says, where, where am I going to go? Like I know over here, there's no more green stuff. And over here, there doesn't appear to be any green stuff because it's forgotten about this. So this, they claim these intrinsic motivation algorithms, what they can lead to is you can detach from your frontier of new knowledge, right? Like they can forget that there is, that here at one point they were here and the algorithm, what the algorithm did, it was it explored here until here, and then it explored over here. So it thinks that this thing over here is its most recent frontier of knowledge, right? This is, this is my state here. This is where I go explore from, but there is nowhere to explore from, right? What it should remember is that here it actually kind of jumped over by random chance. I hope this makes sense. This is called detachment of intrinsic motivation algorithms. And it happens when you, when you kind of give these points according to simply reaching new states. And then another thing is what they call derailment. And derailment is a bit of a more subtle problem. So in derailment, what happens is maybe you, maybe you've actually, let's say this same situation. You've discovered a promising state, right, by some miracle. Here is the goal, right? You've reached the goal. You've done this by exploration. You've explored a bunch and you've reached the goal. Now the problem is, can you do it again? Right? Especially if the environment is a bit stochastic, right? If there is noise, if the environment isn't always the same, can you actually learn how to do this robustly, like such that you can repeat your success? And in derailment is the problem that often these algorithms, while they find promising things, they kind of struggle to robustly reach those promising states. Go Explorer solves these problems in two separate phases, one for each, basically. So what it does is in a phase one, it explores, right? Explore and this is a crucial part, until solved. So this is an explorer, a method that explores until the problem is solved with the focus on explore, right? And then in stage two, robustify. And by robustify means that if stage one has resulted in trajectories that have solved the game or the environment, then phase two is simply tasked with robustly finding those. So let's look at phase one. Phase one is kind of like, think of Dijkstra's algorithm. So in Dijkstra's algorithm, this is a shortest path algorithm in graphs. So in Dijkstra's algorithm, you have a graph and you want to reach this from the start, let's call this the start. And this is the end or the goal. And the graph is connected with edges. And these edges have usually sometimes they have weights. We can simply, the goal is how to go the shortest path from start to the end. And what Dijkstra's algorithm does, it starts exploring. So it's like it goes here. All right, and then it says, ah, this is a new state. I reached the state in one step. All right, explore some more. I reached this state in two steps. And then it's like, I reached a state in three steps. Okay, but I can also go here, I reached this state in one step, in two steps. I've already been here. 
Okay. But then it can, it can say, okay, from here, I reached this state into this is a bad example. Let's say we actually have to make a shortest path. This is the graph, right? So it reaches this state in two steps, but then it explores this thing. It's like, ah, wait a minute, I've seen this state. But before I've reached it in two steps. Now I'm reaching it in one step. This is better. So this path here is better than this path here. And then it goes on from here. It goes on it says, okay, I'm reaching this goal in two steps. I've reached it in three steps before. So clearly, this bottom path here is better than what I've done before this top or this path. So this is this is what Go Explorer does. In a nutshell, what it does is has an archive of states, right? An archive of states that it has visited previously. And the crucial thing here is, and this is kind of necessary to their algorithm, that this is completely deterministic. So what they actually do is they will save the state of the game emulator, right? They are here, right? And they do some exploration, jumping some until their person is here, their game is in some state, and they will save the emulator to a buffer. This is kind of crucial, such that at a later point, they can select this, this exactly this state that they were in, and from here, run a bunch of explorations again, right? So if they say select state from archive, and then go to that state, this is simply restoring the emulator state. But you could also what you could also do if if this is a purely deterministic environment, you could simply save the sequence of actions that you've done to come here, and simply buy so maybe you gone right, right, and here you jump, and you go right, you can simply replay those to get to the exact same state, they discuss that this can be expanded to also handle a kind of stochastic environments. But in their case, at the phase one, the environment is completely deterministic. So they can do this, they can go, sorry, they can go to a state deterministically. So they'll select a state from an archive, they have an algorithm for selecting kind of promising states. They go to that state, and then they explore from that state and they simply do this random. So this is random. And then they update the archive. So what do they do? Right? So we saw so here, maybe a new graph, so they go to a state, this is their state, and then they explore. Now there, there are multiple things that can happen. One they can encounter a new state, right? New state never seen before. All right, what they do is they save it to the buffer. They say, okay, this new state, let's call it n, this new state, I've reached it in. And here we have done s steps, I've reached an s plus one step. And whatever here is the emulator state that we had before, right? So I can at any point, I can go back. If, however, the state has already been seen, let's call this m, they retrieve m, m prime from the buffer because they've already seen it, it's in the buffer, right? They compare, hey, these steps, so is s prime, is this smaller or larger than s plus one? So basically, I've seen this state before, but using this path, can I reach it in fewer steps than I've reached it before? If yes, then I'm going to replace this, replace this s by s plus one, and then save it again in the buffer. All right, so I can, I now have a better path to reach this state than before. 
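In code, the archive bookkeeping just described is essentially a dictionary from a state key to the best (shortest) way found so far of reaching that state, together with a saved emulator snapshot so the algorithm can jump back there deterministically. The sketch below only illustrates that loop and is not the paper's implementation: cell_fn is a placeholder for the state representation discussed next (the downsampled image), the environment is assumed to follow the classic Gym interface and to expose save_state/restore_state for deterministic resets, and real Go-Explore selects cells with a weighted heuristic rather than uniformly at random.

import random
from dataclasses import dataclass

@dataclass
class CellRecord:
    steps: int              # fewest steps found so far to reach this cell
    emulator_state: object  # saved simulator snapshot, to jump back here later
    trajectory: list        # actions that led here

def go_explore_phase1(env, cell_fn, n_iterations=1000, explore_len=100):
    obs = env.reset()
    archive = {cell_fn(obs): CellRecord(0, env.save_state(), [])}
    for _ in range(n_iterations):
        # 1) select a state from the archive (uniform here; weighted in the paper)
        record = random.choice(list(archive.values()))
        # 2) go to that state by restoring the emulator snapshot
        env.restore_state(record.emulator_state)
        steps, trajectory = record.steps, list(record.trajectory)
        # 3) explore from it with random actions
        for _ in range(explore_len):
            action = env.action_space.sample()
            obs, reward, done, _ = env.step(action)
            steps += 1
            trajectory.append(action)
            key = cell_fn(obs)
            # 4) keep the cell if it is new, or if we reached it in fewer steps
            if key not in archive or steps < archive[key].steps:
                archive[key] = CellRecord(steps, env.save_state(), list(trajectory))
            if done:
                break
    return archive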
So it's almost exactly like Dijkstra's algorithm in that you simply explore and every new state you find you've either already seen, so you just simply have a new way of getting to that state. If you haven't seen it, you simply remember it, and then you do it all again. So you can imagine with time, these number of states in this buffer will explode. And it's not feasible for Montezuma's revenge. Like imagine this game, right? You have to, you have to go everywhere and explore everything, right? This, I mean, every single action here could be a state. That's why, let me pause this. That's why what they do is they, they have to come up with a notion of state that is, doesn't simply include every single game state there is. And what they do is, this is sampled here, they down sample the image. And then this, sorry, I've tried drawing over a blog post, they down sample the image, and then they simply say, all right, so this, this thing would become this thing. And they simply say, okay, if two of these images have the same representation, so grayscale, down sampled, quantized, then they are the same state. And that's kind of the crux of the algorithm I find. So if two things have the same state, then the algorithm is prone to kind of confusing them for each other. It thinks one is the other, not exactly, but it does kind of assume that they are close actually here. But there is a crucial difference between the two. The algorithm will have a very hard time in some situations. I don't want to, like, you can think of, it needs to be kind of convoluted situations, but it can be the kind of crux of the algorithm very much if the state representation isn't done well. And they actually have two methods. One simply relies on this down sampling and the other one, they provide domain knowledge, which means kind of which level you're in, where the player is, and so on. But this is, this is pretty cool. So if you are able, so if, if your reinforcement learning problem, first of all, is deterministic. At least in a simulator. And second, allows for good state representations, kind of for, for low dimensional state representations. If those two things are given, you can use GoExplore. And as I said, this, this representation here is key. So now you know how they do it. They simply explore these states. And if they come on a new state, and every state is, is, is, so we don't mean this here, we actually mean this representation of it, they store it and they remember how to get to it. And simply by exploring like this and having a smart algorithm that picks which state to explore from, which of course is also a lot of domain knowledge, they are able to solve the game, right? So you see, goes way past human expert, and they're, they're able to, to actually perform really well simply by exploring. This is the exploration phase. This is simply random exploration from promising states. And then in the second part, in the second phase, they now robustify it. So now they introduce noise into their environment, right? Because usually environments have noise or some sort of stochasticity, and they run imitation learning on the best trajectories they found. And what that does is, what they do is they have a trajectory, let's say, let's say this is a trajectory, right? These are actions you need to reach this goal state. 
This imitation learning algorithm, what they do is they take a few steps back, say here, and they just use imitation learning, which is basically a form of reinforcement learning to reach the goal state from here, simply reach the goal state, right? Once in under noise, right? So you can't just take the exact same actions. Once this has been learned, back up a few more steps, maybe here, and then try to reach the goal state. Now you've already learned how to do this part. So this this bigger part should become should be easier than simply starting from here. And you do that until you've kind of backed up your entire trajectory. This is a well known method from imitation learning. But usually you have usually this red thing is a human demonstration. But now this red trajectory has been found by go explore. It turns out if you have a bunch of these trajectories from go explore, you can do a pretty good job at that. All right, that's basically all that I wanted to say about go explore. It's basically Dijkstra's algorithm. It works under very specific circumstances, but I think it's super promising. And it's kind of a new way of thinking about it. So the video I've shown is actually go explore solving Montezuma's revenge getting like a new high score. And you can see how like skilled this this algorithm becomes. All right, with that, I say goodbye and hope to see you next time.
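The robustification phase described above, take a successful trajectory from phase one, start training near its end, and move the starting point further back once the policy reliably reaches the goal under noise, has a very simple outer loop. The sketch below only captures that curriculum structure: train_to_reach_goal is a stand-in for whatever RL or imitation-learning routine is actually used, and the demonstration is assumed to come with emulator snapshots so an episode can be started mid-trajectory.

def robustify(env, demonstration, train_to_reach_goal, backup_step=5):
    """demonstration: list of (emulator_snapshot, action) pairs from phase one.
    Trains a policy to finish the episode from starting points that are moved
    backwards along the demonstration, now with stochasticity enabled."""
    policy = None
    start = max(len(demonstration) - backup_step, 0)
    while True:
        start_snapshot = demonstration[start][0]
        # train (or fine-tune) the policy until it reliably reaches the goal
        # when the episode is started from this point
        policy = train_to_reach_goal(env, start_snapshot, policy)
        if start == 0:
            break
        # back up a few more steps along the demonstration and repeat
        start = max(start - backup_step, 0)
    return policy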
[ { "end": 7.8, "start": 0, "text": " Hi there, what you're seeing here is the game Montezuma's Revenge and it has been a problem" }, { "end": 11.120000000000001, "start": 7.8, "text": " for a long time for reinforcement learning algorithms." }, { "end": 17.64, "start": 11.120000000000001, "text": " What you can see is this little person that has to kind of jump around, collect keys," }, { "end": 25.48, "start": 17.64, "text": " collect these coins, kind of get over enemies and so on, and all of this is super hard because" }, { "end": 31.16, "start": 25.48, "text": " the reward is so sparse, so sometimes you have to do hundreds of actions until you get" }, { "end": 33.88, "start": 31.16, "text": " the next improvement in score." }, { "end": 38.68, "start": 33.88, "text": " You can see on the top how your score is increasing and it seems like this algorithm is pretty" }, { "end": 45.64, "start": 38.68, "text": " efficient on this, but keep in mind this algorithm has to learn from just the pixel input." }, { "end": 49.84, "start": 45.64, "text": " It has to learn every single move of the agent." }, { "end": 56.480000000000004, "start": 49.84, "text": " So if you see here for example jumping over the enemies, stopping when these blue bars" }, { "end": 62.760000000000005, "start": 56.480000000000004, "text": " come and going down the ladders without hitting the spider, this is a really really hard problem." }, { "end": 69.48, "start": 62.760000000000005, "text": " So far reinforcement learning algorithms have had a very hard time doing this until this" }, { "end": 71.04, "start": 69.48, "text": " algorithm showed up." }, { "end": 79.68, "start": 71.04, "text": " GoExplore, which was the first one that actually surpassed I believe human experts or widely" }, { "end": 86.48, "start": 79.68, "text": " surpassed human experts at this game, in fact the first reinforcement learning algorithm" }, { "end": 92.12, "start": 86.48, "text": " that without human demonstration could do anything at all at this game." }, { "end": 97.16000000000001, "start": 92.12, "text": " So let's dive in and see how this algorithm does what it does." }, { "end": 103.04, "start": 97.16000000000001, "text": " And the paper to this is called GoExplore, a new approach for hard exploration problems" }, { "end": 112.04, "start": 103.04, "text": " by Adria Ecofe, Joost Huizinga, Joel Lehmann, Kenneth O. Stanley and Jeff Klun from Uber" }, { "end": 114.28, "start": 112.04, "text": " AI Labs." }, { "end": 121.38000000000001, "start": 114.28, "text": " So they break down the problem into what they call two problems." }, { "end": 126.52000000000001, "start": 121.38000000000001, "text": " So these hard exploration problems, they say they suffer from two things, detachment and" }, { "end": 127.86000000000001, "start": 126.52000000000001, "text": " derailment." }, { "end": 132.96, "start": 127.86000000000001, "text": " You can see here detachment and derailment." }, { "end": 137.72, "start": 132.96, "text": " So they explain those in detail." }, { "end": 143.12, "start": 137.72, "text": " Detachment and derailment are related to each other." }, { "end": 150.12, "start": 143.12, "text": " Detachment is when an exploration algorithm that has some sort of intrinsic motivation," }, { "end": 151.12, "start": 150.12, "text": " right?" }, { "end": 153.76000000000002, "start": 151.12, "text": " This is how you usually do these hard exploration problems." 
}, { "end": 160.32, "start": 153.76000000000002, "text": " You give intrinsic motivation to the agent to explore new things, like in absence of" }, { "end": 165.54, "start": 160.32, "text": " a reward, if there's no reward around, it should just reach some kind of new state." }, { "end": 171.92, "start": 165.54, "text": " And you give the algorithm points for reaching states that it has never seen before." }, { "end": 177.51999999999998, "start": 171.92, "text": " But this can come to this sort of detachment problem." }, { "end": 179.16, "start": 177.51999999999998, "text": " They illustrate this here." }, { "end": 184.6, "start": 179.16, "text": " So let's say your algorithm starts actually here in the middle, right?" }, { "end": 190.4, "start": 184.6, "text": " And everything that's green here is intrinsic reward." }, { "end": 193.6, "start": 190.4, "text": " So you collect the green stuff that gives you points, right?" }, { "end": 197.76, "start": 193.6, "text": " So the goal might actually be in here or in here." }, { "end": 201.32, "start": 197.76, "text": " But you have to teach the algorithm to go all this way around." }, { "end": 207.68, "start": 201.32, "text": " And you do that by simply motivating it to go to new states by giving it a reward for" }, { "end": 209.32, "start": 207.68, "text": " every state it hasn't been." }, { "end": 214.44, "start": 209.32, "text": " So it starts exploring, goes here, and maybe the first episode reaches here right before" }, { "end": 219.32, "start": 214.44, "text": " it is reset, usually reset after, well, like it bounces kind of around, it's like, ah," }, { "end": 220.32, "start": 219.32, "text": " there's new stuff." }, { "end": 224.28, "start": 220.32, "text": " And then it goes here and it will explore kind of it." }, { "end": 229.56, "start": 224.28, "text": " And it will be motivated to explore because there's always this green stuff here." }, { "end": 234.6, "start": 229.56, "text": " So after a while here, whatever is purple has been explored, right?" }, { "end": 235.6, "start": 234.6, "text": " Recently." }, { "end": 237.68, "start": 235.6, "text": " So with purple, they mark what has been recently explored." }, { "end": 240, "start": 237.68, "text": " All of this has been recently explored, right?" }, { "end": 242, "start": 240, "text": " So it is gone until here." }, { "end": 246.72, "start": 242, "text": " But usually you also have like a component that isn't purely seeking this green stuff," }, { "end": 249.86, "start": 246.72, "text": " but is also doing some kind of random exploration." }, { "end": 254.44, "start": 249.86, "text": " And so what happens, what can happen in these algorithms is that if you at one of these" }, { "end": 260.2, "start": 254.44, "text": " times you start the episode here, by chance, it actually goes into the other direction." }, { "end": 261.2, "start": 260.2, "text": " All right." }, { "end": 265.08, "start": 261.2, "text": " And then it's like, wow, there's all this green stuff over here, right?" }, { "end": 268.16, "start": 265.08, "text": " And then it's like, woo, so much green stuff." }, { "end": 269.16, "start": 268.16, "text": " Right." }, { "end": 275.8, "start": 269.16, "text": " And then what usually happens is it kind of forgets that there's green stuff over here." }, { "end": 278.96000000000004, "start": 275.8, "text": " So it explores all of this stuff around here." }, { "end": 283.12, "start": 278.96000000000004, "text": " It explores, explores, explores, but there's no more stuff." 
}, { "end": 285.32000000000005, "start": 283.12, "text": " And then it's stuck, right?" }, { "end": 287.64000000000004, "start": 285.32000000000005, "text": " It's stuck here." }, { "end": 290.20000000000005, "start": 287.64000000000004, "text": " And it says, where, where am I going to go?" }, { "end": 294.74, "start": 290.20000000000005, "text": " Like I know over here, there's no more green stuff." }, { "end": 299.24, "start": 294.74, "text": " And over here, there doesn't appear to be any green stuff because it's forgotten about" }, { "end": 300.24, "start": 299.24, "text": " this." }, { "end": 304.56, "start": 300.24, "text": " So this, they claim these intrinsic motivation algorithms, what they can lead to is you can" }, { "end": 308.6, "start": 304.56, "text": " detach from your frontier of new knowledge, right?" }, { "end": 316.76, "start": 308.6, "text": " Like they can forget that there is, that here at one point they were here and the algorithm," }, { "end": 321.88, "start": 316.76, "text": " what the algorithm did, it was it explored here until here, and then it explored over" }, { "end": 322.88, "start": 321.88, "text": " here." }, { "end": 331, "start": 322.88, "text": " So it thinks that this thing over here is its most recent frontier of knowledge, right?" }, { "end": 332.76, "start": 331, "text": " This is, this is my state here." }, { "end": 336.48, "start": 332.76, "text": " This is where I go explore from, but there is nowhere to explore from, right?" }, { "end": 342.2, "start": 336.48, "text": " What it should remember is that here it actually kind of jumped over by random chance." }, { "end": 343.88, "start": 342.2, "text": " I hope this makes sense." }, { "end": 348.8, "start": 343.88, "text": " This is called detachment of intrinsic motivation algorithms." }, { "end": 355.16, "start": 348.8, "text": " And it happens when you, when you kind of give these points according to simply reaching" }, { "end": 357.54, "start": 355.16, "text": " new states." }, { "end": 361.72, "start": 357.54, "text": " And then another thing is what they call derailment." }, { "end": 364.96000000000004, "start": 361.72, "text": " And derailment is a bit of a more subtle problem." }, { "end": 372.96000000000004, "start": 364.96000000000004, "text": " So in derailment, what happens is maybe you, maybe you've actually, let's say this same" }, { "end": 374.1, "start": 372.96000000000004, "text": " situation." }, { "end": 379.84000000000003, "start": 374.1, "text": " You've discovered a promising state, right, by some miracle." }, { "end": 381.92, "start": 379.84000000000003, "text": " Here is the goal, right?" }, { "end": 383.8, "start": 381.92, "text": " You've reached the goal." }, { "end": 386.20000000000005, "start": 383.8, "text": " You've done this by exploration." }, { "end": 389.24, "start": 386.20000000000005, "text": " You've explored a bunch and you've reached the goal." }, { "end": 392.32000000000005, "start": 389.24, "text": " Now the problem is, can you do it again?" }, { "end": 393.32000000000005, "start": 392.32000000000005, "text": " Right?" }, { "end": 396.08000000000004, "start": 393.32000000000005, "text": " Especially if the environment is a bit stochastic, right?" }, { "end": 402.42, "start": 396.08000000000004, "text": " If there is noise, if the environment isn't always the same, can you actually learn how" }, { "end": 407.48, "start": 402.42, "text": " to do this robustly, like such that you can repeat your success?" 
}, { "end": 414.04, "start": 407.48, "text": " And in derailment is the problem that often these algorithms, while they find promising" }, { "end": 420.52000000000004, "start": 414.04, "text": " things, they kind of struggle to robustly reach those promising states." }, { "end": 427.12, "start": 420.52000000000004, "text": " Go Explorer solves these problems in two separate phases, one for each, basically." }, { "end": 434.72, "start": 427.12, "text": " So what it does is in a phase one, it explores, right?" }, { "end": 437.68, "start": 434.72, "text": " Explore and this is a crucial part, until solved." }, { "end": 444.34000000000003, "start": 437.68, "text": " So this is an explorer, a method that explores until the problem is solved with the focus" }, { "end": 448.14, "start": 444.34000000000003, "text": " on explore, right?" }, { "end": 452.88, "start": 448.14, "text": " And then in stage two, robustify." }, { "end": 459.24, "start": 452.88, "text": " And by robustify means that if stage one has resulted in trajectories that have solved" }, { "end": 467.26, "start": 459.24, "text": " the game or the environment, then phase two is simply tasked with robustly finding those." }, { "end": 470.54, "start": 467.26, "text": " So let's look at phase one." }, { "end": 475.94, "start": 470.54, "text": " Phase one is kind of like, think of Dijkstra's algorithm." }, { "end": 480.86, "start": 475.94, "text": " So in Dijkstra's algorithm, this is a shortest path algorithm in graphs." }, { "end": 488.6, "start": 480.86, "text": " So in Dijkstra's algorithm, you have a graph and you want to reach this from the start," }, { "end": 490.32, "start": 488.6, "text": " let's call this the start." }, { "end": 493.72, "start": 490.32, "text": " And this is the end or the goal." }, { "end": 497.66, "start": 493.72, "text": " And the graph is connected with edges." }, { "end": 500.88, "start": 497.66, "text": " And these edges have usually sometimes they have weights." }, { "end": 507.12, "start": 500.88, "text": " We can simply, the goal is how to go the shortest path from start to the end." }, { "end": 510.68, "start": 507.12, "text": " And what Dijkstra's algorithm does, it starts exploring." }, { "end": 511.88, "start": 510.68, "text": " So it's like it goes here." }, { "end": 514.52, "start": 511.88, "text": " All right, and then it says, ah, this is a new state." }, { "end": 516.44, "start": 514.52, "text": " I reached the state in one step." }, { "end": 518.2, "start": 516.44, "text": " All right, explore some more." }, { "end": 520.04, "start": 518.2, "text": " I reached this state in two steps." }, { "end": 523, "start": 520.04, "text": " And then it's like, I reached a state in three steps." }, { "end": 528.04, "start": 523, "text": " Okay, but I can also go here, I reached this state in one step, in two steps." }, { "end": 529.5600000000001, "start": 528.04, "text": " I've already been here." }, { "end": 530.5600000000001, "start": 529.5600000000001, "text": " Okay." }, { "end": 537.4, "start": 530.5600000000001, "text": " But then it can, it can say, okay, from here, I reached this state into this is a bad example." }, { "end": 540.44, "start": 537.4, "text": " Let's say we actually have to make a shortest path." }, { "end": 541.6400000000001, "start": 540.44, "text": " This is the graph, right?" }, { "end": 544.6, "start": 541.6400000000001, "text": " So it reaches this state in two steps, but then it explores this thing." 
}, { "end": 547.5, "start": 544.6, "text": " It's like, ah, wait a minute, I've seen this state." }, { "end": 550.08, "start": 547.5, "text": " But before I've reached it in two steps." }, { "end": 552.0600000000001, "start": 550.08, "text": " Now I'm reaching it in one step." }, { "end": 553.0600000000001, "start": 552.0600000000001, "text": " This is better." }, { "end": 557.6400000000001, "start": 553.0600000000001, "text": " So this path here is better than this path here." }, { "end": 561.2, "start": 557.6400000000001, "text": " And then it goes on from here." }, { "end": 566.2600000000001, "start": 561.2, "text": " It goes on it says, okay, I'm reaching this goal in two steps." }, { "end": 567.9000000000001, "start": 566.2600000000001, "text": " I've reached it in three steps before." }, { "end": 574.28, "start": 567.9, "text": " So clearly, this bottom path here is better than what I've done before this top or this" }, { "end": 575.28, "start": 574.28, "text": " path." }, { "end": 577.8, "start": 575.28, "text": " So this is this is what Go Explorer does." }, { "end": 583.48, "start": 577.8, "text": " In a nutshell, what it does is has an archive of states, right?" }, { "end": 586.68, "start": 583.48, "text": " An archive of states that it has visited previously." }, { "end": 591.84, "start": 586.68, "text": " And the crucial thing here is, and this is kind of necessary to their algorithm, that" }, { "end": 593.64, "start": 591.84, "text": " this is completely deterministic." }, { "end": 601.04, "start": 593.64, "text": " So what they actually do is they will save the state of the game emulator, right?" }, { "end": 602.64, "start": 601.04, "text": " They are here, right?" }, { "end": 609.9, "start": 602.64, "text": " And they do some exploration, jumping some until their person is here, their game is" }, { "end": 617.4, "start": 609.9, "text": " in some state, and they will save the emulator to a buffer." }, { "end": 623.9599999999999, "start": 617.4, "text": " This is kind of crucial, such that at a later point, they can select this, this exactly" }, { "end": 630.8199999999999, "start": 623.9599999999999, "text": " this state that they were in, and from here, run a bunch of explorations again, right?" }, { "end": 636.18, "start": 630.8199999999999, "text": " So if they say select state from archive, and then go to that state, this is simply" }, { "end": 638.16, "start": 636.18, "text": " restoring the emulator state." }, { "end": 643.16, "start": 638.16, "text": " But you could also what you could also do if if this is a purely deterministic environment," }, { "end": 649.04, "start": 643.16, "text": " you could simply save the sequence of actions that you've done to come here, and simply" }, { "end": 655.6, "start": 649.04, "text": " buy so maybe you gone right, right, and here you jump, and you go right, you can simply" }, { "end": 661.88, "start": 655.6, "text": " replay those to get to the exact same state, they discuss that this can be expanded to" }, { "end": 664.4399999999999, "start": 661.88, "text": " also handle a kind of stochastic environments." }, { "end": 670.12, "start": 664.4399999999999, "text": " But in their case, at the phase one, the environment is completely deterministic." }, { "end": 676.82, "start": 670.12, "text": " So they can do this, they can go, sorry, they can go to a state deterministically." 
}, { "end": 680.8, "start": 676.82, "text": " So they'll select a state from an archive, they have an algorithm for selecting kind" }, { "end": 683.2, "start": 680.8, "text": " of promising states." }, { "end": 688.26, "start": 683.2, "text": " They go to that state, and then they explore from that state and they simply do this random." }, { "end": 692.5600000000001, "start": 688.26, "text": " So this is random." }, { "end": 694.08, "start": 692.5600000000001, "text": " And then they update the archive." }, { "end": 695.6, "start": 694.08, "text": " So what do they do?" }, { "end": 696.6, "start": 695.6, "text": " Right?" }, { "end": 704.16, "start": 696.6, "text": " So we saw so here, maybe a new graph, so they go to a state, this is their state, and then" }, { "end": 706.48, "start": 704.16, "text": " they explore." }, { "end": 710.9200000000001, "start": 706.48, "text": " Now there, there are multiple things that can happen." }, { "end": 713.44, "start": 710.9200000000001, "text": " One they can encounter a new state, right?" }, { "end": 714.96, "start": 713.44, "text": " New state never seen before." }, { "end": 718.36, "start": 714.96, "text": " All right, what they do is they save it to the buffer." }, { "end": 724.86, "start": 718.36, "text": " They say, okay, this new state, let's call it n, this new state, I've reached it in." }, { "end": 729.5600000000001, "start": 724.86, "text": " And here we have done s steps, I've reached an s plus one step." }, { "end": 734.12, "start": 729.5600000000001, "text": " And whatever here is the emulator state that we had before, right?" }, { "end": 736.48, "start": 734.12, "text": " So I can at any point, I can go back." }, { "end": 745.98, "start": 736.48, "text": " If, however, the state has already been seen, let's call this m, they retrieve m, m prime" }, { "end": 749.32, "start": 745.98, "text": " from the buffer because they've already seen it, it's in the buffer, right?" }, { "end": 762.24, "start": 749.32, "text": " They compare, hey, these steps, so is s prime, is this smaller or larger than s plus one?" }, { "end": 770.4000000000001, "start": 762.24, "text": " So basically, I've seen this state before, but using this path, can I reach it in fewer" }, { "end": 772.7600000000001, "start": 770.4000000000001, "text": " steps than I've reached it before?" }, { "end": 779.64, "start": 772.76, "text": " If yes, then I'm going to replace this, replace this s by s plus one, and then save it again" }, { "end": 780.64, "start": 779.64, "text": " in the buffer." }, { "end": 787.2, "start": 780.64, "text": " All right, so I can, I now have a better path to reach this state than before." }, { "end": 793.96, "start": 787.2, "text": " So it's almost exactly like Dijkstra's algorithm in that you simply explore and every new state" }, { "end": 799.72, "start": 793.96, "text": " you find you've either already seen, so you just simply have a new way of getting to that" }, { "end": 800.76, "start": 799.72, "text": " state." }, { "end": 806.28, "start": 800.76, "text": " If you haven't seen it, you simply remember it, and then you do it all again." }, { "end": 816.8, "start": 806.28, "text": " So you can imagine with time, these number of states in this buffer will explode." }, { "end": 819.3199999999999, "start": 816.8, "text": " And it's not feasible for Montezuma's revenge." }, { "end": 820.84, "start": 819.3199999999999, "text": " Like imagine this game, right?" 
}, { "end": 825.56, "start": 820.84, "text": " You have to, you have to go everywhere and explore everything, right?" }, { "end": 829.78, "start": 825.56, "text": " This, I mean, every single action here could be a state." }, { "end": 833.22, "start": 829.78, "text": " That's why, let me pause this." }, { "end": 840.3199999999999, "start": 833.22, "text": " That's why what they do is they, they have to come up with a notion of state that is," }, { "end": 843.62, "start": 840.3199999999999, "text": " doesn't simply include every single game state there is." }, { "end": 848.54, "start": 843.62, "text": " And what they do is, this is sampled here, they down sample the image." }, { "end": 855.9599999999999, "start": 848.54, "text": " And then this, sorry, I've tried drawing over a blog post, they down sample the image, and" }, { "end": 864.72, "start": 855.96, "text": " then they simply say, all right, so this, this thing would become this thing." }, { "end": 871.52, "start": 864.72, "text": " And they simply say, okay, if two of these images have the same representation, so grayscale," }, { "end": 876.22, "start": 871.52, "text": " down sampled, quantized, then they are the same state." }, { "end": 878.8000000000001, "start": 876.22, "text": " And that's kind of the crux of the algorithm I find." }, { "end": 885.26, "start": 878.8000000000001, "text": " So if two things have the same state, then the algorithm is prone to kind of confusing" }, { "end": 886.26, "start": 885.26, "text": " them for each other." }, { "end": 893.8199999999999, "start": 886.26, "text": " It thinks one is the other, not exactly, but it does kind of assume that they are close" }, { "end": 895.46, "start": 893.8199999999999, "text": " actually here." }, { "end": 897.68, "start": 895.46, "text": " But there is a crucial difference between the two." }, { "end": 902.06, "start": 897.68, "text": " The algorithm will have a very hard time in some situations." }, { "end": 907.06, "start": 902.06, "text": " I don't want to, like, you can think of, it needs to be kind of convoluted situations," }, { "end": 913.4, "start": 907.06, "text": " but it can be the kind of crux of the algorithm very much if the state representation isn't" }, { "end": 914.4, "start": 913.4, "text": " done well." }, { "end": 915.6999999999999, "start": 914.4, "text": " And they actually have two methods." }, { "end": 920.54, "start": 915.6999999999999, "text": " One simply relies on this down sampling and the other one, they provide domain knowledge," }, { "end": 927.22, "start": 920.54, "text": " which means kind of which level you're in, where the player is, and so on." }, { "end": 928.86, "start": 927.22, "text": " But this is, this is pretty cool." }, { "end": 937.42, "start": 928.86, "text": " So if you are able, so if, if your reinforcement learning problem, first of all, is deterministic." }, { "end": 944.8199999999999, "start": 937.42, "text": " At least in a simulator." }, { "end": 959.1999999999999, "start": 944.8199999999999, "text": " And second, allows for good state representations, kind of for, for low dimensional state representations." }, { "end": 965.28, "start": 959.1999999999999, "text": " If those two things are given, you can use GoExplore." }, { "end": 968.72, "start": 965.28, "text": " And as I said, this, this representation here is key." }, { "end": 971.78, "start": 968.72, "text": " So now you know how they do it." }, { "end": 974.38, "start": 971.78, "text": " They simply explore these states." 
}, { "end": 981.28, "start": 974.38, "text": " And if they come on a new state, and every state is, is, is, so we don't mean this here," }, { "end": 986.8199999999999, "start": 981.28, "text": " we actually mean this representation of it, they store it and they remember how to get" }, { "end": 988.12, "start": 986.8199999999999, "text": " to it." }, { "end": 994.26, "start": 988.12, "text": " And simply by exploring like this and having a smart algorithm that picks which state to" }, { "end": 1000.54, "start": 994.26, "text": " explore from, which of course is also a lot of domain knowledge, they are able to solve" }, { "end": 1002.9, "start": 1000.54, "text": " the game, right?" }, { "end": 1009.64, "start": 1002.9, "text": " So you see, goes way past human expert, and they're, they're able to, to actually perform" }, { "end": 1012.6, "start": 1009.64, "text": " really well simply by exploring." }, { "end": 1014.08, "start": 1012.6, "text": " This is the exploration phase." }, { "end": 1017.9, "start": 1014.08, "text": " This is simply random exploration from promising states." }, { "end": 1024.98, "start": 1017.9, "text": " And then in the second part, in the second phase, they now robustify it." }, { "end": 1029.58, "start": 1024.98, "text": " So now they introduce noise into their environment, right?" }, { "end": 1035.5, "start": 1029.58, "text": " Because usually environments have noise or some sort of stochasticity, and they run imitation" }, { "end": 1038.6, "start": 1035.5, "text": " learning on the best trajectories they found." }, { "end": 1045.1399999999999, "start": 1038.6, "text": " And what that does is, what they do is they have a trajectory, let's say, let's say this" }, { "end": 1046.7, "start": 1045.1399999999999, "text": " is a trajectory, right?" }, { "end": 1050.14, "start": 1046.7, "text": " These are actions you need to reach this goal state." }, { "end": 1054.32, "start": 1050.14, "text": " This imitation learning algorithm, what they do is they take a few steps back, say here," }, { "end": 1058.8600000000001, "start": 1054.32, "text": " and they just use imitation learning, which is basically a form of reinforcement learning" }, { "end": 1063.66, "start": 1058.8600000000001, "text": " to reach the goal state from here, simply reach the goal state, right?" }, { "end": 1066.02, "start": 1063.66, "text": " Once in under noise, right?" }, { "end": 1068.72, "start": 1066.02, "text": " So you can't just take the exact same actions." }, { "end": 1074.5, "start": 1068.72, "text": " Once this has been learned, back up a few more steps, maybe here, and then try to reach" }, { "end": 1075.96, "start": 1074.5, "text": " the goal state." }, { "end": 1078.78, "start": 1075.96, "text": " Now you've already learned how to do this part." }, { "end": 1084.98, "start": 1078.78, "text": " So this this bigger part should become should be easier than simply starting from here." }, { "end": 1090.66, "start": 1084.98, "text": " And you do that until you've kind of backed up your entire trajectory." }, { "end": 1094.06, "start": 1090.66, "text": " This is a well known method from imitation learning." }, { "end": 1099.3, "start": 1094.06, "text": " But usually you have usually this red thing is a human demonstration." }, { "end": 1103.1000000000001, "start": 1099.3, "text": " But now this red trajectory has been found by go explore." 
}, { "end": 1107.62, "start": 1103.1, "text": " It turns out if you have a bunch of these trajectories from go explore, you can do a" }, { "end": 1110.06, "start": 1107.62, "text": " pretty good job at that." }, { "end": 1113.74, "start": 1110.06, "text": " All right, that's basically all that I wanted to say about go explore." }, { "end": 1116.06, "start": 1113.74, "text": " It's basically Dijkstra's algorithm." }, { "end": 1119.9599999999998, "start": 1116.06, "text": " It works under very specific circumstances, but I think it's super promising." }, { "end": 1123.1, "start": 1119.9599999999998, "text": " And it's kind of a new way of thinking about it." }, { "end": 1127.8799999999999, "start": 1123.1, "text": " So the video I've shown is actually go explore solving Montezuma's revenge getting like a" }, { "end": 1129.1599999999999, "start": 1127.8799999999999, "text": " new high score." }, { "end": 1136.78, "start": 1129.16, "text": " And you can see how like skilled this this algorithm becomes." }, { "end": 1163.78, "start": 1136.78, "text": " All right, with that, I say goodbye and hope to see you next time." } ]
waK7AD-AEyc
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
NeurIPS 19 Poster Session
[ "Science & Technology" ]
[ "machine learning", "conference", "posters", "research", "bubble" ]
I'm at the poster session and the amount of people here is just crazy
Hi there, we are here at the NeurIPS 2019 poster session, one of the poster sessions specifically. There are two poster sessions a day, three days, so this is day two, the first poster session. It's technically lunchtime, so most people are out, but you can see there's still so many people here. There are about 250 posters in this room, and every poster has a ball of people around it. This is not peak time. Yesterday they didn't even let people into this room. That's the kind of the only reason you come to the conference to actually talk to the people doing the work, but it's almost impossible because they're constantly trying to explain their work to about 20 people at a time, asking any meaningful questions, getting into a conversation is almost impossible. It's about 10 degrees warmer in here than outside. It is sweaty, it smells, it's absolutely beautiful. I don't know, there is a kind of a feeling in the air that this is a bubble, just the sheer amount of people attending this is crazy. I don't know what this looks like in a few years. Maybe this is peak, or maybe it's just going to grow and grow and grow. I don't know. So you can see what it looks like, and maybe I've described well what it feels like to be here. With that, I am going to dive in, and bye bye.
[ { "end": 8.540000000000001, "start": 0, "text": " Hi there, we are here at the NURBS 2019 poster session, one of the poster sessions specifically." }, { "end": 13.76, "start": 8.540000000000001, "text": " There are two poster sessions a day, three days, so this is day two, the first poster" }, { "end": 14.76, "start": 13.76, "text": " session." }, { "end": 18.02, "start": 14.76, "text": " It's technically lunchtime, so most people are out, but you can see there's still so" }, { "end": 20.52, "start": 18.02, "text": " many people here." }, { "end": 27.32, "start": 20.52, "text": " There are about 250 posters in this room, and every poster has a ball of people around" }, { "end": 29.36, "start": 27.32, "text": " it." }, { "end": 30.56, "start": 29.36, "text": " This is not peak time." }, { "end": 37.08, "start": 30.56, "text": " Yesterday they didn't even let people into this room." }, { "end": 41.04, "start": 37.08, "text": " That's the kind of the only reason you come to the conference to actually talk to the" }, { "end": 45.68, "start": 41.04, "text": " people doing the work, but it's almost impossible because they're constantly trying to explain" }, { "end": 58.2, "start": 45.68, "text": " their work to about 20 people at a time, asking any meaningful questions, getting into a conversation" }, { "end": 61.760000000000005, "start": 58.2, "text": " is almost impossible." }, { "end": 65.60000000000001, "start": 61.760000000000005, "text": " It's about 10 degrees warmer in here than outside." }, { "end": 73.48, "start": 65.60000000000001, "text": " It is sweaty, it smells, it's absolutely beautiful." }, { "end": 82, "start": 73.48, "text": " I don't know, there is a kind of a feeling in the air that this is a bubble, just a sheer" }, { "end": 87.60000000000001, "start": 82, "text": " amount of people attending this is crazy." }, { "end": 89.88, "start": 87.6, "text": " I don't know what this looks like in a few years." }, { "end": 93.96, "start": 89.88, "text": " Maybe this is peak, or maybe it's just going to grow and grow and grow." }, { "end": 96.39999999999999, "start": 93.96, "text": " I don't know." }, { "end": 103, "start": 96.39999999999999, "text": " So you can see what it looks like, and maybe I've described well what it feels like to" }, { "end": 106, "start": 103, "text": " be here." }, { "end": 123.72, "start": 106, "text": " With that, I am going to dive in, and bye bye." } ]
RrvC8YW0pT0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Reinforcement Learning Upside Down: Don't Predict Rewards -- Just Map Them to Actions
[ "Science & Technology" ]
[ "rl", "reinforcement learning", "ai", "artificial intelligence", "udrl", "schmidhuber", "policy", "value", "reward" ]
Schmidhuber thinking outside the box! Upside-Down RL turns RL on its head and constructs a behavior function that uses the desired reward as an input. The new paradigm shows surprising performance compared to classic RL algorithms. Abstract: We transform reinforcement learning (RL) into a form of supervised learning (SL) by turning traditional RL on its head, calling this Upside Down RL (UDRL). Standard RL predicts rewards, while UDRL instead uses rewards as task-defining inputs, together with representations of time horizons and other computable functions of historic and desired future data. UDRL learns to interpret these input observations as commands, mapping them to actions (or action probabilities) through SL on past (possibly accidental) experience. UDRL generalizes to achieve high rewards or other goals, through input commands such as: get lots of reward within at most so much time! A separate paper [61] on first experiments with UDRL shows that even a pilot version of UDRL can outperform traditional baseline algorithms on certain challenging RL problems. We also introduce a related simple but general approach for teaching a robot to imitate humans. First videotape humans imitating the robot's current behaviors, then let the robot learn through SL to map the videos (as input commands) to these behaviors, then let it generalize and imitate videos of humans executing previously unknown behavior. This Imitate-Imitator concept may actually explain why biological evolution has resulted in parents who imitate the babbling of their babies. Author: Juergen Schmidhuber https://arxiv.org/abs/1912.02875 https://arxiv.org/abs/1912.02877
He did it! Crazy son of a bitch did it again! What am I talking about? Jürgen Schmidhuber: Reinforcement Learning Upside Down! New paper just dropped on the verge of the NeurIPS conference, being presented at a workshop here. Presenting upside down reinforcement learning. I am pumped for this one, can you tell? It says we transform reinforcement learning into a form of supervised learning by turning traditional RL on its head. Calling this... RL-lar? What do we call this? We'll just call it lar. Upside down reinforcement learning. And so this is upside down. Never mind. Okay, let's just check out how it works. So I'm going to give a brief overview before we go into this paper. Alright, so let's say you have a reinforcement learning problem. Let's say an Atari game for example. And in an Atari game you usually have a screen, right? And let's just say you're playing this marine commander. So there's water here, right? And there might be a bunch of... Here's your boat, right? There's a boat, a little boat. There might be a bunch of opponents right here, fishy fish opponents and so on. And there are a bunch of gold coins like here. That's a big gold coin, right? And you're kind of supposed to, I think, go get air. You have some air meter over here. Whatever. So there's this Atari game, right? You're supposed to get the reward, which is maybe this coin here, and stay alive as long as possible and so on. So this is a classic reinforcement learning problem. And there are various techniques for this. We've looked at a couple of them. And what upside down reinforcement learning does is basically: you want to transform this input to a new representation. Which basically, well, if I can, maybe I can... Let me get this correctly. So then there's this over here and then there's a little fishy, a little fishy here. And there's a coin right here. So what you want to do is basically turn this input on its head, like upside down. And so this way is kind of up or down or whatever in this new representation. And if you actually learn on this new representation with pretty much the same techniques, it works much better than the classic RL setting. And this is not only for these Atari games. This appears to hold throughout the RL space. So in robotics, if you have a robot or whatever, this is a robot. It has a square head, as you can tell. You know, it's supposed to open a door. You've seen this DARPA challenge. This doesn't work, right? But if you just transform this and actually turn the robot upside down, the robot will be able to open the door just fine. And even if you have a chessboard and there's a bunch of pieces on it. The problem in this case is you have to simulate this chessboard. And if you turn this around now, basically all the pieces will fall off. So what you need to do is you need to have a simulator that encodes a magnetic chessboard such that the pieces don't fall off. So it's a bit of programming effort. But if you do that... All right, I'm kidding. This is a new paradigm for RL, but it's unfortunately not as good. Someone should try the magnetic chessboard simulator. Upside down RL is a new paradigm for RL where basically the notion of inputs and outputs of the RL algorithm is switched around a bit. So the basic idea here is that you have an RL algorithm that is also fed with a bunch of commands. So in classic RL what you'll have... Let's actually go back to this Atari game here, right?
In classic RL, an RL algorithm will get the Atari game as a screen as an input and is asked from this to predict a bunch of outputs. So in classic Atari, these are eight actions. I'm going to draw three here, like go to the left, go to the right, or press the button for shoot, right? These are the actions you have, and there are different versions of what the algorithm is tasked with. In policy methods, policy gradient methods, typically the algorithm is tasked with outputting a distribution over these actions. In other methods like value learning, Q learning, the algorithm is tasked with assigning each of these actions a value. So in this situation, going to the left will be worth three in the future. Going to the right will be worth negative one and shooting will be worth zero. So you might want to go with this action here. So up until now, in classic RL, we've had the observation going into the model and the model coming up with the value estimation of the different actions. In upside-down reinforcement learning, you'll have the observation and something else going into the model, and the model coming up with an action. And this something else is the key. What you input here is your desire, your future desire. And in this paper, they call it a command. So you'll have a command as an input together with the observations. You basically say, here's my state and I would like to achieve, let's say, five reward in the next two time steps, right? Make this happen. Right. This is your command going into the model, and the model will then try to find actions such that in the next two time steps you'll get five reward. You can easily see that a model that learns this will actually be able to, you know, do various things, including doing the classic RL things like getting as much reward as possible in a given, or in the shortest, amount of time, but it can also do much more. And in the general sense, the difference is how this is trained now. This model, when you train it, as you can see, is not trained with only getting the maximum reward in mind. It is trained to be much more of a general kind of understanding of the world: learning what do I need to do to achieve a variety of goals? Specifically, what you want to do to train this is the following. Say you have a method of moving in the world and collecting traces, right? So you go from state one to state two to state three, with your action one, action two, let's draw action three, to state four. And in each of these you get rewards, right? Reward one, reward two, reward three. Now, in classic RL, this will kind of give you one training example, right? So if you consider this to be an episode, this will give you one training example, to run this sequence of actions. In upside down RL, you can actually consider this as many, many training examples. And here's what I mean. So if you, for example, start at state one, you can say, aha, within one time step I go to state two and I have achieved r one reward by doing action a one. Right. So this now can be an input to your model. Your model could learn: as an observation (remember the previous thing), you get s one, and as a command you get: I want to achieve r one reward in one time step. Right. And this goes into the model, and the model is trained to say a one, because if I am in s one and I do a one, I will achieve that. Right.
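To make this input/output flip concrete, here is a minimal Python sketch of such a behavior function. This is not code from the paper; the network architecture, the plain concatenation of observation and command, and all names are illustrative assumptions.

import torch
import torch.nn as nn

class BehaviorFunction(nn.Module):
    # Maps (observation, command) -> action logits, where the command is the pair
    # (desired_return, desired_horizon): "get me this much reward in this much time".
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + 2, hidden),  # +2 for (desired_return, desired_horizon)
            nn.Tanh(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs, desired_return, desired_horizon):
        # obs: (batch, obs_dim); desired_return, desired_horizon: (batch,)
        command = torch.stack([desired_return, desired_horizon], dim=-1)
        return self.net(torch.cat([obs, command], dim=-1))  # logits over actions

At evaluation time you would feed in the current observation together with whatever return and horizon you want, and pick or sample an action from the softmax over these logits.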
So you train the model to give a one as an output. And this is valid because in the past you've observed going from s one, using a one, to a state where you get this kind of reward in this kind of time. So you can do all of these single steps. They will all provide individual training examples to your model. Right. But then also you can consider a two step thing. So you can say: I'm in state s one, and in two time steps I have achieved r one plus r two reward by doing actions a one, then a two. Right. And a two I'm going to put in parentheses here, because what you want to do is always consider the action that comes right after where you are now. So again, your training sample, let me draw this up here, maybe your training sample would be the following. I am in state s one. This would be my observation. My command would be: I would like to achieve r one plus r two reward in two time steps. Right. This both goes into the model. Right. You tell the model: please, given the state s one, achieve this reward in this time. And the model is supposed to output a one, saying: ha, in the past I was in this state and I did achieve this goal by using that. So the model is supposed to learn to achieve different goals. Right. So now you can not only train from good episodes, right, you can train from any episode. Usually in classic RL you kind of want to focus on the good episodes, because you want to maximize your reward. But here you can tell the model: hey, if you've done something particularly stupid, let's say here in s three you've done something, the a three, that was particularly stupid and gave you, so r three here, a really bad reward, like negative five billion trillion. And you can actually train the model to recognize this. This can be a: hey look, if you are in s three and within one time step you want to achieve negative five billion billion billion trillion reward, all you have to do is action a three, right. And then the cool thing now is, if at evaluation time you actually want the big reward, what you'll do is simply plug in a different command: I'm still in state s three, and in one time step I want to actually achieve three reward, not negative a lot, right. And the model will have learned that a three will lead to a situation where you get a lot of negative reward. So the model will be like: I'm for sure not going to do a three, right. I'm going to do something else here, because I have learned to map a three to this really low reward. So in essence this has connections to things like hindsight experience replay and kind of universal value functions, where you learn to go from any state to any other state. But none of these have this kind of command, what Schmidhuber calls a command here, as an input to the model. And I think this is actually a really positive thing to input, because usually in universal value functions what you would say is, let's consider a simple grid world, right. Your agent is here and you need to reach a goal that's down here. But you might not be able to learn it because it's a super sparse reward and so on. But what you can do is you can learn to reach this position and this position and this position from various positions, like go here, go from here to here. You can learn to go from here to here. And you know, in essence you would like it eventually to generalize to all the fields.
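Before going further with this comparison to goal-reaching methods, here is how the "any past episode gives many training examples" idea from above could look in code. A minimal Python sketch under my own assumptions (discrete actions, plain undiscounted sums of rewards, enumerating every start/horizon pair); it is not the authors' implementation.

def episode_to_examples(states, actions, rewards):
    # states s_1..s_T, actions a_1..a_T, rewards r_1..r_T from ONE past episode
    # (good or bad, both are equally valid experience here).
    # Every (start, horizon) pair becomes a supervised example:
    #   input  = (state, desired_return, desired_horizon)
    #   target = the action that was actually taken at that step
    examples = []
    T = len(actions)
    for t in range(T):
        for h in range(1, T - t + 1):                # look 1, 2, ... steps ahead
            desired_return = sum(rewards[t:t + h])   # return that was actually achieved
            examples.append((states[t], desired_return, h, actions[t]))
    return examples

A terrible episode is just as useful as a good one here: it teaches the model which action leads to a very negative return, so at test time, when you ask for a high return instead, the model knows to avoid that action.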
So you basically learn to go from any position to any other position with your agent, with these universal value or universal policy functions, having sub-goals. But during that phase where they learn to go from anything to anything, they don't necessarily include this reward thing as an input. It's more like either a sub-goal, or the usual value function will simply approximate the reward. Whereas in this technique we actually have policy learning, we actually output an action. Also hindsight experience replay: what hindsight experience replay would do in the same situation, right (we might do a video on this in the future), is this. You're here and you try, right, and your agent actually ends up right here. What you can do is you can simply say, oh well, actually this was my goal all along, and then simply train your model as if this thing here was your goal all along, and not this thing here, and treat it as kind of a positive reward for this. At least that's how I understand it. Right. And both of these things are quite different from here, where we have this command as input, and I do like it. So I think this is very much the basics here. It is extrapolated to kind of noisy inputs and noisy environments and so on. But this is the basic gist of it. So here you see what you will learn is to map "all", and "all" is your representation of your input, so the screen for example or the chessboard, and I think also kind of the last action and the reward you get in this step, plus your horizon and desire, so in how much time you would like to achieve how much reward, and then you can also get as input some extra goals that you have. And so you can see, basically any episode that you've run in the past will give you a valid training example for this. Your model will simply learn to match the previous experience with the goals that were achieved in the previous experience. So there are lots of generalizations here, like how exactly these things are represented. This time horizon can be a high dimensional object. The desire can, as I understand it, also be a somewhat high dimensional object. The extra commands can be like conditionals on these two things. It gets very complicated, but I want to jump ahead to a different paper. So this paper is basically just describing the algorithm, and then the next paper is doing experiments with this. Let's scroll past here. All right. So this paper, Training Agents using Upside-Down Reinforcement Learning, was released on the same day, but by different authors, who have used this to implement a variant of it. And here you see again what I was trying to explain. So in traditional RL, especially Q learning here, you'll have this function which gets an observation as input, and in Q learning especially you also get the action as an input, and you're supposed to say: for the given observation, this particular action has this expected value as a return. Right. That's what I explained at the beginning. That's kind of value based reinforcement learning. Whereas the behavior function here, which would be upside down reinforcement learning, gets the observation and a command and will map that to an action. And here again is what we've gone over. This is a bit of a different thing. So this agent has apparently run two different episodes.
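As a small aside, the learning step for this behavior function really is plain supervised learning: cross-entropy between the action the model proposes for a past (state, command) pair and the action that was actually taken. A hedged Python sketch that reuses the two snippets above; batching, optimizer choice and any scaling of the command inputs are my assumptions, not taken from either paper.

import torch
import torch.nn.functional as F

def train_step(model, optimizer, batch):
    # batch: list of (state, desired_return, desired_horizon, action) tuples,
    # e.g. produced by episode_to_examples above.
    states = torch.stack([torch.as_tensor(s, dtype=torch.float32) for s, _, _, _ in batch])
    d_ret = torch.tensor([r for _, r, _, _ in batch], dtype=torch.float32)
    d_hor = torch.tensor([h for _, _, h, _ in batch], dtype=torch.float32)
    target = torch.tensor([a for _, _, _, a in batch], dtype=torch.long)

    logits = model(states, d_ret, d_hor)      # behavior function: command in, action out
    loss = F.cross_entropy(logits, target)    # imitate the action that achieved this command
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()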
So, at one point this agent did this sequence of actions, and at the other point, from the same starting state, it did this other sequence of actions, and you can see here on the right all the training samples we can derive from this. So we can say: from state s zero, right, if I want two return in one time step (I have experienced this in the past, right, two return in one time step), all I have to do is take action a one. But if I want one return in one time step, I have to take action a two. And you teach your behavior function to learn these things, to learn to output these actions with these things here as inputs. And then what you hope, of course, is that this will generalize, that it will learn to generalize, so that you can say: now give me more reward than I have ever seen before, right. And it will kind of learn which things correspond to lower reward, which things correspond to higher reward, and will be able to extrapolate which things will correspond to even higher reward. So they have two algorithms, and this is kind of reminiscent of the old RL world, where one algorithm is continuously learning from the experience gathered by another algorithm. So you have one set of algorithms, and even in modern RL this is how it's done, right, you have two different boxes, right. Actually, you probably have one box learning the model, I'm going to represent this here as the learner, right. And the learner distributes the model to many, many machines interacting with the simulators, and these machines, all they do is run episodes with the learned model, and they will send back their experience here. And then the learner can learn from it and then at the end send it out again. All right, here we go. So in each step, what we do in order to generate a new episode: we don't always want to just execute one given policy. What we do is we sample from the end of the replay buffer, and the replay buffer is sorted by returns, right. So the highest return episodes are on top. So we want to sample the highest return episodes, then we want to say maybe some of them are 10 steps long, maybe some of them are five steps long, and so on. So we set the horizon to be the mean of the lengths of these, right, and we set the desired return (how much return should be achieved in this time) to be sampled from the uniform distribution between M and M plus S, where M is the mean and S is the standard deviation of the returns of the selected episodes. So what this means is: here is a bunch of episodes, starting from the same time. Here's a bunch of episodes that I ran (here is time zero and then time goes on) that had really high returns, right. Now I'm going to take the mean time that these episodes ran, like this. This is maybe five time steps. So in five time steps I want to achieve... now, how much reward? Now you look at all the rewards that were achieved. This is maybe a distribution that has some mean here, like so, and then you say: I want to achieve a reward between here and one standard deviation higher than here. Right, and this would be the reward you want to achieve. So what you do is you kind of push your learned model to go just a bit beyond what it has seen so far. You basically say: look, you can do this, but you can just do a bit more in the same amount of time. Please do this, and you hope the model has learned to generalize well enough to do this. And if so, you will execute these episodes, and then these episodes will go back to the learner, right.
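That command-sampling rule (horizon equal to the mean length of the best episodes, desired return drawn uniformly between their mean return M and M plus one standard deviation S) fits in a few lines. Again a sketch under my own assumptions about how the replay buffer is stored; only the sampling rule itself is taken from the description above.

import random
import statistics

def sample_exploratory_command(replay_buffer, k=10):
    # replay_buffer: episodes sorted by total return, with the best ones at the end;
    # each episode is assumed here to be a dict with keys "length" and "return".
    best = replay_buffer[-k:]                           # the k highest-return episodes
    horizon = statistics.mean(ep["length"] for ep in best)
    m = statistics.mean(ep["return"] for ep in best)
    s = statistics.stdev(ep["return"] for ep in best) if len(best) > 1 else 0.0
    desired_return = random.uniform(m, m + s)           # ask for a bit MORE than seen so far
    return desired_return, horizon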
They'll go back to the learner here, and the learner will learn from them, and hopefully then you can generalize even more, and then you can say: I now know how to achieve this bit more reward. Now, if I run the episode, I will achieve even more reward. I can push the model even further, right. So at eval time you can always ask the model to produce as much reward as possible in the given time. And of course, every episode sent back here is not only one training example; as we saw, many, many training examples can be derived from these episodes, even beyond what's in this paper. All right. So I think this was a good first shot at describing this algorithm. I hope you get the gist of it. I enjoyed this. A bit of a criticism for me would be that it still doesn't touch the exploration dilemma. So it again deals with kind of incrementally getting better, whereas I feel this can easily get stuck in some minimum where it's not possible to do this incremental generalization of the model, where you really need a new approach. And that's why games like Montezuma's Revenge are solved using algorithms like Go-Explore and not any of the classic algorithms. That being said, they have experiments where they show that especially in sparse reward environments they do better than classic RL algorithms. So if you, for example, take the lunar lander here, where A2C beats upside down RL, and I guess you didn't get matplotlib to do the upside down. Well, in other environments upside down RL clearly beats the classic algorithms. And what I like here is they took the lunar lander, which basically gives you a reward at every time step, and they hypothesized: okay, this is really good for these classic algorithms that do reward maximization instead of kind of learning this general behavior function. And what they did is they modified the game such that all the reward is given at the end of the episode. And then you see that upside down RL will actually outperform the classic things here, where it's exactly the same game, you just get the reward at the end. So upside down RL kind of learns the structure of the world, learns that you get this reward at the end after such and such many time steps. So it will learn: please get me zero reward in 50 time steps, no problem. But please get me a thousand reward in a hundred time steps? No problem, I just go to the end of the episode, right. Whereas these pure reward maximization techniques somehow have a harder time doing that. I like this investigation. I like the thinking outside the box. The Schmidhuber-ism of the paper. It's just all great. It's a great time to be alive, and check this out, and I'll see you. Bye bye.
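Finally, to close the loop, this is roughly what acting on a command could look like at evaluation time. A sketch only: it assumes a classic gym-style environment interface and the convention, as I understand it from the experimental paper, of shrinking the command after every step by the reward just obtained and by one time step; treat both as assumptions rather than the definitive procedure.

import torch

def run_episode(env, model, desired_return, desired_horizon):
    # Act with the behavior function, shrinking the command as we go:
    # after each step, less return is still wanted and less time is left.
    obs = env.reset()
    total = 0.0
    done = False
    while not done:
        o = torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0)
        logits = model(o,
                       torch.tensor([desired_return], dtype=torch.float32),
                       torch.tensor([desired_horizon], dtype=torch.float32))
        action = torch.distributions.Categorical(logits=logits).sample().item()
        obs, reward, done, _ = env.step(action)
        total += reward
        desired_return -= reward                        # the rest of the return is still wanted
        desired_horizon = max(desired_horizon - 1, 1)   # ...in the remaining time
    return total

For example, you might call run_episode(env, model, desired_return=250.0, desired_horizon=300); the numbers are arbitrary, at evaluation you would ask for roughly the best return the agent has seen so far, or slightly more.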
[ { "end": 6.4, "start": 0, "text": " He did it! Crazy son of a bitch did it again!" }, { "end": 12.8, "start": 6.4, "text": " What am I talking about? Jürgen Schmidhuber reinforcement learning upside down!" }, { "end": 20.6, "start": 12.8, "text": " New paper just dropped on the verge of the NeurIPS conference being presented at a workshop here." }, { "end": 26.2, "start": 20.6, "text": " Presenting upside down reinforcement learning. I am pumped for this one, can you tell?" }, { "end": 35.6, "start": 26.2, "text": " It says we transform reinforcement learning into a form of supervised learning by turning traditional RL on its head." }, { "end": 42.4, "start": 35.6, "text": " Calling this RL-lar. What do we call this? We'll just call it lar." }, { "end": 45.4, "start": 42.4, "text": " Upside down reinforcement learning." }, { "end": 52.6, "start": 45.4, "text": " And so this is upside down. Never mind." }, { "end": 56.6, "start": 52.6, "text": " Okay, let's just check out how it works." }, { "end": 62.400000000000006, "start": 56.6, "text": " So I'm going to give a brief overview before we go into this paper." }, { "end": 69.8, "start": 62.400000000000006, "text": " Alright, so let's say you have a reinforcement learning problem. Let's say an Atari game for example." }, { "end": 73, "start": 69.8, "text": " And in an Atari game you usually have a screen, right?" }, { "end": 79.4, "start": 73, "text": " And let's just say you're playing this marine commander. So there's water here, right?" }, { "end": 84.60000000000001, "start": 79.4, "text": " And there might be a bunch of... Here's your boat, right?" }, { "end": 88.4, "start": 84.60000000000001, "text": " There's a boat, a little boat. There might be a bunch of opponents right here." }, { "end": 92.60000000000001, "start": 88.4, "text": " Fishy fish opponents, fishy fish opponents and so on." }, { "end": 96.9, "start": 92.60000000000001, "text": " And there are a bunch of gold coins like here. That's a big gold coin, right?" }, { "end": 101.80000000000001, "start": 96.9, "text": " And you're kind of supposed to, I think you're supposed to like go get air." }, { "end": 104.30000000000001, "start": 101.80000000000001, "text": " You have some air meter over here. Whatever." }, { "end": 106.80000000000001, "start": 104.30000000000001, "text": " So there's this Atari game, right?" }, { "end": 111.2, "start": 106.8, "text": " You're supposed to get the reward which is this maybe this coin here." }, { "end": 114, "start": 111.2, "text": " And stay alive as long as possible and so on." }, { "end": 116.5, "start": 114, "text": " So this is a classic reinforcement learning problem." }, { "end": 120.7, "start": 116.5, "text": " And there are various techniques for this. We've looked at a couple of them." }, { "end": 128.7, "start": 120.7, "text": " And what upside down reinforcement learning does is basically what you do is you want to transform this input to a new representation." }, { "end": 138.6, "start": 128.7, "text": " Which basically, well, if I can, maybe I can... Let me get this correctly." }, { "end": 145.7, "start": 138.6, "text": " So then there's this over here and then there's a little fishy, a little fishy here." }, { "end": 147.89999999999998, "start": 145.7, "text": " And there's a coin right here." }, { "end": 153.39999999999998, "start": 147.89999999999998, "text": " So what you want to do is basically turn this input on its head like upside down." 
}, { "end": 159.1, "start": 153.4, "text": " And so this way is kind of up or down or whatever in this new representation." }, { "end": 166.4, "start": 159.1, "text": " And if you actually learn on this new representation with pretty the same techniques," }, { "end": 169.70000000000002, "start": 166.4, "text": " it works much better than the classic RL setting." }, { "end": 172.3, "start": 169.70000000000002, "text": " And this is not only for like these Atari games." }, { "end": 177.5, "start": 172.3, "text": " Like this appears to hold throughout the RL space." }, { "end": 181.70000000000002, "start": 177.5, "text": " So in robotics, like if you have a robot or whatever, this is a robot." }, { "end": 184.79999999999998, "start": 181.7, "text": " It has a square head, as you can tell." }, { "end": 186.89999999999998, "start": 184.79999999999998, "text": " You know, it's supposed to like open a door." }, { "end": 190.6, "start": 186.89999999999998, "text": " You've seen this DARPA challenge. This doesn't work, right?" }, { "end": 198.7, "start": 190.6, "text": " But if you just transform this and actually turn the robot upside down," }, { "end": 202.1, "start": 198.7, "text": " the robot will be able to open the door just fine." }, { "end": 207.6, "start": 202.1, "text": " And even like if you have a chessboard and there's like a bunch of pieces on it." }, { "end": 212.7, "start": 207.6, "text": " The problem in this case is you have to simulate this chessboard." }, { "end": 218.1, "start": 212.7, "text": " And if you turn this around now, basically all the pieces will fall off." }, { "end": 224, "start": 218.1, "text": " So what you need to do is you need to have a simulator that encodes a magnetic chessboard" }, { "end": 226.7, "start": 224, "text": " such that the pieces don't fall off." }, { "end": 230.4, "start": 226.7, "text": " So it's a bit of programming effort. But if you do that..." }, { "end": 234.1, "start": 230.4, "text": " All right, I'm kidding." }, { "end": 240.29999999999998, "start": 234.1, "text": " This is a new paradigm for RL, but it's unfortunately not as good." }, { "end": 244.4, "start": 240.29999999999998, "text": " Someone should try the magnetic chessboard simulator." }, { "end": 254, "start": 244.4, "text": " Upside down RL is a new paradigm for RL where basically the kind of notion of inputs" }, { "end": 259.5, "start": 254, "text": " and outputs of the RL algorithm are switched around a bit." }, { "end": 272.4, "start": 259.5, "text": " So basic ideas here is that you have an RL algorithm that is also fed with a bunch of commands." }, { "end": 275.2, "start": 272.4, "text": " So in classic RL what you'll have..." }, { "end": 279.3, "start": 275.2, "text": " Let's actually go back to this Atari game here, right?" }, { "end": 285.8, "start": 279.3, "text": " In classic RL, an RL algorithm will get the Atari game as a screen as an input" }, { "end": 290.3, "start": 285.8, "text": " and is asked from this to predict a bunch of outputs." }, { "end": 293.5, "start": 290.3, "text": " So in classic Atari, these are eight actions." }, { "end": 300.3, "start": 293.5, "text": " I'm going to draw three here, like go to the left, go to the right, or press the button for shoot, right?" }, { "end": 305.5, "start": 300.3, "text": " These are the actions you have and the algorithm is tasked." }, { "end": 307.5, "start": 305.5, "text": " And there are different versions of this." 
}, { "end": 312, "start": 307.5, "text": " In policy methods, policy gradient methods, typically the algorithm is tasked" }, { "end": 316, "start": 312, "text": " with outputting a distribution over these actions." }, { "end": 323.5, "start": 316, "text": " In other methods like value learning, Q learning, the algorithm is tasked with assigning each of these actions a value." }, { "end": 330, "start": 323.5, "text": " So in this situation, going to the left will be worth three in the future." }, { "end": 336.2, "start": 330, "text": " Going to the right will be worth negative one and shooting will be worth zero." }, { "end": 342.2, "start": 336.2, "text": " So you might want to go with this action here." }, { "end": 349.2, "start": 342.2, "text": " Now in upside-down reinforcement learning, we've had observation going into the model" }, { "end": 355.4, "start": 349.2, "text": " and the model coming up with the value estimation of the different actions." }, { "end": 363.09999999999997, "start": 355.4, "text": " In upside-down reinforcement learning, you'll have the observation and something else going into the model" }, { "end": 366.70000000000005, "start": 363.1, "text": " and the model coming up with an action." }, { "end": 368.8, "start": 366.70000000000005, "text": " And this something else is the key." }, { "end": 374.1, "start": 368.8, "text": " What you input here is your desire, your future desire." }, { "end": 377.1, "start": 374.1, "text": " And in this paper, they call it a command." }, { "end": 380.40000000000003, "start": 377.1, "text": " So you'll have a command as an input together with the observations." }, { "end": 386.90000000000003, "start": 380.40000000000003, "text": " You basically say, here's my state and I would like to achieve," }, { "end": 393.4, "start": 386.9, "text": " let's say five reward in the next five reward in the next two time steps, right?" }, { "end": 394.59999999999997, "start": 393.4, "text": " Make this happen." }, { "end": 400.59999999999997, "start": 394.59999999999997, "text": " Right. This is this is your command going into the model and the model will then try to find actions" }, { "end": 406.09999999999997, "start": 400.59999999999997, "text": " such that in the next two time steps, you'll get five reward." }, { "end": 413, "start": 406.09999999999997, "text": " You can easily see a model that learns this will actually be able to, you know, do various things," }, { "end": 418.9, "start": 413, "text": " including doing the classic RL things like get as much reward as possible in given" }, { "end": 424.4, "start": 418.9, "text": " or in the shortest amount of time, but can also do much more." }, { "end": 429.5, "start": 424.4, "text": " And in the general sense, the difference is how this is trained now." }, { "end": 436.8, "start": 429.5, "text": " This model, when you train it, as you can see, you don't it's not trained with in my having in mind" }, { "end": 440.1, "start": 436.8, "text": " kind of only to get the maximum reward." }, { "end": 445.20000000000005, "start": 440.1, "text": " It is trained to be much more a general kind of understanding of the world." }, { "end": 452.40000000000003, "start": 445.20000000000005, "text": " I mean, learning what do I need to do to achieve a variety of goals?" }, { "end": 457.90000000000003, "start": 452.40000000000003, "text": " Specifically, what you want to do to train this is the following." 
}, { "end": 464.90000000000003, "start": 457.90000000000003, "text": " Say you have a method of of moving in the world and collecting traces, right?" }, { "end": 472.79999999999995, "start": 464.9, "text": " So you go from state, state one, state two, state three." }, { "end": 478.29999999999995, "start": 472.79999999999995, "text": " You go with like your action one, action two." }, { "end": 481.79999999999995, "start": 478.29999999999995, "text": " Let's draw action three." }, { "end": 484.5, "start": 481.79999999999995, "text": " The state four." }, { "end": 487.5, "start": 484.5, "text": " And in each of these, you get a you get rewards, right?" }, { "end": 492, "start": 487.5, "text": " Reward one reward to reward three." }, { "end": 498.9, "start": 492, "text": " Now, this in classic RL, this will kind of give you one training example, right?" }, { "end": 508.1, "start": 498.9, "text": " So this is if you consider this to be an episode, this will give you one training example to to run this sequence of actions." }, { "end": 513.6, "start": 508.1, "text": " Upside down RL, you can actually consider this as many, many training examples." }, { "end": 515, "start": 513.6, "text": " And here's what I mean." }, { "end": 529.5, "start": 515, "text": " So if you, for example, start at state one, you can say, aha, within one time step, one one time step," }, { "end": 537.3, "start": 529.5, "text": " I go to state two and I have achieved our one rewards by doing action a one." }, { "end": 538.8, "start": 537.3, "text": " Right." }, { "end": 541.9, "start": 538.8, "text": " So this now can be an input to your model." }, { "end": 552, "start": 541.9, "text": " Your model could learn if you get as an observation, remember the previous thing as an observation, you get s one as a command." }, { "end": 557.5, "start": 552, "text": " You get I want to achieve in one time step." }, { "end": 560.4, "start": 557.5, "text": " Are one reward." }, { "end": 570.6999999999999, "start": 560.4, "text": " Right. And you train this goes into the model and the model is trained to say a one given if I am in s one" }, { "end": 574.3000000000001, "start": 570.7, "text": " and I do a one, I will achieve that." }, { "end": 578, "start": 574.3000000000001, "text": " Right. So you train the model to give a one as an output." }, { "end": 590.8000000000001, "start": 578, "text": " And this is valid because in the past you've observed going from s one using a one to a state where you get this this kind of reward in this kind of time." }, { "end": 594, "start": 590.8000000000001, "text": " But you can also so you can do all of these single steps." }, { "end": 598.2, "start": 594, "text": " They will all provide individual training examples to your model." }, { "end": 601.5, "start": 598.2, "text": " Right. But then also you can consider a two step thing." }, { "end": 609, "start": 601.5, "text": " So you can say I'm in state s one and I go I go in two time steps." }, { "end": 618.4000000000001, "start": 609, "text": " I have achieved our one plus our two reward by doing actions a one then a two." }, { "end": 627.6, "start": 618.4000000000001, "text": " Right. And a two I'm going to do in parents here because what you want to do is you want to always always consider the action that comes right after where you are now." }, { "end": 631.9, "start": 627.6, "text": " So again your training sample let me draw this up here." }, { "end": 635.2, "start": 631.9, "text": " Maybe your training sample would be the following." 
}, { "end": 637.1, "start": 635.2, "text": " I am in state s one." }, { "end": 638.7, "start": 637.1, "text": " This would be my observation." }, { "end": 646.6, "start": 638.7, "text": " My command would be I would like to achieve in two time steps reward r one plus r two reward." }, { "end": 650.3000000000001, "start": 646.6, "text": " Right. This reward this both goes into the model." }, { "end": 656.2, "start": 650.3000000000001, "text": " Right. You tell the model please given the state s one achieve this reward in this time." }, { "end": 666.3000000000001, "start": 656.2, "text": " And the model is supposed to output a one saying ha in the past I was in this state and I did achieve this goal by using that." }, { "end": 670.2, "start": 666.3000000000001, "text": " So the model is supposed to learn to achieve different goals." }, { "end": 674, "start": 670.2, "text": " Right. So now you can not only train from good episodes right." }, { "end": 683.8000000000001, "start": 674, "text": " You can train for any episode any episode usually in classic or you kind of want to focus on the good episodes because you want to maximize your reward." }, { "end": 694.3, "start": 683.8, "text": " But here you can tell the model hey if you've done something particularly stupid let's say here in s three you done something the a three was particularly stupid gave you." }, { "end": 700.8, "start": 694.3, "text": " So r three here was really bad reward like a negative five billion trillion." }, { "end": 713.6999999999999, "start": 700.8, "text": " And you can actually train the model to recognize this can be a hey look if you are in s three and within one time step you want to achieve negative five billion billion billion trillion." }, { "end": 717.7, "start": 713.7, "text": " Reward you all you have to do is action a three right." }, { "end": 730.6, "start": 717.7, "text": " And then the cool thing now is if you are at evaluation time you actually want the big reward what you'll do is you simply plug in a different command simply in one time step still I'm in state s three in one time step." }, { "end": 736.3000000000001, "start": 730.6, "text": " I want to achieve actually three reward not negative a lot right." }, { "end": 744.4, "start": 736.3, "text": " And the model will have learned that a three will lead to a situation where you get a lot of negative reward." }, { "end": 750.3, "start": 744.4, "text": " So the model will be like I'm for sure not going to do a three right." }, { "end": 757.4, "start": 750.3, "text": " I'm going to do something else here because I have learned to map a three to like this really low reward." }, { "end": 772.6, "start": 757.4, "text": " So in essence this has connections to things like hindsight experience replay and kind of universal value function where you kind of learn to go from any state to any other state in this." }, { "end": 781.5, "start": 772.6, "text": " But none of these do none of these have this kind of command what Schmidhuber calls command here as an input to the model." }, { "end": 794.2, "start": 781.5, "text": " And I think actually this is this is really positive to input this because usually in universal value functions what you would say is let's consider a simple grid world right." }, { "end": 801.3, "start": 794.2, "text": " Whatever your agent is here and you need to you need to reach a goal that's down here." }, { "end": 805.3, "start": 801.3, "text": " But you might not be able to learn it because it's super sparse reward and so on." 
}, { "end": 814.8, "start": 805.3, "text": " But what you can do is you can learn to reach this position and this position and this position from various positions like go here go from here to here." }, { "end": 816.5, "start": 814.8, "text": " You can learn to go from here to here." }, { "end": 822, "start": 816.5, "text": " And you know in essence you would like it eventually to generalize to all the fields." }, { "end": 832.4, "start": 822, "text": " So you basically learn to go from any position to any other position with your agent with these universal value or universal policy functions having sub goals." }, { "end": 842.1999999999999, "start": 832.4, "text": " But they during that phase where they learn to go from anything to anything they don't they don't necessarily include this reward thing as a as an input." }, { "end": 853.8, "start": 842.1999999999999, "text": " It's more like kind of either a sub goal or like the usual value function will simply approximate the reward." }, { "end": 862.3, "start": 853.8, "text": " Whereas whereas in this technique we actually have a policy learning we actually output a an action value." }, { "end": 867.6999999999999, "start": 862.3, "text": " Also hindsight experience replay what hindsight experience replay would do in the same situation right." }, { "end": 869.9, "start": 867.6999999999999, "text": " You're here." }, { "end": 872.5999999999999, "start": 869.9, "text": " We might do a videos on this in the future." }, { "end": 875.4, "start": 872.5999999999999, "text": " You're here and you try right." }, { "end": 879.6999999999999, "start": 875.4, "text": " And your agent actually it ends up here right ends up right here." }, { "end": 892.1999999999999, "start": 879.6999999999999, "text": " What you can do is you can simply say oh well actually this this was my goal all along and then simply train train your model as if as if this thing here was your goal all along." }, { "end": 899, "start": 892.2, "text": " And not this thing here and treat it as kind of a positive reward for this." }, { "end": 901.5, "start": 899, "text": " At least that's how I understand it." }, { "end": 902.9000000000001, "start": 901.5, "text": " Right." }, { "end": 910.5, "start": 902.9000000000001, "text": " And both of these things are quite different than here where we have this command as input and I do I do like it." }, { "end": 918.9000000000001, "start": 910.5, "text": " So I think this this is very much the basic things here." }, { "end": 927.5, "start": 918.9, "text": " This it is extra extrapolated to kind of noisy inputs and noisy environments and so on." }, { "end": 933.1999999999999, "start": 927.5, "text": " But this is the basic the basic gist of it." }, { "end": 944.4, "start": 933.1999999999999, "text": " So here you see your you what you will learn is to map all and all is your representation of your input." }, { "end": 947, "start": 944.4, "text": " So the screen for example or the chessboard." }, { "end": 953.2, "start": 947, "text": " And I think also kind of the last action and there were you get in this step plus your horizon and desire." }, { "end": 962, "start": 953.2, "text": " So in how much time you would like to achieve how much reward and then you can also get input some extra goals that you have." }, { "end": 972.1, "start": 962, "text": " And so you can see basically any any episode that you've run in the past will give you a valid training example for this." 
}, { "end": 982.9, "start": 972.1, "text": " Your model will simply learn to match the previous experience with the goals that were achieved in the previous experience." }, { "end": 988.9, "start": 982.9, "text": " So there is lots of lots of generalizations here like how exactly these things are represented." }, { "end": 991.8000000000001, "start": 988.9, "text": " This this time horizon can be a high dimensional object." }, { "end": 996.2, "start": 991.8000000000001, "text": " The desire can be as I understand it somewhat a dimensional object." }, { "end": 1000.7, "start": 996.2, "text": " The extra commands can be like conditionals on these two things." }, { "end": 1012.2, "start": 1000.7, "text": " It gets very complicated, but I want to jump ahead to a different paper where so this paper is basically just describing the algorithm." }, { "end": 1017.4000000000001, "start": 1012.2, "text": " And then the next paper is doing experiments with this." }, { "end": 1019, "start": 1017.4000000000001, "text": " Let's scroll past here." }, { "end": 1019.4000000000001, "start": 1019, "text": " All right." }, { "end": 1025.8, "start": 1019.4000000000001, "text": " So this paper training agents using up that down reinforcement learning released on the same day," }, { "end": 1038.1, "start": 1025.8, "text": " but different authors that have used also made who was also here but have used this to implement a variant of this." }, { "end": 1041.3999999999999, "start": 1038.1, "text": " And here you see again what I was trying to to explain." }, { "end": 1051, "start": 1041.3999999999999, "text": " So in traditional RL, this especially here Q learning, you'll have this function which gets an observation as input and then Q learning especially." }, { "end": 1062.6, "start": 1051, "text": " So you also get the action as an input and you're supposed to say for the given observation this particular action has this expected value as a return." }, { "end": 1062.9, "start": 1062.6, "text": " Right." }, { "end": 1064.4, "start": 1062.9, "text": " That's what I explained at the beginning." }, { "end": 1068.4, "start": 1064.4, "text": " That's kind of value based reinforcement learning." }, { "end": 1079.8, "start": 1068.4, "text": " Whereas the behavior function here, which would be upside down reinforcement learning gets the observation and a command and will map that to an action." }, { "end": 1082, "start": 1079.8, "text": " And here again is what we've gone over." }, { "end": 1083.6, "start": 1082, "text": " This is a bit of a different thing." }, { "end": 1087.3999999999999, "start": 1083.6, "text": " So this agent has apparently run two different episodes." }, { "end": 1101.5, "start": 1087.3999999999999, "text": " One point it did this sequence of actions and at the other point from the same starting state it did this sequence of action and you can see here on the right all the training samples we can we can derive from this." }, { "end": 1106.5, "start": 1101.5, "text": " So we can say from state s 0 right." }, { "end": 1114.3, "start": 1106.5, "text": " If I want to return in one time step, I have experienced this in the past right to return in one time step." }, { "end": 1117.5, "start": 1114.3, "text": " All I have to do is take action a one." }, { "end": 1133.5, "start": 1117.5, "text": " But if I want one return in one time step, I have to take action a two and you teach your behavior function to learn these things to learn to output these actions with these things here as inputs." 
}, { "end": 1144.7, "start": 1133.5, "text": " And then what you hope of course is that this will generalize that it will learn to generalize that you can say now give me more reward than I have ever seen before right." }, { "end": 1158.5, "start": 1144.7, "text": " And it will kind of learn which things correspond to lower reward, which things correspond to higher award and will be able to extrapolate which things will correspond to even higher report reward." }, { "end": 1159.6, "start": 1158.5, "text": " Sorry." }, { "end": 1180.5, "start": 1159.6, "text": " So they have two algorithms and this is kind of this is reminiscent of the old of the old RL kind of world where you do kind of one algorithm is continuously learning from the experience gathered by another algorithm." }, { "end": 1185.6999999999998, "start": 1180.5, "text": " So you have one set of algorithms and this even in modern RL this this this is how it's done right." }, { "end": 1188.3, "start": 1185.6999999999998, "text": " You have two different boxes right." }, { "end": 1195.7, "start": 1188.3, "text": " Actually you have probably one box learning the model like this is I'm going to represent this here learner right." }, { "end": 1211.3999999999999, "start": 1195.7, "text": " And the learner distributes the model to many many machines interacting with the simulators and these machines all they do is run episodes with the learned model and they will send back their experience here." }, { "end": 1216.6, "start": 1211.3999999999999, "text": " And then the learner can learn from it and then at the end send it again." }, { "end": 1226.6, "start": 1216.6, "text": " So so." }, { "end": 1230.1, "start": 1226.6, "text": " All right here we go." }, { "end": 1242.6, "start": 1230.1, "text": " So in each step what we do in order to to generate a new episode we don't always want to want to kind of execute one given policy." }, { "end": 1249.3999999999999, "start": 1242.6, "text": " What we do is we sample from the end of the replay buffer and the replay buffer is sorted by returns right." }, { "end": 1252.1, "start": 1249.3999999999999, "text": " So the highest return episodes are on top." }, { "end": 1261.8, "start": 1252.1, "text": " So we want to sample the highest return episodes then we want to say maybe some of them are 10 steps long maybe some of them are five steps long and so on." }, { "end": 1286.1, "start": 1261.8, "text": " So we set the horizon to be the mean of the length of these right and we set the desired return how much return should be achieved in this time to be the unit to sample from the uniform distribution between M and M plus S and M is the mean and S is the standard deviation of the selected episode." }, { "end": 1292.6, "start": 1286.1, "text": " So so what this means is is like here is a bunch of episodes from the start at the same time." }, { "end": 1305, "start": 1292.6, "text": " Here's a bunch of episodes that I ran right from here is time zero and then time goes on that I ran that had really high returns right." }, { "end": 1310.8, "start": 1305, "text": " Now I'm going to take the mean time that these episodes ran like this." }, { "end": 1321.8, "start": 1310.8, "text": " This is maybe five time steps. So in five time I want to achieve now how much reward now you look at all the rewards that were achieved." 
}, { "end": 1334.5, "start": 1321.8, "text": " This is maybe a distribution that has some mean here like so and then you say I want to achieve a reward between here and one standard deviation higher than here." }, { "end": 1353.3, "start": 1334.5, "text": " So right and this this would be the reward you want to achieve. So what you do is you kind of push your learned model to just go a bit beyond what it has seen so far is basically say look I you can do this but you can just do a bit more in the same amount of time." }, { "end": 1357.4, "start": 1353.3, "text": " Please do this and you hope the model has learned to kind of generalize to do this." }, { "end": 1365, "start": 1357.4, "text": " And if so you will execute these episodes and then these episodes will go back to the learner right." }, { "end": 1377.8000000000002, "start": 1365, "text": " I'll go back to the learner here and the learner will learn from them and hopefully then you can like generalize even more and then you can say I now know how to achieve this bit more reward." }, { "end": 1381.5, "start": 1377.8000000000002, "text": " Now I can if I run the episode I will achieve even more reward." }, { "end": 1391, "start": 1381.5, "text": " I can push the model even further right. So at eval time you can always ask the model to produce as much reward as possible in the given time." }, { "end": 1404.9, "start": 1391, "text": " And of course every episode sent back here is not only one training example as we saw but many many training examples can be derived from these models even beyond what's in what's in this paper." }, { "end": 1413.7, "start": 1404.9, "text": " All right. So I think this was a good first shot at describing this algorithm. I hope you get the gist of it." }, { "end": 1419.6000000000001, "start": 1413.7, "text": " I enjoy this a bit of a criticism for me would be it's still kind of doesn't it." }, { "end": 1422.7, "start": 1419.6000000000001, "text": " So it doesn't touch the exploration dilemma." }, { "end": 1440.7, "start": 1422.7, "text": " So it again deals with kind of incremental incrementally getting better whereas I feel this can easily get stuck in some minimum where it's not possible to do this incremental generalization of the model where you really need a new approach." }, { "end": 1449.6000000000001, "start": 1440.7, "text": " And that's why games like Montezuma's Revenge are solved using algorithms like Go Explore and not any of the classic algorithms." }, { "end": 1459.3999999999999, "start": 1449.6, "text": " That being said they have experiments where they show that especially in sparse reward environments they do better than classic or algorithms." }, { "end": 1476.3, "start": 1459.3999999999999, "text": " So if you for example here take the lunar lander where A to C beats upside down RL and I guess you didn't get Matt Ploidlip to do the upside down." }, { "end": 1483.5, "start": 1476.3, "text": " Well the in other in other environments upside down RL clearly beats the classic algorithms." }, { "end": 1492.3, "start": 1483.5, "text": " And what I like here is they took a lunar lander and which basically at every time step you get a reward in lunar lander and they hypothesized." }, { "end": 1499.3999999999999, "start": 1492.3, "text": " Okay this is really good for these classic algorithms that do reward maximization instead of kind of learning this general behavior function." 
}, { "end": 1504.8, "start": 1499.3999999999999, "text": " And what they did is they modified the game such that all the reward is given at the end of the episode." }, { "end": 1515.8999999999999, "start": 1504.8, "text": " And then you see that upside down RL will actually outperform here the classic things where it's exactly the same game you just get the reward at the end." }, { "end": 1523.5, "start": 1515.8999999999999, "text": " So upside down RL kind of learns the structure of the world learns that you get this reward at the end after such and such many time steps." }, { "end": 1529.6, "start": 1523.5, "text": " So you can it will learn please get me zero reward in 50 time steps like no problem." }, { "end": 1532.5, "start": 1529.6, "text": " But please get me a thousand rewards in a hundred time steps." }, { "end": 1536.9, "start": 1532.5, "text": " No problem. I just go to the end of the episode right." }, { "end": 1543.4, "start": 1536.9, "text": " Whereas these pure reward maximization techniques they don't they somehow have a harder time to do that." }, { "end": 1548.1, "start": 1543.4, "text": " I like this investigation. I like the thinking outside the box." }, { "end": 1552.4, "start": 1548.1, "text": " The Schmidhuber ism of the paper. It's just all great." }, { "end": 1562.7, "start": 1552.4, "text": " It's a great time to be alive and check this out and I'll see you. Bye bye." } ]
Z6ea_AbnnCc
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
NeurIPS 2019
[ "Science & Technology" ]
[ "machine learning", "conference", "ai", "neurips", "neurips2019", "canada", "research" ]
I'm at the 2019 conference on Neural Information Processing Systems in Vancouver, trying to register, but the line was just so long that I decided to bail :D
Good morning learners! We are here in beautiful Vancouver in Canada, attending the NeurIPS conference 2019, of course one of the largest conferences in machine learning of the year. There's actually been a lottery system for the tickets because so many people wanted to register. There are 8,000 people attending, I think, and it's Sunday morning, so even before the conference starts. I thought I was smart going really early to register, but today is company expo day and I didn't register for that, because, you know, usually companies will make a fair bit of fuss about their research online, so there's kind of little need to attend that in person, you can just catch up later. But everyone wants to get in on that, and it's crazy here. The line starts... so you go in here, but you actually have to go downstairs, and the line starts somewhere way back here underground, then you queue all the way up there, go over there, up the escalator, circle a bunch of times, go up some more I guess, then you maybe see people all the way over there, up until the registration desks that are finally, I guess, over there, I didn't look. But it's absolutely crazy, these conferences exploding with people from all over the planet. I don't even know what the composition is, I would be interested how many of them are students. Of course machine learning departments are probably exploding right now with people, every company wants to get in on that, and I don't know where the trend is going. That growth can't continue forever, I feel, and it's kind of questionable how long we can uphold this, how good this is. I don't know any of these things. I'll just try to get back later: going to work a bit now, get back later, get my ticket, and then I hope I can report a bit from the conference over the next few days, and I can get some good nuggets out of there. That said, I hope you're doing well and I'll see you later. Bye bye.
[ { "end": 7.24, "start": 0, "text": " Good morning learners! We are here in beautiful Vancouver in Canada and" }, { "end": 13.56, "start": 7.24, "text": " attending the NURIPS conference 2019. Of course one of the largest conferences" }, { "end": 20.04, "start": 13.56, "text": " in machine learning of the year. It's actually there's been a lottery system" }, { "end": 24.44, "start": 20.04, "text": " for the tickets because so many people wanted to register. There were 8,000" }, { "end": 29.84, "start": 24.44, "text": " people attending I think and it's Sunday morning so even before the conference" }, { "end": 34.6, "start": 29.84, "text": " starts I thought I was smart going really early to register but today is" }, { "end": 39.68, "start": 34.6, "text": " company expo day and I didn't register for that because you know usually" }, { "end": 45.72, "start": 39.68, "text": " companies will make fair bit of fuss about their research online so there's" }, { "end": 53.6, "start": 45.72, "text": " kind of little need to attend that in person you can just catch up later but" }, { "end": 58.84, "start": 53.6, "text": " everyone wants to get in on that and it's it's crazy here like the line" }, { "end": 62.96, "start": 58.84, "text": " starts so you go in here but actually have to go downstairs and the line" }, { "end": 67.52000000000001, "start": 62.96, "text": " starts somewhere like way back here underground then you go all line all" }, { "end": 71.88000000000001, "start": 67.52000000000001, "text": " the way queue all the way up there go there over there up the escalator circle" }, { "end": 76.68, "start": 71.88000000000001, "text": " a bunch of times go up some more I guess then you maybe see people all the way" }, { "end": 83.36, "start": 76.68, "text": " over there up until the registration desks that are finally I guess over" }, { "end": 88.08000000000001, "start": 83.36, "text": " there I didn't look but it's absolutely crazy these conferences exploding with" }, { "end": 91.96, "start": 88.08, "text": " people from all over the planet I don't even know what kind of the composition" }, { "end": 96.08, "start": 91.96, "text": " is I would be interested how many of them are students of course machine" }, { "end": 103.24, "start": 96.08, "text": " learning departments probably exploding right now with people every company" }, { "end": 107.2, "start": 103.24, "text": " wants to get in on that and I don't know where the trend is going that growth" }, { "end": 114.6, "start": 107.2, "text": " can't continue forever I feel and the it's it's kind of questionable how long" }, { "end": 120.75999999999999, "start": 114.6, "text": " we can uphold this how good this is I don't know any of these things I'll just" }, { "end": 125.47999999999999, "start": 120.75999999999999, "text": " try to get back later going to work a bit now get back later get my ticket and" }, { "end": 131.72, "start": 125.47999999999999, "text": " then I hope I can report a bit from the conference over the next few days I can" }, { "end": 139, "start": 131.72, "text": " get some good nuggets out of there that said I hope you're doing well and I'll" }, { "end": 144.84, "start": 139, "text": " see you later bye bye" } ]
We20YSAJZSE
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
MuZero: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
[ "Science & Technology" ]
[ "ml", "ai", "machine learning", "reinforcement learning", "deep rl", "deepmind", "google", "alphago", "alphazero", "value function", "policy", "artificial intelligence", "rl", "deep reinforcement learning", "model-free", "model-based", "environment model", "hidden representation", "latent state", "transition", "chess", "shogi", "go", "atari" ]
MuZero harnesses the power of AlphaZero, but without relying on an accurate environment model. This opens up planning-based reinforcement learning to entirely new domains, where such environment models aren't available. The difference to previous work is that, instead of learning a model predicting future observations, MuZero predicts the future observations' latent representations, and thus learns to only represent things that matter to the task! Abstract: Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games - the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled - our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules. Authors: Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy Lillicrap, David Silver https://arxiv.org/abs/1911.08265 Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we're looking at mastering Atari Go, Chess and Shogi by planning with a learned model by Julian Schrittweiser and people generally from DeepMind. So this paper is an extension to AlphaZero, the kind of famous algorithm that learned to play Go and Chess simply by playing itself and the kind of cool thing about this model is that it has a learned environment model. So what does this mean? Usually if you have a game such as chess, I believe there is a picture of chess down here, if you have a game such as chess and you want to learn to play it, you need to know the kind of the rules of chess, right? So in chess you have the rules like the pawn can move two or one, right? The bishop can move diagonally and so on. Similarly in Shogi or Go here, you know where you can place the stones and when you win everything is clearly defined. So what you can do is actually you can plan, right? You can now think of okay if I do this opening, right, my opponent could do either this or this or you know this and for each of the three moves I'll have response. So if they do, if they move this pawn, I'll go for like a gambit here and if they move this pawn then I can, you know, move on. Something like this, right? So what in a sense what you have is a tree search. So you start out with the state you're currently in, right? And then your opponent, sorry, this should be your state you're currently in, your opponent has the option of performing any one of these moves. Let's say there are three moves and then from each of these three moves you again have the option of performing any of these moves. And the good thing is in chess you know each exactly what they do. Like if I move my pawn then the new board configuration will be the pawn will no longer be here but here, right? So you know exactly what's going to happen. You can calculate that you have perfect simulator. And other domains you don't have that. For example in Atari all you have in Atari is this screen, right? Maybe you have a little submarine here, right? You have some opponents, right? The opponent, I don't know, what do your opponents look like? Are they fish? I don't even know in this game, right? And you can, I think you can shoot? There's coins to select? I don't know. Okay, in any case and sometimes you need to go up and there is like a health bar. But in essence you only have this screen here, right? You don't have more. And if you press a button you don't exactly know what's going to happen. You don't exactly know what the pixel space will look like as this shot moves forward, right? I guess you could know but you can't use that to plan because the kind of space is too big and your actions may be not clearly predictable. And when you win aren't clearly predictable and there may be randomness. So all of this stuff, usually what people do is here they do use a model-free reinforcement learning. We've had this discussion before. So this would be model-free and while chess here you'd go about model-based. Now what MuZero does is it uses a model-based planning but it learns the model. So it tries to construct a model for this here. It tries to say, okay if I have this screen A here, right? My thing is here and I press the button right then probably my submarine is going to be a bit more to the right. But it doesn't do this exactly. So this has been done before and this is what's kind of known as learning an environment model where you map current environment plus action to the next step in the environment, right? 
And this usually doesn't work too well because you're really trying to generate this entire pixel space here. What the cool thing about MuZero is it doesn't do that. It doesn't predict the next state. What it does predict is a hidden state and let's draw the hidden state as a little cloud here. It predicts a hidden state of the next step and from the hidden state it will predict things like the reward, the policy, the value and then it can use from that hidden state it'll predict the next hidden state. And from that it will again predict the reward. So the base idea is you only predict what you absolutely need to obtain the values that are important for doing reinforcement learning. You're not trying to predict the full environment. You're simply trying to predict whatever is necessary and this here is a learned quantity. Whatever is necessary to predict what your RL model is going to need. So that's the basic gist of it and we'll look at how they do it or how they describe what they're doing. So basically the picture A here is how MuZero plans. So imagine you have a configuration, a current state. This is an observation. This could be a chessboard. This could also be a position in shogi but it could also be a screen in an Atari game or a camera input of a self-driving car and so on. And the first thing it does it encodes that observation using this H here. I believe they call this a representation function. You encode that to this hidden state. Now the hidden state, this is appropriately sized, the hidden state here is supposed to capture everything you need about the state to predict the kind of RL quantities in the future. And you learn this function H which in this case of course is going to be a neural network in order to produce such a state. Now from this state you do two things. First of all you have this function F here and they call this the I don't remember but you have a function to predict the following two quantities. You predict the value function at that state and the value function simply means if you are in this state here, this is now not a true state but a hidden state, but still if you're in this state, in this hidden state that belongs to this observation, then in the future you're going to make this much reward on average with your current policy. That's the value function. So the value function basically tells you how good it is to be in a given state. And then the policy, this is a bit special, the policy is predicting how you would act in this state. Now this is a bit confusing or it was to me when I first learned it because we're going to see over here how a mu0 decides on how to act. Namely it does this entire tree search thing up to a certain depth, right? And then it creates this histogram and from that it produces the action. But in order to produce, to do this tree search, this is exactly this picture A. This is that tree search that is done. And in order to do that you need these p-values because we'll go there in a second, you need these p-values and they cannot themselves again do a tree search, right? That would be like infinite recursion. So what you need is you need kind of an estimate, right? Like if I were, and especially down, it makes more sense, if I were in that state how would I act, right? If I were to do a tree search like this. So you simply build a neural network that tells you with one evaluation without having to do the entire tree search down from here how you would act. 
This doesn't need to be a perfect approximation of how you would actually act but it needs to be good enough, right? So this simply tells you how you would act in that state. And that's important because what we do next is we use this policy to generate this action. And this is a simulated action. This isn't a real action because the real action would go here to the next actual observation. This is a simulated action saying if I'm in this hidden state, right, my policy approximately would be this thing. And so I can sample from that and say my action in that state would be this action. And so now I have a hidden state and an action and from that I can produce the next hidden state. Now of course if I were to apply the action up here to the observation, right, action one, I would get the next observation. And that is exactly how alpha zero works, right? You use your simulator, your perfect simulator, to take the current observation, the current state, with a given action that this policy gives you and you produce the next state. But we don't have a perfect simulator, right? And we don't want to learn a model that predicts the entire state. But what we want to do is we want to predict the following. If we were to take a one here, if, right, we would get an observation, can we predict the result when we would apply the function h to that, right, giving me s prime, right? This is observation prime. So this function h here, which is the function that maps from observation space to hidden space, if we were to apply this to the next hidden, to the next observation, we would obtain some hidden state for that observation. Can we predict that thing? So we need a function that maps from the hidden state given an action, right, to the next hidden state. And that's exactly what what happens down here, right? This function g here maps exactly this hidden state plus the action to the next hidden state. And also, also at the same time, it will predict a reward, right? Because in each step you might get a reward. So each transition here gives you a reward. And we're trying to predict that as well. Not that important, especially for games like chess or shogi, where there's only win or lose at the very end. But they incorporate this here to also be able to play these Atari games and like a broader range of reinforcement learning games. But in essence, that's what it is, right? We're trying to predict the next hidden state. And now we can basically recursively apply this. So from here, I have an idea of what my policy might be in that state, right? My proximate policy, my kind of mini policy that only needs one evaluation. I can sample an action from that policy. And if maybe it's action two here, and I can then predict the next hidden state that I would be in. Also the reward, right? And therefore, using this, I can do like a tree search. So I can simulate future trajectories, right? First, all of these policies, I can sample from them. I can sample from them, giving me different actions so that that will lead me down the tree different routes. So I can simulate future trajectories in this tree. And at the end, I have a pretty good idea. I can do this up to a certain depth, right? I don't have to do it until the very end, I can. And then I'll have a pretty good idea of how my immediate the immediate future looks right, which actions lead me to approximately which states and for each state, of course, especially for each bottom state here, I have an estimation of the value of that state. 
So basically, I can, the easiest thing would simply be to whatever search, how many steps is this? One, no, this is zero. One, two, three steps into the future. And for each of these states, obtain the value v here, v here, v, v, v, v, v. And then I simply pick the action up, the action up here. I'm running out of colors. And simply pick the action up here that will lead me eventually to the highest value state. So that's, we of course, we've not incorporated opponent plays here and so on. But that's the basic idea. You can do this more sophisticated tree search. And this is a topic that we might cover in a video about AlphaGo or AlphaZero. But in essence, you can do the same thing as AlphaGo or AlphaZero, except you're not working with the simulator, but you're working with a learned model on the hidden states of the true observations. So B is how you would actually act, right? So for each observation here, we'd say you'd run such a tree search, and you kind of get a histogram over visited actions. And again, we'll skip over that here. But this is part of the AlphaZero paper. And you decide on an action. And that will give you a reward and a next observation. And that's how you act. And then you train these things end to end. So you train the networks such that, of course, the reward, you know what the rewards are, right? The reward prediction of G, you know what that should be, right? Given a trajectory and action sequence, you know what the individual rewards should be. So you can train G for that. First of all, you can also train to predict the correct value functions like in classic reinforcement learning, you can do like an n-step into-the-future prediction, or you can play until the end, sample trajectories, and so on. And the policy you predict, you predict the policy, your approximate policy, to match your true actions, right? Because your true actions you've generated by doing this entire tree search thing, which is, you know, what you're actually going to do. So you're training your approximate policy predictor that you use to run the tree search to match as close as possible to your actual actions, right? In this fashion. So this policy resulting from hidden state zero should be as close as possible to the action you actually took in the observation that led to hidden state zero. Yeah, so this is how you search, act and train using MuZero. And this is pretty much it, right? The rest is experiments. The rest is simply showing that they can handle these games, they can keep the performance basically of the simulator-based AlphaZero in games. Sorry, where are the results here? Yeah, so in these games, in these left-hand games, they can keep the performance of AlphaZero, even exceed it here in Go. And remember, they don't have a simulator like AlphaZero, they have to learn this model. And in Atari, they actually out-compete the current state of the art, which is, I think, R2D2 or IMPALA. But it's, I guess, some model-free RL baseline here on Atari. So that's pretty cool. And I think that brings RL to kind of a new level with this hidden learning. And yeah, so they compare it against multiple of these, R2D2, different things. All right. Yeah, so that's that. For me, it's a cool paper. It's short. Read it if you want. 
I invite you to also look at the additional experiments where they basically ablate what they need: is the learned model really as good as or better than the real simulator? Does it take as much time? It actually takes less time for a higher Elo, which is pretty cool. How many simulations are needed? Things like this. All right, that was it. I like this paper, check it out. Bye bye.
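To make the transcript above a bit more concrete, here is a minimal Python sketch of the three learned functions it describes (a representation function h from observation to hidden state, a prediction function f from hidden state to policy and value, and a dynamics function g from hidden state plus action to next hidden state plus reward), together with a depth-limited rollout in latent space. The dimensions, the random stand-in weights, and the brute-force greedy search over action sequences are illustrative assumptions on my part; the actual MuZero agent trains these networks and plans with Monte Carlo tree search rather than enumerating paths.

```python
# Toy sketch of MuZero-style latent planning (not DeepMind's implementation).
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, HID_DIM, N_ACTIONS = 8, 16, 3  # made-up sizes for illustration

# Random matrices standing in for the trained parameters of h, g and f.
W_h = rng.normal(size=(OBS_DIM, HID_DIM))              # representation h
W_g = rng.normal(size=(HID_DIM + N_ACTIONS, HID_DIM))  # dynamics g: next state
w_r = rng.normal(size=(HID_DIM + N_ACTIONS,))          # dynamics g: reward
W_p = rng.normal(size=(HID_DIM, N_ACTIONS))            # prediction f: policy
w_v = rng.normal(size=(HID_DIM,))                      # prediction f: value

def h(observation):
    """Representation function: observation -> hidden state s_0."""
    return np.tanh(observation @ W_h)

def f(s):
    """Prediction function: hidden state -> (policy logits, value)."""
    return s @ W_p, float(s @ w_v)

def g(s, action):
    """Dynamics function: (hidden state, action) -> (next hidden state, reward)."""
    x = np.concatenate([s, np.eye(N_ACTIONS)[action]])
    return np.tanh(x @ W_g), float(x @ w_r)

def plan(observation, depth=3, discount=0.99):
    """Depth-limited search entirely in latent space: score every action
    sequence by its predicted discounted rewards plus the value at the leaf,
    and return the first action of the best sequence."""
    s0 = h(observation)
    best_return, best_first_action = -np.inf, 0
    for seq in np.ndindex(*(N_ACTIONS,) * depth):  # all 3^3 = 27 toy paths
        s, ret = s0, 0.0
        for k, a in enumerate(seq):
            s, r = g(s, a)
            ret += (discount ** k) * r
        _, leaf_value = f(s)
        ret += (discount ** depth) * leaf_value
        if ret > best_return:
            best_return, best_first_action = ret, seq[0]
    return best_first_action

print("chosen action:", plan(rng.normal(size=OBS_DIM)))
```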
[ { "end": 5.82, "start": 0, "text": " Hi there! Today we're looking at mastering Atari Go, Chess and Shogi by" }, { "end": 12.120000000000001, "start": 5.82, "text": " planning with a learned model by Julian Schrittweiser and people generally from" }, { "end": 21.32, "start": 12.120000000000001, "text": " DeepMind. So this paper is an extension to AlphaZero, the kind of famous" }, { "end": 29.400000000000002, "start": 21.32, "text": " algorithm that learned to play Go and Chess simply by playing itself and the" }, { "end": 35.48, "start": 29.4, "text": " kind of cool thing about this model is that it has a learned environment" }, { "end": 40.92, "start": 35.48, "text": " model. So what does this mean? Usually if you have a game such as chess, I believe" }, { "end": 45.9, "start": 40.92, "text": " there is a picture of chess down here, if you have a game such as chess and you" }, { "end": 50.08, "start": 45.9, "text": " want to learn to play it, you need to know the kind of the rules of chess," }, { "end": 58.16, "start": 50.08, "text": " right? So in chess you have the rules like the pawn can move two or one, right?" }, { "end": 65.11999999999999, "start": 58.16, "text": " The bishop can move diagonally and so on. Similarly in Shogi or Go here, you know" }, { "end": 70.6, "start": 65.11999999999999, "text": " where you can place the stones and when you win everything is clearly defined." }, { "end": 76.12, "start": 70.6, "text": " So what you can do is actually you can plan, right? You can now think of" }, { "end": 83.88, "start": 76.12, "text": " okay if I do this opening, right, my opponent could do either this or" }, { "end": 91.72, "start": 83.88, "text": " this or you know this and for each of the three moves I'll have response. So if" }, { "end": 98.84, "start": 91.72, "text": " they do, if they move this pawn, I'll go for like a gambit here and if they move" }, { "end": 106.39999999999999, "start": 98.84, "text": " this pawn then I can, you know, move on. Something like this, right? So what in a" }, { "end": 110.36, "start": 106.39999999999999, "text": " sense what you have is a tree search. So you start out with the state you're" }, { "end": 115.76, "start": 110.36, "text": " currently in, right? And then your opponent, sorry, this should be your" }, { "end": 120.68, "start": 115.76, "text": " state you're currently in, your opponent has the option of performing any one of" }, { "end": 125.6, "start": 120.68, "text": " these moves. Let's say there are three moves and then from each of these three" }, { "end": 131.24, "start": 125.6, "text": " moves you again have the option of performing any of these moves. And the" }, { "end": 137, "start": 131.24, "text": " good thing is in chess you know each exactly what they do. Like if I move my" }, { "end": 144.52, "start": 137, "text": " pawn then the new board configuration will be the pawn will no longer be here" }, { "end": 148.8, "start": 144.52, "text": " but here, right? So you know exactly what's going to happen. You can calculate" }, { "end": 154.56, "start": 148.8, "text": " that you have perfect simulator. And other domains you don't have that. For example" }, { "end": 162.2, "start": 154.56, "text": " in Atari all you have in Atari is this screen, right? Maybe you have a" }, { "end": 168.6, "start": 162.2, "text": " little submarine here, right? You have some opponents, right? The" }, { "end": 174.92, "start": 168.6, "text": " opponent, I don't know, what do your opponents look like? Are they fish? 
I don't even know in this" }, { "end": 181.23999999999998, "start": 174.92, "text": " game, right? And you can, I think you can shoot? There's coins to select? I don't" }, { "end": 185.6, "start": 181.23999999999998, "text": " know. Okay, in any case and sometimes you need to go up and there is like a health" }, { "end": 192.92, "start": 185.6, "text": " bar. But in essence you only have this screen here, right? You don't" }, { "end": 199.72, "start": 192.92, "text": " have more. And if you press a button you don't" }, { "end": 203.32, "start": 199.72, "text": " exactly know what's going to happen. You don't exactly know what the pixel space" }, { "end": 210, "start": 203.32, "text": " will look like as this shot moves forward, right? I guess you could know but" }, { "end": 215.76, "start": 210, "text": " you can't use that to plan because the kind of space is too big and" }, { "end": 221.32, "start": 215.76, "text": " your actions may be not clearly predictable. And when you win aren't" }, { "end": 226.08, "start": 221.32, "text": " clearly predictable and there may be randomness. So all of this stuff, usually" }, { "end": 229.6, "start": 226.08, "text": " what people do is here they do use a model-free reinforcement learning. We've" }, { "end": 237.72, "start": 229.6, "text": " had this discussion before. So this would be model-free and while chess" }, { "end": 248.16, "start": 237.72, "text": " here you'd go about model-based. Now what MuZero does is it uses a model-based" }, { "end": 254.96, "start": 248.16, "text": " planning but it learns the model. So it tries to construct a model for this here." }, { "end": 261.48, "start": 254.96, "text": " It tries to say, okay if I have this screen A here, right? My thing is here and" }, { "end": 270, "start": 261.48, "text": " I press the button right then probably my submarine is going to be a bit more to" }, { "end": 276.32, "start": 270, "text": " the right. But it doesn't do this exactly. So this has been done before and this is" }, { "end": 281.40000000000003, "start": 276.32, "text": " what's kind of known as learning an environment model where you map current" }, { "end": 288.68, "start": 281.40000000000003, "text": " environment plus action to the next step in the environment, right? And this" }, { "end": 294.44, "start": 288.68, "text": " usually doesn't work too well because you're really trying to generate this" }, { "end": 300.44, "start": 294.44, "text": " entire pixel space here. What the cool thing about MuZero is it doesn't do that." }, { "end": 306.04, "start": 300.44, "text": " It doesn't predict the next state. What it does predict is a hidden state and" }, { "end": 310.64, "start": 306.04, "text": " let's draw the hidden state as a little cloud here. It predicts a hidden" }, { "end": 315, "start": 310.64, "text": " state of the next step and from the hidden state it will predict things like" }, { "end": 323.04, "start": 315, "text": " the reward, the policy, the value and then it can use from that hidden state it'll" }, { "end": 329.2, "start": 323.04, "text": " predict the next hidden state. And from that it will again predict the" }, { "end": 334.76, "start": 329.2, "text": " reward. So the base idea is you only predict what you" }, { "end": 341.48, "start": 334.76, "text": " absolutely need to obtain the values that are important for doing reinforcement" }, { "end": 346.64000000000004, "start": 341.48, "text": " learning. You're not trying to predict the full environment. 
You're simply trying" }, { "end": 351.56, "start": 346.64000000000004, "text": " to predict whatever is necessary and this here is a learned quantity. Whatever" }, { "end": 358.04, "start": 351.56, "text": " is necessary to predict what your RL model is going to need. So" }, { "end": 367, "start": 358.04, "text": " that's the basic gist of it and we'll look at how they do it or how" }, { "end": 374.12, "start": 367, "text": " they describe what they're doing. So basically the picture A here is how MuZero" }, { "end": 380.16, "start": 374.12, "text": " plans. So imagine you have a configuration, a current state. This is an" }, { "end": 384.56, "start": 380.16, "text": " observation. This could be a chessboard. This could also be a position in" }, { "end": 389.8, "start": 384.56, "text": " shogi but it could also be a screen in an Atari game or a camera input of a" }, { "end": 394.92, "start": 389.8, "text": " self-driving car and so on. And the first thing it does it encodes that" }, { "end": 400.88, "start": 394.92, "text": " observation using this H here. I believe they call this a representation" }, { "end": 408.08000000000004, "start": 400.88, "text": " function. You encode that to this hidden state. Now the hidden state, this is" }, { "end": 416.88, "start": 408.08000000000004, "text": " appropriately sized, the hidden state here is supposed to capture everything" }, { "end": 422.56, "start": 416.88, "text": " you need about the state to predict the kind of RL quantities in the future." }, { "end": 428.2, "start": 422.56, "text": " And you learn this function H which in this case of course is going to be a" }, { "end": 434.48, "start": 428.2, "text": " neural network in order to produce such a state. Now from this state you do two" }, { "end": 440.72, "start": 434.48, "text": " things. First of all you have this function F here and they call this the" }, { "end": 446.36, "start": 440.72, "text": " I don't remember but you have a function to predict the following two quantities." }, { "end": 452.2, "start": 446.36, "text": " You predict the value function at that state and the value function simply" }, { "end": 458.15999999999997, "start": 452.2, "text": " means if you are in this state here, this is now not a true state but a" }, { "end": 463.47999999999996, "start": 458.15999999999997, "text": " hidden state, but still if you're in this state, in this hidden state that belongs" }, { "end": 471.96, "start": 463.47999999999996, "text": " to this observation, then in the future you're going to make this much reward on" }, { "end": 476.64, "start": 471.96, "text": " average with your current policy. That's the value function. So the value" }, { "end": 481.52, "start": 476.64, "text": " function basically tells you how good it is to be in a given state. And" }, { "end": 490.03999999999996, "start": 481.52, "text": " then the policy, this is a bit special, the policy is predicting how you would" }, { "end": 495.52, "start": 490.03999999999996, "text": " act in this state. Now this is a bit confusing or it was to me when I" }, { "end": 502.76, "start": 495.52, "text": " first learned it because we're going to see over here how a mu0 decides on how" }, { "end": 507.84, "start": 502.76, "text": " to act. Namely it does this entire tree search thing up to a certain depth, right?" }, { "end": 512.64, "start": 507.84, "text": " And then it creates this histogram and from that it produces the action. 
But in" }, { "end": 518.64, "start": 512.64, "text": " order to produce, to do this tree search, this is exactly this picture A. This is" }, { "end": 524.24, "start": 518.64, "text": " that tree search that is done. And in order to do that you need these p-values" }, { "end": 530.36, "start": 524.24, "text": " because we'll go there in a second, you need these p-values and they cannot" }, { "end": 535.24, "start": 530.36, "text": " themselves again do a tree search, right? That would be like infinite recursion. So" }, { "end": 542.08, "start": 535.24, "text": " what you need is you need kind of an estimate, right? Like if I were, and" }, { "end": 549.8, "start": 542.08, "text": " especially down, it makes more sense, if I were in that state how would I" }, { "end": 554.76, "start": 549.8, "text": " act, right? If I were to do a tree search like this. So you simply build a neural" }, { "end": 559.36, "start": 554.76, "text": " network that tells you with one evaluation without having to do the" }, { "end": 565.36, "start": 559.36, "text": " entire tree search down from here how you would act. This doesn't need to be a" }, { "end": 570.64, "start": 565.36, "text": " perfect approximation of how you would actually act but it needs to be good" }, { "end": 575.08, "start": 570.64, "text": " enough, right? So this simply tells you how you would act in that state. And" }, { "end": 581.5600000000001, "start": 575.08, "text": " that's important because what we do next is we use this policy to generate this" }, { "end": 586.16, "start": 581.5600000000001, "text": " action. And this is a simulated action. This isn't a real action because the" }, { "end": 590.48, "start": 586.16, "text": " real action would go here to the next actual observation. This is a simulated" }, { "end": 597.12, "start": 590.48, "text": " action saying if I'm in this hidden state, right, my policy approximately" }, { "end": 602.88, "start": 597.12, "text": " would be this thing. And so I can sample from that and say my action in that" }, { "end": 609.8, "start": 602.88, "text": " state would be this action. And so now I have a hidden state and an action and" }, { "end": 615.48, "start": 609.8, "text": " from that I can produce the next hidden state. Now of course if I were to apply" }, { "end": 620.72, "start": 615.48, "text": " the action up here to the observation, right, action one, I would get the next" }, { "end": 627, "start": 620.72, "text": " observation. And that is exactly how alpha zero works, right? You use your" }, { "end": 632.04, "start": 627, "text": " simulator, your perfect simulator, to take the current observation, the current" }, { "end": 637.9200000000001, "start": 632.04, "text": " state, with a given action that this policy gives you and you produce the" }, { "end": 641.48, "start": 637.9200000000001, "text": " next state. But we don't have a perfect simulator, right? And we don't want to" }, { "end": 646.44, "start": 641.48, "text": " learn a model that predicts the entire state. But what we want to do is we want" }, { "end": 654.04, "start": 646.44, "text": " to predict the following. If we were to take a one here, if, right, we would get" }, { "end": 663.28, "start": 654.04, "text": " an observation, can we predict the result when we would apply the function h to" }, { "end": 669.64, "start": 663.28, "text": " that, right, giving me s prime, right? This is observation prime. 
So this" }, { "end": 674.28, "start": 669.64, "text": " function h here, which is the function that maps from observation space to" }, { "end": 680.3199999999999, "start": 674.28, "text": " hidden space, if we were to apply this to the next hidden, to the next observation," }, { "end": 688.3199999999999, "start": 680.3199999999999, "text": " we would obtain some hidden state for that observation. Can we predict that" }, { "end": 694.96, "start": 688.3199999999999, "text": " thing? So we need a function that maps from the hidden state given an action," }, { "end": 701.2800000000001, "start": 694.96, "text": " right, to the next hidden state. And that's exactly what what happens down" }, { "end": 708.0400000000001, "start": 701.2800000000001, "text": " here, right? This function g here maps exactly this hidden state plus the" }, { "end": 717.84, "start": 708.0400000000001, "text": " action to the next hidden state. And also, also at the same time, it will predict a" }, { "end": 723.08, "start": 717.84, "text": " reward, right? Because in each step you might get a reward. So each transition" }, { "end": 727.32, "start": 723.08, "text": " here gives you a reward. And we're trying to predict that as well. Not that" }, { "end": 731.08, "start": 727.32, "text": " important, especially for games like chess or shogi, where there's only win" }, { "end": 735.1600000000001, "start": 731.08, "text": " or lose at the very end. But they incorporate this here to also be able to" }, { "end": 739.44, "start": 735.1600000000001, "text": " play these Atari games and like a broader range of reinforcement learning" }, { "end": 744.2, "start": 739.44, "text": " games. But in essence, that's what it is, right? We're trying to predict the next" }, { "end": 748.2800000000001, "start": 744.2, "text": " hidden state. And now we can basically recursively apply this. So from here, I" }, { "end": 754.28, "start": 748.28, "text": " have an idea of what my policy might be in that state, right? My proximate policy," }, { "end": 761.28, "start": 754.28, "text": " my kind of mini policy that only needs one evaluation. I can sample an action" }, { "end": 766.68, "start": 761.28, "text": " from that policy. And if maybe it's action two here, and I can then predict" }, { "end": 774.76, "start": 766.68, "text": " the next hidden state that I would be in. Also the reward, right? And therefore," }, { "end": 780.84, "start": 774.76, "text": " using this, I can do like a tree search. So I can simulate future trajectories," }, { "end": 787.36, "start": 780.84, "text": " right? First, all of these policies, I can sample from them. I can sample" }, { "end": 791.92, "start": 787.36, "text": " from them, giving me different actions so that that will lead me down" }, { "end": 797.4399999999999, "start": 791.92, "text": " the tree different routes. So I can simulate future trajectories in this" }, { "end": 802.36, "start": 797.4399999999999, "text": " tree. And at the end, I have a pretty good idea. I can do this up to a certain" }, { "end": 807.64, "start": 802.36, "text": " depth, right? I don't have to do it until the very end, I can. 
And then I'll have a" }, { "end": 815.2, "start": 807.64, "text": " pretty good idea of how my immediate the immediate future looks right, which" }, { "end": 820.36, "start": 815.2, "text": " actions lead me to approximately which states and for each state, of course," }, { "end": 824.2, "start": 820.36, "text": " especially for each bottom state here, I have an estimation of the value of that" }, { "end": 829.4, "start": 824.2, "text": " state. So basically, I can, the easiest thing would simply be to whatever" }, { "end": 837.68, "start": 829.4, "text": " search, how many steps is this? One, no, this is zero. One, two, three steps into" }, { "end": 844.6, "start": 837.68, "text": " the future. And for each of these states, obtain the value v here, v here, v, v, v," }, { "end": 850.4399999999999, "start": 844.6, "text": " v, v. And then I simply pick the action up, the action up here. I'm running out" }, { "end": 855.84, "start": 850.4399999999999, "text": " of colors. And simply pick the action up here that will lead me eventually to the" }, { "end": 864.24, "start": 855.84, "text": " highest value state. So that's, we of course, we've not incorporated opponent" }, { "end": 868.2800000000001, "start": 864.24, "text": " plays here and so on. But that's the basic idea. You can do this more" }, { "end": 873.36, "start": 868.2800000000001, "text": " sophisticated this tree search. And this is a topic that we might cover in a" }, { "end": 880, "start": 873.36, "text": " video about AlphaGo or AlphaZero. But in essence, you can do the same thing as" }, { "end": 885.4000000000001, "start": 880, "text": " AlphaGo or AlphaZero, except if you're not working with the simulator, but" }, { "end": 890.4, "start": 885.4, "text": " you're working with a learned model on the hidden states of the true" }, { "end": 895.9599999999999, "start": 890.4, "text": " observations. So B is how you would actually act, right? So for each" }, { "end": 901.56, "start": 895.9599999999999, "text": " observation here, we'd say you'd run such a tree search, and you kind of get a" }, { "end": 906.88, "start": 901.56, "text": " histogram over visited actions. And again, we'll skip over that here. But this," }, { "end": 912.8, "start": 906.88, "text": " this is part of the AlphaZero paper. And you decide on an action. And that will" }, { "end": 918.28, "start": 912.8, "text": " give you a reward and a next observation. And that's how you act. And then you" }, { "end": 931.04, "start": 918.28, "text": " train these things end to end. So you train the networks such that, of" }, { "end": 935.7199999999999, "start": 931.04, "text": " course, the reward, you know what the rewards are, right? The reward prediction" }, { "end": 940.24, "start": 935.7199999999999, "text": " of G, you know what that should be, right? From given a trajectory and action" }, { "end": 945.2, "start": 940.24, "text": " sequence, you know what the individual reward should be. So that's, you can train" }, { "end": 952.96, "start": 945.2, "text": " G for that. First of all, you can also train to predict the correct value" }, { "end": 957.4, "start": 952.96, "text": " functions like in classic reinforcement learning, you can do like an end step" }, { "end": 962.64, "start": 957.6, "text": " into the future prediction, or you can play until the end sample trajectories" }, { "end": 969.4, "start": 962.64, "text": " and so on. 
And the policy you predict, you, you predict the policy, your" }, { "end": 975.12, "start": 969.4, "text": " approximate policy to to match your true actions, right? Because your true" }, { "end": 981.68, "start": 975.12, "text": " actions you've generated by doing this entire tree search thing, which is, you" }, { "end": 987.0799999999999, "start": 981.68, "text": " know, the your what you're actually going to do. So you're training your" }, { "end": 993.8, "start": 987.0799999999999, "text": " approximate policy predictor that you use to run the tree search to match as" }, { "end": 1004.64, "start": 993.8, "text": " close as possible to your actual actions, right? This in this fashion. So this" }, { "end": 1011.52, "start": 1004.8, "text": " policy resulting from hidden state zero should be as close as possible to the" }, { "end": 1016.4799999999999, "start": 1011.52, "text": " action you actually took in the observation that led to hidden state zero." }, { "end": 1025.3600000000001, "start": 1016.48, "text": " Yeah, so this is how you search, search, act and train using mu zero. And this is" }, { "end": 1033.76, "start": 1025.3600000000001, "text": " pretty, this is it, right? This is the rest is experiments. The rest is simply" }, { "end": 1039.04, "start": 1033.76, "text": " showing that they can handle these games, they can keep the performance basically" }, { "end": 1046.6399999999999, "start": 1039.04, "text": " of the simulator based alpha zero in, in games. Sorry, where are the results here?" }, { "end": 1050.3999999999999, "start": 1046.6399999999999, "text": " Yeah, so in these games in these left hand games, they can keep the" }, { "end": 1057.76, "start": 1050.3999999999999, "text": " performance of alpha zero even exceeded here in go. And remember, they don't have" }, { "end": 1064.1599999999999, "start": 1057.76, "text": " a simulator like alpha zero, they have to learn this model. And in Atari, they" }, { "end": 1073.0400000000002, "start": 1064.16, "text": " actually out compete the current state of the art, which is I think, or to D two, or" }, { "end": 1080.4, "start": 1073.0400000000002, "text": " Impala. But it's it's some model, I guess some model free RL baseline here on the" }, { "end": 1087.0400000000002, "start": 1080.4, "text": " on Atari. So that's pretty cool. And I think that brings RL to kind of a new" }, { "end": 1094.56, "start": 1087.04, "text": " level with this hidden learning. And yeah, they so they compare it against against" }, { "end": 1104.96, "start": 1094.56, "text": " multiple ones are two D two different things. All right. Yeah, so that's that's" }, { "end": 1112.8, "start": 1104.96, "text": " that. For me, it's a cool paper. It's short. Read it if you if you want. I" }, { "end": 1118, "start": 1112.8, "text": " invite you to also look at the additional experiments where they basically ablate" }, { "end": 1122.56, "start": 1118, "text": " what they need is the learned model really as good or better as the real" }, { "end": 1127.28, "start": 1122.56, "text": " simulator? Does it take as much time actually takes less time, which for for" }, { "end": 1131.84, "start": 1127.28, "text": " higher elo, which is pretty cool. How many simulations are needed? Things like" }, { "end": 1143.28, "start": 1131.84, "text": " this. All right, that was it. I like this paper, check it out. Bye bye." } ]
KXEEqcwXn8w
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
A neurally plausible model learns successor representations in partially observable environments
[ "Science & Technology" ]
[ "ml", "ai", "machine learning", "artificial ingelligence", "deep learning", "reinforcement learning", "model-free", "model-based", "search", "markov", "mdp", "pomdp", "implicit", "expectation", "wake-sleep" ]
Successor representations are a mid-point between model-based and model-free reinforcement learning. This paper learns successor representation in environments where only incomplete information is available. Abstract: Animals need to devise strategies to maximize returns while interacting with their environment based on incoming noisy sensory observations. Task-relevant states, such as the agent's location within an environment or the presence of a predator, are often not directly observable but must be inferred using available sensory information. Successor representations (SR) have been proposed as a middle-ground between model-based and model-free reinforcement learning strategies, allowing for fast value computation and rapid adaptation to changes in the reward function or goal locations. Indeed, recent studies suggest that features of neural responses are consistent with the SR framework. However, it is not clear how such representations might be learned and computed in partially observed, noisy environments. Here, we introduce a neurally plausible model using distributional successor features, which builds on the distributed distributional code for the representation and computation of uncertainty, and which allows for efficient value function computation in partially observed environments via the successor representation. We show that distributional successor features can support reinforcement learning in noisy environments in which direct learning of successful policies is infeasible. Authors: Eszter Vertes, Maneesh Sahani Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Alright, hi there! Today we're looking at a neurally plausible model that learns successor representations in partially observable environments, by Eszter Vertes and Maneesh Sahani. This is a paper on a topic that has been interesting me for a while, and that's successor representations. So we'll dive into all of this. The title is fairly lengthy and complicated, but ultimately we're dealing with a setting of reinforcement learning. So if you know something about reinforcement learning, in reinforcement learning usually you have an agent, which, let's just say this is you, and there is an environment which is a big black box that you don't know anything about. This is the environment. And what the environment gives you is what's called an observation. So an observation could be anything, but in this case let's just assume you get a little picture of what's in front of you. So in front of you might be a tree, and in front of you might be a house. And then you can perform an action, and this action in this case might be to enter the house. And then the environment in the next step, it gives you back a new picture and says, ah, you're now inside the house. So here is a door that leads you to this room, and a door that leads you to that room, and there's a little table in front of you. So it's just this cycle of action, observation. And with that you're trying to collect some reward over time. Now there are different ways of achieving this reward over time. So basically the reward is going to be, for example, you could get a reward for finding the kitchen, or for going into as many rooms as possible, or you know, anything like this. So the objective is to learn what's called a policy. So which actions to take. So action one, action two, action three, given the observations, that maximize your rewards. So there are mainly two ways to go about this. There's the model-free and the model-based reinforcement learning approach. Let's split them. So in the model-free approach, what you're trying to do is you're trying to simply learn a policy, and we call this here pi of s, and s is your state. And the state, you can think of it as the observation. So this policy will simply output an action. And this is the kind of simple setup of model-free reinforcement learning. The important thing here is you're trying to learn this. Usually there are parameters theta of this policy pi. This could be a neural network and the theta are then the weights of the neural network. So you're trying to learn the neural network such that if you give it a state it just outputs the action. So you have this neural network with your state, you input the state into layer, layer, layer, layer, layer, and then it outputs one of maybe three actions. Go north, go south, go west, maybe go east. This could be four actions. You're just trying to train the neural network using backprop and the reward signal through what's called the REINFORCE trick or variants thereof. This is model-free reinforcement learning. It's very easy to implement, let's say, and it's very applicable. It will simply give you a mapping. You don't have to know anything about how the world works. It'll simply tell you at the end: if you're in this state, do that action, and the reward will be high. In contrast there is the other world. This is the model-based reinforcement learning. So in model-based reinforcement learning what you have is a model of the world. The model of the world is best described for example if you play chess. 
If you play chess, and this is a let's do a simplified chess board here, four by four, and you have a pawn right here. You have a pawn and you know if I do the action of moving the pawn forward, I know the pawn will then be in this square right here, in the next time step. I know that because I have a model of the world, I know how the world works, and I can predict basically the results of my actions. So if you have a model-based reinforcement learning setup, if you know how the world works, you can do something like a search. So given you're here in a state, you know if I do action one I go to this state, if I do action two I go to that state, and if I do action three I go to this other state. From each of the states you can then say ah but again I have three actions and I can you know go into these three states, go into these maybe here two, and maybe here I can go into these, actually let's do three as well. Then the question more becomes, can you find a path through this thing such that at the end you are in the state that you want to end up? So for example here is outside, and then here you can go to the tree, to the house, or to the field, and in the house you can go to the bedroom, the bathroom, the kitchen, and you know all of this, you have a model. So you can actually kind of compute what would happen if I do something and then search for the best path. Whereas in the model-free reinforcement learning approach, what you'd simply do is you'd say here is a state, and the state is for example I am in the house, and now give me the action that would maximize my future reward, and you're trying to learn this directly. So it's a very different style of reinforcement learning. Basically one is a pure machine learning approach, and the other one is a search problem. Now you can of course mix and match the two, like for example people in AlphaGo have done, they have a model-based reinforcement learning that also has kind of a learning machine learning elements, but in between now we have the successor features. So the successor representations, they are, if you will, they are somewhere in between the two. So they kind of trade off the advantages of model-free, where you you only have to learn a function, right, from state to something, with the advantages of model-based, the fact that you actually have a bit of an idea of how the world works, and can adjust quickly to let's say different reward structures or things like this. So what do successor representations do? Successor representations basically learn how states are connected, and this is a classic successor representation. So the successor representation M here of policy pi, the policy remember is what tells you which action you should take in a given state. You define it as a connection between state i and state j, and M of si as j means given that I am in si, so this could be the kitchen, and your goal is to find the bedroom, and if this is the kitchen, given that I am in state si, what's the probability that in the future at some point I will transition to si, right? Given that I'm in the kitchen, what's the probability that I'll end up in the bedroom at some point in the future? And this is formally expressed, this is the expectation over your policy, and it's the indicator function that the future state, sorry, this is the future state t plus k, you see k goes from zero to infinity, so for all of the future, and st is the one you're in now, so for any future state this is equal to sj. 
Now of course this makes no sense unless you kind of discount, have a discount factor here, so if you're in state, if you're in the bedroom further in the future, then this value would be lower. So this value is high if you will transition from si to sj with high probability in the near future, and this is a successor representation, right? It basically tells you if you want to go from state si to state sj, how likely is that in the near future, right? So if this number is high, you know that these two states are closely connected, that you can expect to end up in state sj somewhere down the line if you're in si now. One more representation, if you consider the vector m pi of si given all of the sj's, so I'm doing a dot here, so this is a vector, you can actually compare two states si, so if one is, if you plug in here, you plug in the kitchen, and then also you plug in the, I don't know, the garage. If they, and you'll get out two vectors, right? You get two vectors, if those vectors are very similar, then you know that if you're in the kitchen or in the garage, it doesn't matter, you're going to end up, you have a similar future trajectories basically. However, if those two vectors are far apart, you know that these two states are far apart with respect to your policy. So this is pretty cool things you can do with successor representations, and I hope this gives you kind of some insight. So another neat trick is that if you have a value function, so and the value function, in this case there's a simplified assumption, but you don't actually need it, the simplified assumption is that the reward only depends on the state you're in. Basically, it doesn't matter how you get to the state, like the actions you perform, if you're in a given state, if you're in a given room in the house, you'll get some reward. Like for example, if you find the bedroom, then you win. That's a reward that would only be characterized by the state. If that's the case, you can compute the value function of the reinforcement learning problem simply by integrating over the success representations. So for each state, you simply go over all of the possible other states, and you ask how likely am I to go to that state, and what reward will I have in that state, and that's your value function. So pretty simple. You can actually learn the successor representations by TD learning, by temporal difference learning, which is a method that's applied throughout reinforcement learning, especially in places like Q learning, and also for learning value functions. So pretty neat successor representations. This paper then goes from successor representations of individual state to successor representations over continuous space. So right now we have these states, state kitchen, you go to the bedroom, you go to somewhere, and these states were kind of discrete places. So there was a house and you have different rooms in the house, and you can go between them. Now we're dealing more with continuous states. So you can generalize these successor representations to continuous state by considering not the states themselves, but features of the state. And a feature, in this here you have to kind of imagine as binary features. And the features, let me give like some really dumb examples, but maybe it helps you. Like one feature could be the smell. Does it smell in the room? Like just binary. Does it smell or doesn't it smell? And then one feature could there be, is there sunlight? And then one feature could be, is it warm? 
And these are all binary features. So you have to build the features such that if the features are the same, then the states should be fairly close in whatever sense. So for example, if it smells but there is no sunlight, you're probably somewhere in the bathroom. Like where exactly in xy coordinates you are in the bathroom, it doesn't really matter to this as long as the features are high. And so if it smells and there is no sunlight, you're probably somewhere in the bathroom. And that makes all the states in the bathroom, all the coordinates, close together. So this is how you have to imagine these features. You can define your successor representations exactly the same over these features, except that the representation is now not from state i to state j, but from a state to a given feature. So that means if I am in state st at the current time, what is the probability that in the near future this feature will be high? So if I am right now in the or close to the bathroom, let's say, the probability that smell, oh sorry, this should be a highlight, the probability that smell is high in the future is very high, right? So this this number would be high. So exactly the same except for these continuous features now. And you can do the same thing including defining the value function as a simple linear multiplication with these features. That is an assumption under the assumption that the reward is a linear function of the features of the states, which is the analogous assumption to saying that the reward only depends on the state in the linear case, or somewhat of an analogous function, not entirely. All right, so you can also learn this by temporal difference learning exactly the same. So this is pretty cool. These are the successor representations and you can actually, if you learn them, you have kind of a model of how the world works. Not as much a model as the model based reinforcement learning where you know exactly how it works, right? Here you know exactly how the world works, you have this model. In model three, you don't know how the world works at all. You simply know, oh if I'm in this state and do this action, that that'll turn out really well. But in the successor representation framework, you have you have an idea of what states there are. We'll do the discrete case right now. So this could be kitchen, this could be outdoor, this could be bedroom. And so you have an idea what states there are and so on, and how they connect to each other. Like you say, from the kitchen I can easily go to the bedroom, but I cannot as well go to maybe the bathroom. From outdoor I can easily go to the kitchen, but I can't go to the bedroom and so on. So you have kind of an idea of how all of these states connect to each other. And that is the success representation. You can already see how that helps learning agent a lot if you introduce the successor, if you have the successor representation. Now what this this paper deals with in essence is it says, okay these successor representations are cool, but it has only so far been done in a case where you have full observability. And the full observability is the case where you kind of know what state you're in, right? You kind of know that, sorry, you are in the kitchen, you are outdoors, you are in the bedroom. That is not known. But what if you don't? And in most problems you don't. What if you just have a picture, like here, right? You just see a tree in the house, right? You don't, you kind of have to infer that you are outdoor, right? 
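The tabular successor representation and the value computation described above lend themselves to a short self-contained sketch. The five-state ring world, the random-walk policy, and the learning rate below are my own toy choices, not the paper's setup; the point is only to show the TD update for M and the value function obtained by multiplying M with a reward vector.

```python
# Toy tabular successor representation learned by TD, plus the derived values.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, gamma, alpha = 5, 0.9, 0.1

def step(s):
    """Stand-in for 'environment + policy pi': move to a random neighbour on a ring."""
    return (s + rng.choice([-1, 1])) % N_STATES

# M[i, j] should approach E[ sum_k gamma^k * 1(s_{t+k} = j) | s_t = i ].
M = np.zeros((N_STATES, N_STATES))
for episode in range(2000):
    s = rng.integers(N_STATES)
    for t in range(30):
        s_next = step(s)
        # TD target: one-hot "I am in s now" plus the discounted SR of the
        # state I actually transitioned to.
        target = np.eye(N_STATES)[s] + gamma * M[s_next]
        M[s] += alpha * (target - M[s])
        s = s_next

# If the reward depends only on the state (say only state 3 is rewarding),
# the value function is simply the SR times the reward vector.
reward = np.array([0.0, 0.0, 0.0, 1.0, 0.0])
V = M @ reward
print("learned SR row for state 0:", np.round(M[0], 2))
print("value function:            ", np.round(V, 2))
```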
And if you're here, you just get this picture of a couple of doors and a table, and you have to infer that you are now in the living room. So in essence there is an additional layer of complexity. Not only do you go from state to state to state, but you don't actually observe the states. What you observe is, from each state, what are called observations, right? So you only observe these, and you kind of have to guess what the underlying states are in order to know what you should do to get to the next state, right? You only ever observe the observations. So this here is the actual thing, this is the kitchen, and this here could be a picture of the kitchen, right? There's a counter, there's a stove, yeah. And so you get kind of what I mean. In their example they simplify this to kind of a toy data setup, where you have this environment, and this is one beautiful picture, I don't know why, oh well. You just have this one setup, and this is this box basically. This box, and it has this wall, right? And then you have an agent that is able to walk around in here with whatever policy. The policy determines how it walks around. But then what you observe is not the actual position; what you observe is, for example, for this position you observe a random point here. So they basically add noise to each state to produce the observation. And if you're in this state you will observe one of these points in this circle, right? So your trajectory, as you observe it, might look much more like, for example, from here to here to here to here. And you kind of have to guess what the underlying state is. And you see this here. This blue thing is what the agent actually does, but the gray thing is what it observes. And the observations are sometimes even outside of this boundary. And this orange thing is now the inferred thing. And that's what we actually want: to go from the observed to the inferred. And we want the inferred to be as close as possible to the true latent state. So the way they do it is they introduce this distributed distributional coding for the expectation of the features. And basically what they say is: we will build a framework where we represent the features as expectations over some distribution. And the expectation we'll call mu. And mu is simply the kind of mean of this feature under this distribution. This is very general, so let's look at how to plug this in. So what they now have to do is they have to learn these two things. First of all, if I draw this picture again, these are the underlying states and they kind of transition into each other. So this is state one, state two, state three. And with action one, action two we transition from state to state. But also there are these observations. Observation one, observation two, observation three. So the agent needs to learn two different things. First of all it needs to learn, given an observation, what state am I probably in. This is the first thing it needs to learn. And then the second thing it needs to learn is, given this state and this action, what's the next state that I will go to. And of course these things down here, they're not observed. So these things down here you can only do in distribution. So I'm going to represent this with a p here. You can only kind of do this in distribution, and the way they handle it is they always maintain the expected value of these things. And they do this in this wake-sleep algorithm. 
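Just to picture the toy setup with the noisy observations, here is a small mock-up, my own and with made-up box size, step size and noise level, of latent positions in a box and the corrupted observations the agent actually gets to see.

```python
# A small mock-up (my own, with made-up numbers) of the toy setup: the latent
# state is an x-y position in a box, and the agent only sees that position
# corrupted by Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
box_size, noise_std, n_steps = 1.0, 0.1, 50

state = np.array([0.5, 0.5])        # true latent position, never observed
latents, observations = [], []
for _ in range(n_steps):
    step = rng.normal(scale=0.05, size=2)               # some behaviour policy
    state = np.clip(state + step, 0.0, box_size)        # stay inside the box
    obs = state + rng.normal(scale=noise_std, size=2)   # noisy observation
    latents.append(state.copy())
    observations.append(obs)

# The inference problem: from the noisy trajectory (observations),
# recover something close to the true latent trajectory (latents).
```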
Alright, so this is me re-recording this part, because I did a terrible job the first time. So I want to explain this wake-sleep algorithm that computes the things that we don't know. Let me draw this actually again. So the way this algorithm does it is actually pretty cool. It has two phases, a sleep phase and a wake phase, and it alternates between the two constantly. It's kind of like expectation maximization. Ultimately what you want to learn are two different sets of parameters, W and T. Now whenever you learn T you use W, the one that you've already learned, and whenever you learn W you use the T that you've already learned. So they kind of bootstrap each other up. The two things you learn here are this FW and the T here. So T is just a matrix and F of W is a function. The function has weights W. So you see, in the sleep phase you update W and in the wake phase you update T. Now why is this called wake and sleep? It's because in the wake phase you're actually, so to say, awake and you use real observations. So in the wake phase, and I find it easier to actually start at the wake phase, you collect observations. So you let your agent go around its environment and collect a bunch of observations. You don't know what the states are, but what you do is simply collect these observations. Now it's not that important what the policy is here. So you basically follow some policy and you collect these observations. And then what you say is: okay, I have the function F of W, and remember, since we're in the wake phase we're learning T, so we assume we already have the W. In practice we start out with a random one and then kind of alternate between the two phases until both get really good. So we already have a W and we use it to update T. How do we do this? We need to understand what this function F of W does. F of W takes this mu and the current observation and produces a new mu. So what is a mu? This mu here, as we saw above, is the expectation over the features. And in essence the mu is a guess. The mu is your best guess of what the features of the state are. Or in the discrete case, you could also say a guess of what the state is. So you don't know the state, but what you want to maintain is a distribution over states. So you want to kind of maintain this distribution. But you can't properly, efficiently calculate with an entire distribution, unless you assume it's some sort of Gaussian or so. But what you can do is simply take its mean, mu, and that's your best guess for what the state is. The state could be anywhere here according to this distribution, but you simply come up with mu, which is your best guess. So the function F of W takes in the best guess of where you were up until the last step. And it also takes as an argument your current observation, and the output of F is mu t, the best guess of where you are now. It's pretty straightforward if you think about it. So for every observation you want to have kind of a guess of what your state is, and that's mu. So what F does is it takes whatever observations you had; these observations gave rise to a mu that guesses where you are. You take this mu and you take this observation, and from that you derive the next guess of where you are. You just say: I guessed I was in the kitchen before, now I moved, I observed that I moved through some sort of door and there's some sort of table. 
So given that I thought I was in the kitchen and that I observed this thing, now I'm probably in the living room. That's what FW does. So you input the guess you had from the previous observations and you input your current observation, and you get the guess of where you are next. And these are real observations. And then you simply update T. What does T do? T relates your current and your next guess, and that's important. We already said that F takes your last guess and gives you the next guess. T does kind of the same thing, but T does it without relying on an additional observation. T simply says: well, if my guess is that I am in the kitchen, then what's the probability that in the next step I'll be in the living room, without observing anything? T is simply relating states to each other, or rather, relating guesses of states to each other. So it's simply saying: under the current policy that I follow, what is the kind of distribution of going from one room to the next room? So in the wake phase you learn the T. The T simply represents how you move from state to state. So it's basically exactly this function here, except that it's not from state to state, but it relates your guesses: your mu of state 1 to the mu of state 2. And then in the sleep phase, you now assume that you have a good estimate of how the states relate to each other, and what you can then do is you can actually sample trajectories. And this is why it's called sleeping. It's kind of like dreaming. So given that you have a model T of how states transition to each other, or more precisely your guesses about states, you can now sample state trajectories. So you can dream up how you would move in an environment. And the assumption here is that you know the process by which a state gives you an observation. For example, in their experiments the state is always the x-y coordinates, and those are corrupted by Gaussian noise. There are also ways to learn this; this is what's called the observation process. But here you assume you know it. So you can sample trajectories of states and corresponding observations. Now this is not the real world, but this is using this T down here. You kind of have some sort of model; you've learned a model of how you move about the world. So you sample these trajectories, and from these trajectories you can now learn the F of W function. So you see, since you know what the state is, you can compute these features exactly. And then you can learn this F of W function that takes the guess of the last state and the current observation and gives you the guess of the next state. And there you can then use temporal difference learning, like we kind of did with the T, to learn the parameters W. So it's all very kind of convoluted, but ultimately it's a simple process. In the wake phase you go into the world and actually collect real observations. And you have a method of deriving, from these observations, the guesses about the states. So what you can do is you can learn a transition between the states. If you have a good guess of what the states are, given each observation, you can learn how to transition from one state to the next state. Except you don't do it in actual states, you do it in guesses about states. Then, once you have a model of how you move from one state to the next state, you can go and dream up such state trajectories; a schematic sketch of this alternation follows below. 
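As a very rough schematic of this wake-sleep alternation, here is a sketch in code. This is my own simplification, not the authors' implementation: F_W is reduced to a single linear map, T to a linear transition over the beliefs mu, both are trained with simple delta rules, and the sleep update regresses the belief straight onto the dreamed latent rather than using the paper's TD objective on features. The random vectors in the driver loop just stand in for observations an actual agent would collect.

```python
# A schematic sketch of the wake-sleep alternation (my own simplification,
# not the authors' code).
import numpy as np

rng = np.random.default_rng(0)
dim_mu, dim_obs, lr = 2, 2, 0.05
W = rng.normal(scale=0.1, size=(dim_mu, dim_mu + dim_obs))   # parameters of F_W
T = rng.normal(scale=0.1, size=(dim_mu, dim_mu))             # transition over beliefs

def f_w(mu_prev, obs):
    # Belief update: new guess of the latent features from old guess + observation.
    return W @ np.concatenate([mu_prev, obs])

def wake_phase(observations):
    # Use real observations; hold W fixed and update T so it predicts mu_t from mu_{t-1}.
    global T
    mu = np.zeros(dim_mu)
    for obs in observations:
        mu_next = f_w(mu, obs)
        T += lr * np.outer(mu_next - T @ mu, mu)   # simple delta rule
        mu = mu_next

def sleep_phase(n_steps, noise_std=0.1):
    # Dream: sample latent trajectories from T, generate observations with the
    # known observation process, and update W so that F_W tracks the dreamed latents.
    global W
    z = np.zeros(dim_mu)      # dreamed latent state
    mu = np.zeros(dim_mu)     # belief produced by F_W
    for _ in range(n_steps):
        z = T @ z + rng.normal(scale=0.05, size=dim_mu)
        obs = z + rng.normal(scale=noise_std, size=dim_obs)   # observation process
        x = np.concatenate([mu, obs])     # previous belief + current observation
        mu = W @ x
        W += lr * np.outer(z - mu, x)     # nudge F_W toward the dreamed latent

# Alternate the two phases, bootstrapping W and T off each other; the random
# vectors below stand in for observations collected by a real agent.
for _ in range(5):
    wake_phase([rng.normal(size=dim_obs) for _ in range(100)])
    sleep_phase(100)
```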
You can dream state trajectories, and therefore you can also dream how you would observe them. And given that, you can then learn a better function that relates your guess about a state, given the observation, to the actual features of the state, since for this particular thing you know what the state is. So this is this two-step process. Notice the cool thing: we've never actually had to learn this mu explicitly. We never had to learn how to go from observations to our guesses about states, because we can compute this recursively. You simply start out with mu0, which is a guess about the initial state, and then you go to mu1 and mu2, and you never actually have to learn that function. So that's how they learn these successor representations, and the experiments of this are fairly cool. Here is another diagram of how that looks. You have a state, this gives you an observation, and from that you derive a guess of what this state is. So you can now look at what the agent learned. The agent actually learns the dynamics of this room. It means if you're here, you probably go somewhere. There is no clear direction, but if you're close to the wall, your next states are probably going to be inwards, away from this wall. And yeah, I've already shown you this picture. So they have a last cool experiment here, where what they do is they specify a reward, and the reward is down here. And from each state you want to know: which way do I have to go to get the reward? Now if they give the agent the value of the latent state, and the latent state here is just your x-y coordinates, if they give this to the agent and let it learn the structure of the world, it will correctly conclude these are the high value states, then lower, lower, lower, lower value states, up until over here are the lowest value states, because you travel the longest to get to the reward. If you just give it the observation, the noisy observation, it will actually assign high value to states here, because of course it doesn't infer the latent state. It simply takes the observation at face value and says: well, I was here and I reached the reward pretty quickly, so it must be a good state. But in fact it wasn't here, it was here, and the added noise just corrupted the observation. So you see, it learns kind of a wrong model of the world. Whereas if you use this DDC, you see you're much closer to the true state of the world, like to the one on the left here. So on the left here you actually kind of cheat, you give it the actual state. But here you give it the observation, but tell it it's actually a noisy observation. You use what this paper proposes, and again it will learn to assign a low value to these states, because it needs to go all the way around. Even though it has supposedly seen the agent go from here to here directly, it kind of understands that that's just a noisy observation. Alright, so this was this paper. It's a very, very cool approach to reinforcement learning, I think, and there are some more experiments where you can see that this DDC actually helps. I'm excited about successor representations and how to incorporate them in reinforcement learning, because they seem like a perfect kind of middle ground between model-based and model-free RL. With that, thanks for listening, and bye bye!
[ { "end": 4.5600000000000005, "start": 0, "text": " Alright, hi there! Today we're looking at a neurally plausible model," }, { "end": 8.96, "start": 4.5600000000000005, "text": " learned successor representations in partially observable environments," }, { "end": 12.56, "start": 8.96, "text": " by Esther Vertes and Manish Sani." }, { "end": 20.080000000000002, "start": 12.56, "text": " This paper is a paper on a topic that has been interesting me for a while," }, { "end": 22.400000000000002, "start": 20.080000000000002, "text": " and that's successor representations." }, { "end": 28.8, "start": 22.400000000000002, "text": " So we'll dive into all of this. The title is fairly lengthy and complicated," }, { "end": 33.52, "start": 28.8, "text": " but ultimately we're dealing with a setting of reinforcement learning." }, { "end": 37.04, "start": 33.52, "text": " So if you know something about reinforcement learning," }, { "end": 42.08, "start": 37.04, "text": " in reinforcement learning usually you have an agent," }, { "end": 45.36, "start": 42.08, "text": " which, let's just say this is you," }, { "end": 51.519999999999996, "start": 45.36, "text": " and there is an environment which is a big black box" }, { "end": 54.56, "start": 51.519999999999996, "text": " that you don't know anything about. This is environment." }, { "end": 57.92, "start": 54.56, "text": " And what the environment gives you is what's called an observation." }, { "end": 62.160000000000004, "start": 57.92, "text": " So an observation could be anything, but in this case let's just assume" }, { "end": 67.28, "start": 62.160000000000004, "text": " you get a little picture of what's in front of you." }, { "end": 72.96000000000001, "start": 67.28, "text": " So in front of you might be a tree, and in front of you might be a house." }, { "end": 78.24000000000001, "start": 72.96000000000001, "text": " And then you can perform an action, and this action in this case might be to" }, { "end": 82.4, "start": 78.24000000000001, "text": " enter the house. And then the environment in the next step," }, { "end": 86.64, "start": 82.4, "text": " it gives you back a new picture and says, ah you're now inside the house." }, { "end": 91.36, "start": 86.64, "text": " So here is a door that leads you to this room, and the door that leads you that" }, { "end": 93.76, "start": 91.36, "text": " room, and there's a little table in front of you." }, { "end": 99.12, "start": 93.76, "text": " So it's just this cycle of action observation." }, { "end": 103.36, "start": 99.12, "text": " And with that you're trying to collect some reward" }, { "end": 107.84, "start": 103.36, "text": " over time. Now there are different ways of achieving" }, { "end": 113.28, "start": 107.84, "text": " this reward over time. So basically the reward is going to be," }, { "end": 117.52, "start": 113.28, "text": " for example, you could get a reward for finding the kitchen," }, { "end": 122.96000000000001, "start": 117.52, "text": " or for going into as many rooms as possible, or" }, { "end": 126.64, "start": 122.96000000000001, "text": " you know anything like this. So the other objective is to learn" }, { "end": 130.4, "start": 126.64, "text": " what's called a policy. So which actions to take. So action one," }, { "end": 136.16, "start": 130.4, "text": " action two, action three, given the observations that maximizes your rewards." }, { "end": 139.84, "start": 136.16, "text": " So there's mainly two ways to go about this. 
There's the model-free and the" }, { "end": 144.64000000000001, "start": 139.84, "text": " model-based reinforcement learning approach. Let's split them. So in the" }, { "end": 150.16, "start": 144.64000000000001, "text": " model-free approach, what you're trying to do" }, { "end": 154.48000000000002, "start": 150.16, "text": " is you're trying to simply learn a policy, and we call this here" }, { "end": 159.92000000000002, "start": 154.48000000000002, "text": " pi of s, and s is your state. And the state you can think of it as the" }, { "end": 164.72, "start": 159.92000000000002, "text": " observation. So in this policy we'll simply output" }, { "end": 170.4, "start": 164.72, "text": " an action. And this is the kind of the simple setup of" }, { "end": 173.04, "start": 170.4, "text": " model-free reinforcement learning. The important thing here is" }, { "end": 177.04, "start": 173.04, "text": " you're trying to learn this. Usually there's parameters theta" }, { "end": 181.04, "start": 177.04, "text": " of this policy pi. This could be a neural network and the theta" }, { "end": 184.4, "start": 181.04, "text": " are then the weights of the neural network. So you're trying to learn the" }, { "end": 188.8, "start": 184.4, "text": " neural network such that if you give it a state it just" }, { "end": 192.56, "start": 188.8, "text": " outputs the action. So you have this neural network with your state, you" }, { "end": 197.28, "start": 192.56, "text": " input the state into layer, layer, layer, layer, layer, and then it outputs one of" }, { "end": 202.88, "start": 197.28, "text": " maybe three actions. Go north, go south, go west, maybe go" }, { "end": 206.24, "start": 202.88, "text": " east. This could be four actions." }, { "end": 209.68, "start": 206.24, "text": " You're just trying to train the neural network using backprop" }, { "end": 212.48000000000002, "start": 209.68, "text": " and the reward signal through what's called the" }, { "end": 217.68, "start": 212.48000000000002, "text": " reinforce trick or variance thereof. This is model-free reinforcement learning." }, { "end": 221.92000000000002, "start": 217.68, "text": " It's very easy to implement, let's say," }, { "end": 226.79999999999998, "start": 221.92, "text": " and it's very applicable. It will simply give you a mapping." }, { "end": 230.48, "start": 226.79999999999998, "text": " You don't have to know nothing about how the world works. It'll simply" }, { "end": 233.2, "start": 230.48, "text": " tell you at the end if you're in this state" }, { "end": 236.64, "start": 233.2, "text": " do that action and the reward will be high." }, { "end": 240.79999999999998, "start": 236.64, "text": " In contrast there is the other world. This is the model-based reinforcement" }, { "end": 243.2, "start": 240.79999999999998, "text": " learning." }, { "end": 246.95999999999998, "start": 243.51999999999998, "text": " So in model-based reinforcement learning what you have is a" }, { "end": 250.79999999999998, "start": 246.95999999999998, "text": " model of the world. The model of the world" }, { "end": 254.08, "start": 250.8, "text": " is best described for example if you play chess." }, { "end": 258.88, "start": 254.08, "text": " If you play chess, and this is a let's do a simplified chess board" }, { "end": 263.28000000000003, "start": 258.88, "text": " here, four by four, and you have a pawn right here." 
}, { "end": 269.84000000000003, "start": 263.28000000000003, "text": " You have a pawn and you know if I do the action of moving the" }, { "end": 274.08000000000004, "start": 269.84000000000003, "text": " pawn forward, I know the pawn will then be in this" }, { "end": 277.92, "start": 274.08000000000004, "text": " square right here, in the next time step. I know that" }, { "end": 281.76, "start": 277.92, "text": " because I have a model of the world, I know how the world works," }, { "end": 285.52000000000004, "start": 281.76, "text": " and I can predict basically the results of my actions." }, { "end": 289.28000000000003, "start": 285.52000000000004, "text": " So if you have a model-based reinforcement learning setup," }, { "end": 292.88, "start": 289.28000000000003, "text": " if you know how the world works, you can do something like a search." }, { "end": 297.68, "start": 292.88, "text": " So given you're here in a state, you know if I do action one" }, { "end": 301.04, "start": 297.68, "text": " I go to this state, if I do action two I go to that state," }, { "end": 305.28000000000003, "start": 301.04, "text": " and if I do action three I go to this other state. From each of the states" }, { "end": 310.47999999999996, "start": 305.28, "text": " you can then say ah but again I have three actions and I can you know go" }, { "end": 314.32, "start": 310.47999999999996, "text": " into these three states, go into these maybe here two, and maybe" }, { "end": 318.15999999999997, "start": 314.32, "text": " here I can go into these, actually let's do three as well." }, { "end": 322.64, "start": 318.15999999999997, "text": " Then the question more becomes, can you find a path" }, { "end": 328.71999999999997, "start": 322.64, "text": " through this thing such that at the end you are in the state that you" }, { "end": 333.76, "start": 328.71999999999997, "text": " want to end up? So for example here is outside," }, { "end": 337.28, "start": 333.76, "text": " and then here you can go to the tree, to the house," }, { "end": 342.48, "start": 337.28, "text": " or to the field, and in the house you can go to the bedroom," }, { "end": 347.84, "start": 342.48, "text": " the bathroom, the kitchen, and you know all of this, you have a model." }, { "end": 351.03999999999996, "start": 347.84, "text": " So you can actually kind of compute what would happen if I do" }, { "end": 353.92, "start": 351.03999999999996, "text": " something and then search for the best path." }, { "end": 357.12, "start": 353.92, "text": " Whereas in the model-free reinforcement learning approach," }, { "end": 361.36, "start": 357.12, "text": " what you'd simply do is you'd say here is a state, and the state is for example" }, { "end": 367.12, "start": 361.36, "text": " I am in the house, and now give me the action that would" }, { "end": 371.2, "start": 367.12, "text": " maximize my future reward, and you're trying to learn this directly." }, { "end": 374.88, "start": 371.2, "text": " So it's a very different style of reinforcement" }, { "end": 379.2, "start": 374.88, "text": " learning. Basically one is a pure machine learning approach, and the" }, { "end": 382.64, "start": 379.2, "text": " other one is a search problem. 
Now you can of course mix and match the two," }, { "end": 386.88, "start": 382.64, "text": " like for example people in AlphaGo have done, they have a model-based" }, { "end": 391.2, "start": 386.88, "text": " reinforcement learning that also has kind of a learning machine learning" }, { "end": 395.44, "start": 391.2, "text": " elements, but in between now we have the successor" }, { "end": 400.71999999999997, "start": 395.44, "text": " features. So the successor representations, they are," }, { "end": 404.15999999999997, "start": 400.71999999999997, "text": " if you will, they are somewhere in between the two." }, { "end": 410.24, "start": 404.15999999999997, "text": " So they kind of trade off the advantages of model-free, where you" }, { "end": 414.56, "start": 410.24, "text": " you only have to learn a function, right, from state to something," }, { "end": 419.12, "start": 414.56, "text": " with the advantages of model-based, the fact that you actually have a bit of an" }, { "end": 422.56, "start": 419.12, "text": " idea of how the world works, and can adjust quickly to" }, { "end": 426.96, "start": 422.56, "text": " let's say different reward structures or things like this." }, { "end": 432.64, "start": 426.96, "text": " So what do successor representations do? Successor representations basically" }, { "end": 438.08, "start": 432.64, "text": " learn how states are connected, and this is a classic successor" }, { "end": 442.4, "start": 438.08, "text": " representation. So the successor representation M here" }, { "end": 447.44, "start": 442.4, "text": " of policy pi, the policy remember is what tells you which action you should take" }, { "end": 453.52, "start": 447.44, "text": " in a given state. You define it as a" }, { "end": 460.4, "start": 453.52, "text": " connection between state i and state j, and M of si as j means" }, { "end": 464.72, "start": 460.4, "text": " given that I am in si, so this could be the kitchen," }, { "end": 471.92, "start": 464.72, "text": " and your goal is to find the bedroom, and if this is the kitchen," }, { "end": 475.92, "start": 471.92, "text": " given that I am in state si, what's the probability" }, { "end": 479.84000000000003, "start": 475.92, "text": " that in the future at some point I will transition" }, { "end": 486.40000000000003, "start": 479.84000000000003, "text": " to si, right? Given that I'm in the kitchen, what's the probability that" }, { "end": 491.28000000000003, "start": 486.40000000000003, "text": " I'll end up in the bedroom at some point in the future?" }, { "end": 496.40000000000003, "start": 491.28000000000003, "text": " And this is formally expressed, this is the expectation over your policy," }, { "end": 503.6, "start": 496.40000000000003, "text": " and it's the indicator function that the future state," }, { "end": 509.20000000000005, "start": 503.6, "text": " sorry, this is the future state t plus k, you see k goes from zero to infinity, so" }, { "end": 513.12, "start": 509.20000000000005, "text": " for all of the future, and st is the one you're in now," }, { "end": 516.96, "start": 513.12, "text": " so for any future state this is equal to sj." }, { "end": 520.16, "start": 516.96, "text": " Now of course this makes no sense unless you kind of" }, { "end": 525.52, "start": 520.16, "text": " discount, have a discount factor here, so if you're in state, if you're in the" }, { "end": 528.88, "start": 525.52, "text": " bedroom further in the future, then this value would be lower." 
}, { "end": 534.24, "start": 528.88, "text": " So this value is high if you will transition from si to sj with high" }, { "end": 537.28, "start": 534.24, "text": " probability in the near future, and this is a" }, { "end": 541.76, "start": 537.28, "text": " successor representation, right? It basically tells you if you want to" }, { "end": 547.04, "start": 541.76, "text": " go from state si to state sj, how likely is that in the near future," }, { "end": 553.44, "start": 547.04, "text": " right? So if this number is high, you know that" }, { "end": 557.28, "start": 553.44, "text": " these two states are closely connected, that you can" }, { "end": 563.4399999999999, "start": 557.28, "text": " expect to end up in state sj somewhere down the line if you're in si now." }, { "end": 567.04, "start": 563.4399999999999, "text": " One more representation, if you consider the vector" }, { "end": 575.36, "start": 567.04, "text": " m pi of si given all of the sj's, so I'm doing a dot here, so this is a vector," }, { "end": 581.68, "start": 575.36, "text": " you can actually compare two states si, so if one is, if you plug in here," }, { "end": 586.16, "start": 581.68, "text": " you plug in the kitchen, and then also you plug in" }, { "end": 593.4399999999999, "start": 586.16, "text": " the, I don't know, the garage. If they, and you'll get out two vectors," }, { "end": 596.88, "start": 593.4399999999999, "text": " right? You get two vectors, if those vectors are very similar," }, { "end": 600.88, "start": 596.88, "text": " then you know that if you're in the kitchen or in the garage, it doesn't" }, { "end": 603.1999999999999, "start": 600.88, "text": " matter, you're going to end up, you have a" }, { "end": 608.24, "start": 603.1999999999999, "text": " similar future trajectories basically. However, if those two" }, { "end": 610.48, "start": 608.24, "text": " vectors are far apart, you know that these two" }, { "end": 613.76, "start": 610.48, "text": " states are far apart with respect to your policy." }, { "end": 618.08, "start": 613.76, "text": " So this is pretty cool things you can do with successor representations," }, { "end": 621.36, "start": 618.08, "text": " and I hope this gives you kind of some insight." }, { "end": 629.12, "start": 621.36, "text": " So another neat trick is that if you have a value function, so" }, { "end": 632.72, "start": 629.12, "text": " and the value function, in this case there's a simplified assumption, but you" }, { "end": 635.6, "start": 632.72, "text": " don't actually need it, the simplified assumption is that the" }, { "end": 638.96, "start": 635.6, "text": " reward only depends on the state you're in." }, { "end": 642, "start": 638.96, "text": " Basically, it doesn't matter how you get to the state, like the actions you" }, { "end": 645.36, "start": 642, "text": " perform, if you're in a given state, if you're in a given room in the house," }, { "end": 649.2, "start": 645.36, "text": " you'll get some reward. Like for example, if you find the bedroom," }, { "end": 652.56, "start": 649.2, "text": " then you win. That's a reward that would only be" }, { "end": 656, "start": 652.56, "text": " characterized by the state. If that's the case," }, { "end": 662.32, "start": 656, "text": " you can compute the value function of the reinforcement learning problem" }, { "end": 668.64, "start": 662.32, "text": " simply by integrating over the success representations. 
So for each" }, { "end": 674, "start": 668.64, "text": " state, you simply go over all of the possible other states, and you ask how" }, { "end": 678, "start": 674, "text": " likely am I to go to that state, and what reward will I have in that state, and" }, { "end": 682.56, "start": 678, "text": " that's your value function. So pretty simple." }, { "end": 685.6, "start": 682.56, "text": " You can actually learn the successor representations" }, { "end": 689.76, "start": 685.6, "text": " by TD learning, by temporal difference learning," }, { "end": 694.96, "start": 689.76, "text": " which is a method that's applied throughout reinforcement learning," }, { "end": 702.5600000000001, "start": 694.96, "text": " especially in places like Q learning, and also for learning value functions." }, { "end": 708, "start": 702.5600000000001, "text": " So pretty neat successor representations." }, { "end": 714.08, "start": 708.72, "text": " This paper then goes from successor representations of individual state" }, { "end": 720.64, "start": 714.08, "text": " to successor representations over continuous space. So right now we have" }, { "end": 723.9200000000001, "start": 720.64, "text": " these states, state kitchen, you go to the" }, { "end": 727.76, "start": 723.92, "text": " bedroom, you go to somewhere, and these states were kind of" }, { "end": 732.9599999999999, "start": 727.76, "text": " discrete places. So there was a house and you have different" }, { "end": 736.56, "start": 732.9599999999999, "text": " rooms in the house, and you can go between them." }, { "end": 743.1999999999999, "start": 736.56, "text": " Now we're dealing more with continuous states. So you can generalize" }, { "end": 746.88, "start": 743.1999999999999, "text": " these successor representations to continuous state by considering" }, { "end": 750.56, "start": 746.88, "text": " not the states themselves, but features of the" }, { "end": 755.92, "start": 750.56, "text": " state. And a feature, in this here you have to kind of imagine as" }, { "end": 761.8399999999999, "start": 755.92, "text": " binary features. And the features, let me give like some really dumb" }, { "end": 766.9599999999999, "start": 761.8399999999999, "text": " examples, but maybe it helps you. Like one feature could be the smell." }, { "end": 770.88, "start": 766.9599999999999, "text": " Does it smell in the room? Like just binary. Does it smell or doesn't it smell?" }, { "end": 776.7199999999999, "start": 770.88, "text": " And then one feature could there be, is there sunlight?" }, { "end": 784.1600000000001, "start": 776.72, "text": " And then one feature could be, is it warm?" }, { "end": 790.5600000000001, "start": 784.96, "text": " And these are all binary features." }, { "end": 796.5600000000001, "start": 790.5600000000001, "text": " So you have to build the features such that if the" }, { "end": 802.08, "start": 796.5600000000001, "text": " features are the same, then the states should be fairly close in" }, { "end": 808, "start": 802.08, "text": " whatever sense. So for example, if it smells but there is no" }, { "end": 812.32, "start": 808, "text": " sunlight, you're probably somewhere in the bathroom. Like where exactly in xy" }, { "end": 816.96, "start": 812.32, "text": " coordinates you are in the bathroom, it doesn't really matter to this as long" }, { "end": 821.5200000000001, "start": 816.96, "text": " as the features are high. 
And so if it smells and there is no" }, { "end": 825.9200000000001, "start": 821.5200000000001, "text": " sunlight, you're probably somewhere in the bathroom. And that makes" }, { "end": 830.88, "start": 825.9200000000001, "text": " all the states in the bathroom, all the coordinates, close together." }, { "end": 834.96, "start": 830.88, "text": " So this is how you have to imagine these features. You can define your successor" }, { "end": 839.28, "start": 834.96, "text": " representations exactly the same over these features, except that the" }, { "end": 845.12, "start": 839.28, "text": " representation is now not from state i to state j, but from a state to" }, { "end": 852.16, "start": 845.12, "text": " a given feature. So that means if I am in state st at the current time, what is" }, { "end": 858.24, "start": 852.16, "text": " the probability that in the near future this feature will be high?" }, { "end": 863.44, "start": 858.24, "text": " So if I am right now in the or close to the bathroom, let's say," }, { "end": 870.72, "start": 864.5600000000001, "text": " the probability that smell, oh sorry, this should be a highlight, the" }, { "end": 876.72, "start": 870.72, "text": " probability that smell is high in the future is very high, right? So this" }, { "end": 881.36, "start": 876.72, "text": " this number would be high. So exactly the same except for these continuous" }, { "end": 887.84, "start": 881.36, "text": " features now. And you can do the same thing including defining the value" }, { "end": 893.44, "start": 887.84, "text": " function as a simple linear multiplication with these features." }, { "end": 898, "start": 894.32, "text": " That is an assumption under the assumption that the reward is a linear" }, { "end": 902.88, "start": 898, "text": " function of the features of the states, which is the analogous assumption to" }, { "end": 907.6, "start": 902.88, "text": " saying that the reward only depends on the state in the linear case, or" }, { "end": 910.1600000000001, "start": 907.6, "text": " somewhat of an analogous function, not entirely." }, { "end": 917.0400000000001, "start": 912.96, "text": " All right, so you can also learn this by temporal difference learning exactly" }, { "end": 922.56, "start": 917.04, "text": " the same. So this is pretty cool. These are the successor representations and" }, { "end": 929.28, "start": 922.56, "text": " you can actually, if you learn them, you have kind of a model of how the world" }, { "end": 935.4399999999999, "start": 929.28, "text": " works. Not as much a model as the model based reinforcement learning where you" }, { "end": 941.04, "start": 935.4399999999999, "text": " know exactly how it works, right? Here you know exactly how the world works," }, { "end": 944.88, "start": 941.04, "text": " you have this model. In model three, you don't know how the world works at all." }, { "end": 949.28, "start": 944.88, "text": " You simply know, oh if I'm in this state and do this action, that that'll turn out" }, { "end": 953.76, "start": 949.28, "text": " really well. But in the successor representation framework, you have" }, { "end": 961.04, "start": 956.08, "text": " you have an idea of what states there are. We'll do the discrete case right now." }, { "end": 966.56, "start": 961.04, "text": " So this could be kitchen, this could be outdoor, this could be bedroom." 
}, { "end": 974.48, "start": 967.6, "text": " And so you have an idea what states there are and so on, and how they connect to" }, { "end": 979.12, "start": 974.48, "text": " each other. Like you say, from the kitchen I can easily go to the bedroom, but I" }, { "end": 986.72, "start": 979.12, "text": " cannot as well go to maybe the bathroom. From outdoor I can easily go to the" }, { "end": 991.84, "start": 986.72, "text": " kitchen, but I can't go to the bedroom and so on. So you have kind of an idea" }, { "end": 997.28, "start": 991.84, "text": " of how all of these states connect to each other. And that is the success" }, { "end": 1002.88, "start": 997.28, "text": " representation. You can already see how that helps learning agent a lot if you" }, { "end": 1008.48, "start": 1002.88, "text": " introduce the successor, if you have the successor representation. Now what this" }, { "end": 1012.96, "start": 1008.48, "text": " this paper deals with in essence is it says, okay these successor" }, { "end": 1018.4, "start": 1012.96, "text": " representations are cool, but it has only so far been done in a case where you" }, { "end": 1024.4, "start": 1018.4, "text": " have full observability. And the full observability is the case where you kind" }, { "end": 1030.64, "start": 1024.4, "text": " of know what state you're in, right? You kind of know that, sorry, you are in the" }, { "end": 1037.68, "start": 1030.64, "text": " kitchen, you are outdoors, you are in the bedroom. That is not known. But what if" }, { "end": 1042.24, "start": 1037.68, "text": " you don't? And in most problems you don't. What if you just have a picture, like" }, { "end": 1046.88, "start": 1042.24, "text": " here, right? You just see a tree in the house, right? You don't, you kind of have" }, { "end": 1052, "start": 1046.88, "text": " to infer that you are outdoor, right? And if you're here, you just get this picture" }, { "end": 1057.8400000000001, "start": 1052, "text": " of a couple of doors and a table and you have to infer that you are now in the" }, { "end": 1064.3999999999999, "start": 1057.84, "text": " living room. So in essence there is an additional layer of complexity. Not" }, { "end": 1075.04, "start": 1064.3999999999999, "text": " only do you go from state to state to state, but you don't actually" }, { "end": 1081.1999999999998, "start": 1075.04, "text": " observe the states. What you observe is from each state you observe what are" }, { "end": 1089.92, "start": 1081.2, "text": " called observations, right? So you only observe these and you have to infer what" }, { "end": 1095.28, "start": 1089.92, "text": " the, you kind of have to guess what the underlying states are in order to know" }, { "end": 1099.92, "start": 1095.28, "text": " what you should do to get to the next state, right? You only ever observe the" }, { "end": 1106.8400000000001, "start": 1099.92, "text": " observations. So this here is the actual thing, this is kitchen, and this" }, { "end": 1113.36, "start": 1106.84, "text": " here could be a picture of the kitchen, right? There's a counter, there's a stove," }, { "end": 1120.6399999999999, "start": 1113.36, "text": " yeah. And so you get kind of what I mean. In their example they" }, { "end": 1127.48, "start": 1120.6399999999999, "text": " simplify this to kind of a toy data setup where you have this environment" }, { "end": 1134.24, "start": 1127.48, "text": " and this is one beautiful picture. I don't know why. Oh well. 
Just you have" }, { "end": 1140.8, "start": 1134.24, "text": " one this setup and this is this box basically. This box and it has this wall," }, { "end": 1148.72, "start": 1140.8, "text": " right? And then you have an agent that is able to walk around in here like with" }, { "end": 1152.8, "start": 1148.72, "text": " whatever policy. The policy determines how it walks around. But then what you" }, { "end": 1157.68, "start": 1152.8, "text": " observe is not the actual position, but what you observe is for example for this" }, { "end": 1163.14, "start": 1157.68, "text": " position you observe a random point here. So they basically add noise to each" }, { "end": 1168.0400000000002, "start": 1163.14, "text": " observer, to each state. And if you're in this state you will observe one of these" }, { "end": 1174.44, "start": 1168.0400000000002, "text": " points in this circle, right? So your trajectory might look to you as you" }, { "end": 1180.1200000000001, "start": 1174.44, "text": " observe it much more, much like for example from here to here to here to" }, { "end": 1186.42, "start": 1180.1200000000001, "text": " here. And you kind of have to guess what the underlying state is. And you see" }, { "end": 1193.0800000000002, "start": 1186.42, "text": " this here. This blue thing is what the agent actually does, but the gray" }, { "end": 1198.04, "start": 1193.08, "text": " thing is what it observes. And the observations are sometimes even outside" }, { "end": 1205.24, "start": 1198.04, "text": " of this boundary. And this orange thing is now the inferred thing." }, { "end": 1212.52, "start": 1205.24, "text": " And that's what we actually want, is to go from the observed to these inferred." }, { "end": 1218.24, "start": 1212.52, "text": " And we want that the inferred is as close as possible to this true latent" }, { "end": 1224.6, "start": 1218.24, "text": " state. So the way they do it is they introduce this distributional" }, { "end": 1234, "start": 1224.6, "text": " distributed coding for the expectation of the features." }, { "end": 1242.84, "start": 1234, "text": " And basically what they say is they say we will build a framework where" }, { "end": 1251.9199999999998, "start": 1242.84, "text": " we represent the features as expectations over some distribution." }, { "end": 1260.4399999999998, "start": 1251.9199999999998, "text": " And the expectation we'll call mu. And mu is simply the kind of mean of" }, { "end": 1266.6799999999998, "start": 1260.4399999999998, "text": " this feature under this distribution. This is very general so let's" }, { "end": 1278.28, "start": 1266.68, "text": " look at how to plug this in. So what they now have to do is they" }, { "end": 1283.5600000000002, "start": 1278.28, "text": " have to learn these two things. First of all if I draw this" }, { "end": 1290.5600000000002, "start": 1283.5600000000002, "text": " picture again these are the underlying states and they kind of transition into" }, { "end": 1295.5600000000002, "start": 1290.5600000000002, "text": " each other. So this is state one, state two, state three. And with action one," }, { "end": 1299.96, "start": 1295.56, "text": " action two we transition from state to state. But also there are these" }, { "end": 1308.56, "start": 1299.96, "text": " observations. Observation one, observation two, observation three. So the agent needs" }, { "end": 1314.8799999999999, "start": 1308.56, "text": " to learn two different things. 
First of all it needs to learn, given an" }, { "end": 1321.12, "start": 1314.8799999999999, "text": " observation, what state am I probably in. This is the first thing it needs" }, { "end": 1325.6799999999998, "start": 1321.12, "text": " to learn. And then the second thing it needs to learn is given this state and" }, { "end": 1335.28, "start": 1325.6799999999998, "text": " this action what's the next state that I will go to. And of" }, { "end": 1339.76, "start": 1335.28, "text": " course these things down here they're not observed. So these things down here" }, { "end": 1345.32, "start": 1339.76, "text": " you can only do in distribution. So I'm going to represent this with a p here." }, { "end": 1349.8799999999999, "start": 1345.32, "text": " You can only kind of do this in distribution and the way they handle it" }, { "end": 1359.92, "start": 1349.88, "text": " is they always maintain the expected value of these things. And that's, they" }, { "end": 1365, "start": 1359.92, "text": " do this in this wake-sleep algorithm. Alright so this is me re-recording this" }, { "end": 1370.92, "start": 1365, "text": " part because I have done a terrible job at the first time. So I want to" }, { "end": 1376.68, "start": 1370.92, "text": " understand this wake-sleep algorithm to compute the things that we don't know." }, { "end": 1390, "start": 1376.68, "text": " Let me draw this actually again. So the way this algorithm does it is actually" }, { "end": 1396.3600000000001, "start": 1390, "text": " pretty cool. It has two phases, a sleep phase and a wake phase and it alternates" }, { "end": 1401.16, "start": 1396.3600000000001, "text": " between the two constantly. It's kind of like expectation maximization. Well" }, { "end": 1405.88, "start": 1401.16, "text": " ultimately what you want to learn are two different sets of parameters W and T." }, { "end": 1414.5200000000002, "start": 1405.88, "text": " Now you, whenever you learn T you use W, the one that you've already learned. And" }, { "end": 1419, "start": 1414.5200000000002, "text": " whenever you learn W you use the T that you've already learned. So it's kind of" }, { "end": 1426.8400000000001, "start": 1419, "text": " a bootstrapping each other up. The two functions you learn here are this FW" }, { "end": 1437.48, "start": 1426.84, "text": " and the T here. So T is just a matrix and F of W is a function. The function has" }, { "end": 1443.48, "start": 1437.48, "text": " weights W. So you see in the sleep phase you update W and in the wake" }, { "end": 1449.06, "start": 1443.48, "text": " phase you update T. Now why is this called wake and sleep? It's because in the" }, { "end": 1455.1599999999999, "start": 1449.06, "text": " wake phase you're actually so called awake and you use real observations. So" }, { "end": 1460.0400000000002, "start": 1455.16, "text": " in the wake phase, and I find it easier to start actually at the wake phase, in" }, { "end": 1465.8400000000001, "start": 1460.0400000000002, "text": " the wake phase you collect observations. So you let your agent go around its" }, { "end": 1469.88, "start": 1465.8400000000001, "text": " environment and collect a bunch of observations. You don't know what the" }, { "end": 1475.4, "start": 1469.88, "text": " states are, but what you do is simply you collect these observations. Now it's not" }, { "end": 1480.64, "start": 1475.4, "text": " that important what the policy is here. 
So you basically follow some policy and" }, { "end": 1490.6200000000001, "start": 1480.64, "text": " you collect these observations. And then what you say is, okay I have" }, { "end": 1495.48, "start": 1490.6200000000001, "text": " the function F of W and remember since we're in the wake phase we're learning" }, { "end": 1502.44, "start": 1495.48, "text": " T so we assume we already have the W. In essence in practice we start out with a" }, { "end": 1506.92, "start": 1502.44, "text": " random one and then kind of alternate between the two phases until" }, { "end": 1514.28, "start": 1506.92, "text": " both get really good. So we already have a W and we use it to update T. How" }, { "end": 1519.8400000000001, "start": 1514.28, "text": " do we do this? We need to understand what this function F of W does. F of" }, { "end": 1530.48, "start": 1519.8400000000001, "text": " W takes this mu and the current observation and produces a new mu. So" }, { "end": 1539.64, "start": 1530.48, "text": " what is a mu? This mu here, this mu here as we saw above here, the" }, { "end": 1548.1200000000001, "start": 1539.64, "text": " mu is the expectation over the features. And in essence the mu is a guess. The mu" }, { "end": 1553.56, "start": 1548.1200000000001, "text": " is your best guess of what the features of the state are. Or in the" }, { "end": 1560.76, "start": 1553.56, "text": " discrete case you could also say a guess of what the state is. So you" }, { "end": 1566.2, "start": 1560.76, "text": " don't know the state, but what you want to maintain is a distribution" }, { "end": 1570.6399999999999, "start": 1566.2, "text": " over state. So you want to kind of maintain this distribution. But you can't" }, { "end": 1575.48, "start": 1570.6399999999999, "text": " calculate, you can't properly efficiently calculate with an entire" }, { "end": 1580.56, "start": 1575.48, "text": " distribution unless you assume it's some sort of Gaussian or so. But what you can" }, { "end": 1588.6399999999999, "start": 1580.56, "text": " do is you can simply take its mean, mu, and that's your best guess" }, { "end": 1594.36, "start": 1588.6399999999999, "text": " for what the state is. The state could be anywhere here" }, { "end": 1599.56, "start": 1594.36, "text": " according to this distribution, but you simply come up with mu which is your" }, { "end": 1611.08, "start": 1599.56, "text": " best guess. So the function F of W takes in the best guess of where" }, { "end": 1617.72, "start": 1611.08, "text": " you were up until the last step. And it also takes as an argument your current" }, { "end": 1625.52, "start": 1617.72, "text": " observation and it gives you the output of F is mu t. It's the best guess" }, { "end": 1630.16, "start": 1625.52, "text": " of where you are now. It's pretty straightforward if you think" }, { "end": 1638.56, "start": 1630.16, "text": " about it. So for every observation you want to have kind of a guess of" }, { "end": 1645.04, "start": 1638.56, "text": " what your state is. And that's mu. So what F does is it" }, { "end": 1650.8799999999999, "start": 1645.04, "text": " takes whatever observations you had, these observations gave rise to a mu" }, { "end": 1655.64, "start": 1650.88, "text": " that guess where you are. You take this mu and you take this observation and" }, { "end": 1661.64, "start": 1655.64, "text": " from that you derive the next guess of where you are. 
You just say I guessed I" }, { "end": 1669.2800000000002, "start": 1661.64, "text": " was in the kitchen before, now I moved, I observed that I moved through some" }, { "end": 1674.2800000000002, "start": 1669.2800000000002, "text": " sort of door and there's some sort of table. So given that I thought I" }, { "end": 1677.8000000000002, "start": 1674.2800000000002, "text": " was in the kitchen and that I observed this thing, now I'm probably in the" }, { "end": 1687.6, "start": 1677.8, "text": " living room. That's what FW does. So you input the observations that you had" }, { "end": 1692.9199999999998, "start": 1687.6, "text": " and you input your current observation to get the guess of where you're" }, { "end": 1698.56, "start": 1692.9199999999998, "text": " next. And these are real observations. And then you simply update t. What" }, { "end": 1706.28, "start": 1698.56, "text": " does t do? t relates your current and your next guess. And that's important. We" }, { "end": 1713.56, "start": 1706.28, "text": " already said that F takes your last guess and gives you the next guess." }, { "end": 1720.56, "start": 1713.56, "text": " t does kind of the same thing, but t does it without relying on" }, { "end": 1726.8799999999999, "start": 1720.56, "text": " an additional observation. t simply says well if I am here or if my guess is that" }, { "end": 1732.52, "start": 1726.8799999999999, "text": " I am in the kitchen, then what's the probability that in the next step I'll" }, { "end": 1737.16, "start": 1732.52, "text": " be in the living room without observing anything? t is simply" }, { "end": 1743.84, "start": 1737.16, "text": " relating states to each other or relating guesses of states to each other." }, { "end": 1750.84, "start": 1743.84, "text": " So it's simply saying well under the current policy that I am," }, { "end": 1756.76, "start": 1750.84, "text": " what is the kind of distribution of going from one room to the next room?" }, { "end": 1762.8, "start": 1756.76, "text": " So in the wake phase you learn the t. The t simply represents how" }, { "end": 1767.8, "start": 1762.8, "text": " you move from state to state. So it's exactly basically this function here." }, { "end": 1773.44, "start": 1767.8, "text": " Except that it's not from state to state, but it relates your guess about your" }, { "end": 1783.16, "start": 1773.44, "text": " guess, your mu of the state 1 to the mu of the state 2. And then in the" }, { "end": 1791.24, "start": 1783.16, "text": " sleep phase, you now assume that you have a good estimate of how" }, { "end": 1795.48, "start": 1791.24, "text": " the states relate to each other. And what you can then do is you can actually" }, { "end": 1799.92, "start": 1795.48, "text": " sample trajectories. And this is why it's called sleeping. It's kind of like" }, { "end": 1806.6000000000001, "start": 1799.92, "text": " dreaming. So given that you have a model t of how states transition to each other" }, { "end": 1812.5800000000002, "start": 1806.6000000000001, "text": " or your your guesses about states more precisely, you can now sample state" }, { "end": 1817.72, "start": 1812.58, "text": " trajectories. So you can dream up how you would move in an environment." }, { "end": 1824.6799999999998, "start": 1817.72, "text": " And the assumption here is that you know the process that if you have a" }, { "end": 1829.04, "start": 1824.6799999999998, "text": " state that gives you an observation. 
For example in their experiments is always" }, { "end": 1835.36, "start": 1829.04, "text": " the state is x-y coordinates and that's corrupted by Gaussian noise. There is" }, { "end": 1840.52, "start": 1835.36, "text": " also ways to learn this transition. This is what's called the" }, { "end": 1846.32, "start": 1840.52, "text": " observation process. But you assume you know it. So you can sample" }, { "end": 1853.48, "start": 1846.32, "text": " trajectories of states and corresponding observations. Now this is" }, { "end": 1860.52, "start": 1853.48, "text": " not the real world, but this is using this t down here. You kind of know how" }, { "end": 1864.68, "start": 1860.52, "text": " or you kind of have some sort of model. You learn a model of how you" }, { "end": 1868.98, "start": 1864.68, "text": " move about the world. So you sample these trajectories and from these" }, { "end": 1874.88, "start": 1868.98, "text": " trajectories you can now learn the F of W function. So you see since you know" }, { "end": 1881.52, "start": 1874.88, "text": " what the state is, you can compute these features exactly. And then you" }, { "end": 1888.96, "start": 1881.52, "text": " can learn this F of W function that gives you a guess of the" }, { "end": 1894.78, "start": 1888.96, "text": " last state and the current observation and gives you the next the guess of the" }, { "end": 1902.94, "start": 1894.78, "text": " next state. And that you can then use temporal difference learning. This is" }, { "end": 1907.8, "start": 1902.94, "text": " always here. Also with the t here we have temporal difference kind of a" }, { "end": 1917.76, "start": 1907.8, "text": " temporal difference learning to learn the parameters W. So it's very kind of" }, { "end": 1925.36, "start": 1917.76, "text": " convoluted, but ultimately it's a simple process. In the wake phase you go into" }, { "end": 1930.76, "start": 1925.36, "text": " the world and actually collect real observations. And you have a method" }, { "end": 1939.64, "start": 1930.76, "text": " of deriving from these observations, deriving the guesses about the states." }, { "end": 1945.72, "start": 1939.64, "text": " So what you can do is you can learn a transition between the states. If" }, { "end": 1950.72, "start": 1945.72, "text": " you have a good guess of what the states are given each observation you can learn" }, { "end": 1955.6000000000001, "start": 1950.72, "text": " how to transition from one state to the next state. Except you don't do it in" }, { "end": 1961.4, "start": 1955.6000000000001, "text": " actual states, you do it in guesses about states. Then once you have a model of how" }, { "end": 1967.56, "start": 1961.4, "text": " you move from one state to the next state you can go and dream up such state" }, { "end": 1973.6200000000001, "start": 1967.56, "text": " trajectories. You can dream state trajectories and therefore also you can" }, { "end": 1978.7399999999998, "start": 1973.62, "text": " dream how you would observe them. And given that you can learn then a better" }, { "end": 1985.32, "start": 1978.7399999999998, "text": " function that relates your guess about a state given the observation" }, { "end": 1990.76, "start": 1985.32, "text": " to the actual features of the state. Since for this particular thing you know" }, { "end": 2000.12, "start": 1990.76, "text": " what the state is. So this is this two-step process. Notice the cool thing." 
}, { "end": 2007.1999999999998, "start": 2000.12, "text": " We've never actually had to learn this mu explicitly. We never had to learn how" }, { "end": 2013.84, "start": 2007.1999999999998, "text": " to go from observations to your guesses about states because we can compute this" }, { "end": 2019.6, "start": 2013.84, "text": " recursively. So you simply start out with mu0 which is a guess about the" }, { "end": 2026.6, "start": 2019.6, "text": " initial state and then you go to mu1 and mu2 and you never actually have to" }, { "end": 2032, "start": 2026.6, "text": " learn that function. So that's how they" }, { "end": 2037.3999999999999, "start": 2032, "text": " learn these success representations and the experiments of this are" }, { "end": 2042.9599999999998, "start": 2037.3999999999999, "text": " fairly cool. Here is another diagram of how that looks like. You have a state" }, { "end": 2046.7199999999998, "start": 2042.9599999999998, "text": " this gives you an observation and from that you derive a guess of what this" }, { "end": 2052.88, "start": 2046.7199999999998, "text": " state is. So you can now look at what the agent learned. The agent actually" }, { "end": 2060.44, "start": 2052.88, "text": " learns dynamics of this room. It means if you're here you probably go somewhere." }, { "end": 2064.92, "start": 2060.44, "text": " There is no clear direction but if you're close to the wall your next" }, { "end": 2070.88, "start": 2064.92, "text": " states are probably going to be inwards of this wall. And yeah I've" }, { "end": 2078.76, "start": 2070.88, "text": " already shown you this picture. So they have a last cool experiment here where" }, { "end": 2085.76, "start": 2078.76, "text": " what they do is they specify a reward and the reward is down here. And from each" }, { "end": 2091.4, "start": 2085.76, "text": " state you want to know which way do I have to go to get the reward." }, { "end": 2098.48, "start": 2091.4, "text": " Now if they give the agent the value of the latent state and the latent state" }, { "end": 2102.6000000000004, "start": 2098.48, "text": " here are just your x y coordinates. If they give this to the agent and they let" }, { "end": 2106.76, "start": 2102.6000000000004, "text": " it run, they let it learn the structure of the world, it will correctly conclude" }, { "end": 2111.5600000000004, "start": 2106.76, "text": " these are the high value states, lower, lower, lower, lower, lower" }, { "end": 2116.6400000000003, "start": 2111.5600000000004, "text": " value states. Up until over here are the most low value states because you" }, { "end": 2124.84, "start": 2116.6400000000003, "text": " travel the longest to go to the reward. If you just give it the observation, the" }, { "end": 2129.6400000000003, "start": 2124.84, "text": " noisy observation, it will actually assign high value to states here." }, { "end": 2135.5200000000004, "start": 2129.6400000000003, "text": " Because of course it doesn't infer the latent state. It simply takes the" }, { "end": 2140, "start": 2135.52, "text": " observation as the phase value says. Well I was here and I reached here pretty" }, { "end": 2145.84, "start": 2140, "text": " quickly so it must be a good state. But in fact it wasn't here, it was here and" }, { "end": 2151.12, "start": 2145.84, "text": " the added noise would just corrupt the observation. So you see it learns kind of" }, { "end": 2158.6, "start": 2151.12, "text": " a wrong model of the world. 
Whereas if you use this DDC you see, sorry about" }, { "end": 2164.24, "start": 2158.6, "text": " that, if you use this DDC you see you're much closer to the true state of the" }, { "end": 2171, "start": 2164.24, "text": " world, like to the one on the left here. So on the left here you" }, { "end": 2175.2799999999997, "start": 2171, "text": " actually kind of cheat, you give it the actual state. But here you give it" }, { "end": 2179.3599999999997, "start": 2175.2799999999997, "text": " the observation but tell it it's actually a noisy observation. You use" }, { "end": 2183.68, "start": 2179.3599999999997, "text": " what this paper proposes and again it will learn to assign a low value to" }, { "end": 2188, "start": 2183.68, "text": " these states because it needs to go all the way around. Even though it has" }, { "end": 2193.9599999999996, "start": 2188, "text": " supposedly seen the agent go from here to here directly, but it kind of" }, { "end": 2199.32, "start": 2193.96, "text": " understands that it's just a noisy observation. Alright so this was this" }, { "end": 2204.2400000000002, "start": 2199.32, "text": " from this paper. It's a very very cool approach I think to reinforcement" }, { "end": 2207.16, "start": 2204.2400000000002, "text": " learning and there's some more experiments where you can see that this" }, { "end": 2212.7200000000003, "start": 2207.16, "text": " DDC actually helps. I'm excited about successor representations and how to" }, { "end": 2217.36, "start": 2212.7200000000003, "text": " incorporate them in reinforcement learning because it seems a perfect kind" }, { "end": 2222.88, "start": 2217.36, "text": " of middle ground between model-based and model-free RL. With that" }, { "end": 2227, "start": 2222.88, "text": " thanks for listening and bye bye!" } ]
Xc9Rkbg6IZA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
SinGAN: Learning a Generative Model from a Single Natural Image
[ "Science & Technology" ]
[ "ml", "ai", "machine learning", "artificial ingelligence", "gan", "generative", "image processing", "deep learning", "image editing", "deep dream", "style transfer", "convolutional neural networks", "generative adversarial networks", "photoshop" ]
With just a single image as an input, this algorithm learns a generative model that matches the input image's patch distribution at multiple scales and resolutions. This enables sampling of extremely realistic looking variations on the original image and much more. Abstract: We introduce SinGAN, an unconditional generative model that can be learned from a single natural image. Our model is trained to capture the internal distribution of patches within the image, and is then able to generate high quality, diverse samples that carry the same visual content as the image. SinGAN contains a pyramid of fully convolutional GANs, each responsible for learning the patch distribution at a different scale of the image. This allows generating new samples of arbitrary size and aspect ratio, that have significant variability, yet maintain both the global structure and the fine textures of the training image. In contrast to previous single image GAN schemes, our approach is not limited to texture images, and is not conditional (i.e. it generates samples from noise). User studies confirm that the generated samples are commonly confused to be real images. We illustrate the utility of SinGAN in a wide range of image manipulation tasks. Authors: Tamar Rott Shaham, Tali Dekel, Tomer Michaeli https://arxiv.org/abs/1905.01164 https://github.com/tamarott/SinGAN Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Hi there! Today we'll look at SinGAN: Learning a Generative Model from a Single Natural Image by Tamar Rott Shaham, Tali Dekel and Tomer Michaeli. So this paper, as the title says, deals with learning a generative model from just one image. And this needs to be stressed, because most generative models, even if they produce single-image samples, are trained on a large image database beforehand to learn what an image is. But this algorithm really starts out clean-slate: it starts with nothing, you give it this one single training image, and from that it can then generate all of these things, without ever having seen any other image during training. The second row is simply a second example where you again start clean-slate, input this image and then produce these. And you can see there's quite a bit of variety in the samples you produce from this one image. So basically the task is: given just one image, learn something about its distribution. And this paper specifically deals with patch distributions at different scales. At a coarse scale that could mean learning the distribution of grass versus sky here, or learning about the individual birds and so on. And then at finer scales it means learning how the border of this grass looks. So the generative model learns that there's always grass at the bottom, since there's just the one image at the largest scale, but that at finer scales the border sometimes looks like a sharp corner and sometimes is relatively flat, like here. So it can vary those things and make the border different. The same with the birds: it learns how the individual birds look and how they're distributed, and therefore it can change that. You see there's quite a bit of variety here. You can also change the aspect ratio, and you can actually do much more, much weirder things with it. For example, here are some applications. First there is paint-to-image. These are different tasks here; the top row is always the training image, the single image you give the algorithm. Then you have a row of inputs, and then this is what the algorithm outputs. So in paint-to-image you input a training image and you input a rough painting, which you can do in MS Paint or something, of the way you want the image to look. What you want the algorithm to do is take the style of this image and put it into the form of that image, and it produces this. Looks pretty good. In editing you can tell the algorithm, alright, I want this tower to go lower down, I want this house to be wider. So you'll get an image like this, and you can see there are clear contours here and here that are not nice, and also the house is pixel-stretched and so on. This generative algorithm can produce this image from it, which looks much better around the borders and fills in missing windows to match, of course, the patch statistics that it sees in the top image. You always have to remember that all this algorithm ever sees is the topmost image to learn from. Harmonization is a task where you have an input image and you copy-paste some object into it, and what the algorithm does is adjust the patch statistics of that object to the surrounding image. And super resolution: finally, finally we get what every single action movie, or the NSA, can do. It's like, ah, here is the security camera footage. Zoom in, enhance.
Yeah, so I doubt that, you know, hidden, pixelated number plates all of a sudden become readable and identifiable, but still, this is very cool. And lastly you can do animation from this, as you can guess. All right, let's look at how they do all of this. All of it is the same model, which can be made to do these different things through various kinds of probing. At its essence it's a multi-scale GAN: you train a series of generators and a series of discriminators, and you always train them one by one. First you train the lowest resolution, then you keep it fixed and train the next resolution, and so on until you're at the highest resolution. So at the bottom layer we simply feed noise into the generator of a GAN and the generator generates an image. Now you take this image, and you take a downsampled version of your training image — remember, you just have one training image — and you let the discriminator decide which one is real and which one is fake, and you train the generator to fool the discriminator as much as possible. Now if you were to do this with the entire image, the generator would of course simply learn to reproduce the original image, so that's no good. What this paper does instead is that the discriminator doesn't work on the entire image but just on patches of it, so that the model basically can't memorize the entire image. The discriminator looks at these overlapping patches — you can imagine something like this — and tries to decide for each one: is this patch real or is this patch fake? So the generator produces the entire image, but the discriminator only ever sees the image in overlapping patches. And that's what makes this paper work; otherwise the model would just memorize the single training image, since you only have one, and you need some variety. This is at the lowest scale. Remember you input the noise, and the lowest scale in this example is, say, 25 by 25 pixels. You scale your original image down to 25 by 25 as well and then you let the discriminator decide. Once you've trained this generator to make very good 25 by 25 pixel images that fool the discriminator in this patch-wise way, you keep it fixed. For the next stage you always go through this first layer again. So forget this discriminator now — we've trained this stage, keep the generator fixed. Input noise, take whatever the generator produces, and upscale it, for example by multiplying each side by two, to 50 by 50 pixels. Input this, together with some new noise, into the next-stage generator. Then the same as before: this generator produces an image, you scale down your original image, now to 50 by 50 pixels, and you let the discriminator decide again, in patches. Since the discriminator's patches are always the same size but we scale the image down less and less, the effective patch size of the discriminator relative to the image becomes smaller and smaller. Now this discriminator only sees the image in patches like so, and the generated image that comes in here it also sees in these patches, and it tries to decide whether these patches come from real or from fake images.
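Just to make that patch idea concrete: a discriminator that only judges patches is simply a fully convolutional network with a limited receptive field, so its output is a whole map of real/fake scores, one per overlapping patch. Here's a minimal sketch of what that could look like — I'm assuming PyTorch, and the channel counts, normalization and layer sizes are just illustrative, not the paper's exact configuration.

```python
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Fully convolutional discriminator: instead of one real/fake score for
    the whole image, it outputs a grid of scores, one per overlapping
    receptive-field patch (here roughly an 11x11 patch per output score).
    Layer sizes are illustrative, not the paper's exact configuration."""
    def __init__(self, channels=32):
        super().__init__()
        layers, in_ch = [], 3
        for _ in range(4):                                   # stack of small convs -> limited receptive field
            layers += [nn.Conv2d(in_ch, channels, 3, padding=1),
                       nn.BatchNorm2d(channels),
                       nn.LeakyReLU(0.2)]
            in_ch = channels
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]     # one real/fake score per spatial location
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # x: (batch, 3, H, W) -> (batch, 1, H, W) map of per-patch scores
        return self.net(x)
```

Since the receptive field of such a network stays fixed while the real image is downsampled less and less at the higher stages, the same architecture automatically judges relatively smaller and smaller patches as you go up the pyramid.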
You can see that the lowest layer here, this layer, is trained to kind of get the coarse-grained structure of the image. The discriminator will kind of see very large patches. So the generator must match the kind of large-scale structure. These patches won't be very very high resolution because we downscaled the image, but they will be large across the image. So the generator must match the coarse low resolution stuff in the image. But as you go up the layers, up and up the layers, your discriminator sees less and less of the picture at once. It sees less and less of the picture at once. So this discriminator here in the topmost layer can only concentrate on very small patches and therefore this generator will only have to produce things that look real at a very very small scale. So in essence you have this series of generators trained that each one is tasked with basically modeling details at a finer and finer scale until you come to the last final scale. But then each input of each one is the output of the last one. So basically you take whatever the last one has produced and the last one is really good at doing coarser grain things and you add to it your details of this level. And this will in the end give you a very realistic image that matches at every level of resolution, matches the kind of statistics, the patch statistics of this real image. So that's the whole point of this thing. To have this series of generators one after the other, each one adds their own details at its own scale. And this works super well apparently. So each generator is just built like this. It takes some noise and the image of the lower scale, it adds them, sorry for these artifacts, it puts it through five convolutional layers and then simply combines it with the input. And this will produce this image at this scale. That's each layer, it's just five conv layers. And since they're fully convolutional you can actually change the aspect ratio at inference time, you can change the resolution and so on. It seems pretty neat. Of course from experience I can tell you that this probably didn't work at the first try and there is a lot of work even though it seems pretty easy. Keep that in mind. So for training this there are actually two different losses. First of all you have what's called the adversarial loss. And the adversarial loss is your classic GAN loss, where the generator tries to fool the discriminator and the discriminator tries to catch the generator. But then also you have a reconstruction loss. And the reconstruction loss specifically deals at each layer. At each layer you train the generator to reconstruct the original image when you put in a zero noise, except at the lowest layer. But essentially what you want to do is you want to say well when I don't input any noise then please reconstruct the original image. And that seems to be important for the setup to include this noise so that the generative model is basically able to reconstruct the original image as a whole. So these two losses are combined to form the training objective. And again this is not trained on data set. It is trained on a single image. And the productions are pretty cool. So again here are more samples from just the single training images at the left side. And then you have random samples from the single image. You can do things like super resolution, where this picture has been super resoluted to that picture. And I like that they investigate the effects of kind of their setup. 
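Before going through those ablation effects, a quick aside to make the per-scale generator and the two losses concrete in code. Again, this is just my rough reconstruction in PyTorch: the residual structure, the five conv layers and the adversarial-plus-reconstruction objective follow the description above, but the channel counts, the activation choices and the exact adversarial term (I believe the paper uses a WGAN-GP style loss; here it's just a generic critic score) are placeholders.

```python
import torch.nn as nn
import torch.nn.functional as F

class ScaleGenerator(nn.Module):
    """One stage of the pyramid: adds detail at its own scale on top of the
    upsampled output of the previous stage. Residual formulation: the conv
    stack only has to produce the missing detail. Sizes are illustrative."""
    def __init__(self, channels=32):
        super().__init__()
        body, in_ch = [], 3
        for _ in range(4):                                    # four conv blocks ...
            body += [nn.Conv2d(in_ch, channels, 3, padding=1),
                     nn.BatchNorm2d(channels),
                     nn.LeakyReLU(0.2)]
            in_ch = channels
        body += [nn.Conv2d(channels, 3, 3, padding=1), nn.Tanh()]  # ... plus an output conv: five in total
        self.body = nn.Sequential(*body)

    def forward(self, prev_upsampled, noise):
        x = prev_upsampled + noise            # inject fresh noise at this scale
        return prev_upsampled + self.body(x)  # skip connection back to the input image

def generator_loss(disc, fake, fake_rec, real_downsampled, alpha=10.0):
    """Adversarial term (fool the patch discriminator) plus reconstruction term:
    with the fixed 'zero' noise the stage should reproduce the downsampled original."""
    adv = -disc(fake).mean()                  # generic critic score, not the paper's exact loss
    rec = F.mse_loss(fake_rec, real_downsampled)
    return adv + alpha * rec
```

At inference you would then just chain these stages: sample noise at the bottom, upsample each output, and feed it into the next stage.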
So they ask, okay, what happens if we have just two different scales in this multi-scale setup? Then you see the patch statistics will be very fine-grained and won't match any sort of coarse-grained structure. And the more scales you have, the better, basically — the more different scales you capture. Even more interesting: in this stack of generators G, G, G, where you scale up, scale up, scale up and so on, you could choose not to start at the bottom. You say, okay, scrap these lower layers; instead of feeding in the output of the lower layer, we take the original image, scale it down, and input that here. So basically you start from, let's say, the ground truth, and that effect is shown here. If you start at the lowest layer, in this particular example, you see that sometimes weird things happen. But you can instead start at, say, an intermediate layer with the downscaled original image, and then, because you keep the coarse-grained structure the same, the variety you get comes only from the finer-grained scales — you've eliminated those lower layers and replaced them with the original image at that scale, so only the finer-grained patch statistics are varied. For example, as you can see here, the zebra samples now differ only in how exactly their stripes are drawn. This seems pretty cool: you have a handle on how fine-grained you want your details or your changes to be. They do a bunch more experiments where you can do a lot of playful things with this model. There is code available; for example here you can see editing again, where they also compare with content-aware move, which I think is implemented in Photoshop, and paint harmonization, as we saw before. All of these things are very playful and very cool, and I encourage you to check out this paper and the code — it seems pretty accessible. I have a remark though: this again is learned from only a single image, and that's the cool part, but it should be possible to combine this with some sort of approach over a whole data set. If I have a model that is really good at a single image, at producing something that looks like that single image, I should be able to combine it with a model that has been learned from a database. It's kind of like a Bayesian approach: say I want to produce the best image i given the input image j, so I want to maximize p(i | j). By Bayes' rule that's proportional to p(j | i) times p(i), and it seems that this paper is dealing mostly with maximizing the likelihood term, while you could probably combine it with some sort of prior p(i) over natural images and come up with an even better model. Of course then you'd need an actual database of images, a training procedure, and a way to combine these two models, so maybe that's a bit of a challenge. Anyway, cool paper, check it out. Bye bye.
[ { "end": 6, "start": 0, "text": " Hi there! Today we'll look at SINGAN, Learning a Generative Model from a Single" }, { "end": 13.96, "start": 6, "text": " Natural Image by Tamar Rott-Schaum, Tali Dekal and Tomer Mikhaili. So this paper," }, { "end": 19.04, "start": 13.96, "text": " as it says, it's dealing with learning a generative model from just one image. And" }, { "end": 22.92, "start": 19.04, "text": " this kind of needs to be stressed because most generative models, even if" }, { "end": 27.28, "start": 22.92, "text": " they produce single image samples, they're kind of trained on a large image" }, { "end": 32.68, "start": 27.28, "text": " database beforehand to kind of learn what an image is. But this" }, { "end": 38.2, "start": 32.68, "text": " algorithm really starts out clean-slate, right? The algorithm starts out with nothing" }, { "end": 44.040000000000006, "start": 38.2, "text": " and then you give it this one single training image. And from that it can then" }, { "end": 49.44, "start": 44.040000000000006, "text": " generate all of these things, without ever having seen any other images" }, { "end": 55.120000000000005, "start": 49.44, "text": " during training. And the second row is simply a second example where you start" }, { "end": 61.519999999999996, "start": 55.12, "text": " clean-slate, input this image and then produce these. And you can see there's" }, { "end": 65.16, "start": 61.519999999999996, "text": " quite a bit of variety in the samples you produce from this image. So basically" }, { "end": 71, "start": 65.16, "text": " the task is, if you're just given one image, learn something about the" }, { "end": 75.8, "start": 71, "text": " distribution. And this paper specifically deals with patch distributions at" }, { "end": 81.2, "start": 75.8, "text": " different scales. So this could be learn about the distribution of these" }, { "end": 90.60000000000001, "start": 81.2, "text": " grass to sky here. So learn about the individual birds and so on. And then at" }, { "end": 97.28, "start": 90.60000000000001, "text": " lower scales learn about how the border of this grass looks. So the" }, { "end": 102.24000000000001, "start": 97.28, "text": " generative model learns that there's always kind of grass at the" }, { "end": 107.36, "start": 102.24000000000001, "text": " bottom, where there's just one image at the largest scale. But then at lower" }, { "end": 114, "start": 107.36, "text": " scales sometimes the border looks like a sharp corner and sometimes the" }, { "end": 119.88, "start": 114, "text": " border is relatively flat, like here. So it can vary up those things and it can" }, { "end": 125.8, "start": 119.88, "text": " make the border different. Also the birds, it kind of learns how" }, { "end": 130.2, "start": 125.8, "text": " the individual birds look and how they're distributed and therefore it" }, { "end": 135.16, "start": 130.2, "text": " can change that. You see there's quite a bit of variety here. You can also change" }, { "end": 139.88, "start": 135.16, "text": " the aspect ratio and you can actually do much more, much weirder things with it." }, { "end": 146.12, "start": 139.88, "text": " For example, here are some examples of applications. First there is paint to" }, { "end": 151, "start": 146.12, "text": " image. So these are different tasks here. So the top row is always the training" }, { "end": 155.96, "start": 151, "text": " image. This is the single image you give the algorithm. 
And then you have a row of" }, { "end": 160.56, "start": 155.96, "text": " input and then this is what the algorithm outputs. So in paint to image" }, { "end": 167.08, "start": 160.56, "text": " you input a training image and you input a, you can do this in MS Paint or" }, { "end": 173.24, "start": 167.08, "text": " something, kind of the way you want the image to look. So what you want" }, { "end": 178.32, "start": 173.24, "text": " the algorithm to do is take the style of this" }, { "end": 184.48000000000002, "start": 178.32, "text": " image and put it into the form of that image and it produces this. Looks" }, { "end": 192.88, "start": 184.48, "text": " pretty good. In editing you can tell the algorithm, alright I want this, I want" }, { "end": 199.17999999999998, "start": 192.88, "text": " this tower to go lower down, right? I want this house to be more wide. So you'll get" }, { "end": 204.35999999999999, "start": 199.17999999999998, "text": " an image like this and you can see there are clear kind of contours here and here" }, { "end": 210.28, "start": 204.35999999999999, "text": " that are not nice and also the house is, you know, pixel stretched and so on. So" }, { "end": 216.16, "start": 210.28, "text": " this algorithm, this generative algorithm, can produce this image from it" }, { "end": 220.52, "start": 216.16, "text": " which looks much better here around the borders and kind of fills in missing" }, { "end": 227.28, "start": 220.52, "text": " windows to match of course the patch statistics that it sees in this top" }, { "end": 232.36, "start": 227.28, "text": " image, right? You always have to think that all this algorithm sees is the" }, { "end": 237.52, "start": 232.36, "text": " topmost image to learn from. Harmonization is a task where you have" }, { "end": 243.76000000000002, "start": 237.52, "text": " an input image and then you like copy paste some object in it and what it does" }, { "end": 248.4, "start": 243.76000000000002, "text": " is it will kind of adjust the patch statistics of that object to the" }, { "end": 255.48000000000002, "start": 248.4, "text": " surrounding image. And super resolution, finally, finally we get what every single" }, { "end": 262.24, "start": 255.48000000000002, "text": " action movie, just the NSA, can do. It's like, ah here is the security camera" }, { "end": 272.56, "start": 262.24, "text": " footage. Zoom in, enhance. Yeah, so I doubt that, you know, hidden" }, { "end": 276.12, "start": 272.56, "text": " number plates here, pixel-ish number plates, all of a sudden can become" }, { "end": 283.2, "start": 276.12, "text": " readable and identifiable but still this is very cool. And lastly you can do" }, { "end": 292.92, "start": 283.2, "text": " animation from this, as you can guess, I guess. It's not a movie." }, { "end": 297.48, "start": 292.92, "text": " All right, let's look at how they do all of this kind of stuff. All of this is the" }, { "end": 301.8, "start": 297.48, "text": " same model that can be tasked to do these different things through various" }, { "end": 309, "start": 301.8, "text": " probing. At its essence it's this multi-scale GAN and the GAN is trained" }, { "end": 314.76, "start": 309, "text": " to have a series of generators and a series of discriminators and you always" }, { "end": 320.32, "start": 314.76, "text": " train them one by one. 
So first you train the lowest resolution and then you keep" }, { "end": 323.84, "start": 320.32, "text": " it fixed and then train the next resolution and so on until you're at" }, { "end": 330.68, "start": 323.84, "text": " the highest resolution. So in each layer, so at the bottom layer, we simply feed in," }, { "end": 338.52, "start": 330.68, "text": " we simply feed in noise to a generator of a GAN and the generator generates" }, { "end": 345.47999999999996, "start": 338.52, "text": " an image. Now you take this image and you take a down sampled version of" }, { "end": 349.15999999999997, "start": 345.47999999999996, "text": " your training image. Remember you just have one training image. You take a" }, { "end": 355.47999999999996, "start": 349.15999999999997, "text": " down sampled version of that and you let the discriminator decide which one is" }, { "end": 359.64, "start": 355.47999999999996, "text": " real, which one's fake and you train the generator to fool the discriminator as" }, { "end": 363.64, "start": 359.64, "text": " much as possible. Now if you were to do this with the entire image, of course the" }, { "end": 369.12, "start": 363.64, "text": " generator would simply learn to reproduce the original image. So that's" }, { "end": 375.44, "start": 369.12, "text": " no good. So what this paper does more is that the discriminator" }, { "end": 380.8, "start": 375.44, "text": " actually doesn't work on the entire image but just on patches of the image." }, { "end": 388.8, "start": 380.8, "text": " And that's so that they basically can't memorize the" }, { "end": 396.36, "start": 388.8, "text": " entire image. So the discriminator will pick these patches, these overlapping" }, { "end": 400.5, "start": 396.36, "text": " patches basically. You can imagine it's something like this overlapping patches" }, { "end": 406.8, "start": 400.5, "text": " and it will try to decide for each one is this patch real or is this patch fake?" }, { "end": 412.76, "start": 406.8, "text": " So the generator produces the entire image. This is what the" }, { "end": 419.92, "start": 412.76, "text": " generator produces the entire image but the discriminator can only see the image" }, { "end": 426.4, "start": 419.92, "text": " in patches, in overlapping patches. And that's what makes this paper kind of" }, { "end": 432.64, "start": 426.4, "text": " work. Otherwise they would just remember the single training image" }, { "end": 437.88, "start": 432.64, "text": " because you only have one training image. You kind of need some variety." }, { "end": 445.24, "start": 437.88, "text": " This is at the lowest scale. Remember you input the noise and the lowest" }, { "end": 451.64, "start": 445.24, "text": " scale in this example is for example 25 by 25 pixel. You scale down" }, { "end": 456.44, "start": 451.64, "text": " your original image here also to 25 by 25 and then you let the discriminator" }, { "end": 461.92, "start": 456.44, "text": " decide. So once you've trained this generator to make very good" }, { "end": 469.64000000000004, "start": 461.92, "text": " 25 by 25 pixel images, that in this patch way fool the discriminator. You keep" }, { "end": 474.68, "start": 469.64000000000004, "text": " it fixed. For the next stage what you want to do is you always want to go" }, { "end": 480.8, "start": 474.68, "text": " through this layer first. So forget this discriminator now. We've trained" }, { "end": 487.28000000000003, "start": 480.8, "text": " this stage. Keep this generator fixed. 
Input noise, output, whatever the" }, { "end": 494.32, "start": 487.28, "text": " generator produces. Then take this upscale it. For example multiply each" }, { "end": 501.64, "start": 494.32, "text": " side by 2 to 50 by 50 pixels. Input this together with some new noise into the" }, { "end": 506.11999999999995, "start": 501.64, "text": " next stage generator. And then the same as before. This generator produces an" }, { "end": 512.4, "start": 506.11999999999995, "text": " image. You scale down your original image. You scale it down to now 50 by 50" }, { "end": 518.76, "start": 512.4, "text": " pixels and you let the discriminator decide again in patches. Since the" }, { "end": 523.0799999999999, "start": 518.76, "text": " discriminator patches are always the same size but we scale down the image" }, { "end": 527.72, "start": 523.0799999999999, "text": " less and less, the effective patch size of the discriminator becomes much lower." }, { "end": 537.36, "start": 527.72, "text": " Now this discriminator only sees the image in patches like so. Also the" }, { "end": 542.28, "start": 537.36, "text": " generated image that comes in here. It also sees in these" }, { "end": 549.88, "start": 542.28, "text": " patches and tries to decide are these patches from real or from fake images." }, { "end": 559.24, "start": 549.88, "text": " You can see that the lowest layer here, this layer, is trained to kind of get the" }, { "end": 566.9200000000001, "start": 559.24, "text": " coarse-grained structure of the image. The discriminator will" }, { "end": 573.5999999999999, "start": 566.92, "text": " kind of see very large patches. So the generator must match the kind of" }, { "end": 578.52, "start": 573.5999999999999, "text": " large-scale structure. These patches won't be very very high resolution" }, { "end": 582.8399999999999, "start": 578.52, "text": " because we downscaled the image, but they will be large across the image. So the" }, { "end": 589.7199999999999, "start": 582.8399999999999, "text": " generator must match the coarse low resolution stuff in the image. But as you" }, { "end": 597.6800000000001, "start": 589.72, "text": " go up the layers, up and up the layers, your discriminator sees less and less of" }, { "end": 604.1600000000001, "start": 597.6800000000001, "text": " the picture at once. It sees less and less of the picture at once." }, { "end": 610.44, "start": 604.1600000000001, "text": " So this discriminator here in the topmost layer can only concentrate on" }, { "end": 616.6, "start": 610.44, "text": " very small patches and therefore this generator will only have to produce" }, { "end": 625.44, "start": 616.6, "text": " things that look real at a very very small scale. So in essence you have" }, { "end": 631.6, "start": 625.44, "text": " this series of generators trained that each one is tasked with basically" }, { "end": 636.8000000000001, "start": 631.6, "text": " modeling details at a finer and finer scale until you come to the last final" }, { "end": 642.2, "start": 636.8000000000001, "text": " scale. But then each input of each one is the output of the last one. So" }, { "end": 646.52, "start": 642.2, "text": " basically you take whatever the last one has produced and the last one is really" }, { "end": 653.36, "start": 646.52, "text": " good at doing coarser grain things and you add to it your details of this level." 
}, { "end": 660.12, "start": 653.36, "text": " And this will in the end give you a very realistic image that matches at every" }, { "end": 666.4399999999999, "start": 660.12, "text": " level of resolution, matches the kind of statistics, the patch statistics of this" }, { "end": 674.3199999999999, "start": 666.4399999999999, "text": " real image. So that's the whole point of this thing. To have" }, { "end": 679.2, "start": 674.32, "text": " this series of generators one after the other, each one adds their own details" }, { "end": 685.5600000000001, "start": 679.2, "text": " at its own scale. And this works super well apparently. So each generator is" }, { "end": 690.96, "start": 685.5600000000001, "text": " just built like this. It takes some noise and the image of the lower" }, { "end": 696.7600000000001, "start": 690.96, "text": " scale, it adds them, sorry for these artifacts, it puts it through five" }, { "end": 704.2, "start": 696.7600000000001, "text": " convolutional layers and then simply combines it with the input. And this" }, { "end": 711.1600000000001, "start": 704.2, "text": " will produce this image at this scale. That's each layer, it's just five" }, { "end": 716.2, "start": 711.1600000000001, "text": " conv layers. And since they're fully convolutional you can actually change" }, { "end": 723.2800000000001, "start": 716.2, "text": " the aspect ratio at inference time, you can change the resolution and so on." }, { "end": 731.2, "start": 723.2800000000001, "text": " It seems pretty neat. Of course from experience I can tell you that this" }, { "end": 736.84, "start": 731.2, "text": " probably didn't work at the first try and there is a lot of work even though" }, { "end": 742.32, "start": 736.84, "text": " it seems pretty easy. Keep that in mind. So for training this there are" }, { "end": 746.76, "start": 742.32, "text": " actually two different losses. First of all you have what's called the" }, { "end": 753.12, "start": 746.76, "text": " adversarial loss. And the adversarial loss is your classic GAN loss, where" }, { "end": 756.84, "start": 753.12, "text": " the generator tries to fool the discriminator and the" }, { "end": 760.72, "start": 756.84, "text": " discriminator tries to catch the generator. But then also you have a" }, { "end": 765.76, "start": 760.72, "text": " reconstruction loss. And the reconstruction loss specifically deals" }, { "end": 775.6, "start": 765.76, "text": " at each layer. At each layer you train the generator to reconstruct the" }, { "end": 781.1600000000001, "start": 775.6, "text": " original image when you put in a zero noise, except at the lowest layer. But" }, { "end": 786.64, "start": 781.1600000000001, "text": " essentially what you want to do is you want to say well when I don't input" }, { "end": 792.48, "start": 786.64, "text": " any noise then please reconstruct the original image. And that seems to be" }, { "end": 797.76, "start": 792.48, "text": " important for the setup to include this noise so that the" }, { "end": 804.36, "start": 797.76, "text": " generative model is basically able to reconstruct the original image as a whole." }, { "end": 809.4399999999999, "start": 804.36, "text": " So these two losses are combined to form the training objective. And" }, { "end": 815.84, "start": 809.4399999999999, "text": " again this is not trained on data set. It is trained on a single image." }, { "end": 824.32, "start": 815.84, "text": " And the productions are pretty cool. 
So again here are more samples from just" }, { "end": 828.48, "start": 824.32, "text": " the single training images at the left side. And then you have random samples" }, { "end": 833.0600000000001, "start": 828.48, "text": " from the single image. You can do things like super resolution, where this picture" }, { "end": 840.7800000000001, "start": 833.0600000000001, "text": " has been super resoluted to that picture. And I like that they investigate the" }, { "end": 845.72, "start": 840.7800000000001, "text": " effects of kind of their setup. So they ask okay what happens if we just have" }, { "end": 851.9200000000001, "start": 845.72, "text": " basically two different scales in this scaling setup. Then you see" }, { "end": 859.24, "start": 851.9200000000001, "text": " the kind of patch statistics will be very very fine-grained and it won't match" }, { "end": 865.32, "start": 859.24, "text": " any sort of coarse-grained structure. If you have very many scales, the" }, { "end": 872.52, "start": 865.32, "text": " more scales you have better basically. The more different scales you capture." }, { "end": 881.56, "start": 872.52, "text": " Even more interesting is what if, so at this layer where we have G, G, G," }, { "end": 886.52, "start": 881.56, "text": " you scale up, scale up, scale up and so on. What you could do is you could not" }, { "end": 892, "start": 886.52, "text": " start here, but you say okay scrap this layer. What we actually do is we" }, { "end": 896.92, "start": 892, "text": " take the original image and we scale it down and we input that into here instead" }, { "end": 901.12, "start": 896.92, "text": " of inputting the output from the lower layer. So basically you start at let's" }, { "end": 908.84, "start": 901.12, "text": " say the ground truth and that effect is shown here. So if you" }, { "end": 916.84, "start": 908.84, "text": " start at the lowest layer in this particular example you see that" }, { "end": 923.12, "start": 916.84, "text": " sometimes there are weird things. But what you can do is start at a let's say" }, { "end": 928.52, "start": 923.12, "text": " an intermediate layer with the original image and then the variety you get" }, { "end": 932.8, "start": 928.52, "text": " because you kind of keep the coarse-grained structure the same. The" }, { "end": 936.6, "start": 932.8, "text": " variety you get will only be in the right we said there are different" }, { "end": 941.52, "start": 936.6, "text": " layers and but you now eliminate these two layers and replace them with your" }, { "end": 945.68, "start": 941.52, "text": " original image at the scale. So the variety you get will only be from these" }, { "end": 951.72, "start": 945.68, "text": " finer grained lower resolution patches things. So for example as you can see" }, { "end": 958.76, "start": 951.72, "text": " here the zebra samples now differ in how exactly their stripes are manifested." }, { "end": 965.76, "start": 958.76, "text": " This seems pretty cool. So you have kind of a handle on how fine" }, { "end": 971.48, "start": 965.76, "text": " grained you want your details or your changes to be. They do a bunch of" }, { "end": 978.36, "start": 971.48, "text": " more experiments where you can do a lot of kind of playful things with this" }, { "end": 984.8000000000001, "start": 978.36, "text": " thing. 
There is code available for example here you can see editing again" }, { "end": 990.88, "start": 984.8000000000001, "text": " as an example where they compare also with content aware move which I think is" }, { "end": 999.76, "start": 990.88, "text": " implemented in Photoshop and paint harmonization as we saw before. So all of" }, { "end": 1003.88, "start": 999.76, "text": " these kind of things are very playful are very cool and I encourage you to" }, { "end": 1008.6, "start": 1003.88, "text": " check out this paper and the code it seems pretty easy. I have a remark though" }, { "end": 1013.24, "start": 1008.6, "text": " this again is only learned from a single image and that's the kind of" }, { "end": 1020.24, "start": 1013.24, "text": " cool part but it should be possible to combine this with some sort of approach" }, { "end": 1028.42, "start": 1020.24, "text": " over a data set. Like if I have a model that is really good at a single" }, { "end": 1032.56, "start": 1028.42, "text": " image right producing something that looks like a single image I should be" }, { "end": 1039.24, "start": 1032.56, "text": " able to combine it with a model that has been learned from a database." }, { "end": 1043.72, "start": 1039.24, "text": " It's kind of like a Bayesian approach where you say okay I want to produce" }, { "end": 1052.6799999999998, "start": 1043.72, "text": " the best image so I want to maximize the probability of this image given the" }, { "end": 1060.32, "start": 1052.6799999999998, "text": " other image. But then you can also say aha but that's kind of" }, { "end": 1069.6399999999999, "start": 1060.32, "text": " proportional to j given i times p of i right you know Bayes rule and it seems" }, { "end": 1075.2, "start": 1069.6399999999999, "text": " that this paper is dealing mostly with kind of maximizing the likelihood of the" }, { "end": 1080.36, "start": 1075.2, "text": " output while you could probably combine it with some sort of prior over natural" }, { "end": 1086.32, "start": 1080.36, "text": " images and come up with an even better model. Of course then you'd need an" }, { "end": 1092.1599999999999, "start": 1086.32, "text": " actual database of images and training procedure and you need a way to combine" }, { "end": 1096.76, "start": 1092.1599999999999, "text": " these two models. So maybe that's a bit of a challenge. Anyway cool paper check" }, { "end": 1116.92, "start": 1096.76, "text": " it out bye bye." } ]
BTLCdge7uSQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning
[ "Science & Technology" ]
[ "ml", "ai", "machine learning", "reinforcement learning", "deep rl", "deepmind", "google", "starcraft", "alphastar", "alphago", "alphazero", "value function", "policy", "vtrace", "upgo", "terran", "protoss", "zerg", "build order", "strategy", "pointer network", "transformer", "league training", "league", "battlenet", "artificial intelligence", "bot", "rl", "deep reinforcement learning", "model-free", "exploiters", "self-play", "ficticious self-play", "rts" ]
DeepMind's new agent to tackle yet another Esport: Starcraft II. This agent uses deep reinforcement learning with a new technique, called League Training, to catapult itself to Grandmaster-level skill at playing this game. Abstract: Many real-world applications require artificial agents to compete and coordinate with other agents in complex environments. As a stepping stone to this goal, the domain of StarCraft has emerged as an important challenge for artificial intelligence research, owing to its iconic and enduring status among the most difficult professional esports and its relevance to the real world in terms of its raw complexity and multi-agent challenges. Over the course of a decade and numerous competitions, the strongest agents have simplified important aspects of the game, utilized superhuman capabilities, or employed hand-crafted sub-systems. Despite these advantages, no previous agent has come close to matching the overall skill of top StarCraft players. We chose to address the challenge of StarCraft using general purpose learning methods that are in principle applicable to other complex domains: a multi-agent reinforcement learning algorithm that uses data from both human and agent games within a diverse league of continually adapting strategies and counter-strategies, each represented by deep neural networks. We evaluated our agent, AlphaStar, in the full game of StarCraft II, through a series of online games against human players. AlphaStar was rated at Grandmaster level for all three StarCraft races and above 99.8% of officially ranked human players. Authors: Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai, John P. Agapiou, Max Jaderberg, Alexander S. Vezhnevets, Rémi Leblond, Tobias Pohlen, Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom L. Paine, Caglar Gulcehre, Ziyu Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario Wünsch, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy Lillicrap, Koray Kavukcuoglu, Demis Hassabis, Chris Apps, David Silver https://www.deepmind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
Alright, let's talk about AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning. The corresponding paper looks like this, is by Oriol Vinyals et al. from DeepMind, and has recently been published in the journal Nature. Now let me say this first: stop publishing in Nature. This journal is not open access; it makes its readers pay to get the article. You can actually access this article, or a public version of it, for free, but you can't print it or download it unless you pay for it. And this to me seems ridiculous, because none of this money goes to the authors of the article, none of it goes to the reviewers, and the review quality isn't notably better, at least in the field of computer science. All of this is a publicity stunt by DeepMind, because Nature has been kind of impactful over the last decades. It's like, ooh, look at me, I've got a big dick, I publish in Nature. Nothing more than that. It's like OpenAI saying their model is too dangerous to release to the world. I guess DeepMind might make the same claim about AlphaStar: it's too dangerous a StarCraft player. So stop this. Publish your research in open access. Nature, and journals like it, are a remnant of the last century as far as computer science is concerned, so go on and join everyone else in distributing knowledge. All right, rant over, let's jump into the article. The article describes how to train a reinforcement learning agent to play the game of StarCraft II. For everyone who doesn't know it, let me quickly explain the game. StarCraft II is a real-time strategy game. You're in this top-down, third-person view, you control your units, and the goal is to move your units around, first of all to build up buildings, and using those buildings you can then produce more and more diverse units. Ultimately you want to produce some sort of army that can go over to the opponent and destroy the opponent's base. You control all of this on a computer using a mouse and keyboard, and StarCraft is notable for being very balanced. There are three different races you can play. First are the Terran, which are human-ish; they have marines and tanks and helicopters, I believe, and things like that. Then the Protoss are some sort of alien race that is super advanced, so they can teleport and have energy shields and so on. And last are the Zerg, which are icky, ground-dwelling creatures that infest things and spread like a disease. The interesting thing, compared to other real-time strategy games, is that the three races play very differently. The game is almost a different game if you play a different race, but they are so well balanced that almost any matchup is a fair game between equally skilled players. That's what makes StarCraft pretty unique. Also pretty unique are the very high actions-per-minute rates that pro players reach; they play this game insanely fast. A game lasts about 10 to 15 minutes, and as I said, the goal is to destroy the enemy base. Training an RL agent to play this is very hard because the action space is huge: you have to target parts of the screen with your mouse, you have to look at what is on the screen and what you can do, there's this mini-map down here, there are things you can do, there are opponents you can target, and so on. All of this is very, very difficult for an RL agent.
And at the end, after ten minutes of playing and playing and playing, you either win or you lose, and the RL agent has to figure out which of the actions it took during those ten minutes led to the win or the loss. Was it this one? Was it that one? These are very hard problems for reinforcement learning, and DeepMind has combined almost every trick in the RL book known so far to achieve this. The main novel contribution here, I'd say, is what is called league training, and we'll get to that. First of all, if you don't know what reinforcement learning is: reinforcement learning is basically what I just described. You have an input, which could be this thing here, and you have a set of actions you can take — here the set of actions is basically anywhere you can click on the screen — and you have to do this over and over and over again until you either win or lose. At the very end you receive just that single signal, win or lose, and from it you have to learn to play the game. So it's machine learning hardcore, because you get minimal information and have to achieve a lot from it. The first thing DeepMind actually does is supervised learning, and we'll get into how exactly the model works later. The first thing DeepMind does is train an agent to simply imitate humans. So you have human data: games played by good humans — not people like me — by players with a significantly high ELO. And the first thing you extract is this z here. z is called a statistics vector, and as I understand it, it's mainly the build order, meaning in which order you build your buildings and units, which is very important in StarCraft. This is a strategic decision where you say, okay, first I'm going to build three worker units — worker, worker, worker — then I'm going to build a house, and so on. These are major strategic decisions that you have to make minutes ahead of time, planning in advance, and this stays roughly constant for the game. So this is extracted and provided to the model as an input: basically, what is the current overall strategy. The second thing that is extracted is, at every time step, the observation that the human had — the screen that the human sees — and also the action that the human did. So the human takes the mouse and clicks somewhere; this is supposed to be a mouse pointer clicking here. And then the model — this part here is the model, and this is the policy function of the model, the policy being what decides what to do — is trained to match the action that the human did. So in essence, you first train an agent to simply imitate humans, and this you can do by supervised learning; this is classic machine learning. At each step you have this input, which is an image, and you have the strategy you're trying to follow, and from these two you're simply trying to match the action that the human did, assuming the human made a good decision. So this is how you initialize — you don't start from scratch. Now I have to say that even though the name is AlphaStar, it has surprisingly little to do with the AlphaGo or AlphaZero work that DeepMind has done before.
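Going back to that supervised step for a second: in code it is essentially behavior cloning. Here's a minimal sketch — I'm assuming PyTorch, a generic `policy` network, and a single categorical action, which is a huge simplification of the real factored action space; the batch format is made up purely for illustration.

```python
import torch.nn.functional as F

def behavior_cloning_step(policy, optimizer, batch):
    """One supervised update: the policy sees the human's observation plus the
    statistics vector z (build order etc.) and is trained to reproduce the
    human's action via cross-entropy. Simplified sketch: the real action space
    is factored into several heads, not one categorical."""
    obs, z, human_action = batch          # tensors from a replay dataset (hypothetical format)
    logits = policy(obs, z)               # (batch, num_actions)
    loss = F.cross_entropy(logits, human_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```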
Mainly, this is entirely model-free reinforcement learning, and it goes more in the direction of classic deep RL. And you can see that with the human data alone you can already get pretty far. These down here are the leagues of StarCraft, and these here are percentiles of players, and you see that with the supervised training you can already get better than 80 to 85% of human players. So pretty impressive, simply by imitating humans. Now, as for the way to further improve this — let's actually first go into what the model looks like. Down here they describe this model. The model is supposed to map from input to output: from the screen that the agent sees, plus some other things, to what the agent is going to do, to an action a. If you simply do this at every time step, you have a game-playing agent. So the question is, of course, how does this happen? Now the input isn't only the thing the agent sees, which is the mini-map — I believe that's the mini-map or the entire map; in essence it is a picture. It is also a list of entities: the game engine extracts a list of entities, and these can be inside the screen here and, for friendly units, also outside the screen. So the assumption is that the agent knows about all of its own units, where they are and what their statistics are. In this entity list, for each entity you have things like: what is its health, what is its type, what is its position, does it carry any items, and so on — all the things you need to know about that entity. Along with that come opponent entities, but only the ones that are on screen. All of this goes into this list of entities. The next features are scalar features, and as I understand it, scalar features are things like which race you are currently playing, what time it is in the game, and so on — additional features. And there are also baseline features, which are mainly used to train the value network. This is not going to make sense if you know nothing about reinforcement learning, but one thing this paper does — not necessarily a contribution, but something they point out — is that for computing the value network they also use the observations of the opponent player, because you know these during training, since you're doing self-play, and you don't need the value network during inference. So you can actually do this, and it improves performance significantly. That's just for people who know RL very well; everyone else, don't worry too much about it. Alright, so these are the inputs: the scalar features, the entities and the minimap. Each one goes through a separate encoder. The minimap goes through a ResNet, which is a convolutional network. The entities go through a transformer, which is an appropriate architecture to encode a set of entities. The scalar features go through a classic feed-forward network, an MLP. All of these get combined into a deep LSTM that runs over time. Now the deep LSTM is what really carries the strategy, because at each time step a screen like this is fed into the network, but the agent also needs to remember what it did one step ago, two steps ago, and so on. This is important because you don't have full observability; you need to know what you did in the past.
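Here's a rough sketch of how those pieces could fit together in code. To be clear, this is my own simplified reconstruction in PyTorch, not DeepMind's architecture: the dimensions are made up, the minimap encoder is a tiny conv stack standing in for the ResNet, and the entity transformer is just a couple of standard encoder layers.

```python
import torch
import torch.nn as nn

class AgentCore(nn.Module):
    """Sketch of the trunk: separate encoders for scalar features, the entity
    list and the minimap, concatenated and fed through an LSTM that carries
    strategy across time steps. Dimensions and layer choices are illustrative."""
    def __init__(self, d=256):
        super().__init__()
        self.scalar_enc = nn.Sequential(nn.Linear(32, d), nn.ReLU())          # MLP for scalar features
        self.entity_proj = nn.Linear(64, d)
        self.entity_enc = nn.TransformerEncoder(                               # transformer over the entity set
            nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True), num_layers=2)
        self.map_enc = nn.Sequential(nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                                     nn.Linear(32 * 8 * 8, d))                 # stand-in for the ResNet
        self.lstm = nn.LSTM(input_size=3 * d, hidden_size=d, batch_first=True)

    def forward(self, scalars, entities, minimap, state=None):
        # scalars: (B, 32), entities: (B, N, 64), minimap: (B, 8, H, W)
        e = self.entity_enc(self.entity_proj(entities)).mean(dim=1)   # pool entity embeddings
        x = torch.cat([self.scalar_enc(scalars), e, self.map_enc(minimap)], dim=-1)
        out, state = self.lstm(x.unsqueeze(1), state)                  # one time step of the recurrent core
        return out.squeeze(1), state                                   # latent fed to the policy and value heads
```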
And that's where the LSTM comes in. If in the last step you saw this screen, and the step before that you saw this other screen, then all of this goes through the encoding step into the LSTM, and the LSTM encodes all of these different steps over time. That way you can say: alright, if I have just started constructing a building, I should probably not start the same building again, even though I can't see it on the screen, because I know that three steps ago I started constructing it. The LSTM is basically where you integrate your strategy over time.

From the LSTM you have to make two predictions: what to do, which is the action, and how valuable your current state is, which is called the value network. These are the two core components of deep reinforcement learning: the policy, which is everything over here, and the value network, which is everything over there. They are the things you need for actor-critic learning, and actor-critic is the current state of the art in deep RL. DeepMind does nothing unusual here except, as I said, they use those baseline features for the value network. If you don't know what a value network is, don't worry about it; the important part for playing the game is the part over here, called the policy.

First you need to decide which action type to take, and there are many action types in StarCraft, as I already said: you can build a building, you can move a unit, you can even move the camera. That's an action type too, because maybe you want to see what's over here or over there. Once you've figured out the action type, it goes into the next neural network, which decides when to do it; it specifies a delay. Then, once you've decided what to do and when to do it, it goes into the next neural network, which decides whether to put this into the queue of actions. The agent is limited to a certain number of actions per second, I think it's 22 actions per five seconds or something like this, in order to mimic human limitations. So there is a queue of actions to be executed, and the agent needs to decide: is this action really important enough to put into the queue? And once you've decided what to do, when to do it, and whether you want to do it at all, it goes into the next neural network, and you have to say which units you want to do it with; if you want to build a building, you have to choose one or several workers to do it. The chain of decisions looks roughly like the sketch below.
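The structure of that policy, a chain of heads where each decision is fed back in and conditions the next, is essentially an auto-regressive factorization of the action. Here is a minimal sketch of that idea; the head names follow the description above (action type, delay, queue, then unit selection and targeting), but the sizes and implementation details are invented for illustration.

```python
import torch
import torch.nn as nn

class AutoregressiveActionHeads(nn.Module):
    """Each sub-decision is sampled in turn and fed back, so later heads see earlier choices."""
    def __init__(self, d=128, num_action_types=10, max_delay=128):
        super().__init__()
        self.type_head = nn.Linear(d, num_action_types)
        self.delay_head = nn.Linear(d + num_action_types, max_delay)
        self.queue_head = nn.Linear(d + num_action_types, 2)   # queue the action or not

    def forward(self, core_out):
        type_logits = self.type_head(core_out)
        action_type = torch.distributions.Categorical(logits=type_logits).sample()
        type_onehot = nn.functional.one_hot(action_type, type_logits.shape[-1]).float()
        ctx = torch.cat([core_out, type_onehot], dim=-1)        # condition on the chosen type

        delay = torch.distributions.Categorical(logits=self.delay_head(ctx)).sample()
        queued = torch.distributions.Categorical(logits=self.queue_head(ctx)).sample()
        # ... followed by unit-selection and target heads, conditioned the same way
        return {"action_type": action_type, "delay": delay, "queued": queued}

heads = AutoregressiveActionHeads()
decisions = heads(torch.randn(4, 128))  # 4 LSTM outputs -> 4 sampled (type, delay, queue) tuples
```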
I don't actually know how StarCraft handles this in detail, I'm a bit of a noob, but for most actions you have to select the units with which to do the action, and there I like the use of a pointer network. A pointer network is a network that can point to its own inputs. It's sort of like an attention network, but not quite. In a pointer network you have a set of inputs, here the entities, and you can see that the entity encoder actually has skip connections that go here, so this part of the network directly gets the entities as input. On top of that you have a neural network that takes all of these things as input, and what it outputs is a pointer to one of them: it can say, look, I point to this entity right here. That's called a pointer network. As I said, it's different from an attention network, where you get a distribution; actually you get a distribution in both cases, there is a difference, but we don't really have time to go into it here. In essence, with a pointer network you can select which of these entities you want to do something with.

Alright, so now you've decided which action, when to do it, whether to queue it, and with which units to do it. For some actions, for example if the action is attack or heal, you additionally have to decide on a target unit, or on a target location on the map; this is the target point here. You can see again there are skip connections from the entity encoder and from the spatial encoder to these heads. The target unit head is an attention network, much like a pointer network, where you point to entries in a list. The target point head is a deconvolutional ResNet. What that means is the following: the spatial encoder embeds the minimap, so there is a neural network right here (let's draw it in this color) that gives you an embedding of the minimap, and that's what you feed, for example, into the LSTM. But then you also have a deconvolutional network which again produces a minimap. It's not the original minimap, but a distribution over locations, saying: here is where I want to point. So this neural network is responsible for producing the dot on the minimap, basically saying: okay, I know what to do, when to do it, with which units to do it, and I want to do it right here on the minimap.

And now you have it: you go from the inputs, the minimap, the entities and so on, to what you want to do, where, when, with which units, and so on. This is called the policy, and it's extremely complicated. Every one of these boxes is a neural network, and you can see it's a lot to train, though of course they have a lot of resources, since they are DeepMind.

They have a few tricks to train this, and we won't go too much into them, but one trick is V-trace from the IMPALA paper, another is UPGO, the upgoing policy update, and a third is TD(lambda) learning. All of these are improvements on classic actor-critic reinforcement learning in the style of A2C or A3C; if you're interested, you can look into these things.
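Of those three, TD(lambda) is the easiest one to sketch. The idea is to build the value targets as lambda-returns, a geometric mixture of n-step returns computed backwards through a trajectory. The function below is a generic lambda-return computation as a sanity reference, not AlphaStar's actual loss code; how exactly it is combined with V-trace and UPGO is more involved.

```python
def lambda_returns(rewards, values, bootstrap_value, gamma=0.99, lam=0.8):
    """TD(lambda) targets, computed backwards:
       G_t = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1}).

    rewards: [r_0, ..., r_{T-1}], values: [V(s_0), ..., V(s_{T-1})],
    bootstrap_value: V(s_T), used both as V(s_{t+1}) at the last step and as G_T.
    """
    returns = [0.0] * len(rewards)
    next_return = bootstrap_value
    next_value = bootstrap_value
    for t in reversed(range(len(rewards))):
        returns[t] = rewards[t] + gamma * ((1 - lam) * next_value + lam * next_return)
        next_return = returns[t]
        next_value = values[t]
    return returns

# A sparse "win at the end" reward, as in StarCraft: only the final step pays out.
targets = lambda_returns(rewards=[0.0, 0.0, 0.0, 1.0],
                         values=[0.1, 0.2, 0.4, 0.7],
                         bootstrap_value=0.0)
# The value network is then regressed toward these targets.
```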
So that's how they train it. The question now is: what's the protocol for training it? We saw, okay, there is supervised learning, cool, and then there is reinforcement learning. In the reinforcement learning part, this is what we said: you get a reward, and the reward goes into this TD(lambda), V-trace and upgoing policy update to train the value function and the policy. But the special thing this paper introduces is what's called league training.

In papers like AlphaGo or AlphaZero, what had been done is called self-play, and self-play basically means the following. You have an agent; this is supposed to be an artificial intelligence, so let's make it look artificial, it gets a little hat, a funky hat, it's a robot. And the robot plays a copy of itself. The copy might be slightly different, but basically these two play each other and thereby become better and better. You can see this over time: as the purple one gets better, the blue one gets better as well, because they keep playing against each other, and when one falls behind, you simply copy the weights over from the other one, so it catches up again, and then they continue competing. By competing against each other they get better, and this is called self-play.

Now people have noticed that this kind of setup can lead to instabilities, because you can get trapped in cycles, like rock-paper-scissors cycles. So what they do is the following. As the agents get better (this is the first version; the second version is a bit better, so they get bigger hats; down here they are even better, so they have ginormous hats), they might still have weaknesses, because they only ever play against each other. So over time the new versions will occasionally fall back and play old versions of the other player, or old versions of themselves. This is called fictitious self-play: you don't only play your current opponent or your current self, which is the same thing anyway because you keep copying the weights, you also play the old ones.

This paper goes a step further and says: actually, we do this, but we want to prioritize the good ones. For example, we know the current versions are good, but we also know that this particular old one was pretty good, so we keep making the new versions play against that one more often. This prioritization has led to an improvement in these kinds of self-play algorithms.

The really new part of this AlphaStar paper is the league training. The paper has this graphic for it, but I find it rather confusing; I'd rather explain it like this. There is your current strategy, it has a hat, and it does all of the usual things: it plays against itself with the smaller hat, that is, against past versions of itself, fine. But then you also have what are called exploiters, and we'll get to those right after the sketch below.
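To give a feel for what "prioritized fictitious self-play" means mechanically, here is a toy sketch of opponent sampling from a pool of past checkpoints, where the opponents the current agent still struggles against are picked more often. The quadratic weighting is my own simplification and the names are invented; only the qualitative idea follows the description above.

```python
import random

def sample_opponent(checkpoints, win_rate_vs_current):
    """Pick a past checkpoint, favoring the ones the current agent loses to the most.

    checkpoints: list of frozen past agent snapshots
    win_rate_vs_current: dict checkpoint -> current agent's win rate against it (0..1)
    """
    # Hard opponents (low win rate for the current agent) get a high sampling weight.
    weights = [(1.0 - win_rate_vs_current[c]) ** 2 for c in checkpoints]
    if sum(weights) == 0:          # we beat everyone: fall back to uniform sampling
        return random.choice(checkpoints)
    return random.choices(checkpoints, weights=weights, k=1)[0]

# Hypothetical usage: three frozen past versions of the agent.
pool = ["v1", "v2", "v3"]
win_rates = {"v1": 0.9, "v2": 0.55, "v3": 0.2}   # v3 is the one we still lose to
opponent = sample_opponent(pool, win_rates)        # v3 gets sampled most often
```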
An exploiter, let's give it a triangle hat because it's very evil, specifically targets only the current good agent. So this main agent here is tasked with playing old versions of itself and playing the exploiter, both at the same time, but the exploiter is only tasked with playing this one agent. What the exploiter can do, therefore, is specialize in exploiting whatever weaknesses this player has, and the hope is that the main player will become better in response, because there is a player specifically trying to exploit it. And as the main player becomes better, the exploiter is reinitialized and tries to find new weaknesses.

You can see this here: these are called the main agents, and they play against each other and against past versions of themselves. Then there are these main exploiters, and the main exploiters are constantly reinitialized from human data (you can see this here, they're reinitialized), and they only play against the main agents. They don't have to deal with any of the past players or with playing against themselves; they only try to exploit the main agents, and thereby the main agents get better. Once the main agents get better than an exploiter, the exploiter is reinitialized to find new exploits of the main agents.

The third component is what's called a league exploiter; let's give it a wavy hat. The league exploiter plays against past versions of itself and of the others, so it plays against the league exploiter with the smaller wavy hat, and it takes part in the whole league. By the way, the main agent here also plays against past versions of itself and of everything else; you can see the past-version arrows here, they go against all the past players that ever existed, and also against past versions of the main exploiter. The important thing is that the current main exploiter doesn't play past versions of itself. So the league exploiters do take part in this whole league business of playing against past versions of all the players, and, this is a thing I find missing in the figure honestly, I'm pretty sure there's an arrow missing in the drawing, the league exploiters also play the current main agents. The main difference between the league exploiters and the main agents is that the league exploiters don't play themselves; there is no self-play for the league exploiters. So what the league exploiters can do is find weaknesses of the entire league, and by playing against the main agents using those found weaknesses, you bet that the main agents will get better against those major weaknesses of the whole league.

So the main agents, first of all, get better by playing the main exploiters, because the main exploiters are specifically trying to exploit them, and they also get better by playing the league exploiters, because the league exploiters find weaknesses of the entire league. A rough sketch of this league composition is below.
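As a summary of who plays whom, here is a tiny, purely illustrative sketch of the league's matchmaking rules as I understand them from the description above. The class names and the uniform sampling are made up; only the qualitative rules (main agents play everything, main exploiters only play the current main agents, league exploiters play the whole league but not themselves) follow the text.

```python
import random

class LeagueMember:
    def __init__(self, name, role):
        self.name, self.role = name, role      # role: "main", "main_exploiter", "league_exploiter"
        self.past_checkpoints = []             # frozen old copies of this member

def pick_opponent(player, league):
    """Very rough matchmaking rules mirroring the league description."""
    mains = [m for m in league if m.role == "main"]
    all_past = [c for m in league for c in m.past_checkpoints]

    if player.role == "main_exploiter":
        return random.choice(mains)            # only ever tries to beat the current main agents
    if player.role == "league_exploiter":
        # plays past versions of everyone in the league, plus the current main agents
        return random.choice(all_past + mains)
    # main agent: other mains, past versions of everything, and both kinds of exploiters
    exploiters = [m for m in league if m.role != "main"]
    return random.choice(mains + exploiters + all_past)

league = [LeagueMember("main_1", "main"),
          LeagueMember("exploiter_1", "main_exploiter"),
          LeagueMember("league_exp_1", "league_exploiter")]
opponent = pick_opponent(league[1], league)    # the main exploiter always draws a main agent
```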
And finally, the main agents also get better by playing each other. So you can say these main agents are trained against everything under the sun, against any possible exploit that can be found either in themselves or in the league generally, and thereby they get really good at StarCraft, because they can counter pretty much everything. This is how league training works, and this is what I feel is the main contribution of this paper to the reinforcement learning world.

Now they do an ablation study, and you can see where this ends up: the final agents end up at Grandmaster level in StarCraft and beat 99-point-something percent of human players. So really, really good. They do an ablation study of pretty much all the tricks they use. You can see here the league composition: what happens if we only have main agents, then add main exploiters, then league exploiters, and you can see the Elo going up. Then you can see the multi-agent learning part: how much does fictitious self-play help, how much does prioritizing strong players help, and so on, and again you see the Elo going up. How much does it help that they use human data? How much does it help that they use these different networks? They have very good ablation studies of how much each of these things helps. Here they investigate what happens without the camera interface: what if the agent could see the entire game at once, not only the opponents that are within the camera, and didn't need to move the camera? They also investigate the off-policy learning corrections that we mentioned, and so on. I find it very cool that they do these huge ablation studies to show how much each of the tricks they used contributes to their superior performance.

Here you can see how these agents develop over training; they have a massive infrastructure and train for days, you can see this here. The main agents just get better and better and better, while the main exploiters stay at roughly the same level but keep getting reinitialized. These main exploiters are trained to exploit the main agents: this one is trying to exploit these ones. They're not really good agents by themselves; they're simply trained to find and exploit weaknesses of the main agents. Likewise, the league exploiters do get better along with the league, but they are only concerned with exploiting current and past versions of the league, also in order to make the main agents better. Everything is geared towards making these main agents better, and you can see it actually works.

They also have some analysis of which units these agents build; I'm not versed enough in StarCraft to comment on that. All in all I find this to be a very cool paper, and I find it fairly clearly described what they do, though they do not release the source code; they release some kind of pseudocode. The analysis and the ablations are very good. The results are, let's say, questionable, because of course you can't directly compare machines to humans, especially in a game where you have to make quick actions. Even if you limit the actions, and they do this here: they have this monitoring layer which limits the actions and introduces delay and so on. A rough sketch of that kind of limit is below.
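Just to illustrate the kind of constraint such a monitoring layer imposes, here is a toy action-rate limiter that enforces a budget of at most 22 actions per sliding 5-second window. The real interface in the paper is more involved (it also injects delays between observation and action); this only shows the basic throttling idea, and the numbers are simply the ones mentioned above.

```python
from collections import deque

class ActionRateLimiter:
    """Allows at most `max_actions` actions within any sliding `window` of game seconds."""
    def __init__(self, max_actions=22, window=5.0):
        self.max_actions, self.window = max_actions, window
        self.timestamps = deque()

    def try_act(self, now):
        # Drop timestamps that have fallen out of the sliding window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True        # action goes through
        return False           # action is throttled (would have to be delayed or dropped)

limiter = ActionRateLimiter()
executed = [limiter.try_act(t * 0.1) for t in range(60)]   # an agent trying to act every 0.1s
# Only 22 of the first 50 attempts (the first 5 seconds) go through; the rest are throttled.
```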
But still, it's not quite the same as a human, who might not always be able to sustain these 22 actions per five seconds: if something happens quickly, a human may need some kind of relaxation phase afterwards, and so on. With these delays and action limits they do try to model such limitations, and I find it about as fair as it can reasonably be made. What I do find somewhat problematic is this: as I said, the agent can also see its own units that are outside the camera, and that seems kind of shady. Of course you can claim that humans can use control groups to also control units outside the camera, but it's not really the same thing, so that's a distinct advantage the machine has. But in any case, I find it to be very well done, and I hope this made it a bit clearer what the exact contributions are. And with that, have a fun time playing against AlphaStar. Bye bye.
}, { "end": 2148.46, "start": 2144.54, "text": " Even if you limit the actions, they do this here." }, { "end": 2154.54, "start": 2148.46, "text": " So they have this monitoring layer which limits the actions and" }, { "end": 2160.78, "start": 2154.54, "text": " introduces delay and so on. But still if it's not the same as a" }, { "end": 2165.58, "start": 2160.78, "text": " human who might not always be able to do these 22" }, { "end": 2170.46, "start": 2165.58, "text": " actions per five seconds. If something quick happens they may" }, { "end": 2174.06, "start": 2170.46, "text": " need to have some kind of relaxation phase and so on." }, { "end": 2178.14, "start": 2174.06, "text": " But they try with these kind of delays and action limits. They try to" }, { "end": 2182.54, "start": 2178.14, "text": " model these kind of limitations." }, { "end": 2187.58, "start": 2182.54, "text": " I find this as fair as possible." }, { "end": 2191.74, "start": 2187.58, "text": " This is what I find kind of problematic. So they own units as I said." }, { "end": 2196.06, "start": 2191.74, "text": " The agent can also see the ones that are outside the camera." }, { "end": 2202.3799999999997, "start": 2196.06, "text": " And that seems kind of shady. Because of course you can you can claim" }, { "end": 2205.1, "start": 2202.3799999999997, "text": " humans can do whatever command groups to also" }, { "end": 2211.8199999999997, "start": 2205.1, "text": " control units outside the camera. But it's not really the case." }, { "end": 2217.74, "start": 2211.8199999999997, "text": " So that's sort of a distinct advantage that the machine has." }, { "end": 2222.94, "start": 2217.74, "text": " But yeah in any case I find it to be very well done." }, { "end": 2228.7799999999997, "start": 2222.94, "text": " And I hope this made it a bit clearer what the exact contributions are." }, { "end": 2235.5, "start": 2228.78, "text": " And with that have a fun time playing against AlphaStar." }, { "end": 2263.02, "start": 2235.5, "text": " Bye bye." } ]
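The league setup described in the segments above — main agents that play everyone including past versions of themselves, main exploiters that only attack the current main agents and get reinitialized, and league exploiters that attack past versions of the whole league but never themselves — can be summarized as an opponent-sampling rule. The sketch below is a loose illustration under those assumptions: the class names, the snapshot mechanism and the reset handling are all made up for clarity, the prioritized fictitious self-play weighting is omitted, and none of this is DeepMind's actual AlphaStar code.

```python
import random

class Player:
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind            # "main", "main_exploiter", or "league_exploiter"
        self.snapshots = []         # frozen past copies of this player's policy

    def snapshot(self):
        self.snapshots.append((self.name, len(self.snapshots)))

def pick_opponent(player, league):
    """Illustrative opponent sampling for the three player types."""
    past = [s for p in league for s in p.snapshots]    # all past players in the league
    mains = [p for p in league if p.kind == "main"]
    if player.kind == "main":
        # main agents: self-play plus games against past snapshots of everyone
        pool = [player] + past
    elif player.kind == "main_exploiter":
        # main exploiters: only try to beat the current main agents
        # (and are reinitialized from human data once they succeed -- not shown)
        pool = mains
    else:
        # league exploiters: exploit past snapshots of the whole league,
        # but never play themselves
        pool = past or mains
    return random.choice(pool)

# toy league with one player of each kind
league = [Player("main_0", "main"),
          Player("exploiter_0", "main_exploiter"),
          Player("league_exploiter_0", "league_exploiter")]
for p in league:
    p.snapshot()
opponent = pick_opponent(league[0], league)
```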
kOy49NqZeqI
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures
[ "Science & Technology" ]
[ "machine learning", "ml", "ai", "artificial intellgence", "deepmind", "reinforcement learning", "deep rl", "a2c", "a3c", "actor", "critic", "distributed", "scale", "bias", "off-policy", "policy gradient", "deepmind lab", "vtrace" ]
Policy Gradient RL on a massively distributed scale with theoretical guarantees! Abstract: In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters. A key challenge is to handle the increased amount of data and extended training time. We have developed a new distributed agent IMPALA (Importance Weighted Actor-Learner Architecture) that not only uses resources more efficiently in single-machine training but also scales to thousands of machines without sacrificing data efficiency or resource utilisation. We achieve stable learning at high throughput by combining decoupled acting and learning with a novel off-policy correction method called V-trace. We demonstrate the effectiveness of IMPALA for multi-task reinforcement learning on DMLab-30 (a set of 30 tasks from the DeepMind Lab environment (Beattie et al., 2016)) and Atari-57 (all available Atari games in Arcade Learning Environment (Bellemare et al., 2013a)). Our results show that IMPALA is able to achieve better performance than previous agents with less data, and crucially exhibits positive transfer between tasks as a result of its multi-task approach. Authors: Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, Koray Kavukcuoglu https://arxiv.org/abs/1802.01561 https://github.com/deepmind/scalable_agent
Hi there! Today we're looking at IMPALA: scalable distributed deep RL with importance-weighted actor-learner architectures, by Lasse Espeholt, Hubert Soyer, Remi Munos and others. So this paper deals with a new architecture for deep reinforcement learning, specifically distributed deep reinforcement learning. That means settings where you go beyond one single machine, or beyond one single accelerator like a GPU. I want to introduce this by showing you this task here. This is called the DeepMind Lab, and the DeepMind Lab is a kind of 3D environment, as you can see here. These are screenshots with very different goals: some of them, as you can see, are kind of labyrinth-style things where you have to collect apples, some are platformers where you, I guess, have to jump around and so on, or find objects. So DeepMind introduced this as a reinforcement learning environment, and the agent, as you can see here, has a camera, it perceives pixels, and it can get rewards for performing actions. The actions it can perform: it can, you know, walk back and forth, it can jump, it can crouch, it can rotate. So it's a fairly limited set of actions, but it can move around in this 3D world and it needs to achieve some goals. Usually this is a good setting for reinforcement learning, and this paper doesn't do a whole lot of new things in terms of reinforcement learning itself, but it does a lot of things to make it work in a distributed setting. So usually what you would like to do is something like A2C. A2C is advantage actor-critic learning, and it's a very successful algorithm in reinforcement learning. We won't go into it much here, but the basic elements of it are two things. You have a policy, usually called pi: you input your current state, so your current observation at time t, and you want to score an action, action a. Now, as we saw before, you can walk left, walk right and so on, so you might have ten actions or so. So in here you would put action one or action two or action three, and for each, given the same state, you would get a probability, so over all actions you get a distribution, something like this, right, so here you should probably go with action three. That's your policy function. The policy function pi tells you, in this particular state, which action you should take and how often; it gives you a distribution. The second thing you want is what's called a value function. The value function V, capital V usually: you input your state and it will output what the value of that state is, and that's usually written as a lowercase v. To see what the value of a state means: if you're in a maze, right, I'm going to draw a maze from the top here, with walls you can't go through, like here, then here is the goal, and let's say you are right here, the green dot, and you have the choice of going forward, to the right or to the left. Now this would be your policy here: you would ask your policy, and A1 would maybe be go forward, A2 go to the left, A3 to the right, so your policy would decide what to do.
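As a rough sketch of what these two components look like in code — a shared torso with a policy head producing pi(a|s) and a value head producing V(s) — here is a toy actor-critic network. The layer sizes and the flat observation vector are placeholders for illustration only; IMPALA's actual networks are convolutional (with an optional recurrent core) and operate on pixels.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Toy actor-critic: one shared torso, a policy head and a value head."""
    def __init__(self, obs_dim=64, num_actions=10):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.policy_head = nn.Linear(128, num_actions)    # logits for pi(a | s)
        self.value_head = nn.Linear(128, 1)               # scalar V(s)

    def forward(self, obs):
        h = self.torso(obs)
        pi = torch.distributions.Categorical(logits=self.policy_head(h))
        v = self.value_head(h).squeeze(-1)
        return pi, v

# query the policy distribution and the state value for a batch of observations
model = ActorCritic()
pi, v = model(torch.randn(4, 64))
action = pi.sample()
```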
Your value function however would decide in each of the states so where you are plus where you could go here here here so basically for each state in the system it would give you a value in particular in this case it would probably give you a very very high value here like yeah this is a good point because you're very close to the goal right here this is probably not so good a point and this is a very bad point because you're you're going to corner you're actually moving farther away from the goal so if your value function is trained well then you can you can use that also to assess your situation so the value function for each state s it will give you a numerical value of how good that state is in terms of reaching your goal and the A2C algorithm now deals with the interplay of these the A2C uses actually both of these in an interplay so it will use one to teach the other one right and this interplay between those gives makes for a very successful reinforcement learning algorithm now the way A2C does it is as you can see here what it does is it has to there are two variants here think synced step and synced trajectories but in essence it has to run these episodes and these here are steps in the episodes and let's say an episode as four steps before it can do the learning part and the learning part is here the orange thing once it has done a step of learning it has to run episodes again and then it has can do learning again and that's because of a limitation of this which is called on policy learning so on in on policy learning you always want to have your update step which is the orange part to be fed with data so the this all of these app all of these steps here go into this update steps and it's necessary that the steps that you make the updates from are computed with kind of the most current version of the of the agent right so that the agent will go into the world make some steps using its neural network maybe I should explain so that the agent right is this box and the agent has this policy right and with this policy as we saw it will go and it will interact with the world right outside of itself and it will kind of the world will give back observations and it will then interact again so you can move a step forward right first first thing is move the step step forward and then the world gives it back a high you are now no longer here you've moved here right and then it's on I want to move to the left and the world says okay so you're no longer here you've moved one to the left and this on the right here are the observations and is on the left here are the actions and for the a to see is kind of necessary that we always have a current version of the policy generating these steps in order to be able to learn from them and then the next steps also need to be kind of current to be learned now there have been attempts to decentralize this and is exactly what impala does impala splits this into multiple workers you can think of this as different machines so there is a split here and these are called actors and this is called a learner now the actors they will go ahead and they will run episodes on their own right occasionally or they will run episodes and they will communicate those episodes to the learner and the learner will continuously here learn so these orange steps can be made in much more quick success succession and don't have to be like synchronized as in the a to see here is another way of seeing this over here and we'll just concentrate on this on this left thing here so there 
is a learner and there are actors and every now and then the actor sinks its model from the learner these are different machines so this can happen over the network every now and then the actor gets like an update of the newest policy network and then the actor will just go ahead and run episodes using that policy right episode episode episode episode step steps without interfering with anything else and then once it has run an episode or multiple ones it will communicate this back to the learner and if all the actors do this all right the learner gets a whole bunch of these episodes and then can learn from all of them simultaneously and it can do so in kind of with in in kind of very fast succession as you see here so the work is split of course you run into a problem namely as we saw in the a to see algorithm this type of reinforcement learning requires basically that you always run the episode with the current model and that's not the case here right the actor may sink the parameters they sink the parameters once in a while but then it will run these episodes right when it runs these episodes here it has no idea or it the learner in the meantime has continually been updating the model while the actor kind of has an old model so these episodes here are run with an old model so the learner if it tries to learn from this must kind of correct for this fact and the big kind of theoretical contribution of this paper is how to correct for the fact that the data you learn from comes from an outdated policy model and this is what's called V trace correction so without going too much into the into the details here V trace correction happens as as fall it happens as follows so what you define are what's called V trace targets and these V trace targets are basically the targets that you train your value function towards right so the the the value function as we discussed before that is a that is the thing that tells you how good each state is and the targets you train this towards and you're also by the way using this V V trace corrections in policy updates but these are defined as follows so the V trace target for step s is the value function at step s plus this correction thing and the the correction thing basically well I've I want to break this down some more so the V at current s is your value function plus and this is a sum over all future steps over and this is a discount factor and this is kind of a delta from one step to the next so you're in an episode and you've made some steps right and let's say we are here right this is s and so your your little V s will be whatever your value function says of s plus kind of a correction for each step that you make go into the future like this and the main part of these is is this here which is basically the reward at the step plus the difference of the value functions of the steps after it and what V trace introduces now is this bit here and these CI again are computed as such so all of this kind of is very nested so there is a there's a big multiplication here it's a very nested thing but in the very very very core of it you can see the following these V trace corrections are a ratio between pi and mu and pi is the policy of the learner that is the current policy and mu is the policy that has been used to generate the to generate the episode and this is truncated by a minimum and usually the C bar is one so let's consider what happens here what happens is let's say that mu is higher than pi for a given pair of AI index a what does it mean it means 
that in the past you run an episode you come you are in this maze right such to them and you're here right now the and the goal let's say the goal is down here and the action is going over here that does the action that you're considering here now your mu which is your old policy that the actor has synced at some point mu might say this is very good right because it moves you towards the goal more but then your your pie the learner has been learning since the eight since the agent the actor has synchronized the weights the learner has been learning and the learner might know wait wait since you have decided this I have actually learned that this might not be such a good move because you know there's a wall here and I'd rather go down here and then over here so what it will do it will since pi is low and mu is higher it will down weigh this action and this is how you correct for the fact that there are old weights by basically down weighing wherever the old policy thought of an action as being worth more than the new policy does and this is how you make up for the fact that the new policy you assume it knows better because it has learned more and thereby you you give lower weight to the data points where the the policies have diverged a lot so that's at the core of it and you can think of in terms of here you can think of it as maybe here at this step you're at a point where the old policy that the actor has has updated itself to says we should do action one right but the new policy that the learner has in the meantime has learned more says now we should do action two and if this is the case then this whole rest of the episode is down weight because it is no longer current knowledge right and this is not just kind of a heuristic but they actually do prove that this this this comes with some guarantees especially reduces to kind of the classic reinforcement algorithms if you assume that mu is always pi so that current policy is the old policy and therefore you're in the old setting alright so this was a bit of a lengthy explanation of the math behind it and at the end what you do is following you train your value function using this update and you can see here it's simply the gradient of the value function scaled by the thing that contains this V trace target right you then you update your policy in this direction and this is the classic reinforcement learning reinforce style policy update where here you have the gradient of the of the policy and here you have the weighing by the reward and specifically here it is the reward plus this V trace target and this thing here is a bias correction or a bias reducing sorry variance reducing bias that was terrible the final form is what's called an entropy penalty where you want to push the entropy of your policy up such that the agent kind of is biased towards exploring more than exploiting if you know of the classic exploration exploitation dilemma so that's that's what you do compute these V trace targets update your value and policy according to these equations and there you go so what do what does Impala do specifically in this deep mind lab they have two architectures first of all they have this they have this small architecture second they have this large architecture and they just kind of try it out on these and they measure how many frames per second they can get in and you see here compared to on single machine compared to a 3c they bring in a lot more frames per second this is just on a single machine but then on distributed setting the 
scale-up they reach is also very significant. That's because they don't have to wait for other workers; they can just go ahead, everything runs at full speed basically and everything runs in parallel, and the fact that some of the information is old is corrected by V-trace. The last thing I want to show is the wall-clock time; I think this is the important plot. On DeepMind Lab, over all the tasks, comparing wall-clock time against score, you can see A3C, while it does increase over time, the IMPALA variants up here increase much, much faster in wall-clock time. So that's the paper. They have a lot of proofs in the appendix which I'm not going to go over. If you want to give it a try, it is not called IMPALA on GitHub; the repository is called scalable_agent, I think, but you'll find it if you search for IMPALA GitHub or something like this. Yeah, other than that, thanks for listening and see you next time.
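To make the V-trace targets discussed above concrete, here is a small NumPy sketch of the backward recursion v_s = V(x_s) + delta_s + gamma * c_s * (v_{s+1} - V(x_{s+1})), with the importance ratios pi/mu truncated at 1. This is my reading of the paper's definition, not the official code: it assumes a single trajectory, a constant discount and no episode boundaries; the released scalable_agent implementation also handles batching and the policy-gradient part.

```python
import numpy as np

def vtrace_targets(values, bootstrap_value, rewards, log_rhos,
                   gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """V-trace targets for one trajectory (a sketch, not the official code).

    values:          V(x_s) under the learner's current value function, shape [T]
    bootstrap_value: V(x_T) for the state after the last step
    rewards:         r_s, shape [T]
    log_rhos:        log(pi(a_s|x_s) / mu(a_s|x_s)), learner policy vs. actor policy
    """
    rhos = np.minimum(rho_bar, np.exp(log_rhos))      # truncated importance weights
    cs = np.minimum(c_bar, np.exp(log_rhos))
    values_next = np.append(values[1:], bootstrap_value)
    deltas = rhos * (rewards + gamma * values_next - values)

    acc = 0.0
    vs = np.zeros_like(values)
    for s in reversed(range(len(rewards))):           # backward recursion over the trajectory
        acc = deltas[s] + gamma * cs[s] * acc
        vs[s] = values[s] + acc
    return vs                                         # regression targets for V

# toy usage: five steps of a stale trajectory
vs = vtrace_targets(values=np.zeros(5), bootstrap_value=0.0,
                    rewards=np.ones(5), log_rhos=np.zeros(5))
```

The value function is then regressed towards these targets, and the policy gradient is weighted by rho_s * (r_s + gamma * v_{s+1} - V(x_s)), which is where the down-weighting of actions that the stale actor policy liked more than the current learner policy comes in.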
[ { "end": 5.88, "start": 0, "text": " Hi there! Today we're looking at Impala, scalable distributed deep RL with" }, { "end": 11.48, "start": 5.88, "text": " importance-weighted actor learner architectures by Lasse Espejolt, Hubert" }, { "end": 18.48, "start": 11.48, "text": " Sawyer, Remy Munoz and Al. So this paper deals with a new architecture for deep" }, { "end": 23.44, "start": 18.48, "text": " reinforcement learning, specifically distributed deep reinforcement learning." }, { "end": 29.88, "start": 23.44, "text": " So that means settings where you go beyond one single machine or beyond one" }, { "end": 35.4, "start": 29.88, "text": " single accelerator like a GPU. So I want to introduce this by showing you this" }, { "end": 41.6, "start": 35.4, "text": " task here. This is called the DeepMind lab and the DeepMind lab is a kind of a" }, { "end": 47.44, "start": 41.6, "text": " 3D environment as you can see here. These are screenshots where they're very" }, { "end": 51.239999999999995, "start": 47.44, "text": " different goals but some of this as you can see are kind of labyrinth style" }, { "end": 56.96, "start": 51.239999999999995, "text": " things where you have to collect apples, some are platformers where you I guess" }, { "end": 62.120000000000005, "start": 56.96, "text": " have to jump around and so on or find objects. So the DeepMind introduced this" }, { "end": 69.76, "start": 62.120000000000005, "text": " as kind of a as an reinforcement learning environment and what you can do" }, { "end": 75.76, "start": 69.76, "text": " the agent as you can see here has a camera it perceives pixels and it can" }, { "end": 81.6, "start": 75.76, "text": " get rewards for performing actions. The actions it can perform is it can you" }, { "end": 86.92, "start": 81.6, "text": " know walk back and forth, it can jump, it can crouch, it can rotate. So this is" }, { "end": 91.12, "start": 86.92, "text": " kind of a limited set of actions that it can do but it can move around in this" }, { "end": 97.28, "start": 91.12, "text": " 3D world and it needs to achieve some goals. So that usually this is" }, { "end": 103.96000000000001, "start": 97.28, "text": " kind of a good setting for reinforcement learning and this paper doesn't" }, { "end": 109.04, "start": 103.96000000000001, "text": " do a whole lot of new things in terms of reinforcement learning but it does a lot" }, { "end": 115.88, "start": 109.04, "text": " of things to kind of make it work in a distributed setting. So usually what you" }, { "end": 121.96, "start": 115.88, "text": " would like to do is something like A2C. A2C is advantage actor critic learning" }, { "end": 128.04, "start": 121.96, "text": " and it's a very successful algorithm in reinforcement learning. We won't go into" }, { "end": 135.2, "start": 128.04, "text": " this much here but basic elements of it is you have are two things you have" }, { "end": 141.07999999999998, "start": 135.2, "text": " a policy and usually this is called PI, sorry about that, usually this is called" }, { "end": 146.36, "start": 141.08, "text": " PI policy that you input your current state so your current observation at" }, { "end": 156.52, "start": 146.36, "text": " time t and you want to score an action right action A. Now you might have maybe" }, { "end": 160.8, "start": 156.52, "text": " as we saw before you can walk left walk right and so on so you might have ten" }, { "end": 169.52, "start": 160.8, "text": " actions or so. 
So in here you would put action one or action two or action three" }, { "end": 174.84, "start": 169.52, "text": " and for this you would get probability distributions over each action so maybe" }, { "end": 182.24, "start": 174.84, "text": " in this particular state so each time with the same state. So you would get a" }, { "end": 188.76000000000002, "start": 182.24, "text": " distribution something like this right so here you should probably go with" }, { "end": 195.44, "start": 188.76000000000002, "text": " action three. That's your policy function. Policy function PI tells you in this" }, { "end": 200.44, "start": 195.44, "text": " particular state which action should you take how often kind of gives you" }, { "end": 206.35999999999999, "start": 200.44, "text": " distribution. The second thing you want is a what's called a value function so" }, { "end": 212.72, "start": 206.35999999999999, "text": " the value function V, capital V usually, you input your state and it will output" }, { "end": 220.4, "start": 212.72, "text": " it will output what the value is of that state and that's usually termed kind of" }, { "end": 227.76000000000002, "start": 220.4, "text": " as a lowercase V. The value of the state is given if you're in a maze right I'm" }, { "end": 236.08, "start": 227.76000000000002, "text": " gonna draw maze from the top here right you can't reach there like here so here" }, { "end": 245.04000000000002, "start": 236.08, "text": " is the goal and let's say you are oops you're right here the green right and" }, { "end": 252.2, "start": 245.04, "text": " you have the choice of going forward to the right or to the left. Now this" }, { "end": 257.84, "start": 252.2, "text": " would be your policy here. You would ask your policy and A1" }, { "end": 264.12, "start": 257.84, "text": " would maybe be go forward A2 go to the left A3 to the right so your policy" }, { "end": 270.12, "start": 264.12, "text": " would decide what to do. 
Your value function however would decide in each of" }, { "end": 275.04, "start": 270.12, "text": " the states so where you are plus where you could go here here here so basically" }, { "end": 279.8, "start": 275.04, "text": " for each state in the system it would give you a value in particular in this" }, { "end": 285.88, "start": 279.8, "text": " case it would probably give you a very very high value here like yeah this is" }, { "end": 290.64, "start": 285.88, "text": " a good point because you're very close to the goal right here this is probably" }, { "end": 296.16, "start": 290.64, "text": " not so good a point and this is a very bad point because you're you're going to" }, { "end": 300.66, "start": 296.16, "text": " corner you're actually moving farther away from the goal so if your value" }, { "end": 306.68, "start": 300.66, "text": " function is trained well then you can you can use that also to assess your" }, { "end": 313.04, "start": 306.68, "text": " situation so the value function for each state s it will give you a numerical" }, { "end": 320.20000000000005, "start": 313.04, "text": " value of how good that state is in terms of reaching your goal and the A2C" }, { "end": 326.47999999999996, "start": 320.2, "text": " algorithm now deals with the interplay of these the A2C uses actually both of" }, { "end": 334.52, "start": 326.47999999999996, "text": " these in an interplay so it will use one to teach the other one right and this" }, { "end": 340, "start": 334.52, "text": " interplay between those gives makes for a very successful reinforcement learning" }, { "end": 347.2, "start": 340, "text": " algorithm now the way A2C does it is as you can see here what it does is it has" }, { "end": 352.64, "start": 347.2, "text": " to there are two variants here think synced step and synced trajectories but" }, { "end": 358.24, "start": 352.64, "text": " in essence it has to run these episodes and these here are steps in the episodes" }, { "end": 363.4, "start": 358.24, "text": " and let's say an episode as four steps before it can do the learning part and" }, { "end": 367.48, "start": 363.4, "text": " the learning part is here the orange thing once it has done a step of" }, { "end": 372.91999999999996, "start": 367.48, "text": " learning it has to run episodes again and then it has can do learning again" }, { "end": 377.96000000000004, "start": 372.92, "text": " and that's because of a limitation of this which is called on policy learning" }, { "end": 384.36, "start": 377.96000000000004, "text": " so on in on policy learning you always want to have your update step which is" }, { "end": 390.88, "start": 384.36, "text": " the orange part to be fed with data so the this all of these app all of these" }, { "end": 396.24, "start": 390.88, "text": " steps here go into this update steps and it's necessary that the steps that you" }, { "end": 402.84000000000003, "start": 396.24, "text": " make the updates from are computed with kind of the most current version of the" }, { "end": 407.71999999999997, "start": 402.84, "text": " of the agent right so that the agent will go into the world make some steps" }, { "end": 413.67999999999995, "start": 407.71999999999997, "text": " using its neural network maybe I should explain so that the agent right is this" }, { "end": 420.12, "start": 413.67999999999995, "text": " box and the agent has this policy right and with this policy as we saw it will" }, { "end": 425.64, "start": 420.12, "text": " go and it will interact with the world right outside of itself 
and it will kind" }, { "end": 430.55999999999995, "start": 425.64, "text": " of the world will give back observations and it will then interact again so you" }, { "end": 435.6, "start": 430.56, "text": " can move a step forward right first first thing is move the step step forward" }, { "end": 441.12, "start": 435.6, "text": " and then the world gives it back a high you are now no longer here you've moved" }, { "end": 445.56, "start": 441.12, "text": " here right and then it's on I want to move to the left and the world says okay" }, { "end": 450.72, "start": 445.56, "text": " so you're no longer here you've moved one to the left and this on the right" }, { "end": 455.76, "start": 450.72, "text": " here are the observations and is on the left here are the actions and for the a" }, { "end": 459.64, "start": 455.76, "text": " to see is kind of necessary that we always have a current version of the" }, { "end": 466.44, "start": 459.64, "text": " policy generating these steps in order to be able to learn from them and then" }, { "end": 472.15999999999997, "start": 466.44, "text": " the next steps also need to be kind of current to be learned now there have" }, { "end": 478.59999999999997, "start": 472.15999999999997, "text": " been attempts to decentralize this and is exactly what impala does impala" }, { "end": 486.8, "start": 478.59999999999997, "text": " splits this into multiple workers you can think of this as different machines" }, { "end": 492.92, "start": 486.8, "text": " so there is a split here and these are called actors and this is called a" }, { "end": 498.92, "start": 492.92, "text": " learner now the actors they will go ahead and they will run episodes on" }, { "end": 505.08000000000004, "start": 498.92, "text": " their own right occasionally or they will run episodes and they will" }, { "end": 510.36, "start": 505.08000000000004, "text": " communicate those episodes to the learner and the learner will continuously" }, { "end": 515.64, "start": 510.36, "text": " here learn so these orange steps can be made in much more quick success" }, { "end": 523.76, "start": 515.64, "text": " succession and don't have to be like synchronized as in the a to see here is" }, { "end": 527.88, "start": 523.76, "text": " another way of seeing this over here and we'll just concentrate on this on this" }, { "end": 534.24, "start": 527.88, "text": " left thing here so there is a learner and there are actors and every now and" }, { "end": 539.6, "start": 534.24, "text": " then the actor sinks its model from the learner these are different machines so" }, { "end": 543.52, "start": 539.6, "text": " this can happen over the network every now and then the actor gets like an" }, { "end": 549, "start": 543.52, "text": " update of the newest policy network and then the actor will just go ahead and" }, { "end": 555.88, "start": 549, "text": " run episodes using that policy right episode episode episode episode step" }, { "end": 561.06, "start": 555.88, "text": " steps without interfering with anything else and then once it has run an episode" }, { "end": 565.72, "start": 561.06, "text": " or multiple ones it will communicate this back to the learner and if all the" }, { "end": 571.1999999999999, "start": 565.72, "text": " actors do this all right the learner gets a whole bunch of these episodes and" }, { "end": 577.5200000000001, "start": 571.2, "text": " then can learn from all of them simultaneously and it can do so in kind" }, { "end": 583.36, "start": 577.5200000000001, "text": " of with in in kind of 
very fast succession as you see here so the work" }, { "end": 589.9200000000001, "start": 583.36, "text": " is split of course you run into a problem namely as we saw in the a to see" }, { "end": 596.84, "start": 589.9200000000001, "text": " algorithm this type of reinforcement learning requires basically that you" }, { "end": 602.32, "start": 596.84, "text": " always run the episode with the current model and that's not the case here right" }, { "end": 608.88, "start": 602.32, "text": " the actor may sink the parameters they sink the parameters once in a while but" }, { "end": 614.12, "start": 608.88, "text": " then it will run these episodes right when it runs these episodes here it has" }, { "end": 621.84, "start": 614.12, "text": " no idea or it the learner in the meantime has continually been updating" }, { "end": 627.2800000000001, "start": 621.84, "text": " the model while the actor kind of has an old model so these episodes here are run" }, { "end": 633.12, "start": 627.2800000000001, "text": " with an old model so the learner if it tries to learn from this must kind of" }, { "end": 638.4, "start": 633.12, "text": " correct for this fact and the big kind of theoretical contribution of this" }, { "end": 644.88, "start": 638.4, "text": " paper is how to correct for the fact that the data you learn from comes from" }, { "end": 653.76, "start": 644.88, "text": " an outdated policy model and this is what's called V trace correction so" }, { "end": 663.32, "start": 653.76, "text": " without going too much into the into the details here V trace correction happens" }, { "end": 669.76, "start": 663.32, "text": " as as fall it happens as follows so what you define are what's called V trace" }, { "end": 676.08, "start": 669.76, "text": " targets and these V trace targets are basically the targets that you train" }, { "end": 684.36, "start": 676.08, "text": " your value function towards right so the the the value function as we discussed" }, { "end": 690.76, "start": 684.36, "text": " before that is a that is the thing that tells you how good each state is and the" }, { "end": 696.16, "start": 690.76, "text": " targets you train this towards and you're also by the way using this V V" }, { "end": 704.52, "start": 696.16, "text": " trace corrections in policy updates but these are defined as follows so the V" }, { "end": 712.04, "start": 704.52, "text": " trace target for step s is the value function at step s plus this correction" }, { "end": 720.0799999999999, "start": 712.04, "text": " thing and the the correction thing basically well I've I want to break this" }, { "end": 730.5200000000001, "start": 720.08, "text": " down some more so the V at current s is your value function plus and this is a" }, { "end": 738.12, "start": 730.5200000000001, "text": " sum over all future steps over and this is a discount factor and this is kind of" }, { "end": 743.0600000000001, "start": 738.12, "text": " a delta from one step to the next so you're in an episode and you've made" }, { "end": 754.4, "start": 743.06, "text": " some steps right and let's say we are here right this is s and so your your" }, { "end": 766.8, "start": 754.4, "text": " little V s will be whatever your value function says of s plus kind of a" }, { "end": 773.4, "start": 766.8, "text": " correction for each step that you make go into the future like this and the" }, { "end": 781.3599999999999, "start": 773.4, "text": " main part of these is is this here which is basically the reward at the step plus" }, { "end": 787.8, "start": 
781.3599999999999, "text": " the difference of the value functions of the steps after it and what V trace" }, { "end": 798.76, "start": 787.8, "text": " introduces now is this bit here and these CI again are computed as such so" }, { "end": 802.68, "start": 798.76, "text": " all of this kind of is very nested so there is a there's a big multiplication" }, { "end": 808.24, "start": 802.68, "text": " here it's a very nested thing but in the very very very core of it you can see" }, { "end": 816.3599999999999, "start": 808.24, "text": " the following these V trace corrections are a ratio between pi and mu and pi is" }, { "end": 824.32, "start": 816.36, "text": " the policy of the learner that is the current policy and mu is the policy that" }, { "end": 832.88, "start": 824.32, "text": " has been used to generate the to generate the episode and this is truncated" }, { "end": 838.64, "start": 832.88, "text": " by a minimum and usually the C bar is one so let's consider what happens here" }, { "end": 848.56, "start": 838.64, "text": " what happens is let's say that mu is higher than pi for a given pair of AI" }, { "end": 855.76, "start": 848.56, "text": " index a what does it mean it means that in the past you run an episode you come" }, { "end": 867.96, "start": 855.76, "text": " you are in this maze right such to them and you're here right now the and the" }, { "end": 879.2800000000001, "start": 867.96, "text": " goal let's say the goal is down here and the action is going over here that" }, { "end": 886.2, "start": 879.2800000000001, "text": " does the action that you're considering here now your mu which is your old" }, { "end": 892.52, "start": 886.2, "text": " policy that the actor has synced at some point mu might say this is very good" }, { "end": 901.84, "start": 892.52, "text": " right because it moves you towards the goal more but then your your pie the" }, { "end": 906.1999999999999, "start": 901.84, "text": " learner has been learning since the eight since the agent the actor has" }, { "end": 910.8, "start": 906.1999999999999, "text": " synchronized the weights the learner has been learning and the learner might know" }, { "end": 916.96, "start": 910.8, "text": " wait wait since you have decided this I have actually learned that this might" }, { "end": 922.24, "start": 916.96, "text": " not be such a good move because you know there's a wall here and I'd rather go" }, { "end": 930.4, "start": 922.24, "text": " down here and then over here so what it will do it will since pi is low and mu" }, { "end": 935.96, "start": 930.4, "text": " is higher it will down weigh this action and this is how you correct for the fact" }, { "end": 942.36, "start": 935.96, "text": " that there are old weights by basically down weighing wherever the old policy" }, { "end": 948.12, "start": 942.36, "text": " thought of an action as being worth more than the new policy does and this is how" }, { "end": 951.76, "start": 948.12, "text": " you make up for the fact that the new policy you assume it knows better" }, { "end": 956.68, "start": 951.76, "text": " because it has learned more and thereby you you give lower weight to the data" }, { "end": 963.36, "start": 956.68, "text": " points where the the policies have diverged a lot so that's at the core of" }, { "end": 974.24, "start": 963.36, "text": " it and you can think of in terms of here you can think of it as maybe here at" }, { "end": 980.2, "start": 974.24, "text": " this step you're at a point where the old policy that the actor has has" }, { "end": 
988.5200000000001, "start": 980.2, "text": " updated itself to says we should do action one right but the new policy that" }, { "end": 993.84, "start": 988.5200000000001, "text": " the learner has in the meantime has learned more says now we should do action" }, { "end": 1002.84, "start": 993.84, "text": " two and if this is the case then this whole rest of the episode is down weight" }, { "end": 1009.24, "start": 1002.84, "text": " because it is no longer current knowledge right and this is not just" }, { "end": 1014.5600000000001, "start": 1009.24, "text": " kind of a heuristic but they actually do prove that this this this comes with" }, { "end": 1018.24, "start": 1014.5600000000001, "text": " some guarantees especially reduces to kind of the classic reinforcement" }, { "end": 1023.84, "start": 1018.24, "text": " algorithms if you assume that mu is always pi so that current policy is the" }, { "end": 1027.76, "start": 1023.84, "text": " old policy and therefore you're in the old setting alright so this was a bit of" }, { "end": 1035.04, "start": 1027.76, "text": " a lengthy explanation of the math behind it and at the end what you do is" }, { "end": 1043.1599999999999, "start": 1035.04, "text": " following you train your value function using this update and you can see here" }, { "end": 1048.24, "start": 1043.1599999999999, "text": " it's simply the gradient of the value function scaled by the thing that" }, { "end": 1055.1599999999999, "start": 1048.24, "text": " contains this V trace target right you then you update your policy in this" }, { "end": 1060.68, "start": 1055.1599999999999, "text": " direction and this is the classic reinforcement learning reinforce style" }, { "end": 1068.72, "start": 1060.68, "text": " policy update where here you have the gradient of the of the policy and here" }, { "end": 1076.44, "start": 1068.72, "text": " you have the weighing by the reward and specifically here it is the reward plus" }, { "end": 1084.44, "start": 1076.44, "text": " this V trace target and this thing here is a bias correction or a bias reducing" }, { "end": 1092.92, "start": 1084.44, "text": " sorry variance reducing bias that was terrible the final form is what's called" }, { "end": 1099.28, "start": 1092.92, "text": " an entropy penalty where you want to push the entropy of your policy up such" }, { "end": 1105.92, "start": 1099.28, "text": " that the agent kind of is biased towards exploring more than exploiting if you" }, { "end": 1110, "start": 1105.92, "text": " know of the classic exploration exploitation dilemma so that's that's" }, { "end": 1115.72, "start": 1110, "text": " what you do compute these V trace targets update your value and policy" }, { "end": 1122.84, "start": 1115.72, "text": " according to these equations and there you go so what do what does Impala do" }, { "end": 1127.76, "start": 1122.84, "text": " specifically in this deep mind lab they have two architectures first of all they" }, { "end": 1132.32, "start": 1127.76, "text": " have this they have this small architecture second they have this large" }, { "end": 1138.24, "start": 1132.32, "text": " architecture and they just kind of try it out on these and they measure how" }, { "end": 1143.32, "start": 1138.24, "text": " many frames per second they can get in and you see here compared to on single" }, { "end": 1150.92, "start": 1143.32, "text": " machine compared to a 3c they bring in a lot more frames per second this is just" }, { "end": 1157.08, "start": 1150.92, "text": " on a single machine but then on 
distributed setting the scale up also is" }, { "end": 1163.52, "start": 1157.08, "text": " very significant that they reach that's because they don't have to wait for" }, { "end": 1168.44, "start": 1163.52, "text": " other things they can just go ahead everything runs at full speed basically" }, { "end": 1174.2, "start": 1168.44, "text": " and everything runs in parallel and the fact that that some of the information" }, { "end": 1182.4, "start": 1174.2, "text": " is old is corrected by V trace and the last thing I want to show is the wall" }, { "end": 1187.6, "start": 1182.4, "text": " clock time I think this is the important plot in this deep mind lab on over all" }, { "end": 1195.1599999999999, "start": 1187.6, "text": " the tasks the wall clock time compared to the score you can see a 3c while it" }, { "end": 1201.1999999999998, "start": 1195.1599999999999, "text": " does you know increase over time the Impala variants up here increase in much" }, { "end": 1210.6399999999999, "start": 1201.1999999999998, "text": " much faster wall clock time so that's the that's the paper they have a lot of" }, { "end": 1215.2199999999998, "start": 1210.6399999999999, "text": " proofs in the appendix which I'm not gonna go over if you want to give it a" }, { "end": 1223.04, "start": 1215.22, "text": " try then it is it is not called Impala on github it is called I think scalable" }, { "end": 1237.04, "start": 1223.04, "text": " agent so on github it is called scalable agent I think but you'll find it if you" }, { "end": 1242.96, "start": 1237.04, "text": " if you search for Impala github or something like this yeah other than that" }, { "end": 1247.16, "start": 1242.96, "text": " thanks for listening and see you next time" } ]
ctCv_NRpqvM
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
The Visual Task Adaptation Benchmark
[ "Science & Technology" ]
[ "ml", "machine learning", "cnn", "imagenet", "pretraining", "finetuning", "fine-tuning", "google", "benchmark", "initialization", "supervised", "unsupervised", "bert", "artificial intelligence", "score" ]
This paper presents a new benchmark for Visual Task Adaptation (i.e. BERT for images) and investigates several baseline methods for doing so. Abstract: Representation learning promises to unlock deep learning for the long tail of vision tasks without expansive labelled datasets. Yet, the absence of a unified yardstick to evaluate general visual representations hinders progress. Many sub-fields promise representations, but each has different evaluation protocols that are either too constrained (linear classification), limited in scope (ImageNet, CIFAR, Pascal-VOC), or only loosely related to representation quality (generation). We present the Visual Task Adaptation Benchmark (VTAB): a diverse, realistic, and challenging benchmark to evaluate representations. VTAB embodies one principle: good representations adapt to unseen tasks with few examples. We run a large VTAB study of popular algorithms, answering questions like: How effective are ImageNet representation on non-standard datasets? Are generative models competitive? Is self-supervision useful if one already has labels? Authors: Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, Lucas Beyer, Olivier Bachem, Michael Tschannen, Marcin Michalski, Olivier Bousquet, Sylvain Gelly, Neil Houlsby https://arxiv.org/abs/1910.04867 https://github.com/google-research/task_adaptation
Hi there. Today we're looking at the visual task adaptation benchmark by a list of authors that's way too long to read out all from Google Brain. So what is this paper? This paper cares about a new benchmark that is abbreviated VTab and VTab is a benchmark for a task called visual task adaptation. So a benchmark, the meaning of a benchmark is it's kind of a number that you achieve with a model and whoever has the highest number is the best at this task. So the benchmark kind of standardizes how you evaluate models and the model is here. They do visual task adaptation. So what is visual task adaptation? So this is visual task adaptation. It's kind of illustrated in this figure. Imagine you have a bunch of what are called visual tasks and a visual task, and this is the right side here, a visual task is anything that can be solved from just visual input. So basically given a picture or many pictures and you ask kind of a question about it, if that question can be answered by just looking at the picture then that's called a visual task. For example in this data set you might be asked whether a picture contains a dog or a cat. In this data set you might be asked to outline where the objects are. So here the plane, you might be able to segment or you might be able to point out where buildings are in the images. Right here, here, there's no building here. So there's varieties of tasks that are possible. Or in the bottom domain you might be asked which one of the two red dots here is closer to the observer in 3D space. Or you might be asked in this picture please count the number of gray boxes. So there's a bunch of, all of these count as visual tasks. Now the setting that the authors imagine here is there are many of these visual tasks in the world for which there isn't much training data. Imagine something like this. These are aerial images so you kind of need a satellite or a plane to obtain them and then you need to label them. So all of this is isn't that cheap. Even more so in a for example medical domain where you have very expensive CT images of patients and then you need to obtain them and you need to convince the patients to release their data and someone needs to label it. So it's very costly to obtain lots of training data. Now what we want to do is we want to, for all of these tasks, we ideally want to build neural networks, deep neural networks because we know they're super accurate but they are only super accurate if you have lots of training data. So that conflicts with the fact that we might not have so much training data for these tasks. So the proposed solution here is what's called visual task adaptation and it's the following. Imagine you have lots and lots of what's called here upstream data. And upstream data, what they mean is data that is similar to the data here but not exactly the same but you have lots of it. And the example given is ImageNet. So imagine this here to be ImageNet. ImageNet is a data set with over a million images. All of them are labeled into one of a thousand classes and so you can build a very good model for ImageNet to predict the ImageNet class. And you can get very accurate, you have lots of data. Cool. So you build this model but now what you want to do is you want to use this what's here called an adaptation algorithm. And you want to use that model that you trained on ImageNet data and kind of change it just a bit. 
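"Change it just a bit" in practice means reusing the pre-trained weights and fine-tuning them on the small downstream set, usually with a fresh output layer for the new label set. Here is a minimal sketch of that adaptation step, assuming a torchvision ResNet-50 as a stand-in for the upstream ImageNet model; the optimizer, step count and learning rate are made-up placeholders, not the paper's recipe.

```python
import torch
import torch.nn as nn
import torchvision

def adapt(train_loader, num_classes, steps=1000, lr=1e-3):
    """Stage-2 sketch: reuse an ImageNet backbone, swap the head, fine-tune everything."""
    model = torchvision.models.resnet50(pretrained=True)         # stage 1: upstream model
    model.fc = nn.Linear(model.fc.in_features, num_classes)      # new task-specific head
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    it = iter(train_loader)
    for _ in range(steps):
        try:
            images, labels = next(it)
        except StopIteration:
            it = iter(train_loader)                               # cycle over the ~1000 examples
            images, labels = next(it)
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    return model
```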
So you start from the model you have that works on ImageNet and with the few training data you have here on the right side and the author has actually standardized this in the benchmark to 1k samples. So you only have a thousand training samples compared to the millions that you potentially need. You have a thousand samples and you adapt your model to these tasks. So you train the model on ImageNet and you adapt it to predict whether or not there's a cat or a dog and you adapt it to segment these images and you adapt it to predict the depth of points. So you can consider this kind of as a pre-training thing. So you pre-train your model on ImageNet and then you adapt it to these others. That's what's called task adaptation. It's not exactly pre-training in the classic sense because pre-training in the classic sense means basically that you retain the same model but here it's a bit different. So in stage one you train a deep neural network on lots of training data. A deep neural network here this might be you know you have a bunch of layers layer layer layer layer layer and then here you have a thousand you classify into a thousand classes. This is your model. Then in stage two over here you adapt this model and what it ultimately means is you take for example this part here up until the second to last layer transfer it over put it here right bam bam bam bam bam you retain the weights you keep the weights but then you add just one or two new layers and classify your new tasks. This could be is it a cat or is it a dog? Then you train you can either elect to only train the green part here or you can train the whole thing. The second thing is called fine-tuning. The author is mostly elect to do fine-tuning in this work so you carry over the weights and you add a new head and then you train the entire thing with the 1000 samples that you have for this task and then you the kind of the goal is to get as good as possible on that one task where you only have a thousand samples. If your pre-training was good so if your stage one was good then you would expect that stage two would profit a lot from this pre-training which basically means that even though you only have a thousand samples you can reach accuracies that would usually only be possible with much more samples. That's the idea behind it. This is what's called visual task adaptation. The authors propose a benchmark for this. A benchmark for this part, for the adaptation algorithm. The adaptation algorithm they propose as a baseline is train on ImageNet and then fine-tune. That's an adaptation algorithm. They propose a score for this. If you come up with a better adaptation algorithm for example you could say no I'm going to train on YouTube data and then do fine-tune that and then maybe you'd reach better accuracies in these tasks over here and then your score would be higher. It's kind of a benchmark to compare adaptation algorithms. Here your benchmark score and this is conditioned on n, the number of samples that you have in the in the layer two tasks and this here is standardized to 1000 in their case. The score of an adaptation algorithm A is the following. It's the expectation over this is kind of an error measure and you can think of it basically as a test set classification error on the layer two tasks. Of that adaptation algorithm if given the data set of a layer two tasks of n samples and the layer two tasks here comes from a distribution of layer two tasks. What does it mean? 
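Leaving the expectation over a task distribution aside for a moment, with a concrete list of tasks the score boils down to: draw 1000 labelled examples per task, run the adaptation algorithm (for instance the fine-tuning sketch above), and average the resulting test errors. Here is a plain-Python sketch; every method name in it (`sample_train`, `test_set`, `error`) is a hypothetical interface for illustration, not the benchmark's actual API.

```python
def vtab_score(adapt, tasks, n=1000):
    """Average downstream test error of an adaptation algorithm over a task list."""
    errors = []
    for task in tasks:
        train_set = task.sample_train(n)          # the n labelled downstream examples
        model = adapt(train_set)                  # e.g. fine-tune the pre-trained model
        errors.append(model.error(task.test_set()))
    return sum(errors) / len(errors)              # mean test error over the task list (lower is better)
```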
This distribution of layer-two tasks they imagine, and they show this in this picture, as a big landscape of visual tasks. What they ideally want to do is sample a task here, and this task corresponds to classifying these dog images; very close to it could be classifying bird images, but very far away could be a counting task or depth estimation and so on. They imagine all the visual tasks to follow some sort of distribution. So what happens is, for each element in this expectation, you sample one of those visual tasks, you build the data set with a thousand samples, you put it through your adaptation algorithm, for example your pre-trained ImageNet model, you adapt it to that task with the thousand samples, and then you compute your error metric on it. If you do this over the whole distribution, you get an expectation of this error metric over all the visual tasks, and that is your score. What does that mean in practice? In practice you don't have this distribution; in practice you have a list. The list here is a list of tasks: there's this task, this task, this task, there's, whatever, the pets task, then there's the aerial one, then there's the counting one. You have a list of tasks, and this expectation ultimately becomes: stage one, train a model M; stage two, for each of these tasks, adapt or fine-tune your model M on that task; then for each task get an error rate, task one gives you error rate one, task two gives you error rate two, task three gives you error rate three; then simply take one over n and sum them up, so take the average error rate over all of the tasks, and that's your score. And that's kind of my first criticism of this thing: it all just seems super mathematized, with, oh, we imagine all of these tasks being in some distribution somewhere, there is a distribution of tasks and we take an expectation over that distribution. Why not just say: here's a bunch of tasks, adapt your model to each one of them, get the average error rate, done, that's your score. That would have been, first of all, much easier, and second of all, they never actually care to characterize this distribution. If they were to actually rigorously characterize this distribution of visual tasks, I would agree that the formulation makes sense, but basically all they say is: tasks that a human can solve from visual input alone. And they give a bunch of examples. A good task would be the following: the labels are one, one, zero, zero, one, and you probably figured it out, the task is, is it a square or is it a triangle? That's a visual task in the classic sense, a human can solve it from visual input alone. The following task wouldn't be as easy: the labels are one, zero, zero, one, and the task I had in mind was, does the spelling of the shape contain an A? So square contains an A, circle doesn't, line doesn't, but triangle contains an A. For this you kind of need world knowledge, and you can't just solve it from visual input alone, and you especially can't generalize to new shapes from visual input alone. They say in appendix B they validate this, they validate that humans can solve these tasks, but I actually disagree with this.
Just because humans can solve a task from visual input alone doesn't mean that they don't use world knowledge to do it. In this pets example, for instance, humans know how cats and dogs look anatomically, how they look from the side and from the back, even if they haven't seen a particular picture; they know how they behave, and what a realistic setting for a cat or a dog to be in looks like. So all of this seems a bit shaky to me, and the reason I'm saying this is: if you make this distribution formulation, you also have to give a rigorous definition, because if a new task arrives, one that's not in your list, one that's never been seen before, how do we know whether or not we should include it in the list? How do we know whether it's part of this distribution or not? It just seems very shaky. That being said, they do give this list, and the list has 19 tasks; that's down here. The 19 tasks are categorized as natural, which means natural images, and the examples here are pets, flowers, house numbers and so on; specialized, which are for example images that need special equipment, such as medical images; and structured, down here, which means the model needs to comprehend the structure of a scene, and they give object counting or 3D depth prediction as examples. That's fair enough, they have these 19 tasks, and they show them down here, here's the list of tasks and their baseline method's performance on each. But for me the question is: why exactly these tasks? If they don't specify the distribution, why these tasks? They do a lot of experimentation and investigation, but what's missing for me is to show that these tasks are, first of all, internally consistent in the sense that they're really visual tasks, and second of all, that they cover, or represent, this entire distribution that they're trying to model. It seems unclear to me why exactly these tasks, why they left others out and included these. In all fairness, they probably simply took the ones that they could get their hands on, but I still feel this is shaky, and it might lead to the benchmark not being adopted very widely. But alright, enough with the criticism, let's go further. They do present these kind of baseline experiments, where they pre-train, always on ImageNet, and then fine-tune on the layer-two tasks. The ways they pre-train are listed here. For example, if they pre-train a generative model, it actually performs worse than just training from scratch on the thousand samples of the layer-two task. Self-supervised is a pre-training method where, if you have an image, you do something like rotate it to the right or to the left, and then you ask the model, a sort of discriminator, did I turn it to the right or to the left, say zero for left and one for right. This is called self-supervised because you don't need labels for it, and it works reasonably well. Semi-supervised has some of the labels, and supervised is ImageNet with full labels. You see, unsurprisingly, that the more label information you have, the better you're going to be on all of these tasks. Interestingly, the generative pre-training works the worst, worse even than training from scratch, which is kind of a special result.
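Since I only described that rotation trick loosely, here's a tiny sketch of what such a self-supervised pre-training step could look like; the stand-in batch is made up, and real implementations typically predict one of four rotations rather than just left versus right:

```python
import torch
import torch.nn as nn
import torchvision

# Rotation-style self-supervision: rotate each image and train the network to
# predict how it was rotated, so the labels come for free from the data itself.
model = torchvision.models.resnet50(pretrained=False)
model.fc = nn.Linear(model.fc.in_features, 2)      # 0 = rotated one way, 1 = the other
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)               # stand-in for a batch of unlabeled images
direction = torch.randint(0, 2, (8,))              # pick a random rotation per image
rotated = torch.stack([torch.rot90(img, 1 if d == 0 else 3, dims=(1, 2))
                       for img, d in zip(images, direction)])

loss = loss_fn(model(rotated), direction)          # predict the rotation -> free training signal
optimizer.zero_grad()
loss.backward()
optimizer.step()
```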
What I do really appreciate about this investigation is that they examine a lot of variants of this benchmark, and I think the conclusion is encapsulated here. For example, they fine-tune their models using 16 Google Cloud TPU hardware accelerators, which is expensive, but they say they conduct additional experiments to assess whether the results can be reproduced with a more basic hardware setup: they evaluate on all the tasks using a single Nvidia P100 GPU, a thousand steps, and 64 images per mini-batch. So they verify that you can do this benchmark, you can take part in it, even if you don't have much time or money or hardware. That's also why, for example, they limit the number of examples in the layer-two tasks to a thousand, and they investigate that performance with a thousand examples correlates well with performance if you were to include the full data sets of the layer-two tasks. They investigate whether you can run it on a single GPU, and they investigate what happens if you only run it for a thousand steps; you see this experiment here, after a thousand steps you're basically almost at the level you'd reach with 50,000 steps. So a lot of work goes into making sure that everybody can participate in this benchmark, and I appreciate that a lot. There is actually code available: if you go to GitHub, to Google Research, and search for task adaptation, you'll find it. There is code that downloads all of the data sets for you and prepares them, and there is a script that runs your layer-one model, so you need to provide a layer-one model, but then the script runs it on all of the different layer-two tasks and at the end calculates your benchmark score for you. That's pretty neat, and I would encourage you, if you have a good idea for a pre-training or adaptation algorithm, to take part in the benchmark. I suspect there will be some kind of online leaderboard coming out at some point; otherwise you can simply report the number in your papers, and I hope you are going to be successful at that. Alright, that was it for me, have lots of fun, and bye bye.
[ { "end": 6.3, "start": 0, "text": " Hi there. Today we're looking at the visual task adaptation benchmark by a" }, { "end": 14.34, "start": 6.3, "text": " list of authors that's way too long to read out all from Google Brain. So what" }, { "end": 20.86, "start": 14.34, "text": " is this paper? This paper cares about a new benchmark that is abbreviated VTab" }, { "end": 28.22, "start": 20.86, "text": " and VTab is a benchmark for a task called visual task adaptation. So a" }, { "end": 34.64, "start": 28.22, "text": " benchmark, the meaning of a benchmark is it's kind of a number that you achieve" }, { "end": 40.32, "start": 34.64, "text": " with a model and whoever has the highest number is the best at this task." }, { "end": 46.82, "start": 40.32, "text": " So the benchmark kind of standardizes how you evaluate models and the" }, { "end": 52.239999999999995, "start": 46.82, "text": " model is here. They do visual task adaptation. So what is visual task" }, { "end": 59.52, "start": 52.24, "text": " adaptation? So this is visual task adaptation. It's kind of illustrated in" }, { "end": 65.76, "start": 59.52, "text": " this figure. Imagine you have a bunch of what are called visual tasks and a" }, { "end": 70.76, "start": 65.76, "text": " visual task, and this is the right side here, a visual task is anything that" }, { "end": 76.38, "start": 70.76, "text": " can be solved from just visual input. So basically given a picture or many" }, { "end": 83.1, "start": 76.38, "text": " pictures and you ask kind of a question about it, if that question can be" }, { "end": 88.44, "start": 83.1, "text": " answered by just looking at the picture then that's called a visual task. For" }, { "end": 93.24, "start": 88.44, "text": " example in this data set you might be asked whether a picture contains a dog" }, { "end": 102.03999999999999, "start": 93.24, "text": " or a cat. In this data set you might be asked to outline where the objects" }, { "end": 107, "start": 102.04, "text": " are. So here the plane, you might be able to segment or you might be able to point" }, { "end": 112.68, "start": 107, "text": " out where buildings are in the images. Right here, here, there's no building" }, { "end": 117.08000000000001, "start": 112.68, "text": " here. So there's varieties of tasks that are possible. Or in the" }, { "end": 123.04, "start": 117.08000000000001, "text": " bottom domain you might be asked which one of the two red dots here is closer" }, { "end": 129.60000000000002, "start": 123.04, "text": " to the observer in 3D space. Or you might be asked in this picture please count" }, { "end": 135.76, "start": 129.6, "text": " the number of gray boxes. So there's a bunch of, all of these count as visual" }, { "end": 142.12, "start": 135.76, "text": " tasks. Now the setting that the authors imagine here is there are many of these" }, { "end": 148.76, "start": 142.12, "text": " visual tasks in the world for which there isn't much training data. Imagine" }, { "end": 152.88, "start": 148.76, "text": " something like this. These are aerial images so you kind of need a satellite" }, { "end": 156.64, "start": 152.88, "text": " or a plane to obtain them and then you need to label them. So all of this is" }, { "end": 163.72, "start": 156.64, "text": " isn't that cheap. 
Even more so in a for example medical domain where you have" }, { "end": 169.35999999999999, "start": 163.72, "text": " very expensive CT images of patients and then you need to obtain them and you" }, { "end": 174.48, "start": 169.35999999999999, "text": " need to convince the patients to release their data and someone needs to label it." }, { "end": 180, "start": 174.48, "text": " So it's very costly to obtain lots of training data. Now what we want to do is" }, { "end": 185, "start": 180, "text": " we want to, for all of these tasks, we ideally want to build neural networks," }, { "end": 189.76, "start": 185, "text": " deep neural networks because we know they're super accurate but they are only" }, { "end": 194.88, "start": 189.76, "text": " super accurate if you have lots of training data. So that conflicts with the" }, { "end": 200.4, "start": 194.88, "text": " fact that we might not have so much training data for these tasks. So the" }, { "end": 204.84, "start": 200.4, "text": " proposed solution here is what's called visual task adaptation and it's the" }, { "end": 210.84, "start": 204.84, "text": " following. Imagine you have lots and lots of what's called here upstream data." }, { "end": 217.52, "start": 210.84, "text": " And upstream data, what they mean is data that is similar to the data here but not" }, { "end": 222.64000000000001, "start": 217.52, "text": " exactly the same but you have lots of it. And the example given is ImageNet." }, { "end": 231.12, "start": 222.64000000000001, "text": " So imagine this here to be ImageNet. ImageNet is a data set with over a" }, { "end": 237.88, "start": 231.12, "text": " million images. All of them are labeled into one of a thousand classes and so" }, { "end": 243.72, "start": 237.88, "text": " you can build a very good model for ImageNet to predict the ImageNet class." }, { "end": 250, "start": 243.72, "text": " And you can get very accurate, you have lots of data. Cool. So you build" }, { "end": 253.84, "start": 250, "text": " this model but now what you want to do is you want to use this what's here" }, { "end": 259.6, "start": 253.84, "text": " called an adaptation algorithm. And you want to use that model that you trained" }, { "end": 265.48, "start": 259.6, "text": " on ImageNet data and kind of change it just a bit. So you start from the model" }, { "end": 270.64000000000004, "start": 265.48, "text": " you have that works on ImageNet and with the few training data you have here on" }, { "end": 273.68, "start": 270.64000000000004, "text": " the right side and the author has actually standardized this in the" }, { "end": 278.76, "start": 273.68, "text": " benchmark to 1k samples. So you only have a thousand training samples" }, { "end": 283.20000000000005, "start": 278.76, "text": " compared to the millions that you potentially need. You have a thousand" }, { "end": 290.72, "start": 283.20000000000005, "text": " samples and you adapt your model to these tasks. So you train the model" }, { "end": 294.40000000000003, "start": 290.72, "text": " on ImageNet and you adapt it to predict whether or not there's a cat or a dog" }, { "end": 300.44, "start": 294.4, "text": " and you adapt it to segment these images and you adapt it to predict the depth of" }, { "end": 306.47999999999996, "start": 300.44, "text": " points. So you can consider this kind of as a pre-training thing. So you pre-train" }, { "end": 313.2, "start": 306.47999999999996, "text": " your model on ImageNet and then you adapt it to these others. 
That's what's" }, { "end": 318.12, "start": 313.2, "text": " called task adaptation. It's not exactly pre-training in the classic sense" }, { "end": 322.52, "start": 318.12, "text": " because pre-training in the classic sense means basically that you" }, { "end": 329.03999999999996, "start": 322.52, "text": " retain the same model but here it's a bit different. So in stage one you" }, { "end": 333.68, "start": 329.03999999999996, "text": " train a deep neural network on lots of training data. A deep neural network" }, { "end": 337.68, "start": 333.68, "text": " here this might be you know you have a bunch of layers layer layer layer layer" }, { "end": 343.96, "start": 337.68, "text": " layer and then here you have a thousand you classify into a thousand classes." }, { "end": 350.84, "start": 343.96, "text": " This is your model. Then in stage two over here you adapt this model" }, { "end": 357.03999999999996, "start": 350.84, "text": " and what it ultimately means is you take for example this part here up until the" }, { "end": 364.35999999999996, "start": 357.03999999999996, "text": " second to last layer transfer it over put it here right bam bam bam bam bam" }, { "end": 371.59999999999997, "start": 364.35999999999996, "text": " you retain the weights you keep the weights but then you add just one or two" }, { "end": 377.88, "start": 371.59999999999997, "text": " new layers and classify your new tasks. This could be is it a cat or is it a dog?" }, { "end": 383.44, "start": 377.88, "text": " Then you train you can either elect to only train the green part here or" }, { "end": 389.28, "start": 383.44, "text": " you can train the whole thing. The second thing is called fine-tuning." }, { "end": 394.71999999999997, "start": 389.28, "text": " The author is mostly elect to do fine-tuning in this work so you carry" }, { "end": 401.76, "start": 394.71999999999997, "text": " over the weights and you add a new head and then you train the entire thing with" }, { "end": 408, "start": 401.76, "text": " the 1000 samples that you have for this task and then you the kind of the goal" }, { "end": 412.44, "start": 408, "text": " is to get as good as possible on that one task where you only have a thousand" }, { "end": 420.03999999999996, "start": 412.44, "text": " samples. If your pre-training was good so if your stage one was good then you" }, { "end": 426.52, "start": 420.03999999999996, "text": " would expect that stage two would profit a lot from this pre-training which" }, { "end": 429.8, "start": 426.52, "text": " basically means that even though you only have a thousand samples you can" }, { "end": 437.08, "start": 429.8, "text": " reach accuracies that would usually only be possible with much more samples." }, { "end": 444.88, "start": 437.08, "text": " That's the idea behind it. This is what's called visual task" }, { "end": 452.04, "start": 444.88, "text": " adaptation. The authors propose a benchmark for this. A benchmark for" }, { "end": 457.88, "start": 452.04, "text": " this part, for the adaptation algorithm. The adaptation algorithm they" }, { "end": 463.4, "start": 457.88, "text": " propose as a baseline is train on ImageNet and then fine-tune. That's an" }, { "end": 468.76, "start": 463.4, "text": " adaptation algorithm. They propose a score for this. 
If you come up with a" }, { "end": 474.08, "start": 468.76, "text": " better adaptation algorithm for example you could say no I'm going to train" }, { "end": 480.64, "start": 474.08, "text": " on YouTube data and then do fine-tune that and then maybe you'd reach" }, { "end": 487.15999999999997, "start": 480.64, "text": " better accuracies in these tasks over here and then your" }, { "end": 490.72, "start": 487.16, "text": " score would be higher. It's kind of a benchmark to compare adaptation" }, { "end": 498.68, "start": 490.72, "text": " algorithms. Here your benchmark score and this is conditioned on n, the number of" }, { "end": 503.96000000000004, "start": 498.68, "text": " samples that you have in the in the layer two tasks and this here is" }, { "end": 512.9200000000001, "start": 503.96000000000004, "text": " standardized to 1000 in their case. The score of an adaptation algorithm A is" }, { "end": 522.92, "start": 512.92, "text": " the following. It's the expectation over this is kind of an error" }, { "end": 527.04, "start": 522.92, "text": " measure and you can think of it basically as a test set" }, { "end": 533.12, "start": 527.04, "text": " classification error on the layer two tasks. Of that adaptation algorithm if" }, { "end": 540.4, "start": 533.12, "text": " given the data set of a layer two tasks of n samples and the layer two tasks here" }, { "end": 548.8, "start": 540.4, "text": " comes from a distribution of layer two tasks. What does it mean? This" }, { "end": 553.12, "start": 548.8, "text": " distribution of layer two tasks they imagine, they show this in this picture," }, { "end": 559.4399999999999, "start": 553.12, "text": " they imagine the visual tasks like on this big landscape of visual" }, { "end": 565, "start": 559.4399999999999, "text": " tasks right here and what they ideally want to do is they want to sample a" }, { "end": 570.64, "start": 565, "text": " task here and this task corresponds to classifying these dog images and very" }, { "end": 576.32, "start": 570.64, "text": " close to it could be classifying bird images but then very far away could be a" }, { "end": 581.24, "start": 576.32, "text": " task of counting and depth estimation and so on. They imagine all the visual" }, { "end": 587.24, "start": 581.24, "text": " tasks have some kind of some sort of distribution. So what happens is" }, { "end": 594.06, "start": 587.24, "text": " you sample one of those visual tasks for each element in this" }, { "end": 599.76, "start": 594.06, "text": " expectation. You sample one of them, you build the data set with a thousand" }, { "end": 604.16, "start": 599.76, "text": " samples right you put it through your adaptation algorithms or your" }, { "end": 609.1199999999999, "start": 604.16, "text": " adaptation algorithm for example your pre-trained image net you adapt it to" }, { "end": 614.4799999999999, "start": 609.1199999999999, "text": " that task with a thousand samples and then you compute your error metric on" }, { "end": 621.8399999999999, "start": 614.4799999999999, "text": " that. Now if you do this over the whole distribution you get an expectation of" }, { "end": 628.24, "start": 621.84, "text": " this error metric in all the visual tasks and that will be your score." }, { "end": 633.4, "start": 628.24, "text": " What does it mean in practice? 
I mean in practice you don't have this" }, { "end": 639.8000000000001, "start": 633.4, "text": " distribution right in practice you have a list so like list here is a list of" }, { "end": 644.36, "start": 639.8000000000001, "text": " tasks right there's this task this task this task this task there's whatever the" }, { "end": 651.1600000000001, "start": 644.36, "text": " pets task and then there is the aerial then there is the counting right you" }, { "end": 658.24, "start": 651.16, "text": " have a list of tasks and what is it like this stuff and this expectation" }, { "end": 665.68, "start": 658.24, "text": " ultimately right stage one train a model M stage two for each of these tasks" }, { "end": 671.12, "start": 665.68, "text": " adapt the model M or fine-tune your model M on these tasks then for each" }, { "end": 678.0799999999999, "start": 671.12, "text": " task get an error rate error rate one task two gives you error rate two tasks" }, { "end": 687.32, "start": 678.08, "text": " three gives you error rate three then jump simply one over n sum them up so" }, { "end": 693.48, "start": 687.32, "text": " take the take the average error rate of the of the of all of the tasks and" }, { "end": 698.5200000000001, "start": 693.48, "text": " that's your score that's kind of my first criticism of this thing like this" }, { "end": 703.44, "start": 698.5200000000001, "text": " this all just seems like super mathematized with like oh we imagine all" }, { "end": 708.7600000000001, "start": 703.44, "text": " of these tasks being in some distribution somewhere like that there" }, { "end": 714.24, "start": 708.7600000000001, "text": " is a distribution of tasks and we have an expectation over the distribution" }, { "end": 720.6, "start": 714.24, "text": " now like why just say here's a bunch of tasks right adapt your model to each one" }, { "end": 727.6800000000001, "start": 720.6, "text": " of them get the average error rate done that's your score that would have been" }, { "end": 732.08, "start": 727.6800000000001, "text": " first of all much easier and second of all they never actually care to" }, { "end": 736.2800000000001, "start": 732.08, "text": " characterize this distribution like if if they were to actually rigorously" }, { "end": 740.1600000000001, "start": 736.2800000000001, "text": " characterize this distribution of visual tasks I would agree that this" }, { "end": 749, "start": 740.1600000000001, "text": " formulation makes sense but all they say basically all they say is tasks that a" }, { "end": 754.44, "start": 749, "text": " human can solve from visual input alone and they give a bunch of examples of" }, { "end": 764.9200000000001, "start": 754.44, "text": " you know a good task would be the following right so label one one zero" }, { "end": 769.9200000000001, "start": 764.9200000000001, "text": " zero one right and you probably figured it out the task is is it a square or is" }, { "end": 774.5200000000001, "start": 769.9200000000001, "text": " it a triangle right that's a does a visual task in the classic sense human" }, { "end": 779.08, "start": 774.5200000000001, "text": " can solve it from visual input alone then the following task wouldn't be as" }, { "end": 792.1600000000001, "start": 779.08, "text": " easy labels one zero zero one so the task I had in mind was is there and" }, { "end": 799.32, "start": 792.1600000000001, "text": " spelling is the spelling of the shape over here does it contain an a so square" }, { "end": 806, "start": 799.32, "text": " contains an a circle 
doesn't line doesn't but triangle contains an a right" }, { "end": 810.12, "start": 806, "text": " so therefore this you kind of need world knowledge and you can't just solve it" }, { "end": 815.12, "start": 810.12, "text": " from visual input alone right especially not you can't generalize to new new" }, { "end": 824.48, "start": 815.12, "text": " shapes if you if you just from visually put so um they and they say appendix B" }, { "end": 831.44, "start": 824.48, "text": " they validate this right they validate that humans can solve it but I I" }, { "end": 836.48, "start": 831.44, "text": " actually disagree with this because just because humans can solve a task just" }, { "end": 840.8800000000001, "start": 836.48, "text": " from visual input doesn't mean that they don't use world knowledge in it like in" }, { "end": 848, "start": 840.8800000000001, "text": " this whatever pets example here right humans know how cats and dogs look" }, { "end": 852.32, "start": 848, "text": " anatomically right how they look from the side and from the back and so on" }, { "end": 857.72, "start": 852.32, "text": " even if they haven't seen it in a picture they they know how they behave" }, { "end": 864.84, "start": 857.72, "text": " and so on what is kind of realistic setting for a cat and a dog to be in so" }, { "end": 870.12, "start": 864.84, "text": " all of this it seems kind of a bit shady and the reason I'm saying this is if" }, { "end": 874.4, "start": 870.12, "text": " you make this distribution formulation you also you have to give a rigorous" }, { "end": 880.76, "start": 874.4, "text": " definition and because if a new task arrives now like one that's not in your" }, { "end": 886.24, "start": 880.76, "text": " list like never been before here in the world like new task arrives how do we" }, { "end": 891.64, "start": 886.24, "text": " know whether or not we should include it in the list or not right how do we know" }, { "end": 899.04, "start": 891.64, "text": " whether it's part of this distribution or not it just seems very very shaky so" }, { "end": 905.6800000000001, "start": 899.04, "text": " that being said they do give this list and this list has 19 tasks that's down" }, { "end": 910.36, "start": 905.6800000000001, "text": " here so there are 19 tasks their categorized as natural which means" }, { "end": 916.24, "start": 910.36, "text": " natural images these these yeah the examples here are pets flowers images" }, { "end": 923.08, "start": 916.24, "text": " house numbers and so on specialized images are for example images with that" }, { "end": 929.04, "start": 923.08, "text": " you special equipment for example medical images and then structured means" }, { "end": 936.12, "start": 929.04, "text": " where that's down here structured means that the model needs come to comprehend" }, { "end": 941.64, "start": 936.12, "text": " the structure of a scene so they give an example of object counting or 3d depth" }, { "end": 947.12, "start": 941.64, "text": " prediction I mean that's that's fair enough they have these 19 tasks but and" }, { "end": 955.6, "start": 947.12, "text": " they show kind of the tasks down here here's a list of tasks and kind of their" }, { "end": 963.12, "start": 955.6, "text": " baseline method on it but but why for me like the question is why exactly these" }, { "end": 969.32, "start": 963.12, "text": " tasks if they don't specify this distribution why these tasks and they" }, { "end": 973.12, "start": 969.32, "text": " don't really like they do some they do a lot of 
experimentation actually an" }, { "end": 978.16, "start": 973.12, "text": " investigation but what's kind of missing for me is to show that these tasks first" }, { "end": 983.08, "start": 978.16, "text": " of all are kind of internally consistent in that they're really visual tasks and" }, { "end": 988, "start": 983.08, "text": " second of all that they kind of cover this distribution or they represent" }, { "end": 993.52, "start": 988, "text": " this entire distribution that they're trying to model and it seems to me" }, { "end": 999.76, "start": 993.52, "text": " unclear why exactly these tasks why they left others out and included these ones" }, { "end": 1005.16, "start": 999.76, "text": " in all fairness probably they simply took the ones that that they could get" }, { "end": 1014.4, "start": 1005.16, "text": " their hands on but still I feel that this is very shaky and that might that" }, { "end": 1020.28, "start": 1014.4, "text": " might lead to the benchmark not being adapted very widely but alright enough" }, { "end": 1027.36, "start": 1020.28, "text": " with the criticism let's go further in this so they do present this kind of" }, { "end": 1034.52, "start": 1027.36, "text": " baseline experiments and they they pre train always on image net and then they" }, { "end": 1041.12, "start": 1034.52, "text": " they they fine-tune on these layer two tasks and the way they pre train here is" }, { "end": 1046.1599999999999, "start": 1041.12, "text": " listed here for example so if they pre train a generative model it actually" }, { "end": 1050.36, "start": 1046.1599999999999, "text": " performs worse than if they just train from scratch for the layer two tasks on" }, { "end": 1056.1599999999999, "start": 1050.36, "text": " the thousand samples right self supervised is kind of a pre training" }, { "end": 1060.6, "start": 1056.1599999999999, "text": " method where if you have an image you do something like you rotate it to the" }, { "end": 1065.6399999999999, "start": 1060.6, "text": " right or to the left and then you ask a model some sort of a discriminator did" }, { "end": 1070, "start": 1065.6399999999999, "text": " it did I turn it to the right or to the left like zero is to the right left and" }, { "end": 1074.64, "start": 1070, "text": " one is to the right so you this is called self supervised you don't need" }, { "end": 1082.04, "start": 1074.64, "text": " labels for this right and it kind of works well semi supervised has some of" }, { "end": 1087.92, "start": 1082.04, "text": " the labels and supervised has is like image net with full labels and you kind" }, { "end": 1093.52, "start": 1087.92, "text": " of see unsurprisingly that the more information you have the the better you" }, { "end": 1098.52, "start": 1093.52, "text": " are going to be in all of these these kind of tasks interestingly the" }, { "end": 1105.84, "start": 1098.52, "text": " generative pre training works the worst worse than even from scratch training so" }, { "end": 1114.24, "start": 1105.84, "text": " that's kind of a sort of special what what I do really appreciate about this" }, { "end": 1121.44, "start": 1114.24, "text": " this investigation here is that they investigate a lot of variants of this" }, { "end": 1128.48, "start": 1121.44, "text": " of this benchmark and they come to the conclusion I think this encapsulated" }, { "end": 1134.64, "start": 1128.48, "text": " here one for example we find two models using 16 Google Cloud TPU hardware" }, { "end": 1139.32, "start": 1134.64, "text": " accelerators 
now that's expensive right but they say we conduct additional" }, { "end": 1143.6, "start": 1139.32, "text": " experiments to assess whether our result can be reproduced with a more basic" }, { "end": 1149.72, "start": 1143.6, "text": " hardware setup we evaluate on all the tasks using a single Nvidia P100 GPU" }, { "end": 1156.24, "start": 1149.72, "text": " with a thousand steps 64 images per mini batch right so they verify that you can" }, { "end": 1160.72, "start": 1156.24, "text": " do this benchmark you can take part in this benchmark even if you don't have" }, { "end": 1167.1200000000001, "start": 1160.72, "text": " much time or money or hardware right that's why for example they limit they" }, { "end": 1172.6, "start": 1167.1200000000001, "text": " limit the number of examples in the layer two tasks to a thousand they do" }, { "end": 1177.92, "start": 1172.6, "text": " investigate that this correlates with your performance if you were to include" }, { "end": 1182.56, "start": 1177.92, "text": " the full data sets of the layer two tasks so if you just include a thousand" }, { "end": 1187.36, "start": 1182.56, "text": " examples that correlates well they do investigate they do investigate whether" }, { "end": 1193.44, "start": 1187.36, "text": " you can put it on a single GPU they do investigate if you only run it for a" }, { "end": 1196.44, "start": 1193.44, "text": " thousand steps here you see this experiment you have to run it for a" }, { "end": 1202.28, "start": 1196.44, "text": " thousand steps basically and you're almost at the level if as if you were to" }, { "end": 1207.6, "start": 1202.28, "text": " run it for 50,000 steps so there's a lot of work to that goes into making sure" }, { "end": 1212.8, "start": 1207.6, "text": " that everybody can kind of participate in this benchmark and that I appreciate" }, { "end": 1220.1999999999998, "start": 1212.8, "text": " this a lot and there is actually code available so if you go to github and" }, { "end": 1225.08, "start": 1220.1999999999998, "text": " you just search for task adaptation actually I had it open before but I don't" }, { "end": 1231.7199999999998, "start": 1225.08, "text": " know so you go to github and you go to Google research and search for task" }, { "end": 1243.2, "start": 1231.72, "text": " adaptation to adaptation you'll you'll find it there is code that downloads all" }, { "end": 1249.2, "start": 1243.2, "text": " of the data sets for you prepares them and there is a script that runs your" }, { "end": 1253.64, "start": 1249.2, "text": " layer one model so you need to provide it a layer one model but then there is" }, { "end": 1261.1200000000001, "start": 1253.64, "text": " a script that that runs it on all of the different layer two tasks and at the end" }, { "end": 1267.1999999999998, "start": 1261.12, "text": " calculates your benchmark for you so that's pretty neat and I would encourage" }, { "end": 1272.4799999999998, "start": 1267.1999999999998, "text": " you if you have a good idea for a pre training or for a adaptation algorithm" }, { "end": 1277.28, "start": 1272.4799999999998, "text": " take part in the benchmark I suspect there will be a leaderboard kind of" }, { "end": 1282.12, "start": 1277.28, "text": " online leaderboard coming out at some point otherwise you simply can report" }, { "end": 1288.1999999999998, "start": 1282.12, "text": " the number in your papers and I hope you are going to be successful at that all" }, { "end": 1296.28, "start": 1288.2, "text": " right so that was it for me have 
lots of fun and bye bye" } ]
69IjNZaoeao
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
LeDeepChef 👨‍🍳 Deep Reinforcement Learning Agent for Families of Text-Based Games
[ "Science & Technology" ]
[ "ml", "machine learning", "reinforcement learning", "recipe", "text-based games", "text games", "natural language processing", "nlp", "actor", "critic", "GRU", "embedding", "pretraining", "artificial intelligence", "ai", "competition", "microsoft" ]
The AI cook is here! This agent learns to play a text-based game where the goal is to prepare a meal according to a recipe. Challenges? Many! The number of possible actions is huge, ingredients change and can include ones never seen before, you need to navigate rooms, use tools, manage an inventory and sequence everything correctly and all of this from a noisy textual description that the game engine throws at you. This paper mixes supervised explicit training with reinforcement learning in order to solve this task. Abstract: While Reinforcement Learning (RL) approaches lead to significant achievements in a variety of areas in recent history, natural language tasks remained mostly unaffected, due to the compositional and combinatorial nature that makes them notoriously hard to optimize. With the emerging field of Text-Based Games (TBGs), researchers try to bridge this gap. Inspired by the success of RL algorithms on Atari games, the idea is to develop new methods in a restricted game world and then gradually move to more complex environments. Previous work in the area of TBGs has mainly focused on solving individual games. We, however, consider the task of designing an agent that not just succeeds in a single game, but performs well across a whole family of games, sharing the same theme. In this work, we present our deep RL agent--LeDeepChef--that shows generalization capabilities to never-before-seen games of the same family with different environments and task descriptions. The agent participated in Microsoft Research's "First TextWorld Problems: A Language and Reinforcement Learning Challenge" and outperformed all but one competitor on the final test set. The games from the challenge all share the same theme, namely cooking in a modern house environment, but differ significantly in the arrangement of the rooms, the presented objects, and the specific goal (recipe to cook). To build an agent that achieves high scores across a whole family of games, we use an actor-critic framework and prune the action-space by using ideas from hierarchical reinforcement learning and a specialized module trained on a recipe database. Authors: Leonard Adolphs, Thomas Hofmann https://arxiv.org/abs/1909.01646
Hi there. Today we're looking at LeDeepChef, a deep reinforcement learning agent for families of text-based games, by Leonard Adolphs and Thomas Hofmann. This is a paper about engineering an agent for a particular family of tasks. That's different from reinforcement learning agents that are just good at one single game, say Pong, or, I guess, even things like StarCraft, though that kind of depends on what you mean by a game. So what are we talking about here? The following: text-based games where the goal is to cook recipes. Let's just jump in and see what goes on. The game starts by telling you, you are hungry, let's cook a delicious meal, and so on. The objective is basically always the same: find the cookbook, read the recipe that's in it, then collect all the things that are in the recipe, prepare them in certain ways that are also specified by the recipe, and at the end you have a meal, you can eat the meal, and that will give you points. But since it's a text-based game, the input doesn't come in a structured form, it comes as natural text. The game tells you, for example, kitchen, so basically you're in the kitchen: you are now in the kitchen, I guess you better just go and list everything you see here, you hear a noise, you spin around. So you see that the kind of input you get from the game is very playful and has a lot of descriptive elements. Sometimes it's like: you see a closed oven, you make out a table, then on the counter you can make out a sliced fried red hot pepper, and so on. So it's very much not trivial to parse this in a traditional way. If you were to go about this by simply writing an algorithm that extracts things, it would be very hard, because for example you might see that there's an oven, but it's a closed oven. You make out a table, which is just a synonym for you see a table, but you need to understand that there is a table. You can make out a sliced fried red hot pepper, and here it's important not only to realize that there is a red hot pepper, but also that its state is sliced and fried. This is important because you need all ingredients in a certain state. You examine the stove, so there is a stove. All these things you need to kind of understand. Now, if there is a recipe book in the room, you can examine the recipe, and that's a command. The arrows here always indicate that it's a user command, and these you have to type; that's the next thing your agent needs to do. You can't select from a predefined set of actions, you actually need to type in the things you want to do, and there are a lot of possibilities of what you could type. Even if you restrict it to what you know the game accepts, there are still so many actions. It's way different from, for example, Atari games, which always have, say, eight actions, eight buttons you could possibly press, and that's it. Here there are combinatorially many things you can do: you can prepare and take all of the ingredients, and you don't know in advance which ingredients will come up. So here you examine the recipe, let's look at it. It says: you open the recipe, start reading, recipe number one, here are the ingredients: red hot pepper. For right now, that's just one ingredient. Then there are directions. So what do you need to do?
Slice the red hot pepper, fry the red hot pepper, and prepare the meal. Those are the directions of the recipe. You also have this inventory command, which tells you what you're carrying. Next difficulty: the inventory is finite, so you can't carry everything; at some point you have to drop things that are unnecessary, you can't just take everything. Here you see the command take red hot pepper. That only works if there's a red hot pepper in the room, and here it says: you take the red hot pepper from the counter, your score has just gone up by one point. And then if you type inventory, it says you're carrying a sliced fried red hot pepper. Again, it gives the state of the ingredient: the ingredient is the red hot pepper and its state is sliced and fried. Then you can prepare the meal, then you can eat the meal, and it says your score has just gone up by one point. These are the scores you collect. Now there are a lot of difficulties that are not even shown in this example. For example, there are different rooms. You may have noticed that here you're in the kitchen, but there could be other rooms, and you start in a random room, so you also need to navigate through the rooms. The doors to the rooms could be closed, and then you need to open them, and so on. And, for example, if this pepper here weren't already sliced and fried, you can only slice it if there is a knife in the room, and you can only fry it if there is a frying pan or an oven or a stove in the room. So you'd have to notice that there is a knife, and if there is no knife, you need to take the red hot pepper, bring it to a room with a knife, and then slice it. So this is a vastly difficult game. The last difficulty is that in the test set there will be ingredients that you haven't seen during training, so your agent also needs to generalize. That's why it says a family of text-based games: the objective is always the same, to cook the recipe, but the things you have to do and the things that appear change basically from episode to episode, and the test set will be different from the training set; there will be unseen data. Alright, so how does this paper go about solving the problem? The paper basically does the following, and we're going here from high level to low level. On the highest level it's a reinforcement learning agent, and that is sort of how you would imagine an RL agent to work. Here at the end you have a policy, and the policy predicts an action. If you don't know what a policy and an action are in RL, these are basic RL concepts and we'll kind of skip them here; I'll assume everyone knows what they are. But essentially, a policy specifies which action you take next, given the current game state. The policy scores different actions, and at each step there are k actions available. Now, as I said before, there are almost infinitely many actions that you could take. The first difficulty, and that's the thing that actually comes in here, is to reduce all of the possible actions, which you can't even list, to just k commands. We'll go into later how this is done, but basically one of the main contributions of this paper is how you even specify what would be reasonable to do in the current situation. Then the policy over here only has to decide among those reasonable actions, not among all actions.
But given that you have k reasonable commands, you see here, command one through command k, these are embedded and then fed into GRUs, which are recurrent neural networks. For each of these commands you get a 32-dimensional vector; these are C1 through Ck here. Each one is combined with an encoding of the current state, so these 32-dimensional vectors are combined with an encoding of the current state, which is 256-dimensional, and then fed into a neural network that outputs a probability distribution over these actions. This is pretty classic in deep reinforcement learning: you have an action encoding and a state encoding, and the policy decides based on both. The state encoding, you'll see, is the same everywhere, of course, because the current game state is the current game state. It comes from this model up here. What this does is the following: over here you have what you would call the current observation. The current observation is composed of many things, specifically the following eight. The first one is actually called observation, though from an RL perspective I would call all of this the observation; it's the big text you saw before, like: you're in the kitchen, it looks like this, it smells like this, you turn around, and so on. It's whatever the game engine says at the current time step, just a piece of text. Second, missing items; third, unnecessary items. Now you might wonder, okay, how do I know which items are missing or unnecessary? These come from another model that this paper trains, and we'll get into that later, but basically they have a method of specifying which items are still missing and which are unnecessary, and they list those here. Then, description, which is the output of the last look command; in each room you can type look and it will give you a description of the room and what's in there. Then, the previous commands. This is often used in RL, either explicitly or implicitly through a recurrent network, in order to give the agent an idea of what happened in the previous steps, or what it did, so it learns not to repeat actions unnecessarily. Then, required utilities. Again, this is a model that's trained to predict what utilities are required to perform certain actions; as I said before, if you want to slice the red hot pepper you need a knife, and if you want to fry it you need a stove. Then, discovered locations. As I said, there are different rooms, and you don't actually know what rooms there are before you go in there; only when you go through a door do you reach another room. So the list of previously discovered and visited locations is in there, and finally the name of the current location is also there. These are the eight things that make up the current observation. The eight things are just strings of text, and each one, as you can see here, from observation down to location, is embedded and also fed into an RNN. So for each of these eight things you obtain a 32-dimensional vector, and these are all concatenated to make up one big 256-dimensional vector. This 256-dimensional vector will therefore contain all the necessary information about the current room: what's in there, which items you're still missing, which items you have in your inventory, which ones are unnecessary, and so on.
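Just to picture that encoder, here's a rough sketch of how you could set it up; the component names, the vocabulary and the embedding size are my own guesses, only the 32- and 256-dimensional sizes come from the description above:

```python
import torch
import torch.nn as nn

# The eight text components of the observation (names are my own labels).
COMPONENTS = ["observation", "missing_items", "unnecessary_items", "description",
              "previous_commands", "required_utilities", "discovered_locations", "location"]

class ObservationEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # One GRU per component; each one summarizes its text into a 32-dim vector.
        self.grus = nn.ModuleDict({c: nn.GRU(emb_dim, hidden, batch_first=True)
                                   for c in COMPONENTS})

    def forward(self, token_ids):        # dict: component name -> (batch, seq_len) token ids
        parts = []
        for c in COMPONENTS:
            _, h = self.grus[c](self.embed(token_ids[c]))   # h: (1, batch, 32)
            parts.append(h.squeeze(0))
        return torch.cat(parts, dim=-1)  # (batch, 8 * 32 = 256) state description
```

The important bit is just that each text field gets summarized independently before everything is concatenated into the 256-dimensional state description.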
So if you train this correctly, this 256-dimensional vector will describe the current game state as it is relevant to your agent; every relevant piece of information will be encoded in this vector. Now, this vector isn't the final state encoding yet. What you do is feed it into an RNN that also takes as input the previous time steps: you have to imagine that in the last time step there was already an observation, this entire thing was already computed, I'm just copying this box over here, and it was already fed into an RNN. So this is an RNN that actually goes over time, and whatever the output is at one step is fed to the next step. This is a trick often done in reinforcement learning as well: you have a recurrent neural network over the time steps, so at each time step you have an observation, you encode it and so on, you get a description of it, and then you feed that into the RNN. What the RNN can learn to do is to react not only to the current observation but to the current observation conditioned on the history of previous observations. It can learn things like: before, I was in that room, now I'm in this new room, so I actually haven't taken all the items from this room yet, because I just came in. The component where you're able to look at the past and what happened in the past is captured by this RNN. So it's a fairly complicated architecture, but this state encoding, which is conditioned also on the history, then goes into here; that's the vector that goes in here and is combined with each action, with all of these k actions, and this is all fed through a neural network, and that gives you the policy. It's a fairly complicated thing, but if you look at it, it's not too difficult, actually. What you do is take your observations, encode them and combine them with the history in order to get an encoding of the current state; on the other hand, you take all of the commands that you could perform right now, encode each one separately into an embedding, then combine each one of those with the state encoding you computed previously, and from that you make your decision which action to take next. The action that's output is sampled from this policy. The last thing you need is a value network, and this is just important for reinforcement learning. It tells you, from this state here, and I'm getting weird with colors here, but this state is the same as that one, so you simply transfer it over: how valuable is this state, what's my value of the state? The value is: if I'm in this state and I act as I normally act, what are all my future rewards going to be, combined? It basically gives you a value of the state. You can think of this, for example, in terms of chess: if you had this in chess, then this here would be a description of the chess board, and the value would be how valuable this position is for you. If you're ahead in material and position and so on, this value would be very high; if you're behind, this value would be very low. And this is a neural network simply trying to predict that value. So with all of this, you now have a pretty good basis to do reinforcement learning.
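Before we get to the training, here is roughly what those two heads could look like in code; again a sketch of the idea, not the authors' actual implementation, and the hidden sizes plus the assumption that the history encoding stays 256-dimensional are my guesses:

```python
import torch
import torch.nn as nn

class PolicyValueHeads(nn.Module):
    def __init__(self, state_dim=256, cmd_dim=32, hidden=128):
        super().__init__()
        # Scores one (state, command) pair at a time.
        self.score = nn.Sequential(nn.Linear(state_dim + cmd_dim, hidden),
                                   nn.ReLU(), nn.Linear(hidden, 1))
        # Predicts the value of the state alone.
        self.value = nn.Sequential(nn.Linear(state_dim, hidden),
                                   nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, state, commands):              # state: (256,), commands: (k, 32)
        k = commands.size(0)
        paired = torch.cat([state.unsqueeze(0).expand(k, -1), commands], dim=-1)
        logits = self.score(paired).squeeze(-1)      # one score per candidate command
        policy = torch.softmax(logits, dim=-1)       # distribution over the k commands
        return policy, self.value(state)             # value: expected future reward from here
```

So the same state encoding is reused for every candidate command, and only the command part changes.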
You have a policy, you have a value network, and from that you can train an RL agent. This is done classically in an actor-critic way, where you do advantage learning: the policy is trained weighted by the advantage, the value network is trained to be close to the actual reward, and then you have an entropy penalty. If you don't know what these things are, the video would get a bit too long if I were to go over these reinforcement learning concepts, but they are very standard in reinforcement learning. What they do is allow you to train these neural networks in the absence of labeled training data, because you don't know what the best action is at each step, there's no one telling you; you just have a reward, sometimes you get a point, and you don't know which actions led to it. These techniques allow you to train the networks using just the reward, without knowing which exact actions were right or wrong, and that's the core of reinforcement learning, obviously. Alright, so one of the core ingredients here is this recipe manager. The recipe manager is a sub-model that does the following: it takes as input the cookbook, and it also takes as input the inventory, and it outputs something like this, which is a table representation of what it produces. It outputs all the ingredients that you need for the recipe, whether or not each ingredient is currently missing from your inventory, and the actions still to be performed. Let's look at this example: the recipe tells you the ingredients are a carrot, a red hot pepper, and a white onion, and the inventory says you're carrying a white onion and a carrot. So down here you see: aha, we do actually have a carrot, so it's not missing, we have it in our inventory; the red hot pepper is missing, we don't have it in the inventory but we need it for the recipe; the white onion we need for the recipe, but it's not missing. Then, for each of the ingredients, this recipe model is also supposed to tell you which actions you still need to perform on it. Here it says slice the carrot, roast the carrot, and you simply have a carrot; it doesn't say sliced or roasted, which means it's not sliced and not roasted, so the recipe model is supposed to output that you still need to slice and roast the carrot. For the white onion, for example, the recipe says fry the white onion, and as you can see in the inventory, you're carrying a fried white onion, so for the white onion there's nothing left to do. So the recipe model is basically trying to produce this table, and you can see this table as an intermediary step in order to do all the other things.
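Concretely, the explicit intermediate output is literally a little table like this, here written as a Python structure using the example from above; the red hot pepper's preparation steps aren't spelled out in the example, so that entry is just a placeholder:

```python
# What the recipe manager's table amounts to for the example above.
recipe_table = [
    {"ingredient": "carrot",         "missing": False, "actions": ["slice", "roast"]},
    {"ingredient": "red hot pepper", "missing": True,  "actions": []},   # placeholder: steps not given
    {"ingredient": "white onion",    "missing": False, "actions": []},   # already fried, nothing left to do
]
```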
You have to imagine you already have this really, really big model construction here, and you're trying to learn it from a tiny reward signal that you only get at the end; this is a very noisy signal. If you now say: well, the inputs to these things, this command here, and we also saw that the inputs to these depend on the recipe model, are themselves whatever giant neural network construction, we'll train it all end-to-end, and these will actually not be text but some sort of latent vectors, that will often fail, because you're now just trying to extract information from too noisy a reward signal. So the authors here do actually a pretty neat separation of that: they train this recipe model on an augmented data set. They go to Freebase and get more food items, then construct a data set that resembles this and train the model in a supervised way to output tables like this. This is pretty smart, and I think it's a good lesson if you ever attempt something like this: really important information such as this, if you can train it in a supervised way as a kind of pre-processing step to your RL procedure, that's extremely helpful. Here you can see how this is then used. By combining the table that was output from the recipe model, your inventory, and the output of the look command, you can then generate these commands. Before, we said it's important to reduce everything you could do, which is infinitely many things, to everything that is reasonable to do currently, and this model here does that. Given the table, the inventory, and the description of what's currently in the room, you can now generate these commands: for example, "take knife" if you have to slice something and you see a knife is in the room, so you could conceivably take it. But also, since you know what's in your inventory and which things are still missing, you can generate commands like "take the white onion" or "drop the water", because you don't need the water. The authors also group these things into what they call high-level commands: "take all required items from here" simply means take everything that's in the room that is not in the inventory but that you need. For an RL agent it makes sense to group these together, because it doesn't make sense to have them as separate commands: if you need several items, take all of them, and if you carry unnecessary items, drop all of them. That's a small optimization that apparently brought some gains, but the overarching message here is that once you have this information from the recipe model, you can use it in many useful ways to make life for your RL agent easier. Alright, so that is more or less the entire model. It's quite convoluted, but basically you start with this recipe manager, which outputs this table down here: which ingredients are in the recipe, are they still missing, and which actions do we still need to perform. You then combine it with the information about the current room and your inventory in order to come up with a set of commands that are conceivable to do. You combine these commands with some commands that are always available, such as "look" and "inventory".
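A rough sketch of how the reduced command set could be assembled from the recipe table, the inventory, and the room contents is given below. The function and the heuristics are made up for illustration; the real system works from the parsed text of the game's look command and inventory output rather than from ready-made lists.

```python
def generate_commands(recipe_table, inventory, room_items):
    """Build the small set of 'reasonable' commands the policy chooses from.

    recipe_table: list of dicts like
        {"name": "carrot", "missing": False, "actions_left": ["slice", "roast"]}
    inventory, room_items: lists of item-name strings.
    """
    commands = []
    needed = {entry["name"] for entry in recipe_table}

    # take missing ingredients that happen to be in the room
    for entry in recipe_table:
        if entry["missing"] and entry["name"] in room_items:
            commands.append(f"take {entry['name']}")

    # drop inventory items the recipe does not ask for (inventory space is limited)
    for item in inventory:
        if item not in needed:
            commands.append(f"drop {item}")

    # take utilities required by the remaining preparation steps
    if any("slice" in entry["actions_left"] for entry in recipe_table) and "knife" in room_items:
        commands.append("take knife")

    # grouped high-level command: grab everything still needed in one go
    if any(entry["missing"] and entry["name"] in room_items for entry in recipe_table):
        commands.append("take all required items from here")

    return commands

# e.g. with the table from above, an inventory of ["carrot", "water"] and a room
# containing ["red hot pepper", "knife"], this would propose:
# ["take red hot pepper", "drop water", "take knife", "take all required items from here"]
```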
The command "prepare meal" is added only if the recipe manager does not output any missing ingredients and the agent's location is the kitchen; then you can add these other items as well. You also add, and we're not even going to get into that in detail, navigational commands, because there are doors in these rooms and you need to navigate around. They actually train another model to detect directions that you could move in, and to open doors for every closed door in the room. That's another challenge the agent needs to overcome: they have to build an entire model to predict which doors are there, whether they are closed, and whether you need to open them. If there are doors and you can move through them, these commands are also added to the set of commands that are reasonable. So now we have a set of reasonable commands over here; then you describe the room, you put both into this embedding, and finally your policy outputs an action. That's the entire process: very convoluted, very big, and quite astonishing that this works with RL, but in order to get it to work you actually need to do this supervised training. The experimental evidence here is quite solid, in that they compare to baseline systems that use classic techniques, they do some ablations over their individual parts, and they get second place, I think, in a competition about these text-based games. So that's pretty good. That was it for me, check it out, and bye bye.
[ { "end": 5.4, "start": 0, "text": " Hi there. Today we're looking at Le Deep Chef, deep reinforcement learning agent" }, { "end": 11.28, "start": 5.4, "text": " for families of text-based games by Leonard Adolfs and Thomas Hoffmann. So" }, { "end": 18.400000000000002, "start": 11.28, "text": " this is a paper about engineering an agent for a particular family of tasks." }, { "end": 22.400000000000002, "start": 18.400000000000002, "text": " This is different from reinforcement learning agents that for example are" }, { "end": 30.24, "start": 22.4, "text": " just good at one game, let's say Pong or whatnot and even I guess even things" }, { "end": 39.08, "start": 30.24, "text": " like Starcraft. Though this kind of depends on what you mean by game. So what" }, { "end": 45.32, "start": 39.08, "text": " are we talking about here? The following is a text-based games where the goal is" }, { "end": 55.7, "start": 45.32, "text": " to cook recipes. So let's just jump in and see what goes on. The game" }, { "end": 62.16, "start": 55.7, "text": " starts by telling you, you are hungry. Let's cook a delicious meal and so on." }, { "end": 68.52, "start": 62.16, "text": " So the objective is basically always the same. It's find the cookbook, read the" }, { "end": 75.16, "start": 68.52, "text": " recipe that's in it, then collect all the things that are in the recipe, prepare" }, { "end": 80.47999999999999, "start": 75.16, "text": " them in certain ways that are also specified by the recipe and then at the" }, { "end": 84.75999999999999, "start": 80.47999999999999, "text": " end you have a meal and then you can eat the meal and that will give you points." }, { "end": 91.52, "start": 84.75999999999999, "text": " But since it's a text-based games and the input doesn't come structured but it" }, { "end": 98.52, "start": 91.52, "text": " comes in natural text. So the game tells you for example kitchen. So basically" }, { "end": 102.72, "start": 98.52, "text": " you're in the kitchen. You are now in the kitchen. I guess you better just go and" }, { "end": 107.8, "start": 102.72, "text": " list everything you see here. You hear a noise, you spin around. So you see that" }, { "end": 113.84, "start": 107.8, "text": " the kind of input you get from the game is very playful, has a lot of descriptive" }, { "end": 123.6, "start": 113.84, "text": " elements. Sometimes it's like you see a closed oven. You make out a table. Then" }, { "end": 130.04, "start": 123.6, "text": " you can see on the counter you can make out a sliced fried red hot pepper and so" }, { "end": 136.92, "start": 130.04, "text": " on. So it's very much not trivial to kind of parse this in a traditional way." }, { "end": 141.84, "start": 136.92, "text": " If you were to go about this by simply writing an algorithm extracting things" }, { "end": 147.32, "start": 141.84, "text": " it's very hard because for example you might see that there's an oven but it's" }, { "end": 153.07999999999998, "start": 147.32, "text": " a closed oven. You make out a table. So this is kind of a synonym for you see a" }, { "end": 160.76000000000002, "start": 153.08, "text": " table but you see like there is a table. You can make out a sliced fried red hot" }, { "end": 164.48000000000002, "start": 160.76000000000002, "text": " pepper and here it's important not only do you need to realize that there is a" }, { "end": 170.8, "start": 164.48000000000002, "text": " red hot pepper but also that its state is sliced and fried. 
This is important" }, { "end": 179, "start": 170.8, "text": " because you need all ingredients in a certain state. Right? You examine here you" }, { "end": 186.96, "start": 179, "text": " examine the stove so there is a stove. Right? So all these things you need to" }, { "end": 193.48, "start": 186.96, "text": " kind of understand. So if you now look there is a recipe book in here." }, { "end": 200.2, "start": 193.48, "text": " Or no there isn't a recipe. You can examine recipe. I guess there is a recipe" }, { "end": 206.84, "start": 200.2, "text": " book in that room. If there is a recipe book then you can examine the recipe and" }, { "end": 211.32, "start": 206.84, "text": " that's the command. So the arrows here always indicate that that's a user" }, { "end": 217.16, "start": 211.32, "text": " command. And these you have to type. That's like the next thing that" }, { "end": 223.64000000000001, "start": 217.16, "text": " your agent needs to do. You can't select from a predefined set of actions." }, { "end": 228.96, "start": 223.64000000000001, "text": " You actually need to type in the things you want to do. Right? And these are a" }, { "end": 233.32, "start": 228.96, "text": " lot. Like there are a lot of possibilities of what you could type in." }, { "end": 237.48, "start": 233.32, "text": " Even if you restrict it to kind of what you know the game accepts there are" }, { "end": 243.4, "start": 237.48, "text": " still so many actions. It's way different than for example Atari games." }, { "end": 246.51999999999998, "start": 243.4, "text": " They always have eight actions. Like there's eight buttons you could" }, { "end": 252.64, "start": 246.51999999999998, "text": " possibly press and that's it. And here there are like combinatorically many" }, { "end": 259.03999999999996, "start": 252.64, "text": " things you can do. Like you can prepare and take and all the ingredients. You" }, { "end": 264.92, "start": 259.04, "text": " don't know which ingredients come. So here you examine the recipe." }, { "end": 269.6, "start": 264.92, "text": " Let's look at a recipe. It says you open the recipe. Start reading. Recipe number" }, { "end": 275.6, "start": 269.6, "text": " one. Here are the ingredients. Red hot pepper. Here for right now that's just one" }, { "end": 280.08000000000004, "start": 275.6, "text": " ingredient. Then there are directions. So what do you need to do? Slice the red" }, { "end": 285.32000000000005, "start": 280.08000000000004, "text": " hot pepper. Fry the red hot pepper and prepare the meal. Those are" }, { "end": 291.2, "start": 285.32, "text": " the directions of the recipe. You also have this inventory command which" }, { "end": 298.4, "start": 291.2, "text": " tells you which you're carrying. Next difficulty. The inventory is finite. So" }, { "end": 302.68, "start": 298.4, "text": " you can't carry everything. At some points you have to drop things that are" }, { "end": 308.6, "start": 302.68, "text": " unnecessary. You can't just take everything. Here you see the command take" }, { "end": 313.08, "start": 308.6, "text": " red hot pepper. That only works if there's a red hot pepper in the room. And" }, { "end": 318.12, "start": 313.08, "text": " here says you take the red hot pepper from the counter. Your score has just gone" }, { "end": 322.44, "start": 318.12, "text": " up by one point. And then if you type inventory it says you're carrying a" }, { "end": 330.08, "start": 322.44, "text": " sliced fried red hot pepper. Again here it says the state of the ingredient." 
}, { "end": 336.44, "start": 330.08, "text": " So the ingredient is the red hot pepper and the state is sliced and fried. And" }, { "end": 340.32, "start": 336.44, "text": " then you can prepare meal and then you can eat meal and then it says your" }, { "end": 345.92, "start": 340.32, "text": " score has just gone up by one point. And these are the scores you collect. So" }, { "end": 349.52, "start": 345.92, "text": " there are a lot of difficulties that are actually not shown in this example. For" }, { "end": 354.2, "start": 349.52, "text": " example there are different rooms. You may have noticed here you're in the" }, { "end": 359.48, "start": 354.2, "text": " kitchen. But there could be other rooms and you start in a random room. You also" }, { "end": 364.32, "start": 359.48, "text": " need to navigate through the rooms. Close the doors to the rooms could be" }, { "end": 373.08, "start": 364.32, "text": " closed and then you need to open them and so on. You can only for example if" }, { "end": 382.68, "start": 373.08, "text": " this pepper here weren't already sliced and fried you need to find..." }, { "end": 389.2, "start": 382.68, "text": " You can only slice it if there is a knife in the room. You can only fry" }, { "end": 395.12, "start": 389.2, "text": " it if there is a frying pan or an oven or a stove in the room." }, { "end": 402.59999999999997, "start": 395.12, "text": " So and then you'd have to notice that there is a knife. If there is no knife" }, { "end": 407.24, "start": 402.59999999999997, "text": " you need to take the red hot pepper bring it to a new room with a knife and" }, { "end": 415.2, "start": 407.24, "text": " then slice it. So this is vastly difficult game. The last difficulty is" }, { "end": 422.47999999999996, "start": 415.2, "text": " actually that in the test set there will be ingredients that you haven't seen" }, { "end": 428.92, "start": 422.47999999999996, "text": " during training. So also that there. Your agent needs to generalize. That's why it" }, { "end": 432.88, "start": 428.92, "text": " says a family of text-based games. Because the objective always the same to" }, { "end": 436.36, "start": 432.88, "text": " kind of cook the recipe. But the things you have to do and the things that" }, { "end": 443.36, "start": 436.36, "text": " appear and so on those are those change basically from episode to episode. And" }, { "end": 448.88, "start": 443.36, "text": " the test set will be different than the training set or kind of there will be" }, { "end": 454.84000000000003, "start": 448.88, "text": " unseen data. Alright so how does this paper go about solving this problem?" }, { "end": 465.2, "start": 454.84000000000003, "text": " This paper basically does the following and we are going here from high level to" }, { "end": 471.84000000000003, "start": 465.2, "text": " low level. On the highest level it's a reinforcement learning agent and that is" }, { "end": 481.64, "start": 471.84, "text": " sort of how you would imagine an RL agent to work. So here at the end you have" }, { "end": 487.71999999999997, "start": 481.64, "text": " a policy and the policy predicts an action. If you don't know what a kind of" }, { "end": 492.32, "start": 487.71999999999997, "text": " a policy and an action things are in RL these are basic RL concept and we'll" }, { "end": 498.2, "start": 492.32, "text": " kind of skip them here and I'll assume everyone knows what they are. 
But" }, { "end": 503.64, "start": 498.2, "text": " essentially a policy specifies which action you take next given the current" }, { "end": 511.64, "start": 503.64, "text": " game state. So the policy is made up, scores different actions. So at each step" }, { "end": 519.2, "start": 511.64, "text": " there are k actions available. And these k actions I foresaid there are almost" }, { "end": 524.84, "start": 519.2, "text": " infinitely many actions that you could take. The first difficulty and that's the" }, { "end": 534.2800000000001, "start": 524.84, "text": " thing that actually comes in here is to reduce all of the possible actions that" }, { "end": 541.48, "start": 534.2800000000001, "text": " you can't even list to just k commands. So we'll go into that later how this is" }, { "end": 547.52, "start": 541.48, "text": " done. But basically one of the main contributions of this paper is how do" }, { "end": 553.9200000000001, "start": 547.52, "text": " you even specify what is reasonable, what would be reasonable to do in the current" }, { "end": 559.92, "start": 553.92, "text": " situation. And then the policy over here only has to decide among those reasonable" }, { "end": 566.16, "start": 559.92, "text": " actions, not among all actions. But given that you have k reasonable commands" }, { "end": 572.64, "start": 566.16, "text": " you see here command one command, these are embedded and then fed into GRUs which are" }, { "end": 578.4399999999999, "start": 572.64, "text": " recurrent neural networks. So for each of these commands you'll get a 32" }, { "end": 588.48, "start": 578.44, "text": " dimensional vector. This 32 dimensional vector is here C1 through Ck. Each are" }, { "end": 596.36, "start": 588.48, "text": " combined with an encoding of the current state. So these 32 dimensional" }, { "end": 601.6800000000001, "start": 596.36, "text": " vector are combined with encoding of the current state which is 256 dimensional" }, { "end": 607.9200000000001, "start": 601.6800000000001, "text": " and then fed into a neural network that will output a probability distribution" }, { "end": 613.4, "start": 607.92, "text": " over these actions. This is pretty classic in deep reinforcement learning." }, { "end": 619.4399999999999, "start": 613.4, "text": " So you have action encoding and the state encoding and the policy decides on that." }, { "end": 623.52, "start": 619.4399999999999, "text": " The state encoding you'll see here it's the same everywhere of course because" }, { "end": 628.7199999999999, "start": 623.52, "text": " the current game state is the current game state. This comes from this model up" }, { "end": 636.3199999999999, "start": 628.7199999999999, "text": " here. What this does is over here you have the what you would call the state" }, { "end": 643.9200000000001, "start": 636.32, "text": " the current observation. The current observation is composed of many" }, { "end": 649.08, "start": 643.9200000000001, "text": " things. Specifically the following eight things. The first one is actually" }, { "end": 655.2800000000001, "start": 649.08, "text": " called observation which is I would call all of this the current observation" }, { "end": 661.08, "start": 655.2800000000001, "text": " from an RL perspective. But the first is actually observation. It's whatever you" }, { "end": 665.12, "start": 661.08, "text": " saw the big text you saw before. 
Like you were in the kitchen it looks like this" }, { "end": 669.16, "start": 665.12, "text": " it smells like this you turn around and so on. This would be the observation." }, { "end": 673.28, "start": 669.16, "text": " It's what the game engine says at the current time step. This is just a piece of" }, { "end": 683.64, "start": 673.28, "text": " text. Second missing items. Third unnecessary items. Now these things you" }, { "end": 688.28, "start": 683.64, "text": " might wonder okay how do I know what what items are missing and unnecessary." }, { "end": 695.3199999999999, "start": 688.28, "text": " These things come from another model that this paper trains and we'll get" }, { "end": 700.0799999999999, "start": 695.3199999999999, "text": " into that later. But basically they have a method of specifying which items are" }, { "end": 708.3199999999999, "start": 700.0799999999999, "text": " still missing which are unnecessary and they list those here. Then description" }, { "end": 713.36, "start": 708.3199999999999, "text": " which is the output of the last look command. So in each room you can look you" }, { "end": 717.4399999999999, "start": 713.36, "text": " can type look and then it'll give you a description of the room and what's in" }, { "end": 725.9200000000001, "start": 717.44, "text": " there. The previous commands this is often used in RL either explicitly or" }, { "end": 732.84, "start": 725.9200000000001, "text": " implicitly through a recurrent network in order to give the agent an idea what" }, { "end": 737.8800000000001, "start": 732.84, "text": " what happened in the in the previous steps or what it did so that it doesn't" }, { "end": 743.9200000000001, "start": 737.8800000000001, "text": " repeat actions unnecessarily or so it learns to not repeat actions" }, { "end": 750.8, "start": 743.92, "text": " unnecessarily. Required utilities. Again this is a model that's kind of trained" }, { "end": 757.8399999999999, "start": 750.8, "text": " to predict what utilities are required to perform some actions. So as I said" }, { "end": 762.52, "start": 757.8399999999999, "text": " before if you want to slice the red hot pepper you need a knife. If you want to" }, { "end": 770, "start": 762.52, "text": " fry it you need a stove. Discovered locations. As I said there are different" }, { "end": 776, "start": 770, "text": " rooms you actually don't know what rooms there are before you actually go in in" }, { "end": 782.08, "start": 776, "text": " there. So before you go through a door you reach another room. So the list of" }, { "end": 787.76, "start": 782.08, "text": " previously discovered and visited locations is there and then the name of" }, { "end": 795.04, "start": 787.76, "text": " the current location it is also there. So these are eight things that make up the" }, { "end": 801.24, "start": 795.04, "text": " current observation. These eight things are just strings of text and these eight" }, { "end": 807, "start": 801.24, "text": " things are each one as you can see here these are that the eight things from" }, { "end": 813.3199999999999, "start": 807, "text": " observation to location each one are embedded and fed also into an RNN. So for" }, { "end": 818.52, "start": 813.3199999999999, "text": " each of these eight things you'll obtain a 32 dimensional vector and these are" }, { "end": 824.88, "start": 818.52, "text": " all concatenated to make up one big 256 dimensional vector. 
So this 256" }, { "end": 830.4, "start": 824.88, "text": " dimensional vector will contain all the necessary information about the current" }, { "end": 835.52, "start": 830.4, "text": " room what's in there what what items are you still missing what items do you have" }, { "end": 839.96, "start": 835.52, "text": " in your inventory which ones are unnecessary and so on. So if you train" }, { "end": 846.4, "start": 839.96, "text": " this correctly this 256 dimensional vector will describe the current game" }, { "end": 851.76, "start": 846.4, "text": " state as it is relevant to your agent like everything about it every" }, { "end": 857.24, "start": 851.76, "text": " relevant information that's in here will be encoded in this vector. Now this" }, { "end": 863.4399999999999, "start": 857.24, "text": " vector isn't the final state encoding yet what you'll have is you feed this into" }, { "end": 869.92, "start": 863.4399999999999, "text": " an RNN that takes as input the last time steps you have to imagine the last time" }, { "end": 876.4, "start": 869.92, "text": " step already there was observation blah blah blah this entire thing was I'm just" }, { "end": 883.52, "start": 876.4, "text": " copying I'm just copying this box over here so this entire thing was already" }, { "end": 890.28, "start": 883.52, "text": " done last step and already fed into an RNN so this this is an RNN that actually" }, { "end": 896.8, "start": 890.28, "text": " goes over time and the last whatever the output here is it will be fed to the" }, { "end": 902.1999999999999, "start": 896.8, "text": " next step and this is a trick often done in reinforcement learning as well that" }, { "end": 908.44, "start": 902.2, "text": " you actually have a recurrent neural network over the time steps so each" }, { "end": 912.8000000000001, "start": 908.44, "text": " time step you have a certain observation you encode it and so on you get a" }, { "end": 917.88, "start": 912.8000000000001, "text": " description of that and then you feed this into an RNN what the RNN can learn" }, { "end": 925.84, "start": 917.88, "text": " to do is it can learn to react to different not only to the current" }, { "end": 929.96, "start": 925.84, "text": " observation but to the current observation conditioned on the history" }, { "end": 935.88, "start": 929.96, "text": " of previous observations so it can learn before I was in this room now I'm in this" }, { "end": 942.2800000000001, "start": 935.88, "text": " new room so I actually haven't you know taken all the items from this room yet" }, { "end": 949, "start": 942.2800000000001, "text": " because I just came into this room and so on so the the kind of component where" }, { "end": 954.52, "start": 949, "text": " you are able to look at the past and what happened in the past is in captured" }, { "end": 965.24, "start": 954.52, "text": " by this RNN here so it's fairly complicated architecture but this here" }, { "end": 973, "start": 965.24, "text": " this state encoding that is conditioned on the also on the history then goes into" }, { "end": 980.72, "start": 973, "text": " this into here that's it that's the vector that goes in here is combined" }, { "end": 988.1600000000001, "start": 980.72, "text": " with each action so all of these actions here these K actions and this is all fed" }, { "end": 994.64, "start": 988.1600000000001, "text": " through a neural network and that will give you the policy this is a fairly" }, { "end": 1000.48, "start": 994.64, "text": " complicated thing but if you look at it 
it's not it's not too it's not too" }, { "end": 1010.48, "start": 1000.48, "text": " difficult actually so what you'll do is you will take your observations here this" }, { "end": 1016.28, "start": 1010.48, "text": " is all observation it will be encoded and combined with the history in order" }, { "end": 1022.6, "start": 1016.28, "text": " to give you this in order to give you an encoding of the current state on the" }, { "end": 1027.2, "start": 1022.6, "text": " other hand you'll take all of the possible commands that you could" }, { "end": 1033.1200000000001, "start": 1027.2, "text": " perform right now encode each one separately right into an embedding and" }, { "end": 1039.6, "start": 1033.1200000000001, "text": " then you combine each one of those with this encoding you specified previously" }, { "end": 1046.9199999999998, "start": 1039.6, "text": " that you and and from that you make your decision which action to take next and" }, { "end": 1052.76, "start": 1046.9199999999998, "text": " the action here is the one that's output is the action you take next sampled from" }, { "end": 1060.6799999999998, "start": 1052.76, "text": " this policy the last thing you need is a value network and this is just important" }, { "end": 1068.3999999999999, "start": 1060.6799999999998, "text": " for reinforcement learning which tells you from this state here so I'm getting" }, { "end": 1075.3600000000001, "start": 1068.4, "text": " weird with colors here from this state here which is the same as this one so" }, { "end": 1079.8000000000002, "start": 1075.3600000000001, "text": " you'd simply transfer this over from this state how valuable is that what's" }, { "end": 1085.3600000000001, "start": 1079.8000000000002, "text": " my value of the state and the value is if I'm in this state and I act as I" }, { "end": 1091.3200000000002, "start": 1085.3600000000001, "text": " normally act what are all my future rewards going to be combined so it" }, { "end": 1096.2800000000002, "start": 1091.3200000000002, "text": " basically gives you a value of this state you can think of this in for" }, { "end": 1102.48, "start": 1096.28, "text": " example terms of chess if you had this in chess and then this here is it would" }, { "end": 1108, "start": 1102.48, "text": " be a description of the chessboard this HT and the value would be how valuable" }, { "end": 1111.92, "start": 1108, "text": " is this position for you so if you're very much ahead and material and" }, { "end": 1116.92, "start": 1111.92, "text": " position and so on this value would be very high if you're behind this value" }, { "end": 1121.16, "start": 1116.92, "text": " would be very low and this is in a real network simply trying to predict that" }, { "end": 1130.3600000000001, "start": 1121.16, "text": " value so with all of this you now have a never good basis to do reinforcement" }, { "end": 1137.0400000000002, "start": 1130.3600000000001, "text": " learning you have a policy you have a value network and from that you can" }, { "end": 1142.52, "start": 1137.0400000000002, "text": " train an RL agent and this is done classically in an actor critic way where" }, { "end": 1149.8400000000001, "start": 1142.52, "text": " you do advantage learning here the advantage and the policy you train" }, { "end": 1155, "start": 1149.84, "text": " weighted by the advantage then the value network you train to be close to their" }, { "end": 1159.56, "start": 1155, "text": " reward and then you have an entropy penalty if you don't know what these" }, { "end": 1164.12, 
"start": 1159.56, "text": " things are the video will get bit too long if I were to go over these" }, { "end": 1169.04, "start": 1164.12, "text": " reinforcement learning concepts but these are very standard in reinforcement" }, { "end": 1175.6799999999998, "start": 1169.04, "text": " learning so you can train these you can basically train what it does is you can" }, { "end": 1181.1200000000001, "start": 1175.68, "text": " train these neural networks in absence of label training data because you don't" }, { "end": 1185.44, "start": 1181.1200000000001, "text": " know what the best action is in each step right there's no one telling you" }, { "end": 1189.64, "start": 1185.44, "text": " you just have a reward you just sometimes you get a point and you don't" }, { "end": 1195.64, "start": 1189.64, "text": " know which actions led to that so these things will actually allow you to train" }, { "end": 1201.52, "start": 1195.64, "text": " these neural networks by using just the reward without knowing which exact" }, { "end": 1206.52, "start": 1201.52, "text": " actions were right and wrong and that's the core of reinforcement learning" }, { "end": 1216, "start": 1206.52, "text": " obviously alright so the the core one of the core ingredients actually is this" }, { "end": 1225.48, "start": 1216, "text": " recipe manager and the recipe manager is a sub model that does the following so" }, { "end": 1234.64, "start": 1225.48, "text": " here it takes as an input the cookbook here and it also takes as an input the" }, { "end": 1241.72, "start": 1234.64, "text": " inventory and it outputs something like this and this this is a this is a table" }, { "end": 1248.32, "start": 1241.72, "text": " representation of what it outputs it will output all the ingredients that you" }, { "end": 1256.12, "start": 1248.32, "text": " need for the recipe whether or not this input that this ingredient is currently" }, { "end": 1265.72, "start": 1256.12, "text": " missing from your inventory and action to perform so which actions still need" }, { "end": 1272.56, "start": 1265.72, "text": " to be performed so let's look at the following let's look at this example the" }, { "end": 1276.56, "start": 1272.56, "text": " recipe tells you you need the ingredients are a carrot a red hot pepper" }, { "end": 1283.9199999999998, "start": 1276.56, "text": " and a white onion and the inventory says you care you're carrying a white onion" }, { "end": 1295.44, "start": 1283.9199999999998, "text": " and a carrot right so down here you see aha we we do actually have we do" }, { "end": 1301.6, "start": 1295.44, "text": " actually have a carrot so it's not missing the carrot isn't missing you" }, { "end": 1305.48, "start": 1301.6, "text": " have it in your inventory the red hot pepper is missing we don't have it in" }, { "end": 1309.56, "start": 1305.48, "text": " the inventory but we need it for the recipe the white onion we need for the" }, { "end": 1317.52, "start": 1309.56, "text": " recipe but it's not missing then it also is for each of the ingredients is" }, { "end": 1322.58, "start": 1317.52, "text": " supposed to tell you this recipe model which of the what you still need to" }, { "end": 1327.52, "start": 1322.58, "text": " perform on it so here it says slice the carrot roast the carrot and you simply" }, { "end": 1331.48, "start": 1327.52, "text": " have a carrot it doesn't say slice the roast that means it's not sliced and" }, { "end": 1336.16, "start": 1331.48, "text": " roasted so the recipe is supposed to output you still need 
to slice and roast" }, { "end": 1342.64, "start": 1336.16, "text": " the carrot here for example for the white onion says fry the white onion and" }, { "end": 1352.8, "start": 1342.64, "text": " as you can see in the inventory it says you're carrying a fried white onion so" }, { "end": 1358.6, "start": 1352.8, "text": " for the white onion you see we don't need to do anything anymore so that the" }, { "end": 1366.9599999999998, "start": 1358.6, "text": " recipe model is basically trying to to make this table here and this table you" }, { "end": 1372.9599999999998, "start": 1366.9599999999998, "text": " can see as an intermediary step in order to do all the other things and the" }, { "end": 1378.6, "start": 1372.9599999999998, "text": " difference here to a pure RL method and this is important the difference is that" }, { "end": 1384.4399999999998, "start": 1378.6, "text": " this representation this intermediate table representation is done explicitly" }, { "end": 1391.3600000000001, "start": 1384.44, "text": " so the recipe model really produces a table like this and not just in other RL" }, { "end": 1397.4, "start": 1391.3600000000001, "text": " methods people go about and make this recipe model output some sort of you" }, { "end": 1402.76, "start": 1397.4, "text": " know let's say a 200 dimensional vector that's supposed to encompass all of this" }, { "end": 1410.16, "start": 1402.76, "text": " information and that doesn't appear to work as well like often that if you" }, { "end": 1415.28, "start": 1410.16, "text": " simply train this end-to-end that will not pick up on the important information" }, { "end": 1420.2, "start": 1415.28, "text": " because the training signal tends to be way too weak you have to imagine you" }, { "end": 1426.0800000000002, "start": 1420.2, "text": " already have this really really big model construction here and you're" }, { "end": 1431.4, "start": 1426.0800000000002, "text": " trying to learn it you're trying to learn it from a tiny reward signal that" }, { "end": 1437.28, "start": 1431.4, "text": " you get at the end right this is very noisy signal now if if you're now trying" }, { "end": 1443.36, "start": 1437.28, "text": " to say well the inputs to these things right this command here and we also saw" }, { "end": 1448.36, "start": 1443.36, "text": " the inputs to these these depend on this recipe model also now are whatever" }, { "end": 1454.16, "start": 1448.36, "text": " giant neural network construction here and we'll all train this end-to-end and" }, { "end": 1458.48, "start": 1454.16, "text": " these will actually not be text these will actually be some sort of latent" }, { "end": 1464.32, "start": 1458.48, "text": " vectors that will often fail because you're now just trying to extract" }, { "end": 1469.36, "start": 1464.32, "text": " information from too noisy of a reward signal so the authors here do actually" }, { "end": 1477.52, "start": 1469.36, "text": " pretty neat separation of that and they train this recipe model with actually an" }, { "end": 1482.24, "start": 1477.52, "text": " augmented data set so they go to freebase and get more food items and" }, { "end": 1488.76, "start": 1482.24, "text": " then they construct a data set that resembles this and train it in a" }, { "end": 1496.56, "start": 1488.76, "text": " supervised way to output tables tables like this so this is is pretty smart and" }, { "end": 1503.28, "start": 1496.56, "text": " I think it's a good lesson if you ever attempt something like this that really" }, { "end": 1507.42, 
"start": 1503.28, "text": " really important information such as this one if you can train it in a" }, { "end": 1512.32, "start": 1507.42, "text": " supervised way as a kind of a pre-processing step to your RL" }, { "end": 1520.56, "start": 1512.32, "text": " procedure that's extremely helpful here you can you can see how this is then" }, { "end": 1528.28, "start": 1520.56, "text": " used so by combining this table that was output from the recipe model and your" }, { "end": 1537.4399999999998, "start": 1528.28, "text": " inventory and the output of this look command you can then generate these" }, { "end": 1542.56, "start": 1537.44, "text": " commands so before we said it's important to reduce the everything you could do" }, { "end": 1548.6000000000001, "start": 1542.56, "text": " which is infinite things to everything that is reasonable to do currently and" }, { "end": 1556.2, "start": 1548.6000000000001, "text": " this model here does that so given this given that and given the description of" }, { "end": 1563.04, "start": 1556.2, "text": " what's currently in the room you can now generate these commands and for example" }, { "end": 1567.44, "start": 1563.04, "text": " take knife if you have to slice something because you see a knife is in" }, { "end": 1573.8, "start": 1567.44, "text": " the room and you could conceivably take the knife right you can construct these" }, { "end": 1580.12, "start": 1573.8, "text": " commands but also since you know right since you know what's since you know" }, { "end": 1585.8, "start": 1580.12, "text": " what's in your inventory and since you know which things are still missing you" }, { "end": 1591.56, "start": 1585.8, "text": " can generate commands like take the white onion or drop the water because" }, { "end": 1597.9199999999998, "start": 1591.56, "text": " you don't need the water right so um the the offers also group these things here" }, { "end": 1602.04, "start": 1597.9199999999998, "text": " in this what they call high-level commands which take all required items" }, { "end": 1607.6399999999999, "start": 1602.04, "text": " from here simply means take everything that's in the room that is not in the" }, { "end": 1612.76, "start": 1607.6399999999999, "text": " inventory but you need it so these things which for an RL agent it makes" }, { "end": 1618.44, "start": 1612.76, "text": " sense to group these things together because it doesn't make sense to have" }, { "end": 1623.56, "start": 1618.44, "text": " them as two separate things if you need both take both if you don't need any" }, { "end": 1628.88, "start": 1623.56, "text": " what if you have a new entry drop all of these things so that makes sense that's" }, { "end": 1636.04, "start": 1628.88, "text": " a small optimization that apparently brought some gains but the kind of the" }, { "end": 1641.72, "start": 1636.04, "text": " the overarching message here is that once you have a once you have this" }, { "end": 1647.52, "start": 1641.72, "text": " information from the recipe model you can then use it in many useful ways in" }, { "end": 1656.44, "start": 1647.52, "text": " order to make life for your RL agent easier alright so that kind of is the" }, { "end": 1661.74, "start": 1656.44, "text": " entire model that's very it's quite convoluted but basically you start with" }, { "end": 1666.84, "start": 1661.74, "text": " this here this recipe manager you decide you output this table down here which" }, { "end": 1674.48, "start": 1666.84, "text": " ingredients are in the recipe are they still 
missing and which actions we need" }, { "end": 1679.64, "start": 1674.48, "text": " to perform you then combine it with this information here the information about" }, { "end": 1684.64, "start": 1679.64, "text": " the current room and your inventory in order to come up with a set of commands" }, { "end": 1690.32, "start": 1684.64, "text": " that are conceivable to do here you combine these commands with some" }, { "end": 1697.72, "start": 1690.32, "text": " commands that are always available so commands that are always available are" }, { "end": 1705.84, "start": 1697.72, "text": " things like look inventory prepare meal you have that right you add that if the" }, { "end": 1711.68, "start": 1705.84, "text": " recipe manager does not output any missing and the agents location is the" }, { "end": 1718.56, "start": 1711.68, "text": " kitchen so you can add these other items and also we're not even gonna get into" }, { "end": 1722.76, "start": 1718.56, "text": " that you add navigational items because there are doors in these rooms and you" }, { "end": 1728.8, "start": 1722.76, "text": " need to navigate around so they actually train another model to here you see to" }, { "end": 1738.32, "start": 1728.8, "text": " detect to detect directions that you could move into and open doors for every" }, { "end": 1742.18, "start": 1738.32, "text": " closed door in the room so that's another challenge that the agent needs to" }, { "end": 1746.84, "start": 1742.18, "text": " overcome they have to build an entire model to predict which doors are there" }, { "end": 1752.04, "start": 1746.84, "text": " and are they closed do you need to open them so these commands if there are" }, { "end": 1757.08, "start": 1752.04, "text": " doors and if you can move through them these commands are also added to this" }, { "end": 1761.24, "start": 1757.08, "text": " set of commands that are reasonable so now we have a set of commands that are" }, { "end": 1768.8799999999999, "start": 1761.24, "text": " reasonable over here then you describe the room here you put both into this" }, { "end": 1775.44, "start": 1768.8799999999999, "text": " embedding and then finally your policy outputs an action that's that that's the" }, { "end": 1781.72, "start": 1775.44, "text": " entire process very convoluted very big very astonishing that this works with our" }, { "end": 1788.2, "start": 1781.72, "text": " L but in order to need to get it to work you actually need to do this supervised" }, { "end": 1794.3600000000001, "start": 1788.2, "text": " training and the experimental evidence here is quite solid in that they compare" }, { "end": 1803.48, "start": 1794.3600000000001, "text": " to baseline systems that that use classic techniques and they do some" }, { "end": 1811.2, "start": 1803.48, "text": " ablation over over their individual parts and they get second place I think" }, { "end": 1817.56, "start": 1811.2, "text": " in a competition about these text-based games so that's pretty good and that was" }, { "end": 1847.1599999999999, "start": 1817.56, "text": " it for me and check it out and bye bye" } ]
BK3rv0MQMwY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
[News] The Siraj Raval Controversy
[ "Science & Technology" ]
[ "machine learning", "siraj", "controversy", "scam", "scammer", "fraud", "plagiarism", "plagiarized", "course", "refund", "policy", "ai", "online", "hype", "credit", "attribution", "paper", "scandal", "news", "twitter", "neural qubit", "intellectual property" ]
Popular ML YouTuber Siraj Raval is in the middle of not just one, but two controversies: First, a lot of students of his 200$ online-course have accused him of breaking major promises he made when advertising the course and denying them refunds. Second, his paper on "The Neural Qubit" appears to be plagiarized almost verbatim. https://www.reddit.com/r/MachineLearning/comments/d7ad2y/d_siraj_raval_potentially_exploiting_students/ https://www.reddit.com/r/MachineLearning/comments/dh2xfs/d_siraj_has_a_new_paper_the_neural_qubit_its/
There is a massive controversy going on right now, and in the middle of it is Siraj Raval, a prominent YouTuber. So today I'll actually just be shortly reporting on this, not giving too much opinion, just stating what's up in a very high-level overview, because if you haven't heard of this, I think it's important that you do. And this is both sad and funny to a degree, more sad actually, but you know, make your own opinions. So Siraj is this very prominent YouTuber that makes videos, mostly coding tutorials or explanations of short concepts in the field of machine learning, and recently also branched out into other fields, like here, Watch Me Build a Marketing Startup, and so on. So what happened? There were two recent developments. First of all, he offered a course, and the course was $200. This is one of his students on Twitter, and many more have come out. He offered this course for $200 and basically said: make money with machine learning. That was the course. And he said he was going to take 500 students in this course and that it would be personalized learning, personalized support, basically from him; he said he was all in on this course. Then the students discovered that there were actually over a thousand people in the course and there was almost no personalized support. He was only giving 50 minutes of his weekly time to do Q&A and 30 minutes of video content, and apparently he also replied to all the code submissions with the exact same email. Things like this. He actually split up the students into two different Slack groups so they wouldn't notice that there are over a thousand people, so two groups of about 500 people each. Then people wanted a refund, and apparently when he hit the Slack limit, he transferred them to Discord, added everyone that wanted a refund to a Discord channel, and then simply banned them. I mean, yeah. There are many more stories from students about this course. This was really a bit of a scam, this course, especially regarding the refunds. At first there was no refund policy; then, about two weeks into the course, a refund policy appeared, two weeks after the course started, and it said you can get a refund within two weeks of the course starting. So this is all just really, really weird. I encourage you to read up more on this, because there are many more stories about this course. So he apologized publicly and said he shouldn't have done that, he should have hired TAs and so on. He apologized for it, and that seemed to be kind of the end of that. I don't exactly know what happened to the students; some claimed they never got a refund and so on. But then it went on, and it went on badly for Siraj, if I may say, because he published a paper called The Neural Qubit, and people have gone through it, and it turns out that it is almost all plagiarized from one or two other papers. Actually, I think it's two papers, and it's almost all plagiarized from there. You can see on the left the green sections and on the right the red sections are exactly identical. For example, this table up here, I think it's on the next page of the other paper, is exactly the one from the other paper. If you look at these equations, they're all the same.
The sentences are exactly the same, and so on. Also the diagrams: the one you see here on the upper left is exactly taken from this other paper. I think he mentions this other paper, he cites it once and says his work is kind of a derivative of it, or leaned on it, and so on, but these aren't explicit quotes here. The only changes he made are changes like this: whenever the other paper says "we can write the combined transformation", here you can see he says "I can write", "thanks to the CV encoding, I get a nonlinear functional". There's a rule in computer science: the only person who's allowed to do this is Don Knuth, no one else. That's a holy rule, broken. So, more seriously, he changed that, and then he also used a couple of synonyms which make no sense. For example, he replaced the word gate by the word door, and of course a logic gate then becomes a logic door. So here it's a non-Gaussian gate, phi; I don't know if he replaced it in this exact instance, here it actually says gate, but sometimes it's replaced by door. He also replaced the term complex Hilbert space by complicated Hilbert space, which makes no sense at all. So yeah, it's funny and sad at the same time. So this happened, and again he's apologizing. He says: I've seen claims that my neural qubit paper was partly plagiarized; this is true. So he basically admits it, and he sort of blames it on doing too many videos a week, which, I agree, I can tell you that making videos is hard, even crappy videos like mine, and his are actually edited and so on. But the problem is that many more people came out and said that he did the same thing to their projects. Here you see someone: he did the exact same thing to our project, it took four people a couple of months to do, and he acted like it was his own. And many more came out and said he plagiarized other things as well, where he basically just takes code, gives minimal or no attribution to the original authors, and then passes it off as his own. This, after this course... yeah, this could not get any worse. Hold my gates in quantum doors. Yeah. So this all happened. I encourage you to go read up on it and make up your own mind. I just want to quickly point out one thing at the end, and I won't actually show the identity of the person posting this; if you really want to, you can find out, but it's not about that person, it's about the kind of sentiment. There is a sentiment around that you should unfollow him, because following him lends credibility to him. And there is a point to be made there: if prominent researchers refer to him and so on, that gives him some credibility. But I'm also very much against cancel culture. It is also the case that he, no matter how much he's plagiarized, has popularized the field more than anyone else, and maybe there is a conversation to be had and a lesson to be learned without immediately canceling someone. It's a complicated issue, but I just kind of want to get this out there. So go read up on this. Yeah, it's a wild world. That being said, bye bye. Have fun.
[ { "end": 7, "start": 0, "text": " There is a massive controversy going on right now and in the middle is Siraj Raval, a prominent" }, { "end": 14.6, "start": 7.6000000000000005, "text": " YouTuber. So today I'll just be actually shortly reporting on this, not giving too much opinion," }, { "end": 21.6, "start": 15.08, "text": " just kind of stating what's up in a very high level overview. Because if you haven't heard" }, { "end": 31.560000000000002, "start": 21.6, "text": " of this, I think it's important that you do. And this is both sad and funny to a degree," }, { "end": 37.24, "start": 31.560000000000002, "text": " more sad actually, but you know, make your own opinions. So Siraj is this very prominent" }, { "end": 44.24, "start": 37.24, "text": " YouTuber that makes videos mostly, let's say coding tutorials or explaining short concepts" }, { "end": 51.24, "start": 44.24, "text": " in the field of machine learning. And recently also branched out into other fields like here" }, { "end": 58.24, "start": 51.24, "text": " Watch Me Build a Marketing Startup and so on. So what happened, it was two recent developments." }, { "end": 65.28, "start": 58.28, "text": " First of all, he offered a course and the course was $200. And this is one of his students" }, { "end": 72.28, "start": 65.28, "text": " on Twitter and many more have come out. And he offered this course for $200 and basically" }, { "end": 80.64, "start": 73.64, "text": " said, make money with machine learning. That was the course. And he said he was going to" }, { "end": 88.04, "start": 81.64, "text": " take 500 students in this course and it would be personal and it would be a very, very" }, { "end": 95.04, "start": 88.04, "text": " high level. He said he was going to take 500 students in this course and it would be personalized" }, { "end": 102.80000000000001, "start": 95.80000000000001, "text": " learning, personalized support from basically from him or he said he is all in into this" }, { "end": 111.36000000000001, "start": 104.36000000000001, "text": " course. Then the students discovered that there were actually over a thousand people" }, { "end": 118.36, "start": 111.36, "text": " in the course and there was almost no personalized support. So there's only, he was giving 50" }, { "end": 126.72, "start": 119.72, "text": " minutes of his weekly time to do Q&A, 30 minutes of video content and apparently he also replied" }, { "end": 136.92000000000002, "start": 129.92, "text": " to all the code submissions with the exact same email. So things like this. He actually" }, { "end": 142.92, "start": 136.92, "text": " split up the students into two different Slack groups so they wouldn't notice that there" }, { "end": 149.92, "start": 142.92, "text": " are over a thousand people. So about two, 500 people groups. Then people wanted a refund" }, { "end": 160.92, "start": 153.92, "text": " and then apparently when he hit the Slack limit, he transferred them to Discord and" }, { "end": 167.92, "start": 160.92, "text": " he added everyone that wanted a refund to Discord channel and then simply banned them." }, { "end": 175.92, "start": 168.92, "text": " I mean, yeah. There are many more stories of students about this course apparently." }, { "end": 183.92, "start": 176.92, "text": " This was kind of really a bit of a scam, this course, without especially the refunds. 
There" }, { "end": 189.92, "start": 183.92, "text": " was no refund policy and then he sent the students to Discord and they were like, oh," }, { "end": 196.92, "start": 189.92, "text": " I want to see. Then about two weeks, I think, into the course there was a refund policy." }, { "end": 201.07999999999998, "start": 197.04, "text": " After two weeks after the course started and the refund policy said you can get a refund" }, { "end": 208.07999999999998, "start": 201.07999999999998, "text": " within two weeks of the course starting. So this, I mean, this is all just kind of really," }, { "end": 216.83999999999997, "start": 209.83999999999997, "text": " really weird. I encourage you to read up more on this because there are many more stories" }, { "end": 223.84, "start": 216.84, "text": " about this course. So he apologized publicly and said he shouldn't have done that, he" }, { "end": 234.8, "start": 227.8, "text": " should have hired TAs and so on. He apologized for it and that seemed to be kind of the end" }, { "end": 242.8, "start": 238.8, "text": " of that. I don't exactly know what happened to the students. Some claimed they never got" }, { "end": 249.8, "start": 242.8, "text": " a refund and so on. But then it went on and it went on badly for Siraj, if I may say," }, { "end": 257.8, "start": 250.8, "text": " because he published a paper called The Neural Cubit and people have gone and it turns out" }, { "end": 266.8, "start": 257.8, "text": " that it is almost all plagiarized from one or two other papers. Actually, yeah, it turns" }, { "end": 271.8, "start": 266.8, "text": " out it's, I think it's two papers and it's almost all plagiarized from there. You can" }, { "end": 276.8, "start": 271.8, "text": " see on the left the green sections and on the right the red sections are exactly identical." }, { "end": 283.8, "start": 276.8, "text": " For example, this table up here, I think it's on the next page of the other paper, is exactly" }, { "end": 289.8, "start": 283.8, "text": " this from the other paper. If you look at whatever these equations, they're all the" }, { "end": 296.8, "start": 289.8, "text": " same. The sentences are exactly the same and so on. He only changed, also the diagrams," }, { "end": 303.8, "start": 296.8, "text": " you see here on the upper left, exactly taken from this other paper. I think he mentions" }, { "end": 310.8, "start": 303.8, "text": " this other paper, he cites it once and he says his work is kind of a derivative of that" }, { "end": 319.8, "start": 310.8, "text": " or leaned on that and so on. But these aren't explicitly quotes here. The only changes" }, { "end": 326.8, "start": 319.8, "text": " he made are changes like, so whenever the other paper says we can write the combined" }, { "end": 331.8, "start": 326.8, "text": " transformation, here you can see he says I can write. Thanks to the CV encoding, I get" }, { "end": 335.8, "start": 331.8, "text": " a nonlinear functional. There's a rule in computer science. The only person who's allowed" }, { "end": 345.8, "start": 335.8, "text": " to do this is Don Knuth. No one else. That's wholly rule broken. So more seriously, he" }, { "end": 353.8, "start": 345.8, "text": " changed that and then he also kind of used a couple of synonyms which make no sense." }, { "end": 359.8, "start": 353.8, "text": " So, for example, he replaced the word gate by the word door and of course a logic gate" }, { "end": 367.8, "start": 359.8, "text": " then becomes a logic door. So here it's a non-Gaussian gate, phi. 
I don't know if in" }, { "end": 376.8, "start": 367.8, "text": " this instance, but in this instance he replaced it. Here it actually says gate, but sometimes" }, { "end": 384.8, "start": 376.8, "text": " it's replaced by door and also he replaced the word complex Hilbert space to complicated" }, { "end": 393.8, "start": 384.8, "text": " Hilbert space which makes no sense at all. So this, yeah, it's funny and sad at the same" }, { "end": 405.8, "start": 393.8, "text": " time. So this happened and again he's apologizing. He says I've seen claims that my neural" }, { "end": 412.8, "start": 405.8, "text": " qubit was partly plagiarized. This is true. And he basically claims it. He sort of blames" }, { "end": 419.8, "start": 412.8, "text": " it. He says he's doing too many videos a week which I agree. I mean, I can tell you that" }, { "end": 426.8, "start": 419.8, "text": " making videos is hard, even crappy videos like mine. And his are actually edited and" }, { "end": 437.8, "start": 426.8, "text": " so on. But the problem is many people more came out and said that he did the same thing" }, { "end": 441.8, "start": 437.8, "text": " to their project. Here you see someone. He did the exact same thing to our project. It" }, { "end": 447.8, "start": 441.8, "text": " took four people a couple of months to do. He acted like it was his own. And many more" }, { "end": 457.8, "start": 447.8, "text": " came out and said he plagiarized other things as well where he basically just takes code" }, { "end": 464.8, "start": 457.8, "text": " and gives minimal or no attribution to the original authors and then passed it off as" }, { "end": 474.8, "start": 464.8, "text": " his own. This after this course, yeah, everyone, this could not get any worse. Hold my gas" }, { "end": 484.8, "start": 474.8, "text": " in quantum doors. Yeah. So this all happened. I mean, I encourage you to go read up on it" }, { "end": 489.8, "start": 484.8, "text": " to make up your own mind. I just want to point out quickly the end. And I won't actually" }, { "end": 495.8, "start": 489.8, "text": " show the identity of the person. I'm posting this if you really want to find out. But it's" }, { "end": 499.8, "start": 495.8, "text": " not about that person. It's about the kind of sentiment. So there is a sentiment around" }, { "end": 507.8, "start": 499.8, "text": " that you should kind of unfollow him. And because that lends credibility to him. And" }, { "end": 514.8, "start": 507.8, "text": " there is a point to be made of that kind of if the kind of prominent researchers refer" }, { "end": 520.8, "start": 514.8, "text": " to him and so on that gives him some credibility. But I'm also very much against sort of cancel" }, { "end": 526.8, "start": 520.8, "text": " culture. It is also the case that he, like no matter how much he's plagiarized, has" }, { "end": 533.8, "start": 526.8, "text": " popularized the field more than anyone else. And maybe, you know, there is a conversation" }, { "end": 542.8, "start": 533.8, "text": " to be had and a lesson to be learned without immediately canceling someone. That's just" }, { "end": 548.8, "start": 542.8, "text": " so that I mean, there's, it's a it's a complicated issue, but just kind of want to get this out" }, { "end": 558.8, "start": 548.8, "text": " there. So go read up on this is all it's it's yeah, it's a wild world. So that being said," }, { "end": 579.8, "start": 558.8, "text": " bye bye. Have fun." } ]
rvr143crpuU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Accelerating Deep Learning by Focusing on the Biggest Losers
[ "Science & Technology" ]
[ "machine learning", "deep learning", "dl", "neural network", "training", "convergence", "loss", "importance", "speed-up", "faster", "ai", "dnn", "deep neural network", "backprop", "backpropagation", "cifar10", "svhn", "classifier" ]
What if you could reduce the time your network trains by only training on the hard examples? This paper proposes to select samples with high loss and only train on those in order to speed up training. Abstract: This paper introduces Selective-Backprop, a technique that accelerates the training of deep neural networks (DNNs) by prioritizing examples with high loss at each iteration. Selective-Backprop uses the output of a training example's forward pass to decide whether to use that example to compute gradients and update parameters, or to skip immediately to the next example. By reducing the number of computationally-expensive backpropagation steps performed, Selective-Backprop accelerates training. Evaluation on CIFAR10, CIFAR100, and SVHN, across a variety of modern image models, shows that Selective-Backprop converges to target error rates up to 3.5x faster than with standard SGD and between 1.02--1.8x faster than a state-of-the-art importance sampling approach. Further acceleration of 26% can be achieved by using stale forward pass results for selection, thus also skipping forward passes of low priority examples. Authors: Angela H. Jiang, Daniel L.-K. Wong, Giulio Zhou, David G. Andersen, Jeffrey Dean, Gregory R. Ganger, Gauri Joshi, Michael Kaminsky, Michael Kozuch, Zachary C. Lipton, Padmanabhan Pillai https://arxiv.org/abs/1910.00762
Hi there! Today we're looking at accelerating deep learning by focusing on the biggest losers by Angela Jiang et al. This paper is pretty simple, pretty short in idea and is a pretty much an engineering paper. So we'll go over this idea and give it a good look and discuss advantages, disadvantages and so on. What's the basic idea? The basic idea is the following. If you train a neural network, what do you do? Usually you have a training data set, which I represent here. Each line is a sample and usually your network has a bunch of layers. Each line here is a layer of weights. What you do is you group your training data set into mini batches. Let's say that's a mini batch, four samples and you pass it through the network. This is called the forward propagation. You then calculate the loss of your forward propagated signal and then you back propagate this loss. When back propagating, you want to back propagate the loss such that it reaches each of the layers and it tells each layer how to update itself. What you want to do is for each layer you actually need to back prop the loss once towards the layer below it and once towards itself in order for the layer below it to continue the back prop and for the layer itself to update its weights. Each time you back prop basically once towards the lower layer and once towards yourself. That's a lot of work. You see whatever work you have passing your samples through the network here, you basically double the work going back. The core idea of this paper is if you look at the following. In a traditional training neural network you'll have some overhead in each training step, some overhead of maybe putting the data to the GPU or something like this. Then you have a time that you require for a forward pass and then you have a big chunk that you require for the backward pass. You see it's about double the size of this forward pass. This paper asks how can we reduce this backward pass time. What they propose is the following. They propose if the backward pass is expensive and we do it here for each data point in these mini batches, why don't we stop doing this and only try to select examples that are important. Once we only have selected the important examples, only those examples get to do the backward pass. Thereby let's say if we can only select one third of the examples to do the backward pass, we can reduce the amount that's required in the backward pass, the amount of work, by one third or sorry by two thirds. The way they select the important examples is by looking at the loss. They basically say whichever examples have a high loss, these must be the important examples, these are the hard examples. If we only train on the hard examples or if we train on the hard examples more, then the network will learn on these hard examples faster. Of course there is an implication there that if your network is good on the hard examples, it's also going to be good on the easy examples. That's like the definition of hard and easy examples. Of course that's a kind of a simplifying assumption. The idea is only select the hard examples and only by how much loss they have and only then backprop these hard examples. That's how they can reduce this by a lot. There's several intricacies here. The setup time of course is the same. What they do next is they forward propagate the entire mini batch here, because they need the loss of each example and then therefore they need to forward propagate the entire mini batch. 
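To make this concrete, here is a rough sketch in PyTorch of what one such training step could look like. This is not the authors' released code: model, optimizer and the one-third retain fraction are placeholders, and for simplicity it keeps a hard top-k of the per-example losses, whereas the selection the paper actually uses is probabilistic (described further below).

```python
import torch
import torch.nn.functional as F

def selective_backprop_step(model, optimizer, images, labels, retain_frac=1/3):
    # Cheap first forward pass: per-example losses only, no autograd bookkeeping.
    with torch.no_grad():
        losses = F.cross_entropy(model(images), labels, reduction="none")

    # Keep only the hardest (highest-loss) examples of the mini-batch.
    k = max(1, int(retain_frac * images.size(0)))
    _, idx = torch.topk(losses, k)

    # Second, smaller forward pass with autograd enabled, then backprop only the subset.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images[idx]), labels[idx])
    loss.backward()
    optimizer.step()
    return loss.item()
```

The backward pass, and the gradient-tracking second forward pass, now only touch roughly a third of the batch, which is where the claimed savings come from.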
At the end of this they select the examples with the highest loss and they only use those in training. Training consists of another forward pass, but this one is much smaller because you only forward pass the examples that you're actually training on. Then the backward pass accordingly will also be much much smaller because now you have fewer samples to actually train on. The reason that you even need this second forward pass is the following. When you do backprop you can't simply start with a signal back here and then backprop that through the network. That doesn't work usually with most network architectures. Namely what you need to do is actually while you forward pass you need to remember information at each layer. A good example of this is the MaxPool operation. In MaxPool what you do is you maybe have four pixels that are next to each other and you select one of them. Now you need to remember during the forward pass which one you selected. Otherwise the backward pass won't work. You need to know which pixel to back prop through. That's why at each point you need to remember information to inform the backward pass. That's why basically you need a second forward pass with only the examples that you want to train on. You forward pass once, calculate this loss, select the ones with the high loss, then forward pass these again and then backprop only these examples. That's the main gist of it. This is exactly what you see here. Forward pass everything, then forward pass those again that have high loss and then backprop them. There is actually an interesting thing in this graphic in that you see that this forward pass here also is shorter than this forward pass. I assume that's because this forward pass here actually needs to do that additional saving of information while this forward pass here is simply a forward pass without the intention of backward passing. You can instruct the deep learning frameworks to then not remember this information. They have another improvement over their algorithm, which is called selective backprop, and the improvement is called stale selective backprop. What stale selective backprop does is it says well we might not always need this forward pass. What we might be able to do is actually, first we take the entire data set, let's use a different color here, let's use this. We take the entire data set, forward propagate it through the network and then save the losses into some database. Then we use these losses to select the individual points here for a while. We perform maybe this is training here. You start here, you do this loss calculation and then you run your training for a couple of epochs and then you say okay now this information here is really outdated, I should really update it. Then you do this entire thing again and then you run training some more until you again stop and say okay now this information is stale again. Thereby you can amortize the cost of these forward passes. You pass your entire training set once in a while and then use those losses to select the hard examples. That's amortized. You can then reduce this forward pass that's used for selecting again by a lot. Of course the paper shows that this doesn't hurt your performance too much if you have this stale information. This is the entire idea of the algorithm and the algorithm is detailed here. Very briefly you have this buffer and you go through the data, you forward pass every example. For each loss you calculate the probability that you should retain it.
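As a small, self-contained illustration of that retention probability (the percentile-of-recent-losses form is spelled out in the next part of the explanation; the history length and the exponent here are assumed values, not taken from the paper):

```python
import random
from collections import deque

class LossBasedSelector:
    """Probabilistically decide whether an example should be kept for backprop."""

    def __init__(self, history_size=1024, power=3.0):
        self.history = deque(maxlen=history_size)  # recent per-example losses
        self.power = power  # larger power -> keep fewer, harder examples

    def retain(self, loss):
        self.history.append(loss)
        # Percentile of this loss among the recent history, in [0, 1].
        percentile = sum(l <= loss for l in self.history) / len(self.history)
        # Harder (higher-loss) examples are retained with higher probability.
        return random.random() < percentile ** self.power
```

Examples for which retain(loss) returns True would be appended to the backprop buffer mentioned above, and a backward pass is run once that buffer reaches a fixed size.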
It's a probabilistic framework, it's not absolute cutoff. If you decide to choose it probabilistically you append it to this buffer. If this buffer is of a certain size then you do the back prop only on this buffer. This buffer now only has the high loss examples with higher probability. Don't forget within this backward here there is also an implicit forward pass. Then you clear the buffer and go on. There are lots of forward passes here to compute the losses and then every now and then there is a backward pass whenever the buffer is of certain size. The probabilistic calculation of how and when to retain a sample is very simple. You have a deck of recent losses, a history of recent losses. You simply calculate the percentile that a given loss has in this history and that percentile will then decide on the probability. If you raise it to a power and that looks something like this. What's often used in this paper is this 33% selection. That would be the blue curve and you see the median example here. If you are in the median then you have about a 33% chance of being retained. If you have a higher loss than that then your probability rises. The first interesting thing actually is this graphic here where they show what the algorithm deems the hardest and easiest examples. Examples chosen least frequently and this is the CIFAR-10 dataset which is images 32 by 32 in color and you need to classify them into 10 categories. You see the easiest images, the ones chosen least frequently, are almost all automobiles. Of the automobiles they're almost all where you see the full car with the wheels and whatnot like this one here. These are what the algorithm deems easy samples and if you look at the hard samples, examples chosen most frequently by selective backprop, it's these here. For example bird and bird in this case is just kind of a smear. They're just kind of smears on a blue background. It's understandably that this resolution is pretty hard to make out that this is a bird. Airplane and automobile here you see that it's only a partial view of the thing. It's not like the full car like we saw in the easy pictures. It's only partial and this seems to be pretty hard and it's understandable. This cat here to me it's even unclear if this is a cat or a dog and I think dog is also a category in CIFAR-10 so the algorithm is certainly understandably confused by this example and deems it a hard example. And here even more you see truck and this isn't a truck as far as I can make out. These are two humans on the picture with no truck anywhere visible. So this seems to be a mislabeled example and of course mislabeled examples are going to be of high loss to the algorithm. This is the first criticism or thing and the authors recognize this that if you up weigh your examples with high loss you are going to up weigh all the mislabeled examples as well and thereby you're going to train more on the mislabeled examples and thereby you're going to possibly degrade your test performance much more than had you given every sample the same weight. And the authors address this actually nicely by doing an experiment and the experiment is what if we artificially mislabel examples how much can these algorithms tolerate. And so they often have these graphics here where they show test error over time. So test error and the x-axis here is number of back propped images which is kind of a time dimension in training these algorithms. 
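Circling back to the stale variant described a bit earlier, where the selection losses come from an occasional pass over the whole training set rather than from the current forward pass, a minimal sketch could look like this; the loader yielding sample indices and the refresh schedule are assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def refresh_loss_cache(model, loader):
    """One amortized pass over the training set, recording a loss per sample index."""
    cache = {}
    for indices, images, labels in loader:  # loader assumed to yield sample indices as well
        losses = F.cross_entropy(model(images), labels, reduction="none")
        for i, l in zip(indices.tolist(), losses.tolist()):
            cache[i] = l
    return cache

# Sketch of use: refresh only every few epochs and reuse the (stale) cached losses in
# between to decide which examples get the expensive gradient-tracking forward/backward pass.
```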
You see the blue is a traditional model and the pink is a selective back prop with a 33% retain rate. So you see the selective back prop is much faster in reaching a low error and this first thing is simply with 1% of the labels shuffled. So 1% of the images are mislabeled. Selective back prop is still much faster than the traditional trajectory. If you go to 10% shuffled, still you can see selective back prop is faster in reaching a low error. Of course the error now generally is higher. But if you go to 20% here, 20% shuffled labels, what you can see is it starts to become clear that this selective back prop retains 33% of the hardest examples right. And 20% of the examples have a wrong label. That means most of what it upweighs are wrongly labeled examples. Well, almost; let's say there's still a lot of correctly labeled examples. But you see it gets to a low error but then it goes up again as it kind of massively overfits on these wrongly labeled examples because it upweighs them so much. Because here, early in training, still every example is hard right. So these wrongly labeled examples they'll get about the same weight as correctly labeled examples because the network isn't trained yet. But as you go lower it starts to massively overfit. In comparison, the traditional model kind of just reaches this low error that, okay, is now corrupted by these wrong labels, but it doesn't hurt as much. So that's kind of my first criticism. If you have a lot of noisy labels or if you have a lot of mislabeled examples then this method might actually hurt more than it helps. But the level is interesting: it can kind of tolerate 10% but it gets kind of into trouble at 20 or more percent. So this is the first criticism and that's how the authors address it. I really like this ablation study that they do. Here this is kind of the meat of the experiment. So what they show here, these curves on the bottom, and let's look at this curve, is on the x-axis you actually have wall clock time now. So how much time do you need in order to reach a kind of low error. Here is test set error. You see the traditional model in blue has a certain trajectory. Now Kath18 is a baseline, don't worry about it. What we're interested in is the selective backprop which is the pink, which you can see outperforms this traditional training. And what we're also interested in is the stale SB. So stale meaning it has this buffer of information that is supposed to reduce the time again. And you see that even more outperforms the traditional approach. You can also see that the staleness here apparently doesn't hurt the performance too much. You see the error is fairly close and it reaches this error in a much faster time. This is on CIFAR 10. They have this nice table up here where they show the speed up to reach a given error. So what they do is they take this error of the traditional model, this test set error, and they ask how fast are these methods in reaching this error times a constant. So times 1.1, times 1.2, times 1.4. Of course reaching 1.4 times the final error is much easier, but it's also easier for the traditional model of course. So that's the catch, but these are kind of benchmarks they chose for how fast these models are in reaching 1.1, 1.2, 1.4 times the error of a traditionally trained model. You can see here on CIFAR 10 for example, actually let's go to SVHN. SVHN is the easiest of the data sets and it shows the most clear thing.
So the traditional error is 1.7% and you see the speed up: this selective back prop is 3.4 times faster in reaching 1.1 times the error of this traditional model, and it's also 3.4 times faster reaching 1.2 times, and it's 3.5 times faster in reaching 1.4 times. The stale selective back prop is even faster, so 4.3, 4.9 and 5 times faster in reaching these multiples of the error. And so what you can see here is that these methods really make it faster, but also there's kind of two important things to note in this table. First of all you can see, as you go to the right in the table, the speed ups get higher, and what it means is that as you make the problem easier, so as you only need to reach a higher error, which is a higher loss value, these methods are faster there. What that means is they're really fast at reaching a somewhat decent point, which is represented here, they're really fast, but if you need them to reach a more and more accurate performance, they themselves get slower and slower. This is of course clear, because what you're doing is you're no longer treating every data point the same, you are introducing a bias into your training by only training on the hard examples. So you're introducing a bias, and this bias will give you a speed up but also hurt your performance, and thereby if you have to get more and more accurate you will lose much of that speed up, because you need to reduce that bias that you introduced at the end. So that's the first caveat: as you want to get to a higher and higher performance, these methods will help less and less, because they basically introduce the bias to gain speed at the beginning of training, or to reach less accurate points. The second thing is, as you look at these problems here, so SVHN with 1.7 percent error, CIFAR-10 is a slightly harder problem with 2.9 percent error, and CIFAR-100 is really a harder problem where a traditional model has 18 percent error. If you look at the speed ups now, then you can see even at this rightmost end here you have the 3.5 and 5x speed up, here we have a 1.5, 2x speed up, here we have a 1.2, 1.6x speed up. So as the problems get harder and as the kind of models get fancier, as the classes get more, then the speed up is much lower, and I believe that's because the bias you introduce by reweighing the samples will hurt you much more on a difficult and large problem with a large network than it will hurt you on an easy problem. Right, on an easy problem you're fine introducing some bias, but if you have a hard noisy problem then this bias you introduce will hurt you much more, and thereby the speed up that these methods give you is much, much less. And so this means that the performance of these models is directly anti-correlated with the hardness of the problem, and that tells me it kind of makes it almost unusable, or it goes towards that: if I look at the numbers over here and extrapolate that to something like ImageNet, it tells me that these methods are going to be almost useless on a data set of the size and complexity of ImageNet, and the interesting problems nowadays are very much in the domain of harder, more complex problems. So the kind of usefulness of this method in practice is something that I wouldn't bet on just from reading this paper. I'm open to be convinced otherwise, but just from reading this paper it seems like the harder you make the problem, the less these methods help.
And that's exactly not what you want; you want exactly the opposite. You want to say, oh, if I scale this up it'll, you know, give me even more of a speed up and that's going to be even better. But this is the opposite. And given that they have basically no theoretical analysis of how much this bias hurts you, or how you can still make it kind of good in expectation, how you would need to correct at the end and so on, I would first of course test it. I'm very interested to see tests on larger, more complex problems, but from this I'm a bit skeptical, I'm sorry. Yeah, so they do show, I mean they show on these data sets that it clearly helps, clearly speeds up the training, and that's of course already a good thing. And they do the required experiments, they do the ablation studies on these data sets and so on. So you can see here for example on these first graphics, on all the data sets you see it clearly goes down as you introduce the more sophisticated algorithms, but again you can see on the hard data set it doesn't go down as much. All right, but they do discuss this, they're really fair to themselves, they discuss this in their paper, how practical this is and so on, and what else they tried that didn't work. And I think that it's a really good paper in itself and it's a really good investigation. All right, so that was it for me, have a fun day. Bye bye.
[ { "end": 5.04, "start": 0, "text": " Hi there! Today we're looking at accelerating deep learning by focusing on" }, { "end": 12.6, "start": 5.04, "text": " the biggest losers by Angela Jiang et al. This paper is pretty simple, pretty short" }, { "end": 18.76, "start": 12.6, "text": " in idea and is a pretty much an engineering paper. So we'll go over this" }, { "end": 24.88, "start": 18.76, "text": " idea and give it a good look and discuss advantages, disadvantages and so on." }, { "end": 30.759999999999998, "start": 24.88, "text": " What's the basic idea? The basic idea is the following. If you train a neural" }, { "end": 37.44, "start": 30.759999999999998, "text": " network, what do you do? Usually you have a training data set, which I represent" }, { "end": 42.44, "start": 37.44, "text": " here. Each line is a sample and usually your network has a bunch of" }, { "end": 49, "start": 42.44, "text": " layers. Each line here is a layer of weights. What you do is you group your" }, { "end": 53.120000000000005, "start": 49, "text": " training data set into mini batches. Let's say that's a mini batch, four" }, { "end": 57.879999999999995, "start": 53.12, "text": " samples and you pass it through the network. This is called the forward" }, { "end": 66.24, "start": 57.879999999999995, "text": " propagation. You then calculate the loss of your forward propagated" }, { "end": 73.44, "start": 66.24, "text": " signal and then you back propagate this loss. When back propagating, you" }, { "end": 77.16, "start": 73.44, "text": " want to back propagate the loss such that it reaches each of the layers and" }, { "end": 81.56, "start": 77.16, "text": " it tells each layer how to update itself. What you want to do is for each" }, { "end": 86.32000000000001, "start": 81.56, "text": " layer you actually need to back prop the loss once towards the layer below it and" }, { "end": 91.64, "start": 86.32000000000001, "text": " once towards itself in order for the layer below it to continue the back prop" }, { "end": 97.08, "start": 91.64, "text": " and for the layer itself to update its weights. Each time you back prop" }, { "end": 103.56, "start": 97.08, "text": " basically once towards the lower layer and once towards yourself. That's a" }, { "end": 110.72, "start": 103.56, "text": " lot of work. You see whatever work you have passing your samples through the" }, { "end": 117.96, "start": 110.72, "text": " network here, you basically double the work going back." }, { "end": 124.2, "start": 117.96, "text": " The core idea of this paper is if you look at the following. In a" }, { "end": 129.92, "start": 124.2, "text": " traditional training neural network you'll have some overhead in each" }, { "end": 135.92, "start": 129.92, "text": " training step, some overhead of maybe putting the data to the GPU or something" }, { "end": 142.23999999999998, "start": 135.92, "text": " like this. Then you have a time that you require for a forward pass and then you" }, { "end": 145.95999999999998, "start": 142.23999999999998, "text": " have a big chunk that you require for the backward pass. You see it's about" }, { "end": 152.56, "start": 145.95999999999998, "text": " double the size of this forward pass. This paper asks how can we reduce this" }, { "end": 160.79999999999998, "start": 152.56, "text": " backward pass time. What they propose is the following. 
They propose if" }, { "end": 165.23999999999998, "start": 160.79999999999998, "text": " the backward pass is expensive and we do it here for each data point in these" }, { "end": 172.04000000000002, "start": 165.24, "text": " mini batches, why don't we stop doing this and only try to select" }, { "end": 177.64000000000001, "start": 172.04000000000002, "text": " examples that are important. Once we only have selected the important" }, { "end": 184.56, "start": 177.64000000000001, "text": " examples, only those examples get to do the backward pass. Thereby let's say if" }, { "end": 189.38, "start": 184.56, "text": " we can only select one third of the examples to do the backward pass, we can" }, { "end": 195.08, "start": 189.38, "text": " reduce the amount that's required in the backward pass, the amount of work, by" }, { "end": 202.24, "start": 195.08, "text": " one third or sorry by two thirds. The way they select the important examples" }, { "end": 208.32000000000002, "start": 202.24, "text": " is by looking at the loss. They basically say whichever examples have a" }, { "end": 213.28, "start": 208.32000000000002, "text": " high loss, these must be the important examples, these are the hard examples." }, { "end": 218, "start": 213.28, "text": " If we only train on the hard examples or if we train on the hard" }, { "end": 226.56, "start": 218, "text": " examples more, then the network will learn on these hard examples faster." }, { "end": 230.6, "start": 226.56, "text": " Of course there is an implication there that if your network is good on the hard" }, { "end": 234.64, "start": 230.6, "text": " examples, it's also going to be good on the easy examples. That's like the" }, { "end": 240.88, "start": 234.64, "text": " definition of hard and easy examples. Of course that's a kind of a simplifying" }, { "end": 247.36, "start": 240.88, "text": " assumption. The idea is only select the hard examples and only by how much" }, { "end": 252.16000000000003, "start": 247.36, "text": " loss they have and only then backprop these hard examples. That's how" }, { "end": 258.96000000000004, "start": 252.16000000000003, "text": " they can reduce this by a lot. There's several intricacies here." }, { "end": 263.8, "start": 258.96000000000004, "text": " The setup time of course is the same. What they do next is they forward" }, { "end": 268.72, "start": 263.8, "text": " propagate the entire mini batch here, because they need the loss of each" }, { "end": 273.8, "start": 268.72, "text": " example and then therefore they need to forward propagate the entire mini batch." }, { "end": 280.44, "start": 273.8, "text": " At the end of this they select the examples with the highest loss and they" }, { "end": 285.5, "start": 280.44, "text": " only use those in training. Training consists of another forward" }, { "end": 289.24, "start": 285.5, "text": " pass, but this one is much smaller because you only forward pass the" }, { "end": 293.96000000000004, "start": 289.24, "text": " examples that you're actually training on. Then the backward pass accordingly" }, { "end": 300.2, "start": 293.96000000000004, "text": " will also be much much smaller because now again you have less samples to" }, { "end": 306.92, "start": 300.2, "text": " actually train on. The reason that you even need this second forward" }, { "end": 312.36, "start": 306.92, "text": " pass is the following. 
When you do backprop you can't simply start with a" }, { "end": 316.91999999999996, "start": 312.36, "text": " signal back here and then backprop that through the network. That doesn't work" }, { "end": 322.76, "start": 316.91999999999996, "text": " usually with most network architectures. Namely what you need to do is actually" }, { "end": 328.56, "start": 322.76, "text": " while you forward pass you need to remember information at each layer. A good" }, { "end": 333.36, "start": 328.56, "text": " example of this is the MaxPool operation. In MaxPool what you do is you" }, { "end": 337.32, "start": 333.36, "text": " maybe have four pixels that are next to each other and you select one of them." }, { "end": 342.32, "start": 337.32, "text": " Now you need to remember during the forward pass which one you selected." }, { "end": 347.32, "start": 342.32, "text": " Otherwise the backward pass won't work. You need to know which pixel to back" }, { "end": 352.96, "start": 347.32, "text": " prop through. That's why at each point you need to remember information to" }, { "end": 358.88, "start": 352.96, "text": " inform the backward pass. That's why basically you need a second forward" }, { "end": 368.32, "start": 358.88, "text": " pass with only the examples that you want to train on." }, { "end": 373.64, "start": 368.32, "text": " You forward pass once, calculate this loss, select the ones with the" }, { "end": 378.59999999999997, "start": 373.64, "text": " high loss, then forward pass these again and then backprop only these examples." }, { "end": 384.40000000000003, "start": 378.6, "text": " That's the main gist of it. This is exactly what you see here." }, { "end": 390, "start": 384.40000000000003, "text": " Forward pass everything, then forward pass those again that have high loss" }, { "end": 394.20000000000005, "start": 390, "text": " and then backprop them. There is actually an interesting thing in this" }, { "end": 399.04, "start": 394.20000000000005, "text": " graphic in that you see that this forward pass here also is shorter than" }, { "end": 403.12, "start": 399.04, "text": " this forward pass. I assume that's because this forward pass here actually" }, { "end": 407.68, "start": 403.12, "text": " needs to do those additional saving of information while this forward pass here" }, { "end": 412.24, "start": 407.68, "text": " is simply a forward pass without intention of backward passing. You can" }, { "end": 419.40000000000003, "start": 412.24, "text": " instruct the deep learning frameworks to then not remember this information." }, { "end": 425.4, "start": 419.40000000000003, "text": " They have another improvement over their algorithm called stale" }, { "end": 429.48, "start": 425.4, "text": " selective backprop. This is called selective backprop. They have stale" }, { "end": 435.12, "start": 429.48, "text": " selective backprop. What stale selective backprop does is it says well we might" }, { "end": 441.2, "start": 435.12, "text": " not always need this forward pass. What we might be able to do is" }, { "end": 446.24, "start": 441.2, "text": " actually, first we take the entire data set," }, { "end": 450.96, "start": 446.24, "text": " let's use a different color here, let's use this. We take the" }, { "end": 457.04, "start": 450.96, "text": " entire data set forward properly through the network and then save this" }, { "end": 463.6, "start": 457.04, "text": " save into some database the losses. 
Then we use these losses to select" }, { "end": 471.36, "start": 463.6, "text": " the individual points here for a while. We perform maybe" }, { "end": 477.36, "start": 471.36, "text": " this is training here. You start here, you do this loss calculation and then" }, { "end": 483, "start": 477.36, "text": " you run your training until a couple of epochs and then you say okay now this" }, { "end": 487.08000000000004, "start": 483, "text": " information here is really outdated, I should really update it. Then you do this" }, { "end": 492.8, "start": 487.08000000000004, "text": " entire thing again and then you run training some more until you again stop" }, { "end": 498.44, "start": 492.8, "text": " and say okay now this information is stale again. Thereby you can amortize" }, { "end": 504.36, "start": 498.44, "text": " the cost of these forward passes. You pass your entire training set once" }, { "end": 510.04, "start": 504.36, "text": " in a while and then use those losses to select the hard examples. That's" }, { "end": 516.72, "start": 510.04, "text": " amortized. You can then reduce this forward pass that's used for selecting" }, { "end": 521.16, "start": 516.72, "text": " again by a lot. Of course the paper shows that this doesn't hurt your" }, { "end": 526.28, "start": 521.16, "text": " performance too much if you have this stale information. This is the" }, { "end": 533.36, "start": 526.28, "text": " entire idea of the algorithm and the algorithm is detailed here. Very" }, { "end": 539.68, "start": 533.36, "text": " briefly you have this buffer and you go through the data, you forward pass every" }, { "end": 545.24, "start": 539.68, "text": " example. For each loss you calculate the probability that you should retain it." }, { "end": 551.6, "start": 545.24, "text": " It's a probabilistic framework, it's not absolute cutoff. If you decide to" }, { "end": 556.48, "start": 551.6, "text": " choose it probabilistically you append it to this buffer. If this buffer is of a" }, { "end": 561.64, "start": 556.48, "text": " certain size then you do the back prop only on this buffer. This buffer" }, { "end": 565, "start": 561.64, "text": " now only has the high loss examples with higher" }, { "end": 570.6800000000001, "start": 565, "text": " probability. Don't forget within this backward here there is also an implicit" }, { "end": 577.56, "start": 570.68, "text": " forward pass. Then you clear the buffer and go on. There are lots of" }, { "end": 583.76, "start": 577.56, "text": " forward passes here to compute the losses and then every now" }, { "end": 587.4799999999999, "start": 583.76, "text": " and then there is a backward pass whenever the buffer is of certain size." }, { "end": 592.8399999999999, "start": 587.4799999999999, "text": " The probabilistic calculation of how and when to retain a sample is very" }, { "end": 599.56, "start": 592.8399999999999, "text": " simple. You have a deck of recent losses, a history of recent losses. You simply" }, { "end": 605.4399999999999, "start": 599.56, "text": " calculate the percentile that a given loss has in this history and that" }, { "end": 609.88, "start": 605.4399999999999, "text": " percentile will then decide on the probability. If you raise it to a power" }, { "end": 615.88, "start": 609.88, "text": " and that looks something like this. What's often used in this paper is this" }, { "end": 620.9599999999999, "start": 615.88, "text": " 33% selection. 
That would be the blue curve and you see the median example" }, { "end": 627.0799999999999, "start": 620.9599999999999, "text": " here. If you are in the median then you have about a 33% chance of being" }, { "end": 633.8000000000001, "start": 627.08, "text": " retained. If you have a higher loss than that then your probability rises." }, { "end": 638.76, "start": 633.8000000000001, "text": " The first interesting thing actually is this graphic here where they show" }, { "end": 645.8000000000001, "start": 638.76, "text": " what the algorithm deems the hardest and easiest examples. Examples chosen" }, { "end": 652.2800000000001, "start": 645.8000000000001, "text": " least frequently and this is the CIFAR-10 dataset which is images 32 by 32 in" }, { "end": 658.48, "start": 652.28, "text": " color and you need to classify them into 10 categories. You see the easiest" }, { "end": 664.3199999999999, "start": 658.48, "text": " images, the ones chosen least frequently, are almost all automobiles." }, { "end": 670.3199999999999, "start": 664.3199999999999, "text": " Of the automobiles they're almost all where you see the full car with the" }, { "end": 676.3399999999999, "start": 670.3199999999999, "text": " wheels and whatnot like this one here. These are what the" }, { "end": 683.2, "start": 676.34, "text": " algorithm deems easy samples and if you look at the hard samples, examples" }, { "end": 689.4, "start": 683.2, "text": " chosen most frequently by selective backprop, it's these here. For example" }, { "end": 695.6, "start": 689.4, "text": " bird and bird in this case is just kind of a smear. They're just kind of smears" }, { "end": 701.12, "start": 695.6, "text": " on a blue background. It's understandably that this resolution is pretty hard to" }, { "end": 705.96, "start": 701.12, "text": " make out that this is a bird. Airplane and automobile here you see that it's" }, { "end": 713.1600000000001, "start": 705.96, "text": " only a partial view of the thing. It's not like the full car like we saw" }, { "end": 718.36, "start": 713.1600000000001, "text": " in the easy pictures. It's only partial and this seems to be pretty hard and" }, { "end": 724.36, "start": 718.36, "text": " it's understandable. This cat here to me it's even unclear if this is a cat or a" }, { "end": 731.2800000000001, "start": 724.36, "text": " dog and I think dog is also a category in CIFAR-10 so the algorithm is" }, { "end": 736.9599999999999, "start": 731.28, "text": " certainly understandably confused by this example and deems it a hard example." }, { "end": 743.48, "start": 736.9599999999999, "text": " And here even more you see truck and this isn't a truck as far as I can make" }, { "end": 750.36, "start": 743.48, "text": " out. These are two humans on the picture with no truck anywhere visible. So this" }, { "end": 755.68, "start": 750.36, "text": " seems to be a mislabeled example and of course mislabeled examples are going to" }, { "end": 762.64, "start": 755.68, "text": " be of high loss to the algorithm. 
This is the first criticism or thing and the" }, { "end": 769.92, "start": 762.64, "text": " authors recognize this that if you up weigh your examples with high loss you" }, { "end": 775.4399999999999, "start": 769.92, "text": " are going to up weigh all the mislabeled examples as well and thereby you're going" }, { "end": 780.1999999999999, "start": 775.4399999999999, "text": " to train more on the mislabeled examples and thereby you're going to possibly" }, { "end": 786.5600000000001, "start": 780.2, "text": " degrade your test performance much more than had you given every sample the same" }, { "end": 792.08, "start": 786.5600000000001, "text": " weight. And the authors address this actually nicely by doing an experiment" }, { "end": 797.32, "start": 792.08, "text": " and the experiment is what if we artificially mislabel examples how much" }, { "end": 802.96, "start": 797.32, "text": " can these algorithms tolerate. And so they often have these graphics here" }, { "end": 811, "start": 802.96, "text": " where they show test error over time. So test error and the x-axis here is number" }, { "end": 816, "start": 811, "text": " of back propped images which is kind of a time dimension in training these" }, { "end": 823.2, "start": 816, "text": " algorithms. You see the blue is a traditional model and the pink is a" }, { "end": 831.0400000000001, "start": 823.2, "text": " selective back prop with a 33% retain rate. So you see the selective back prop" }, { "end": 836.76, "start": 831.04, "text": " is much faster in reaching a low error and this first thing is simply with 1%" }, { "end": 841.5999999999999, "start": 836.76, "text": " of the labels shuffled. So 1% of the images are mislabeled. Selective back" }, { "end": 851.0799999999999, "start": 841.5999999999999, "text": " prop is still much faster than the traditional trajectory. If you go to 10%" }, { "end": 856.8399999999999, "start": 851.0799999999999, "text": " shuffled still you can see selective back prop is faster reaching a low error." }, { "end": 864.6800000000001, "start": 856.84, "text": " Of course the error now generally is higher. But if you go to 20% here what" }, { "end": 870.64, "start": 864.6800000000001, "text": " you can see 20% shuffled labels what you can see it starts to become clear that" }, { "end": 878.2, "start": 870.64, "text": " these selective back prop it retains 33% of the hardest examples right. So and" }, { "end": 885.52, "start": 878.2, "text": " 20% of the examples have a wrong label. That means most of what it upweighs are" }, { "end": 890.1999999999999, "start": 885.52, "text": " wrongly labeled examples. Almost let's say that there's still a lot of" }, { "end": 897.76, "start": 890.1999999999999, "text": " correctly labeled examples. But you see it gets to a low error but then it gets" }, { "end": 903.16, "start": 897.76, "text": " up again as it kind of massively overfits on these wrongly labeled examples" }, { "end": 909.76, "start": 903.16, "text": " because it upweighs them so much. Because here in still every" }, { "end": 914.4, "start": 909.76, "text": " example is hard right. So these wrongly labeled examples they'll get about the" }, { "end": 917.72, "start": 914.4, "text": " same weight as correctly labeled examples because the network isn't" }, { "end": 923.88, "start": 917.72, "text": " trained yet. But as you go lower it starts to massively overfit. 
So compared" }, { "end": 933.48, "start": 923.88, "text": " to the traditional model kind of just reaches this low error that okay is now" }, { "end": 938.64, "start": 933.48, "text": " corrupted by these wrong labels but it doesn't it doesn't hurt as much. So" }, { "end": 944.16, "start": 938.64, "text": " that's kind of my first criticism. If you have a lot of noisy labels or if you" }, { "end": 950.4399999999999, "start": 944.16, "text": " have a lot of mislabeled examples then this method might actually hurt more" }, { "end": 955.8399999999999, "start": 950.4399999999999, "text": " than it helps. But the level is interesting that it can kind of tolerate" }, { "end": 965.68, "start": 955.8399999999999, "text": " 10% but it gets kind of into trouble at 20 or so more percent. So this is the" }, { "end": 969.24, "start": 965.68, "text": " first criticism and that's how the authors address it. I really like this" }, { "end": 976.24, "start": 969.24, "text": " ablation study that they do. Here this is kind of the meat of the experiment." }, { "end": 980.28, "start": 976.24, "text": " So what they show here these curves on the bottom and let's look at this curve" }, { "end": 986.28, "start": 980.28, "text": " is on the x-axis you actually have wall clock time now. So how much time do you" }, { "end": 993.72, "start": 986.28, "text": " need in order to reach a kind of low error. Here is test set error. You see the" }, { "end": 999.2, "start": 993.72, "text": " traditional model in blue has a certain trajectory. Now cath 18 is a baseline" }, { "end": 1004.36, "start": 999.2, "text": " don't worry about it. What we're interested in is the selective" }, { "end": 1010.76, "start": 1004.36, "text": " backprop which is the pink which you can see outperforms this traditional" }, { "end": 1016.84, "start": 1010.76, "text": " training. And what we're also interested in is the stale SB. So stale meaning it" }, { "end": 1021.36, "start": 1016.84, "text": " has this buffer of information that reduces it's supposed to reduce the time" }, { "end": 1027.66, "start": 1021.36, "text": " again. And you see that is even that even more outperforms the traditional" }, { "end": 1033.88, "start": 1027.66, "text": " approach. You can also see that the staleness here apparently doesn't hurt" }, { "end": 1039.3200000000002, "start": 1033.88, "text": " the performance too much. You see the error is fairly close and it reaches" }, { "end": 1046.3200000000002, "start": 1039.3200000000002, "text": " this error in a much faster time. This on CIFAR 10. They have this nice table up here" }, { "end": 1054.76, "start": 1046.3200000000002, "text": " where they show the speed up to reach a given error. So what they do is they take" }, { "end": 1060.04, "start": 1054.76, "text": " this error of the traditional model this test set error and they ask how fast are" }, { "end": 1066.68, "start": 1060.04, "text": " these methods in reaching this error times a constant. So times 1.1 times 1.2" }, { "end": 1072.2, "start": 1066.68, "text": " times 1.4 now. Of course the reaching 1.4 times the final error is much is" }, { "end": 1079.04, "start": 1072.2, "text": " easier and thereby but it's also easier for the traditional model of course. 
So" }, { "end": 1085.2, "start": 1079.04, "text": " that's the catch but these are kind of benchmarks they chose to how fast are" }, { "end": 1090.36, "start": 1085.2, "text": " these models in reaching 1.1 1.2 1.4 times the error of a traditionally" }, { "end": 1097.08, "start": 1090.36, "text": " trained model. You can see here on CIFAR 10 for example actually it's go to SVHN." }, { "end": 1102.92, "start": 1097.08, "text": " SVHN is the easiest of the of the data sets and it shows the most clear thing." }, { "end": 1111.92, "start": 1102.92, "text": " So the traditional error is 1.7% and you see that the speed up is so this" }, { "end": 1120.24, "start": 1111.92, "text": " selective back prop is 3.4 times faster in reaching this 1.1 times the" }, { "end": 1127.28, "start": 1120.24, "text": " error of this traditional model and it's also 3.4 times faster reaching 1.2" }, { "end": 1135.72, "start": 1127.28, "text": " times and it's 3.5 times faster in reaching it 1.4 times. The stale" }, { "end": 1143.32, "start": 1135.72, "text": " selective back prop is even faster so 4.3 4.9 5 times faster in reaching 1.4" }, { "end": 1152.24, "start": 1143.32, "text": " times this reaching 1.4 times the the error and so what you can what you can" }, { "end": 1157.76, "start": 1152.24, "text": " see here is that these methods really make it faster but also there's kind of" }, { "end": 1162.56, "start": 1157.76, "text": " two things two important things to note in this table. First of all you can see" }, { "end": 1170.1200000000001, "start": 1162.56, "text": " as you go to the right in the table the speed ups get higher and what it means" }, { "end": 1176.6, "start": 1170.1200000000001, "text": " is that as you need to reach as you make the problem easier so as you need to" }, { "end": 1184.8799999999999, "start": 1176.6, "text": " reach a higher error which is as you need to reach a higher loss value these" }, { "end": 1190.9199999999998, "start": 1184.8799999999999, "text": " methods are there faster what that means is they're really fast at reaching a" }, { "end": 1196.7199999999998, "start": 1190.9199999999998, "text": " somewhat decent point which is represented here they're really fast but" }, { "end": 1202.3999999999999, "start": 1196.7199999999998, "text": " if they need them to reach a more and more accurate performance they" }, { "end": 1209.16, "start": 1202.4, "text": " themselves get slower and slower so this this is of course clear because what" }, { "end": 1214.5600000000002, "start": 1209.16, "text": " you're doing is you're no longer treating every day to point the same you" }, { "end": 1219.48, "start": 1214.5600000000002, "text": " are introducing a bias into your training by only training on the hard" }, { "end": 1225.3200000000002, "start": 1219.48, "text": " examples so you're introducing a bias and this bias will give you a speed up" }, { "end": 1229.52, "start": 1225.3200000000002, "text": " but also hurt your performance and thereby if you have to get more and more" }, { "end": 1236.32, "start": 1229.52, "text": " accurate you will you will lose much of that speed up because you need to reduce" }, { "end": 1242.6399999999999, "start": 1236.32, "text": " that bias at the end that you introduced so that's the first caveat as you want" }, { "end": 1247.6399999999999, "start": 1242.6399999999999, "text": " to get to a higher and higher performance these methods will help less" }, { "end": 1253.36, "start": 1247.6399999999999, "text": " and less because they basically introduce the 
bias to gain speed at the" }, { "end": 1262.1599999999999, "start": 1253.36, "text": " beginning of training or to reach less accurate points the second thing is as" }, { "end": 1270.52, "start": 1262.1599999999999, "text": " you look at these problems here so SVH n 1.7 percent error C for 10 is a" }, { "end": 1276.1999999999998, "start": 1270.52, "text": " slightly harder problem 2.9 percent error and C for 100 is really a harder" }, { "end": 1280.9599999999998, "start": 1276.1999999999998, "text": " problem where a traditional model has 18 percent error if you look at the speed" }, { "end": 1290.48, "start": 1280.96, "text": " ups now then you can see even at this right most end here you have the 3.5 and" }, { "end": 1298.8400000000001, "start": 1290.48, "text": " 5x speed up here we have a 1.5 2x speed up here we have a 1.2 1.6x speed up so" }, { "end": 1305.8400000000001, "start": 1298.8400000000001, "text": " as the problems get harder and as the kind of models get get fancier as the" }, { "end": 1313.9199999999998, "start": 1305.84, "text": " classes get more then the the speed up is much lower and I believe that's" }, { "end": 1321.6, "start": 1313.9199999999998, "text": " because the the bias you introduce by reweighing the samples the bias you" }, { "end": 1327.32, "start": 1321.6, "text": " introduce will hurt you much more on a difficult and large problem with a large" }, { "end": 1333.52, "start": 1327.32, "text": " network then it will hurt you on an easy problem right easy problem you were fine" }, { "end": 1339.28, "start": 1333.52, "text": " introducing some bias but if you have a hard noisy problem then this bias you" }, { "end": 1345.6, "start": 1339.28, "text": " introduce will hurt you much more and thereby this the speed up that these" }, { "end": 1351.6, "start": 1345.6, "text": " methods give you is much much less and so this means that the performance of" }, { "end": 1357.2, "start": 1351.6, "text": " these models is directly anti correlated with the hardness of the problem and" }, { "end": 1364.4, "start": 1357.2, "text": " that tells me it kind of makes it almost unusable or it goes towards if I look at" }, { "end": 1370, "start": 1364.4, "text": " the numbers if I look at the numbers over here and extrapolate that to" }, { "end": 1374.16, "start": 1370, "text": " something like image net it tells me that these methods are going to be" }, { "end": 1381.24, "start": 1374.16, "text": " almost useless on a data set of the size and complexity as image net and the" }, { "end": 1387.1200000000001, "start": 1381.24, "text": " interesting problems nowadays are very much in the domain of more hard more" }, { "end": 1393.8, "start": 1387.12, "text": " complex problems so the the kind of usefulness of this method in practice" }, { "end": 1400.08, "start": 1393.8, "text": " is something that I wouldn't bet on just from reading this paper I'm open to be" }, { "end": 1403.8799999999999, "start": 1400.08, "text": " convinced otherwise but just from reading this papers it seems like the" }, { "end": 1407, "start": 1403.8799999999999, "text": " harder you make the problem the less these methods help and that's exactly" }, { "end": 1411.4399999999998, "start": 1407, "text": " not what you want you want exactly the opposite you want to say oh if I scale" }, { "end": 1416.28, "start": 1411.4399999999998, "text": " this up it'll it'll you know give me even more of a speed up and that's going" }, { "end": 1423.24, "start": 1416.28, "text": " to be even better but this is the opposite 
so and given that they have no" }, { "end": 1429.12, "start": 1423.24, "text": " basically no theoretical analysis of how much this bias hurts you or how you can" }, { "end": 1433.44, "start": 1429.12, "text": " still make it kind of good in expectation how you would need to correct" }, { "end": 1440.12, "start": 1433.44, "text": " at the end and so on I would I would I would first of course test it I'm very" }, { "end": 1445.6399999999999, "start": 1440.12, "text": " interested to see tests on larger more complex problems but from this I'm a bit" }, { "end": 1453.44, "start": 1445.64, "text": " skeptical I'm sorry yeah so they they show I mean they show that on these" }, { "end": 1457.3600000000001, "start": 1453.44, "text": " states that it clearly helps clearly speeds up the training and that's of" }, { "end": 1461.8400000000001, "start": 1457.3600000000001, "text": " course that's already a good good thing and they do the required experiments" }, { "end": 1466.5200000000002, "start": 1461.8400000000001, "text": " they do the ablation studies on these data sets and so on so you can see here" }, { "end": 1472.76, "start": 1466.5200000000002, "text": " for example on these first graphics on all the data sets see clearly goes down" }, { "end": 1479.4, "start": 1472.76, "text": " as you introduce the more sophisticated algorithms but again you can see on the" }, { "end": 1486.28, "start": 1479.4, "text": " hard data set it doesn't go down as much all right but they do discuss this" }, { "end": 1491.16, "start": 1486.28, "text": " they're really fair to themselves they do risk they discuss this in their paper" }, { "end": 1496.68, "start": 1491.16, "text": " of how you know how practical this is and so on and what they what else they" }, { "end": 1501.92, "start": 1496.68, "text": " tried and didn't work and and that's a I think that it's a really good paper in" }, { "end": 1506.24, "start": 1501.92, "text": " itself and it's a really good investigation all right so that was it" }, { "end": 1532.96, "start": 1506.24, "text": " for me have a fun day bye bye" } ]
MIEA8azwu1k
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
DEEP LEARNING MEME REVIEW - Episode 1
[ "Comedy" ]
[ "deep learning", "memes", "meme review", "artificial intelligence", "review", "discussion", "reaction", "ai", "machine learning", "ml", "dnn", "gpu", "deep neural network", "ml memes", "deep learning memes", "machine learning memes", "funny", "gpus", "classifier", "hinton", "turing award", "bert", "xlnet", "optimization", "error rate", "culture", "community", "research" ]
The wait is finally over! Antonio and I discuss the best, funniest and dankest memes of the machine learning world. Join us for a laugh!
What? You haven't done memes before? No. Don't you have this show on YouTube where you review memes and stuff? No. You haven't? What is that? I think that's an entirely new concept. We're just gonna steal this concept from PewDiePie. Okay. But first actual meme review deep learning theme. Welcome. I'm joined by Antonio who is a bit of a memester himself. And today we're just gonna kind of look at deep learning memes. Nice. Let's jump in. So. Oh no, that's a paper. That's the meme. That is code. Okay. Being a DL researcher is not stress at all. 26. That is incredible how he says like, but now, oh, I already, I always knew that it worked. Of course. Yeah, yeah. There was no other way. There was no AI winter or anything. This was, this was always, Geoff Hinton is so cool. Yeah. All right. Nice. Next meme. Next meme. I guess my brain is just really big. Oh, what else is really big? I thought you never asked. I agree. Gradient update on the edge of a really steep cliff. Big gradients are always good. I mean, look at that. Why wouldn't you want to land over there? Yeah, yeah, it's perfect. It seems much more interesting than down there. So perfect. I guess it's an, oh, minus seven over four. Wow. That's a small epsilon. Very small epsilon. Yes. Almost optimal. Crazy. Data scientist when he sees a new problem. Classifier.fit. This is, this is the old days. The old days, yes. Of scikit-learn. It still works pretty well. No, we must use deep learning for everything. Oh, sorry. No, no, sorry. Let's just look at the next meme, please. I don't know this template. This is a cool template. Yeah, it's a good template. NLP researchers BERT and then XLNet. What is XLNet? So XLNet is BERT just trained differently. Okay. And it costs like 10 times more to train it. Okay. And it's a bit better. How much does it cost in electricity? Why? So people have calculated this, to train one XLNet costs about 250K. It's insane. But does it work 1% better? It's like, that is like five PhD students. That's almost as good a language model as XLNet. And how much better is it than BERT? A bit. A bit? Oh, a bit. A bit. That's all that counts. Wow. State of the art. Search arXiv for preprint. Search GitHub for code. Ask random idiots on Facebook. Me. Go. Let's go, Burbus. Go. In some ways, actually, it is simpler to publish something on arXiv and not being completely like people just saying, oh, you're an idiot and stuff like that. Because it probably goes unnoticed. Probably gets unnoticed, right? Yeah. On Facebook, it doesn't get unnoticed. Yeah, that's a real peer review. Exactly. If you are in a very good meme page on Facebook about deep learning, you're going to get wrecked. Yes, exactly. That's not going to happen. This software engineer designed a chatbot to chat with his girlfriend while he's busy at work. However, the girl eventually got suspicious over the speed she was receiving messages from her boyfriend. Modern problems require modern solutions. But also like pretty good chatbot. Got suspicious with the timing. Yeah. And now for the actual content. Well, what fashion companies try to sell us. What we really want. Fashion MNIST. Fashion MNIST is the new cool thing. So cool. Does anyone use it? I use it. Cool. By the way, I found a huge saddle point. Nice. MNIST. Wow. Huge saddle. It is not very MNIST. Where is it? Places. How much accuracy do you get on fashion MNIST? Like as MNIST. Because it's so easy. Like it's basically as MNIST. I don't know. I'm not a fashion person. So I don't know what to call this. 
What? What? This? This is a pants sweat. Me and the boys after using dropouts. Me and the boys. Also, I don't know where they come from. Where do they come from? I don't know. Some comic. They are so, so beautiful. Are you still watching machine learning tutorials on YouTube? Did you check my internet history? Why can't you watch porn like a normal child? I'm addicted. Andrew NG? I'm addicted. What is this Andrew NG? I must use more Keras code. Yes. Please. What is wrong with you? Because Andrew NG, boy, I don't know. But I understand that it makes you comfortable. And respected and loved. He does. He says it's okay if I don't understand everything. Whereas in porn it's completely different. It's not okay. I'm really with my notes trying to follow the plot. Wait, what was the plot? Why? When your binary classifier predicts 51% accuracy. It ain't much, but it's honest work. That's what you want to get. Better than random. Exactly. Just change your random seed until you get 51%. Your method works. Yes, exactly. And also like, you know about in finance, but it's actually state of the art, right? In what? In finance. Prediction of the last, if you have a profit time series, if you predict the next time point as the last time point, that's probably the best thing you can do. I'm going to switch my PhD topic. Yeah, and also like some people with their fancy methods do worse. Because they say, yeah, because of this and that and then it's just to be, and then... Because it's just like, you just predict whatever was there and you're good. Okay, next meme. Next meme. Deep learning research rather than video. GPU, GPU, GPU. Oh, damn. Too bad I don't use GPUs. I will start though. You know this MATLAB Deep Learning toolbox? Yeah. Recently they introduced neuronal stuff with the networks and the graphs, which is basically like the brain. Yeah. And so basically you can learn stuff with MATLAB. With MATLAB? Exactly. Wow. Exactly. Can you learn to uninstall it? I look like all you need. No, you don't look like an Envy that hide and not. Because that's what we really want. Exactly. Me, I sure hope my model's error rate isn't super high. Error rate. Sorry. So sorry. Optimization is hard. Yeah, it's hard. Just hard. You do these fancy methods and then there's SGD. Yeah. That beats you every time. Yeah. Bastard. Me and the boys about to receive the Turing Award. Me and the boys. So fancy. Yeah. Look at them. He's probably thinking about capsules. Yeah. Oh, oh. But wasn't it like two years ago? Yeah. Yeah. What is the state of that? It's still the same. He's still thinking about it. Okay. I didn't get what capsules are. To be honest. Well, they sort of are different. Oh, they're different? Yeah. Okay. Yeah. They're not like the same. Ah, I see, I see. So that means that they work in another way. Yes, but only kind of. So to do other things. Sort of. Sort of. I see. But then they do it on the same tasks. Ah, I see. No, they're like trying to abstract concepts into these capsules and then the capsules can route the information to other capsules dynamically. Yeah. Does it work? No, I don't think so. Right? Kind of. It kind of works. Yeah. Ah, why are people... Okay. Like you can make it do something. Okay. Capsules. Capsules. And like meme. My desires are unconventional. So show me. RTX 2060, 2070 and 2080. Ah, yeah. No, don't let me look at them. I want them so badly. I just can't. Use a transformer instead of an LSTM. I have failed you. You again. You again. No. RNNs must come back. 
Yes, exactly. They're Turing complete. Not. Assistant, remember this location. Okay, I remember that. What did I ask you to remember? I remember what you told me. This location. What does this location mean? Visitor top results. Assistant, machines are about to take over the world. Definitely. This is this intelligence. Yeah, exactly. Yeah, we must be very, very careful. Also with jobs and stuff. What? What? You finished the memes? Not yet. There's one more. So I have to preface this. So basically this is a... So the robot is supposed to get the ball to the target. And in one setting it has a reference motion of a human doing the same thing. So it learns to learn from that. And then for comparison, there is no reference motion. And it just learns from scratch. So first is with, and three times, and then without. With reference motion. Nice. Nice, yeah. Wow. And now without. Get the ball there. Get it there. Get it there. It's so cute. Yes, yes. We are AI Doom. Yes, done already. The damage is done. Yeah, I mean I can see an army of robots. Their arms. Their guns. They just take the bullet and go like... All right, this was it for episode one of Deep Learning Meme Review. Thanks so much for being here with us. And have a good time.
[ { "end": 2, "start": 0, "text": " What? You haven't done memes before?" }, { "end": 2.5, "start": 2, "text": " No." }, { "end": 5, "start": 2.5, "text": " Don't you have this show on YouTube when you review memes and stuff?" }, { "end": 5.5, "start": 5, "text": " No." }, { "end": 6, "start": 5.5, "text": " You haven't?" }, { "end": 9, "start": 6, "text": " What is that? I think that's an entirely new concept." }, { "end": 16, "start": 13, "text": " We're just gonna steal this concept from PewDiePie." }, { "end": 17, "start": 16, "text": " Okay." }, { "end": 21, "start": 17, "text": " But first actual meme review deep learning theme." }, { "end": 22, "start": 21, "text": " Welcome." }, { "end": 34, "start": 22, "text": " I'm joined by Antonio who is a bit of a memester himself." }, { "end": 39, "start": 34, "text": " And today we're just gonna kind of look at deep learning memes." }, { "end": 40, "start": 39, "text": " Nice." }, { "end": 41, "start": 40, "text": " Let's jump in." }, { "end": 43, "start": 42, "text": " So." }, { "end": 44, "start": 43, "text": " Oh no, that's a paper." }, { "end": 46, "start": 45, "text": " That's the meme." }, { "end": 47, "start": 46, "text": " That is code." }, { "end": 48, "start": 47, "text": " Okay." }, { "end": 52, "start": 48, "text": " Being a DL researcher is not stress at all." }, { "end": 56, "start": 54, "text": " 26." }, { "end": 64, "start": 59, "text": " That is incredible how he says like, but now, oh, I already, I always knew that it worked." }, { "end": 65, "start": 64, "text": " Of course." }, { "end": 66, "start": 65, "text": " Yeah, yeah." }, { "end": 67, "start": 66, "text": " There was no other way." }, { "end": 70, "start": 67, "text": " There was no AI winter or anything." }, { "end": 73, "start": 70, "text": " This was, this was always, Tep Hinton is so cool." }, { "end": 74, "start": 73, "text": " Yeah." }, { "end": 76, "start": 75, "text": " All right." }, { "end": 77, "start": 76, "text": " Nice." }, { "end": 78, "start": 77, "text": " Next meme." }, { "end": 79, "start": 78, "text": " Next meme." }, { "end": 82, "start": 79, "text": " I guess my brain is just really big." }, { "end": 84, "start": 82, "text": " Oh, what else is really big?" }, { "end": 86, "start": 84, "text": " I thought you never asked." }, { "end": 87, "start": 86, "text": " I agree." }, { "end": 90, "start": 87, "text": " Gradient update on the edge of a really steep cliff." }, { "end": 95, "start": 93, "text": " Big gradients are always good." }, { "end": 96, "start": 95, "text": " I mean, look at that." }, { "end": 98, "start": 96, "text": " Why wouldn't you want to land over there?" }, { "end": 99, "start": 98, "text": " Yeah, yeah, it's perfect." }, { "end": 101, "start": 99, "text": " It seems much more interesting than down there." }, { "end": 102, "start": 101, "text": " So perfect." }, { "end": 104, "start": 102, "text": " I guess it's an, oh, minus seven over four." }, { "end": 105, "start": 104, "text": " Wow." }, { "end": 107, "start": 105, "text": " That's a small epsilon." }, { "end": 108, "start": 107, "text": " Very small epsilon." }, { "end": 109, "start": 108, "text": " Yes." }, { "end": 110, "start": 109, "text": " Almost optimal." }, { "end": 111, "start": 110, "text": " Crazy." }, { "end": 115, "start": 111, "text": " Take the scientist when he sees a new problem." }, { "end": 118, "start": 116, "text": " Classifier fit." }, { "end": 122, "start": 120, "text": " This is, this is the old days." 
}, { "end": 123, "start": 122, "text": " The old days, yes." }, { "end": 124, "start": 123, "text": " Of scikit-learn." }, { "end": 127, "start": 124, "text": " It still works pretty well." }, { "end": 130, "start": 128, "text": " No, we must use deep learning for everything." }, { "end": 131, "start": 130, "text": " Oh, sorry." }, { "end": 132, "start": 131, "text": " No, no, sorry." }, { "end": 133, "start": 132, "text": " Let's just look at the next meme, please." }, { "end": 134, "start": 133, "text": " I don't know this template." }, { "end": 135, "start": 134, "text": " This is a cool template." }, { "end": 136, "start": 135, "text": " Yeah, it's a good template." }, { "end": 140, "start": 136, "text": " NLP researchers BERT and then XLNet." }, { "end": 141, "start": 140, "text": " What is XLNet?" }, { "end": 144, "start": 141, "text": " So XLNet is BERT just trained differently." }, { "end": 145, "start": 144, "text": " Okay." }, { "end": 148, "start": 145, "text": " And it costs like 10 times more to train it." }, { "end": 149, "start": 148, "text": " Okay." }, { "end": 151, "start": 149, "text": " And it's a bit better." }, { "end": 153, "start": 151, "text": " How much does it cost electricity?" }, { "end": 154, "start": 153, "text": " Why?" }, { "end": 160, "start": 154, "text": " So people have calculated this to train one XLNet costs about 250K." }, { "end": 163, "start": 160, "text": " It's insane." }, { "end": 166, "start": 163, "text": " But does it work 1% better?" }, { "end": 169, "start": 166, "text": " It's like, that is like five PhD students." }, { "end": 173, "start": 169, "text": " That's almost as good a language model as XLNet." }, { "end": 175, "start": 173, "text": " And how much is better than BERT?" }, { "end": 176, "start": 175, "text": " A bit." }, { "end": 177, "start": 176, "text": " A bit?" }, { "end": 178, "start": 177, "text": " Oh, a bit." }, { "end": 179, "start": 178, "text": " A bit." }, { "end": 180, "start": 179, "text": " That's all that counts." }, { "end": 181, "start": 180, "text": " Wow." }, { "end": 182, "start": 181, "text": " State of the art." }, { "end": 184, "start": 182, "text": " Search archive for preprint." }, { "end": 186, "start": 184, "text": " Search GitHub for code." }, { "end": 190, "start": 186, "text": " Ask random idiots on Facebook." }, { "end": 191, "start": 190, "text": " Me." }, { "end": 192, "start": 191, "text": " Go." }, { "end": 193, "start": 192, "text": " Let's go, Burbus." }, { "end": 194, "start": 193, "text": " Go." }, { "end": 200, "start": 194, "text": " In some ways, actually, it is simpler to publish something on archive and not being completely" }, { "end": 203, "start": 200, "text": " like people just saying, oh, you're an idiot and stuff like that." }, { "end": 205, "start": 203, "text": " Because we've probably got unnoticed." }, { "end": 207, "start": 205, "text": " Probably gets unnoticed, right?" }, { "end": 208, "start": 207, "text": " Yeah." }, { "end": 209, "start": 208, "text": " On Facebook, it doesn't get unnoticed." }, { "end": 211, "start": 209, "text": " Yeah, that's a real peer review." }, { "end": 212, "start": 211, "text": " Exactly." }, { "end": 217, "start": 212, "text": " If you are in a very good meme page on Facebook about deep learning, you're going to get wrecked." }, { "end": 218, "start": 217, "text": " Yes, exactly." }, { "end": 220, "start": 218, "text": " That's not going to happen." 
}, { "end": 225, "start": 220, "text": " This software engineer designed a chat board to chat with his girlfriend while he's busy" }, { "end": 226, "start": 225, "text": " at work." }, { "end": 231, "start": 226, "text": " However, the girl eventually got suspicious over the speed she was receiving messages" }, { "end": 232, "start": 231, "text": " from her boyfriend." }, { "end": 237, "start": 232, "text": " Modern problems require modern solutions." }, { "end": 239, "start": 237, "text": " But also like pretty good chat board." }, { "end": 241, "start": 239, "text": " Got suspicious with the timing." }, { "end": 242, "start": 241, "text": " Yeah." }, { "end": 244, "start": 242, "text": " And now for the actual content." }, { "end": 249, "start": 244, "text": " Well, what fashion companies try to sell us." }, { "end": 250, "start": 249, "text": " What we really want." }, { "end": 251, "start": 250, "text": " Fashion MNIST." }, { "end": 254, "start": 251, "text": " Fashion MNIST is the new cool thing." }, { "end": 255, "start": 254, "text": " So cool." }, { "end": 256, "start": 255, "text": " Does anyone use it?" }, { "end": 257, "start": 256, "text": " I use it." }, { "end": 258, "start": 257, "text": " Cool." }, { "end": 262, "start": 258, "text": " By the way, I found a huge saddle point." }, { "end": 263, "start": 262, "text": " Nice." }, { "end": 264, "start": 263, "text": " MNIST." }, { "end": 265, "start": 264, "text": " Wow." }, { "end": 266, "start": 265, "text": " Huge saddle." }, { "end": 267, "start": 266, "text": " It is not very MNIST." }, { "end": 268, "start": 267, "text": " Where is it?" }, { "end": 269, "start": 268, "text": " Places." }, { "end": 272, "start": 269, "text": " How much accuracy do you get on fashion MNIST?" }, { "end": 273, "start": 272, "text": " Like as MNIST." }, { "end": 274, "start": 273, "text": " Because it's so easy." }, { "end": 277, "start": 274, "text": " Like it's basically as MNIST." }, { "end": 278, "start": 277, "text": " I don't know." }, { "end": 279, "start": 278, "text": " I'm not a fashion person." }, { "end": 280, "start": 279, "text": " So I don't know what to call this." }, { "end": 281, "start": 280, "text": " What?" }, { "end": 282, "start": 281, "text": " What?" }, { "end": 283, "start": 282, "text": " This?" }, { "end": 284, "start": 283, "text": " This is a pants sweat." }, { "end": 289, "start": 284, "text": " Me and the boys after using dropouts." }, { "end": 292, "start": 289, "text": " Me and the boys." }, { "end": 294, "start": 292, "text": " Also, I don't know where they come from." }, { "end": 295, "start": 294, "text": " Where do they come from?" }, { "end": 296, "start": 295, "text": " I don't know." }, { "end": 297, "start": 296, "text": " Some comic." }, { "end": 303, "start": 297, "text": " They are so, so beautiful." }, { "end": 307, "start": 303, "text": " Are you still watching machine learning tutorials on YouTube?" }, { "end": 309, "start": 307, "text": " Did you check my internet history?" }, { "end": 313, "start": 309, "text": " Why can't you watch porn like a normal child?" }, { "end": 314, "start": 313, "text": " I'm addicted." }, { "end": 315, "start": 314, "text": " Andrew NG?" }, { "end": 316, "start": 315, "text": " I'm addicted." }, { "end": 319, "start": 316, "text": " What is this Andrew NG?" }, { "end": 321, "start": 319, "text": " I must use more Keras code." }, { "end": 322, "start": 321, "text": " Yes." }, { "end": 323, "start": 322, "text": " Please." 
}, { "end": 325, "start": 323, "text": " What is wrong with you?" }, { "end": 327, "start": 325, "text": " Because Andrew NG, boy, I don't know." }, { "end": 330, "start": 327, "text": " But I understand that it makes you comfortable." }, { "end": 331, "start": 330, "text": " And respected and loved." }, { "end": 332, "start": 331, "text": " He does." }, { "end": 334, "start": 332, "text": " He says it's okay if I don't understand everything." }, { "end": 337, "start": 334, "text": " Whereas in porn it's completely different." }, { "end": 338, "start": 337, "text": " It's not okay." }, { "end": 342, "start": 338, "text": " I'm really with my notes trying to follow the plot." }, { "end": 344, "start": 342, "text": " Wait, what was the plot?" }, { "end": 345, "start": 344, "text": " Why?" }, { "end": 349, "start": 345, "text": " When your binary classifier predicts 51% accuracy." }, { "end": 353, "start": 349, "text": " It ain't much, but it's honest work." }, { "end": 354, "start": 353, "text": " That's what you want to get." }, { "end": 355, "start": 354, "text": " Better than random." }, { "end": 356, "start": 355, "text": " Exactly." }, { "end": 359, "start": 356, "text": " Just change your random seed until you get 51%." }, { "end": 361, "start": 359, "text": " Your method works." }, { "end": 362, "start": 361, "text": " Yes, exactly." }, { "end": 366, "start": 362, "text": " And also like, you know about in finance, but it's actually state of the art, right?" }, { "end": 367, "start": 366, "text": " In what?" }, { "end": 368, "start": 367, "text": " In finance." }, { "end": 373, "start": 368, "text": " Prediction of the last, if you have a profit time series, if you predict the next time" }, { "end": 378, "start": 373, "text": " point as the last time point, that's probably the best thing you can do." }, { "end": 381, "start": 378, "text": " I'm going to switch my PhD topic." }, { "end": 385, "start": 381, "text": " Yeah, and also like some people with their fancy methods do worse." }, { "end": 390, "start": 385, "text": " Because they say, yeah, because of this and that and then it's just to be, and then..." }, { "end": 394, "start": 390, "text": " Because it's just like, you just predict whatever was there and you're good." }, { "end": 396, "start": 394, "text": " Okay, next meme." }, { "end": 397, "start": 396, "text": " Next meme." }, { "end": 400, "start": 397, "text": " Deep learning research rather than video." }, { "end": 402, "start": 400, "text": " Cheap view, cheap view, cheap view." }, { "end": 405, "start": 402, "text": " Oh, damn." }, { "end": 407, "start": 405, "text": " Too bad I don't use cheap views." }, { "end": 408, "start": 407, "text": " I will start though." }, { "end": 411, "start": 408, "text": " You know this Math Lab Deep Learning toolbox?" }, { "end": 412, "start": 411, "text": " Yeah." }, { "end": 420, "start": 412, "text": " Recently they introduced neuronal stuff with the networks and the graphs, which is basically" }, { "end": 421, "start": 420, "text": " as the brain." }, { "end": 422, "start": 421, "text": " Yeah." }, { "end": 426, "start": 422, "text": " And so basically you can learn stuff with Math Lab." }, { "end": 427, "start": 426, "text": " With Math Lab?" }, { "end": 428, "start": 427, "text": " Exactly." }, { "end": 429, "start": 428, "text": " Wow." }, { "end": 430, "start": 429, "text": " Exactly." }, { "end": 431, "start": 430, "text": " Can you learn to uninstall it?" }, { "end": 433, "start": 431, "text": " I look like all you need." 
}, { "end": 438, "start": 433, "text": " No, you don't look like an Envy that hide and not." }, { "end": 440, "start": 438, "text": " Because that's what we really want." }, { "end": 441, "start": 440, "text": " Exactly." }, { "end": 447, "start": 441, "text": " Me, I sure hope my model's error rate isn't super high." }, { "end": 449, "start": 447, "text": " Error rate." }, { "end": 450, "start": 449, "text": " Sorry." }, { "end": 453, "start": 450, "text": " So sorry." }, { "end": 455, "start": 453, "text": " Optimization is hard." }, { "end": 457, "start": 455, "text": " Yeah, it's hard." }, { "end": 458, "start": 457, "text": " Just hard." }, { "end": 461, "start": 458, "text": " You do as fancy methods and then there's SGD." }, { "end": 462, "start": 461, "text": " Yeah." }, { "end": 463, "start": 462, "text": " That beats you every time." }, { "end": 464, "start": 463, "text": " Yeah." }, { "end": 465, "start": 464, "text": " Bastard." }, { "end": 469, "start": 465, "text": " Me and the boys about to receive the Turing Award." }, { "end": 471, "start": 469, "text": " Me and the boys." }, { "end": 472, "start": 471, "text": " So fancy." }, { "end": 473, "start": 472, "text": " Yeah." }, { "end": 474, "start": 473, "text": " Look at them." }, { "end": 476, "start": 474, "text": " It's probably thinking about capsules." }, { "end": 477, "start": 476, "text": " Yeah." }, { "end": 478, "start": 477, "text": " Oh, oh." }, { "end": 480, "start": 478, "text": " But wasn't it like two years ago?" }, { "end": 481, "start": 480, "text": " Yeah." }, { "end": 482, "start": 481, "text": " Yeah." }, { "end": 483, "start": 482, "text": " What is the state of that?" }, { "end": 484, "start": 483, "text": " It's still the same." }, { "end": 486, "start": 484, "text": " He's still thinking about it." }, { "end": 487, "start": 486, "text": " Okay." }, { "end": 489, "start": 487, "text": " I didn't get what capsules are." }, { "end": 490, "start": 489, "text": " To be honest." }, { "end": 493, "start": 490, "text": " Well, they sort of are different." }, { "end": 495, "start": 493, "text": " Oh, they're different?" }, { "end": 496, "start": 495, "text": " Yeah." }, { "end": 497, "start": 496, "text": " Okay." }, { "end": 498, "start": 497, "text": " Yeah." }, { "end": 499, "start": 498, "text": " They're not like the same." }, { "end": 501, "start": 499, "text": " Ah, I see, I see." }, { "end": 506, "start": 501, "text": " So that means that they work in another way." }, { "end": 507, "start": 506, "text": " Yes, but only kind of." }, { "end": 508, "start": 507, "text": " So to do other things." }, { "end": 509, "start": 508, "text": " Sort of." }, { "end": 510, "start": 509, "text": " Sort of." }, { "end": 511, "start": 510, "text": " I see." }, { "end": 513, "start": 511, "text": " But then they do it on the same tasks." }, { "end": 515, "start": 513, "text": " Ah, I see." }, { "end": 522, "start": 515, "text": " No, they're like trying to abstract concepts into these capsules and then the capsules" }, { "end": 525, "start": 522, "text": " can route the information to other capsules dynamically." }, { "end": 526, "start": 525, "text": " Yeah." }, { "end": 527, "start": 526, "text": " Does it work?" }, { "end": 528, "start": 527, "text": " No, I don't think so." }, { "end": 529, "start": 528, "text": " Right?" }, { "end": 530, "start": 529, "text": " Kind of." }, { "end": 531, "start": 530, "text": " It kind of works." }, { "end": 532, "start": 531, "text": " Yeah." 
}, { "end": 533, "start": 532, "text": " Ah, why are people..." }, { "end": 534, "start": 533, "text": " Okay." }, { "end": 536, "start": 534, "text": " Like you can make it do something." }, { "end": 537, "start": 536, "text": " Okay." }, { "end": 538, "start": 537, "text": " Capsules." }, { "end": 539, "start": 538, "text": " Capsules." }, { "end": 540, "start": 539, "text": " And like meme." }, { "end": 543, "start": 540, "text": " My desires are unconventional." }, { "end": 547, "start": 543, "text": " So show me." }, { "end": 552, "start": 547, "text": " RTX 2060, 2070 and 2080." }, { "end": 553, "start": 552, "text": " Ah, yeah." }, { "end": 555, "start": 553, "text": " No, don't let me look at them." }, { "end": 557, "start": 555, "text": " I want them so badly." }, { "end": 560, "start": 557, "text": " I just can't." }, { "end": 564, "start": 560, "text": " Use a transformer instead of an LSTM." }, { "end": 566, "start": 564, "text": " I have failed you." }, { "end": 567, "start": 566, "text": " You again." }, { "end": 568, "start": 567, "text": " You again." }, { "end": 569, "start": 568, "text": " No." }, { "end": 572, "start": 569, "text": " RNNs must come back." }, { "end": 573, "start": 572, "text": " Yes, exactly." }, { "end": 576, "start": 573, "text": " They're too touring complete." }, { "end": 577, "start": 576, "text": " Not." }, { "end": 580, "start": 577, "text": " Assistant, remember this location." }, { "end": 582, "start": 580, "text": " Okay, I remember that." }, { "end": 584, "start": 582, "text": " What did I ask you to remember?" }, { "end": 586, "start": 584, "text": " I remember what you told me." }, { "end": 588, "start": 586, "text": " This location." }, { "end": 591, "start": 588, "text": " What does this location mean?" }, { "end": 593, "start": 591, "text": " Visitor top results." }, { "end": 597, "start": 593, "text": " Assistant, machines are about to take over the world." }, { "end": 598, "start": 597, "text": " Definitely." }, { "end": 600, "start": 598, "text": " This is this intelligence." }, { "end": 602, "start": 600, "text": " Yeah, exactly." }, { "end": 604, "start": 602, "text": " Yeah, we must be very, very careful." }, { "end": 606, "start": 604, "text": " Also with jobs and stuff." }, { "end": 608, "start": 606, "text": " What?" }, { "end": 609, "start": 608, "text": " What?" }, { "end": 611, "start": 609, "text": " You finished the memes?" }, { "end": 612, "start": 611, "text": " Not yet." }, { "end": 613, "start": 612, "text": " There's one more." }, { "end": 616, "start": 613, "text": " So I have to preface this." }, { "end": 618, "start": 616, "text": " So basically this is a..." }, { "end": 622, "start": 618, "text": " So the robot is supposed to get the ball to the target." }, { "end": 628, "start": 622, "text": " And in one setting it has a reference motion of a human doing the same thing." }, { "end": 630, "start": 628, "text": " So it learns to learn from that." }, { "end": 634, "start": 630, "text": " And then for comparison, there is no reference motion." }, { "end": 636, "start": 634, "text": " And it just learns from scratch." }, { "end": 640, "start": 636, "text": " So first is with and three times and then without." }, { "end": 642, "start": 640, "text": " With reference motion." }, { "end": 643, "start": 642, "text": " Nice." }, { "end": 644, "start": 643, "text": " Nice, yeah." }, { "end": 645, "start": 644, "text": " Wow." }, { "end": 647, "start": 645, "text": " And now without." 
}, { "end": 652, "start": 651, "text": " Get the ball there." }, { "end": 653, "start": 652, "text": " Get it there." }, { "end": 654, "start": 653, "text": " Get it there." }, { "end": 657, "start": 654, "text": " It's so cute." }, { "end": 660, "start": 657, "text": " Yes, yes." }, { "end": 662, "start": 660, "text": " We are AI Doom." }, { "end": 664, "start": 662, "text": " Yes, done already." }, { "end": 665, "start": 664, "text": " The damage is done." }, { "end": 668, "start": 665, "text": " Yeah, I mean I can see an army of robots." }, { "end": 670, "start": 668, "text": " Their arms." }, { "end": 671, "start": 670, "text": " Their guns." }, { "end": 673, "start": 671, "text": " They just take the bullet and go like..." }, { "end": 680, "start": 676, "text": " All right, this was it for episode one of Deep Learning Meme Review." }, { "end": 682, "start": 680, "text": " Thanks so much for being here with us." }, { "end": 684, "start": 682, "text": " And have a good time." } ]
nXGHJTtFYRU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Dynamic Routing Between Capsules
[ "Science & Technology" ]
[ "machine learning", "deep learning", "capsules", "capsule networks", "google brain", "hinton", "jeff hinton", "geoff hinton", "routing", "neural networks", "convolution", "convolutional neural networks", "deep neural networks", "cnns", "mnist", "multimnist", "disentanglement", "architecture", "reconstruction", "alternative", "dnn", "ml", "ai", "artificial intelligence", "brain", "visual system", "classifier", "image", "nonlinearity", "entities", "objects", "capsule", "network" ]
Geoff Hinton's next big idea! Capsule Networks are an alternative way of implementing neural networks by dividing each layer into capsules. Each capsule is responsible for detecting the presence and properties of one particular entity in the input sample. This information is then allocated dynamically to higher-level capsules in a novel and unconventional routing scheme. While Capsule Networks are still in their infancy, they are an exciting and promising new direction. Abstract: A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule. Authors: Sara Sabour, Nicholas Frosst, Geoffrey E Hinton https://arxiv.org/abs/1710.09829 YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Minds: https://www.minds.com/ykilcher BitChute: https://www.bitchute.com/channel/10a5ui845DOJ/
Hi there! Today we're looking at dynamic routing between capsules by Sara Sabour, Nicholas Frost and Jeffrey Hinton of Google Brain. This paper is a bit older but it's made quite the impact at the time and so we'll go through it. I find this pretty hard paper to read and kind of understand because a lot of things are very implicit and hand wavy. So we'll kind of go through it and try to get the best out of it, try to explain what capsules are and what they do and how they stack against current networks. So capsule network in essence is a new type of neural network made of capsules. And here it says a capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. Kind of cryptic but so what they're saying is that in a capsule network, let me try to draw one here actually, in a capsule network you have what's called capsules. Capsules you can imagine as just little blobs of things right? And they're also ordered in layers in this case. Let's actually leave away the second layer. And each of these of these capsules will correspond to an entity in the input. Let's say the input is an image. So somewhere here there is an image right? Then maybe this capsule here will be responsible for detecting is there a wall in the image. And this one will be responsible for detecting is there a roof. This one will be is there a door. And this one will be responsible for detecting is there a lake in the image right? So now each of these each of these capsules can for on one hand can either be high or low. So if you if you imagine now a situation where wall high, roof high, door high, lake low. It means probably the image has a house on it right? But second of all not only can it predict whether or not a given entity is present in an image but the individual capsules are also responsible for encoding the exact way or shape or form that this entity takes. So the wall could have different aspects such as color color green. It could have size tall. It could have orientation. orientation is like I don't know vertical. Cool. Then roof could have angle right? Angle wide. So it's a wide roof or a flat roof right? These are these are kind of attributes of these things that also the capsules would encode. So ultimately what these capsules that they are proposing will output is the roof capsule here for example would output a vector. So the output of the roof capsule is a let me draw a coordinate system is a vector. Now the length of the vector will represent so that the length draw this norm here will represent the probability that the roof is in the image. That there is a roof in an image right? The roof is element of this input image. This is simply the length and the individual coordinates will encode these attributes. So this here for example this axis could be the angle of the roof and this axis could be the color. Let's say just that the angle is like some degree number that can be positive or negative. Maybe a roof can be like this. Right this so this is but in essence this is a flat roof and this is a very narrow angle roof. So you can imagine something like this and then the color could also be maybe parameterized on a one-dimensional. It can have more dimensions than two I just can't draw more. So the depending on where this where this arrow now points the for example this vector here has the same probability that there is a roof in the image like if the output is this but the color will be different. 
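To make this vector-output idea concrete, here is a minimal sketch in NumPy of how one would read a single capsule's output vector: its length as the presence probability and its direction as the attributes. The "roof" capsule, its two attribute axes, and the example numbers are made up for illustration and are not from the paper.

```python
import numpy as np

# Hypothetical output of a "roof" capsule: a 2-d vector whose length encodes
# the probability that a roof is present in the input, and whose direction
# encodes the roof's attributes (say, angle on one axis and color on the other).
# The network's non-linearity (discussed below) keeps lengths in [0, 1).
roof_output = np.array([0.6, -0.3])

presence_prob = np.linalg.norm(roof_output)      # vector length ~ P(roof present)
attributes = roof_output / presence_prob         # unit direction ~ pose/attributes

print("P(roof present):", round(float(presence_prob), 2))
print("attribute direction (angle-ish, color-ish):", attributes)
```

A shorter vector pointing in the same direction would describe the same kind of roof, just with a lower probability that it is actually there.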
The angle will be the same because they're roughly on the same this axis here but the color of this will encode a different different colored roof. And then if the vector is something like this a very short vector it will encode the same the same angle and color directions. So maybe I shouldn't say the position on the axis it's more like this angle and this this angle that encode the attributes. So the kind of the angular components if you will encode the attributes and the length encodes the probability. So this small vector has the same direction in terms of color and angle of the roof but it's much less probable much less likely. So this if the capsule outputs the little blue vector here it says well if there is a roof it's going to be this color in this angle but I'm really that really don't think there's a roof in this image. Whereas if it outputs the large green one then it says I'm pretty sure that there's a roof and it's going to be this angle and this this this angle and this color. Alright so that's that is what each capsule is supposed to do. Each capsule takes the input and outputs a vector that encodes if the entity that the capsule is responsible for is present in the image A and B what properties this entity has. And then we get to the point where there's the next layer of capsules. So the next layer of capsules takes information that each capsule here takes information from each capsule in the lower layer like like you're used to from your neural network and integrates this information and we'll talk about how this works. It integrates all of this information right all of these are vectors now that come from the lower integrates all of this information and again each capsule in this next layer is responsible for a entity. Now these entities in the higher layers are usually composite entities of the lower layers. So this one here could be responsible for house, this one could be responsible for national park, national park and this one could be responsible for beach or something like this right. And then each of these will integrate all of this information from the lower layers and then come up with their own output vector encoding whether or not a given entity is present in the in the image. Of course the house class will pick up if there is a door a roof and a wall in the image the house classes will pick up on that or that's how it's meant to work house class is meant to pick up on that and then itself output a large vector saying there's probably a house in this in this image. So each of these capsules in by itself is responsible for encoding the presence and attributes of a object or object part or entity or part of entity in the given input data. And of course the last layer here it will simply be your classification layer. So in the last layer you have as many capsules as you have classes in your classification task. So this is mainly for a classification task and then you can classify and you can kind of train the whole system like this. So how exactly this happens we'll see next. Alright so they make kind of analogies to the visual system and so on. We'll jump these you can everyone that does deep learning in some way is trying to to make that. We're rather going to the specifics of how these capsules work and how their specific suggestions for them. Note that they say this is in no way the only implementation of capsules. It's just kind of an example to show how one could do it. Alright so first of all they present their what you might call non-linearity. 
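Before going into the details below, here is a rough NumPy sketch of the two mechanisms the rest of this explanation walks through: the squashing non-linearity and the routing-by-agreement loop between two capsule layers. This is a simplified illustration under assumptions (fully-connected rather than convolutional capsules, no batching, arbitrary toy dimensions), not the paper's reference implementation.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squashing non-linearity: v = (|s|^2 / (1 + |s|^2)) * (s / |s|).
    Long vectors are squashed to length close to 1, short vectors to near 0."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_routing(u, W, n_iters=3):
    """Routing-by-agreement between one capsule layer and the next.

    u: outputs of the lower capsules, shape (n_lower, d_lower)
    W: learned transformation matrices, shape (n_lower, n_upper, d_upper, d_lower)
    Returns the outputs of the upper capsules, shape (n_upper, d_upper).
    """
    n_lower, n_upper = W.shape[0], W.shape[1]

    # "Predictions" u_hat[i, j] = W_ij @ u_i: what lower capsule i would like
    # upper capsule j to be, i.e. capsule i seen from capsule j's perspective.
    u_hat = np.einsum('ijkl,il->ijk', W, u)

    # Routing logits start at zero, so at first every lower capsule sends its
    # prediction equally to all parents.
    b = np.zeros((n_lower, n_upper))
    for _ in range(n_iters):
        c = softmax(b, axis=1)                     # routing coefficients per lower capsule
        s = np.einsum('ij,ijk->jk', c, u_hat)      # weighted sum of predictions per parent
        v = squash(s)                              # parent outputs, length in [0, 1)
        b = b + np.einsum('ijk,jk->ij', u_hat, v)  # agreement: inner product <u_hat_ij, v_j>
    return v

# Toy example: 3 lower capsules (8-dim) routed to 2 parent capsules (16-dim).
rng = np.random.default_rng(0)
u = rng.normal(size=(3, 8))
W = rng.normal(scale=0.1, size=(3, 2, 16, 8))
v = dynamic_routing(u, W, n_iters=3)
print(np.linalg.norm(v, axis=1))   # presence probabilities of the two parent entities
```

The key design choice is that the routing coefficients c are recomputed inside every forward pass (the inner loop above), while only the transformation matrices W are learned by gradient descent.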
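The training objective discussed further below, a per-class margin loss on the lengths of the final capsules plus a scaled-down L2 reconstruction loss, might be sketched like this. The margin values m+ = 0.9 and m- = 0.1, the lambda of 0.5 and the 0.0005 reconstruction scale are the ones reported in the paper; everything else here is an illustrative sketch rather than the authors' code.

```python
import numpy as np

def margin_loss(v_lengths, labels, m_pos=0.9, m_neg=0.1, lam=0.5):
    """Margin loss on the lengths of the final (per-digit) capsules.

    v_lengths: (n_classes,) lengths of the final capsule output vectors
    labels:    (n_classes,) one-hot vector, T_k = 1 if digit k is present
    L_k = T_k * max(0, m_pos - |v_k|)^2 + lam * (1 - T_k) * max(0, |v_k| - m_neg)^2
    """
    present = labels * np.maximum(0.0, m_pos - v_lengths) ** 2
    absent = lam * (1.0 - labels) * np.maximum(0.0, v_lengths - m_neg) ** 2
    return np.sum(present + absent)

def reconstruction_loss(reconstruction, image, scale=0.0005):
    """Scaled-down L2 loss between the decoder output and the input image,
    kept small so it does not dominate the margin loss during training."""
    return scale * np.sum((reconstruction - image) ** 2)
```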
So their non-linearity what it needs to do is if you look at these capsule networks the outputs here the length of the outputs of these vectors right they're supposed to represent probabilities and as such they they need to be so here it roof this door maybe a vector like this wall maybe a vector like that. So initially we simply specify the output is a vector and in essence these capsules are implemented in much the same way like your classic neural network layer would be implemented. So each of these capsules will be in essence a neural network layer by itself that outputs a vector. There's nothing constraining the length of the vector initially so their non-linearity does constrain the vector to be of maximum length 1 and of minimum length 0. That's this non-linearity here. So S here is the unscaled output of the capsule and you can see here if the length of S gets close to 1 or sorry gets really large then this here becomes irrelevant. This whole term will be 1 and then the length of the final output of V here will be 1. Right so if this is very large then the the length of the scaled output will be 1 however if the if the length is really small of the original output so if this goes towards 0 then this becomes irrelevant this becomes irrelevant this will go towards 0 and the entire length will go towards 0. So this is kind of a nice way to scale these outputs always to be between length 0 and 1. Then next thing is so how this I find I find the the most complicated part right so we'll jump ahead actually to how a capsule's network is implemented and this is the the capsule network they implement so first it's an MNIST classifier you have an MNIST image here and it first goes through a simple convolutional layer that's that's nothing new this is a classic convolutional layer is there's 256 channels it has a 9 by 9 filters and stride 1 so it will output a 20 by 20 time by 256 tensor then each of these so each of the outputs here is sent to each of these capsules and now they're convolutional capsules so that makes it a bit more complicated but don't you know don't worry primarily about them being convolutional capsules the analogy is exactly as in a classic neural network you can implement these capsules as void-feed-forward capsules or as convolutional capsules and maybe also as transformer capsules I don't think anyone's done that all right there's a paper for you the so you'll send you'll send the output of this convolution layer to each capsule and then you have basically just two layer of capsules here the first layer consists of 32 what they call primary caps sorry the these 32 capsules each will output an eight dimensional vector and I'm simplifying here it's it's convolutional but they will just for simplest they will each output an eight dimensional vector right and these are exactly as we said before so each of these will be responsible ultimately for a given entity or part of entity being there like in MNIST this could be is there a little curve on the bottom left side right this might indicate the presence of a six or an eight something like this and then the these capsules here each is they represented as a row so each of these rows here is a capsule and we have ten of these and these are your simply your final classification capsules so each capsule is responsible for indicating the presence or absence of one particular class of digits so this will be of a one of a two of a three of a four and so on of a zero I guess somewhere as well so these are ten capsules and the 
question is how does information go from a capsule here from the output of a capsule or to any of capsule here and the easy way to do this is simply to say as in a classical neural network the output here simply goes to the input here just you just put it there basically on on unchanged now there is a bit of an issue here with the dimensions but you can simply say well we simply put a weight matrix in to route into the capsules but the idea of these capsules and this paper is to say wait wait these capsules actually we want to make them decide to which capsule in the next layer will they send their input right so the capsules can kind of decide where they want to send their output to like where is this where is the capsule that detects the maybe this one detects is there a line in the right side of the image right indicating maybe a seven or a one this is probably most relevant for the one class and for the seven class so it might decide to route its output there and the idea of how this routing happens is basically the topic of this paper so the the capsules route their output to the appropriate next layers capsules how is this done all right this is done via the what's called the routing mechanism that I find it quite poorly described here so I will simply draw it I will simply try to make it up all right so we have capsules and as I've drawn them before right we have one two three capsules and we maybe have two parent capsules each of these capsules here will output a vector as we said and we'll only do it for this this one sorry vector here so this will output this vector and needs to decide where to here or to here do I send to this output now what it does is there is an iterative procedure that has multiple steps and this is I think this is at least the way I understand I think the important part to understand is that if we forward pass data through this network it actually doesn't go forward in a straight line what it actually does is it goes through a layer and then it does multiple steps in between layers until it has decided where it wants to go in the next layer and then it goes on to the next layer and if there's another capsule layers it does again multiple steps before it goes on so that's that's my take on it and the multiple steps are as follows first I'll send my output vector to to all of the all of the layers like equally all of the parent capsules and so will will everyone else right everyone will send theirs equally to the parent now this isn't just done and this may be here this isn't just done just by sending it but this is actually done by modulation of weight matrices so each thing here if this is capsule I and this is capsule J there is a weight matrix in between W I J that is learned right this is a static weight matrix and each one of these red red arrows you see here has such a weight matrix attached to it so each each line you see here is actually modulated by such a weight matrix so there is an a quadratic number of these weight matrices flying around and this will also then allow you that maybe this vector is eight dimensional but the input vector here is 16 dimensional what we saw before all right so the out the input of capsule J here it will receive let's see what it receives it will receive the output of capsule will the output of capsule 1 V 1 modulated by the let's let's call this yeah let's call this J modulated by 1 J W 1 J and it will also receive this is a set the output of capsule 2 modulated by the weight matrix for sorry weight matrix for 
capsule 2 and so on now what it does is it adds this these all up into a soft max so sorry let's write this so soft it will add those all up in a soft max weighted fashion so it will actually compute a a weighted average of those now the weights at the beginning are are just one because it gets each from each lower capsule it gets equal amount of this vector but then this will give you an output so this will give you some output let's put this in green this will give you an output that's I don't know how they call it in the paper let's just call it O J right and then what you do is all right you compare how much do each of the individual contributions agree with OJ so you actually compute for each of these you would compute the inner product so you would compute the inner product of W 1 J V 1 with OJ and you would compute the inner product of W 2 J V 2 with OJ all right the inner product and then these inner products here will become the weighting coefficients for the soft max in the next iteration all right so this I mean this this is a bit convoluted but ultimately what you're saying is if you're a capsule here you'll send your output forward you have an output you send it forward right to the other capsule and the other capsule will so this is this is your output and we'll forget about this weight matrix 6 for now this is your up the other capsule will output its own its own output computed from the lower layers now we do an iteration again if your output now aligns with this you will send more of it and these these two that I've drawn here actually align pretty well right so you'll send more of it is more more more right and now maybe the output that next computed output of the same capsule will be even more in that direction because you've contributed more right you'll send more and then you're like in the next iteration wow these two are really equal sorry this should be red here your ears just keeps being the same and then you say well I'm gonna send even more to that one right whereas another capsule that it's whose initial output was basically whose initial output was basically like this it will by itself compute the inner product with the original this original it will send it here right it will compute the inner product with the original output and it will realize well these do not align very much and then it will send less right it will send less to the next step and because it sends less in the next step of course the output will then probably align even less with that vector and then it will send less and less and less so this is called dynamic routing the the idea behind it is kind of that you route by agreement so you will route to the parent capsules that agree with your output and by agreement we mean kind of the inner product is high after modulating by this weight matrix and that sort of so that basically means this weight matrix is responsible for deciding which information is relevant together whenever you have two vectors that align in the same layer then the in the sense of the capsule networks those represent the same kind of information and those will be routed together to the same capsule in terms of the examples we made maybe if a door and a roof is present then these these these weight matrices that connect door and roof to the house class they will transform a high vector in door and roof into aligning vectors for the house class and thereby saying look these two if I look at them through if I look at a door and a roof through the perspective of trying to be a 
house right then they are in much agreement on the presence of a house so if I am a house right I am a house and I look at a door and I look at a roof through the kind of from the perspective of being a house right this is this is what these weight matrices do they always have a perspective of the parent capsule then these two things they make a lot of sense together and thus I will route them to the same place so they can both contribute to their being a house now from the perspective of a house if I look at a little beach with a tree on it right then that does not that is not the same that does not really is not the same information as a door or a roof so I will not route this to the house in the in the same strength that is sort of the best way I have of explaining it how these capsules work basically the lower entities will always be routed for the relevance of the higher entities that are trying to are trying to combine the lower entities if that wasn't it's not entirely clear to me either yet but it's the best shot I I can give and the routing is here formalized I find it hard to follow the important thing is that there is an inner loop in all of this so there is an like kind of an an inner iteration and this inner iteration is computed in every forward pass and so these routing where the information goes in the next layer that is only the prior probability for that is learned but the actual routing coefficients those are dynamically computed in every forward pass so every forward pass goes it goes information goes through a layer then it goes multiple steps between two layers until it decides exactly what the distribution for the next layer is and then the next layer computes its outputs and that goes again multiple steps between these layers and the next layer so that's the the basic thing to remember there's also some normalization involved the squash is the non-linearity we discussed so what do they actually train now at the end here they have a they have these ten capsules and each capsule will be responsible for recognizing one the presence of one digit in the MNIST data set of course and so what they do is they take the length of these vectors that are output by these capsules these capsules are feed-forward capsules as opposed to the convolutional capsules here so the feed-forward capsules output again a vector the length of this vector is taken and then it's basically trained like you would train a regression problem and the loss here is specified up here so if the if the image actually does contain this if the training label actually has this digit present this T here encodes that so if if K let's say K is 2 right so if K 2 if there is a 2 in the image when we know that because it's a training image then the length of the output of capsule number 2 should be high and this simply encodes that it should be very close to this M plus an M plus here is that I think they said it to 0.9 so they say you should be the length should be as close as possible to 0.9 whereas if the 2 is not present then TK will be 0 then this part will be active so it's only one of these two parts will be active then the length of the vector so of capsule number 2 should be close to this M negative which is 0.1 it's basically a regression problem saying if if there if the given entity is in the image then please make the length as close as possible to 0.9 and if it's not make it as close as possible to 0.1 so this this is a classic say regression loss on the length of the output vectors the the lambda is 
just a factor to dampen the contribution for all the negative classes with respect to the one positive class of course per capsule it turns out this is actually not enough so this will be the classification output but it seems not enough they don't say it's not enough but they simply say we additionally do the following so what they also do is they introduce a reconstruction loss now if this model is trained correctly then these capsules here these last capsules especially this one maybe that's the capsule corresponding to the class of the digit 8 will not only encode if an 8 is there or not as in the length of the vector output but it will also encode the properties of eights it is a 16 dimensional vector so it will encode hopefully things like the stroke width then it might encode maybe the rotation of the digit then it might control the tightness of the loop so you can have an 8 with very large loops or you can have an 8 sorry this is a smaller eight you can have an 8 with very tight loops so it might you know encode things like this so technically it will be possible to reconstruct from this description reconstruct say the width is high the rotation is zero and the tightness is low then maybe I have a widely stroked not tight 8 that is not rotated right so it should be possible to reconstruct this and they do exactly that so they take this last capsule of the class that is the actual training label that's called the reconstruction target and they feed this to a simple feed-forward neural network that at the end you see this is exactly the MNIST size will try to reconstruct the image so if the image here this image goes in then it goes all through here it will take the class 4 here feed it through this network reshape it to an image again and hopefully what will come out is again this 4 here and it will then have an auxiliary loss in addition to this classification loss here an auxiliary loss that tries to reconstruct the original image right and that's simply I believe it's just an L2 reconstruction loss that is scaled down so that it doesn't dominate so they also train the network basically to reconstruct this and I believe they do this because the length isn't quite enough to make it do what they want it to do thus by having this reconstruction here they really kind of enforce that the individual capsules the individual dimensions must encode some kind of information about the original image and since the original images in the MNIST data set at least vary by those things by stroke width by rotation by tightness that will by this loss be reflected in the reconstruction all right so how are they doing here you see different examples of inputs and then reconstructed outputs and this you know seems pretty good actually so you see here all of these the input image is reconstructed fairly well so the numbers up here on the far right are the failure cases here the input image is a five labeled in the training data but the network actually classifies it as a three but then you now have two choices right this is the same sample you have two choices for reconstruction either you take the capsule that is actually you know the true capsule that should be activated and you reconstruct from that or you reconstruct from the capsule that the network says it classifies it as so here it mixed up a five for a three if you still take the five capsule and
It turns out this is actually not enough. This would be the classification output, but it seems not enough; they don't say it's not enough, but they simply say, we additionally do the following. What they also do is introduce a reconstruction loss. Now, if this model is trained correctly, then these last capsules, say the capsule corresponding to the class of the digit 8, will not only encode whether an 8 is there or not (as the length of the output vector), but will also encode the properties of eights. It is a 16-dimensional vector, so it will hopefully encode things like the stroke width, maybe the rotation of the digit, maybe the tightness of the loops; you can have an 8 with very large loops or an 8 with very tight loops. So it should technically be possible to reconstruct the digit from this description: say the width is high, the rotation is zero and the tightness is low, then maybe I have a widely stroked, not tight 8 that is not rotated. And they do exactly that. They take the capsule of the class that is the actual training label (that's called the reconstruction target) and feed it to a simple feed-forward neural network that, at the end, you see, has exactly the MNIST size and will try to reconstruct the image. So if this image of a 4 goes in, it goes all through the network, they take the capsule for the 4, feed it through this decoder network, reshape it to an image again, and hopefully what comes out is again this 4. They then have an auxiliary loss, in addition to the classification loss, that tries to reconstruct the original image. I believe it's just an L2 reconstruction loss that is scaled down so that it doesn't dominate. So they also train the network to reconstruct the input, and I believe they do this because the length alone isn't quite enough to make it do what they want it to do. By having this reconstruction, they really enforce that the individual capsules, the individual dimensions, must encode some kind of information about the original image; and since the original images in the MNIST data set vary by exactly those things, by stroke width, by rotation, by tightness, this will be reflected in the reconstruction.
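As a rough sketch of what that reconstruction branch might look like, here is a small PyTorch module. The layer sizes (512, 1024, then 784 with a sigmoid) and the 0.0005 scaling of the reconstruction term are the values I recall from the paper; treat them, and all names here, as illustrative rather than as the exact reference implementation.

```python
import torch
import torch.nn as nn

class ReconstructionDecoder(nn.Module):
    # Decodes the 16-dim activity vector of the true-class digit capsule
    # back into a flattened 28x28 MNIST image.
    def __init__(self, capsule_dim=16, image_size=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(capsule_dim, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, image_size), nn.Sigmoid(),
        )

    def forward(self, true_class_capsule):
        return self.net(true_class_capsule)

def total_loss(margin_loss, images, reconstructions, recon_weight=0.0005):
    # Sum-of-squared-errors reconstruction term, scaled down so that it
    # does not dominate the margin (classification) loss.
    recon = ((reconstructions - images.flatten(start_dim=1)) ** 2).sum(dim=-1).mean()
    return margin_loss + recon_weight * recon
```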
All right, so how are they doing? Here you see different examples of inputs and then reconstructed outputs, and this actually seems pretty good; the input images are reconstructed fairly well. The ones on the right here are the failure cases. Here the input image is a five, labeled as such in the training data, but the network actually classifies it as a three. Now, for this same sample, you have two choices for reconstruction: either you reconstruct from the capsule that is actually the true capsule that should be activated, or you reconstruct from the capsule that the network classified it as. So here it mixed up a five for a three. If you take the five capsule and reconstruct, you see the result still looks like the original image, but it looks much more like a five; and if you take the three capsule to reconstruct, which is what the network classified this as, it still looks like the original image, but much more like an actual three: it is missing the part up here, whereas the other one is missing this part here. So the network really seems to learn the different variations of these digits, and in an ambiguous case such as this one it can actually go either way, and it can reconstruct the original input under either interpretation, once as a three and once as a five. It would be interesting to see what the actual lengths of the vectors of both of these mixed-up classes were.

Here they compare their accuracies. They have a baseline model, which I believe is just a CNN, where they get a decent error, and then the capsule networks get a lower error. You see that as you add the reconstruction loss and as you add more routing (one step of routing simply means you send your output equally to each parent, as in the classical neural network case, whereas with three steps of routing the error drops even lower), they end up roughly on par with baseline CNNs on MNIST.

They also explore what their capsules learn. As I said, the individual dimensions of a capsule should encode properties of the variations of the class samples, and here they explore this in the different capsules: they change some dimensions and run the result through their reconstruction network, and indeed they discover that there is something like a scale-and-thickness dimension, a stroke-thickness dimension, a skew dimension, and so on, width and translation. This is pretty remarkable: these networks, if you train them in this way, really seem to learn about the entities and about the properties of the entities, and that seems quite interesting. You also see that everything here stays well within the class that the capsule is assigned to. There is also this robustness to affine transformations, where they improve over the baseline; that is kind of an auxiliary experiment.

The next interesting experiment is what they call the MultiMNIST experiment. The MultiMNIST experiment is done by taking two different MNIST digits and basically just overlapping them, shifted slightly, but as you see here they are overlapped heavily, and the task of the network is to figure out which two overlapping digits are in the image. The capsule network is very good at doing this, and better than the baselines, because the capsule network simply encodes the presence and properties of a particular instance in the image. If you take the top two capsules by length and then reconstruct from those independently, you can basically segment the image, and you see this here: the different colorations come from two different reconstructions of the image from two different capsules, so green is from one capsule and red from the other. The network correctly identifies that it's a 6 and a 0, and it correctly identifies not only which pixels belong to the 6 and which belong to the 0, but also pixels that belong to both. So that's not a problem if you use capsule networks. It is notable that the way they train this is by only reconstructing one digit at a time, so the premise of the data set is that you actually have access to the underlying individual digit images while training, not only to this combined label, but that's a detail. Here are some failure cases where it misclassifies, or where you mis-specify the capsules, and it's then unable to assign the pixels of the misclassified digit properly. It's quite interesting to look at the failure cases, but I find it more interesting to look at the success cases, and the ease with which the capsule networks can do this simply by how they're structured.

All right, so lastly they also experiment on CIFAR-10, and interestingly the CIFAR-10 experiments show that the capsule networks don't perform as well there. As you know, CIFAR-10 is a data set that is about the same size as MNIST, but it is, first of all, in color, and second of all, natural images, so there is quite a bit of clutter. It's not a black background with white digits; there's a sky in an image, there are lots of things going on, there's my tree and there's stuff here and stuff there, and the capsule networks like to account for things in the image. They like to have a capsule corresponding to everything that's going on here and here and here. If the whole background is black, that is not a problem, you can simply account for the background; but if there are lots of things going on, then these capsule networks get a bit over-explanatory: they want to explain everything, and that degrades the performance. Now this paper basically says you can have something like a none-of-the-above category, and they found that it helped to introduce that. In my opinion, the solution will be more towards the introduction of a better loss function, such that you don't need to explain the entire image, rather than what you do here, which is simply to explain things away by saying they're none of the above; but it's incredibly hard to balance that, in my opinion.

All right, so that is basically the end of this. They have a discussion where they compare capsules against other related work, but I hope that you got an overview of how this works now, as much as possible. And with that, that was it for me, and thanks for watching. Bye bye.
[ { "end": 6, "start": 0, "text": " Hi there! Today we're looking at dynamic routing between capsules by Sara Sabour," }, { "end": 11.96, "start": 6, "text": " Nicholas Frost and Jeffrey Hinton of Google Brain. This paper is a bit older" }, { "end": 18.8, "start": 11.96, "text": " but it's made quite the impact at the time and so we'll go through it. I find" }, { "end": 22.92, "start": 18.8, "text": " this pretty hard paper to read and kind of understand because a lot of things" }, { "end": 31.400000000000002, "start": 22.92, "text": " are very implicit and hand wavy. So we'll kind of go through it and try to get the" }, { "end": 35.96, "start": 31.400000000000002, "text": " best out of it, try to explain what capsules are and what they do and how" }, { "end": 41.44, "start": 35.96, "text": " they stack against current networks. So capsule network in essence is a" }, { "end": 46.32000000000001, "start": 41.44, "text": " new type of neural network made of capsules. And here it says a capsule is a" }, { "end": 50.120000000000005, "start": 46.32000000000001, "text": " group of neurons whose activity vector represents the instantiation" }, { "end": 56.12, "start": 50.12, "text": " parameters of a specific type of entity such as an object or an object part. Kind" }, { "end": 63.12, "start": 56.12, "text": " of cryptic but so what they're saying is that in a capsule network, let me try to" }, { "end": 68.52, "start": 63.12, "text": " draw one here actually, in a capsule network you have what's called capsules." }, { "end": 75.36, "start": 68.52, "text": " Capsules you can imagine as just little blobs of things right? And they're also" }, { "end": 81.4, "start": 75.36, "text": " ordered in layers in this case. Let's actually leave away the second layer. And" }, { "end": 89.8, "start": 81.4, "text": " each of these of these capsules will correspond to an entity in the input." }, { "end": 94.52, "start": 89.8, "text": " Let's say the input is an image. So somewhere here there is an image right?" }, { "end": 101.44, "start": 94.52, "text": " Then maybe this capsule here will be responsible for detecting is there a" }, { "end": 108.24, "start": 101.44, "text": " wall in the image. And this one will be responsible for detecting is there a" }, { "end": 117.16, "start": 108.24, "text": " roof. This one will be is there a door. And this one will be responsible for" }, { "end": 125.56, "start": 117.16, "text": " detecting is there a lake in the image right? So now each of these each of these" }, { "end": 133.2, "start": 125.56, "text": " capsules can for on one hand can either be high or low. So if you if you imagine" }, { "end": 142.32, "start": 133.2, "text": " now a situation where wall high, roof high, door high, lake low. It means" }, { "end": 150.56, "start": 142.32, "text": " probably the image has a house on it right? But second of all not only can it" }, { "end": 156.76, "start": 150.56, "text": " predict whether or not a given entity is present in an image but the individual" }, { "end": 162.68, "start": 156.76, "text": " capsules are also responsible for encoding the exact way or shape or form" }, { "end": 169.24, "start": 162.68, "text": " that this entity takes. So the wall could have different aspects such as color" }, { "end": 181.72, "start": 169.24, "text": " color green. It could have size tall. It could have orientation. orientation is" }, { "end": 191.96, "start": 181.72, "text": " like I don't know vertical. Cool. Then roof could have angle right? Angle wide." 
}, { "end": 196.64000000000001, "start": 191.96, "text": " So it's a wide roof or a flat roof right? These are these are kind of attributes" }, { "end": 203.23999999999998, "start": 196.64, "text": " of these things that also the capsules would encode. So ultimately what these" }, { "end": 209.6, "start": 203.23999999999998, "text": " capsules that they are proposing will output is the roof capsule here for" }, { "end": 215.56, "start": 209.6, "text": " example would output a vector. So the output of the roof capsule is a let me" }, { "end": 223.76, "start": 215.56, "text": " draw a coordinate system is a vector. Now the length of the vector will" }, { "end": 231.23999999999998, "start": 223.76, "text": " represent so that the length draw this norm here will represent the probability" }, { "end": 238.72, "start": 231.23999999999998, "text": " that the roof is in the image. That there is a roof in an image right? The roof is" }, { "end": 245.16, "start": 238.72, "text": " element of this input image. This is simply the length and the individual" }, { "end": 250.32, "start": 245.16, "text": " coordinates will encode these attributes. So this here for example this axis could" }, { "end": 257.44, "start": 250.32, "text": " be the angle of the roof and this axis could be the color. Let's say just that" }, { "end": 262.12, "start": 257.44, "text": " the angle is like some degree number that can be positive or negative. Maybe a" }, { "end": 268.84, "start": 262.12, "text": " roof can be like this. Right this so this is but in essence this is a flat roof" }, { "end": 273.68, "start": 268.84, "text": " and this is a very narrow angle roof. So you can imagine something like this and" }, { "end": 277.8, "start": 273.68, "text": " then the color could also be maybe parameterized on a one-dimensional. It" }, { "end": 282.36, "start": 277.8, "text": " can have more dimensions than two I just can't draw more. So the depending on" }, { "end": 289.8, "start": 282.36, "text": " where this where this arrow now points the for example this vector here has the" }, { "end": 294.92, "start": 289.8, "text": " same probability that there is a roof in the image like if the output is this but" }, { "end": 298.6, "start": 294.92, "text": " the color will be different. The angle will be the same because they're roughly" }, { "end": 303, "start": 298.6, "text": " on the same this axis here but the color of this will encode a different" }, { "end": 310.32, "start": 303, "text": " different colored roof. And then if the vector is something like this a very" }, { "end": 320.64, "start": 310.32, "text": " short vector it will encode the same the same angle and color directions. So maybe" }, { "end": 325.8, "start": 320.64, "text": " I shouldn't say the position on the axis it's more like this angle and this this" }, { "end": 330.4, "start": 325.8, "text": " angle that encode the attributes. So the kind of the angular components if you" }, { "end": 334.12, "start": 330.4, "text": " will encode the attributes and the length encodes the probability. So this" }, { "end": 339.59999999999997, "start": 334.12, "text": " small vector has the same direction in terms of color and angle of the roof but" }, { "end": 345.08, "start": 339.59999999999997, "text": " it's much less probable much less likely. 
So this if the capsule outputs the" }, { "end": 350.59999999999997, "start": 345.08, "text": " little blue vector here it says well if there is a roof it's going to be this" }, { "end": 354.52, "start": 350.59999999999997, "text": " color in this angle but I'm really that really don't think there's a roof in" }, { "end": 360.35999999999996, "start": 354.52, "text": " this image. Whereas if it outputs the large green one then it says I'm pretty" }, { "end": 365.2, "start": 360.36, "text": " sure that there's a roof and it's going to be this angle and this this this" }, { "end": 370.76, "start": 365.2, "text": " angle and this color. Alright so that's that is what each capsule is supposed to" }, { "end": 378.2, "start": 370.76, "text": " do. Each capsule takes the input and outputs a vector that encodes if the" }, { "end": 383.04, "start": 378.2, "text": " entity that the capsule is responsible for is present in the image A and B" }, { "end": 389.76, "start": 383.04, "text": " what properties this entity has. And then we get to the point where there's the" }, { "end": 394.92, "start": 389.76, "text": " next layer of capsules. So the next layer of capsules takes information that each" }, { "end": 402.4, "start": 394.92, "text": " capsule here takes information from each capsule in the lower layer like like" }, { "end": 407.36, "start": 402.4, "text": " you're used to from your neural network and integrates this information and" }, { "end": 411.4, "start": 407.36, "text": " we'll talk about how this works. It integrates all of this information right" }, { "end": 415.88, "start": 411.4, "text": " all of these are vectors now that come from the lower integrates all of this" }, { "end": 422.4, "start": 415.88, "text": " information and again each capsule in this next layer is responsible for a" }, { "end": 427.84, "start": 422.4, "text": " entity. Now these entities in the higher layers are usually composite entities of" }, { "end": 436.36, "start": 427.84, "text": " the lower layers. So this one here could be responsible for house, this one could" }, { "end": 444.4, "start": 436.36, "text": " be responsible for national park, national park and this one could be" }, { "end": 451.08, "start": 444.4, "text": " responsible for beach or something like this right. And then each of these will" }, { "end": 456.23999999999995, "start": 451.08, "text": " integrate all of this information from the lower layers and then come up with" }, { "end": 461.4, "start": 456.23999999999995, "text": " their own output vector encoding whether or not a given entity is present in the" }, { "end": 469, "start": 461.4, "text": " in the image. Of course the house class will pick up if there is a door a roof" }, { "end": 473.35999999999996, "start": 469, "text": " and a wall in the image the house classes will pick up on that or that's" }, { "end": 477.40000000000003, "start": 473.36, "text": " how it's meant to work house class is meant to pick up on that and then itself" }, { "end": 483, "start": 477.40000000000003, "text": " output a large vector saying there's probably a house in this in this image." }, { "end": 488.56, "start": 483, "text": " So each of these capsules in by itself is responsible for encoding the presence" }, { "end": 494.96000000000004, "start": 488.56, "text": " and attributes of a object or object part or entity or part of entity in the" }, { "end": 500.04, "start": 494.96000000000004, "text": " given input data. 
And of course the last layer here it will simply be your" }, { "end": 505.32, "start": 500.04, "text": " classification layer. So in the last layer you have as many capsules as you" }, { "end": 511.08000000000004, "start": 505.32, "text": " have classes in your classification task. So this is mainly for a" }, { "end": 517.84, "start": 511.08000000000004, "text": " classification task and then you can classify and you can kind of train the" }, { "end": 525.48, "start": 517.84, "text": " whole system like this. So how exactly this happens we'll see next." }, { "end": 533.96, "start": 525.48, "text": " Alright so they make kind of analogies to the visual system and so on." }, { "end": 541.6, "start": 533.96, "text": " We'll jump these you can everyone that does deep learning in some way is trying" }, { "end": 547.64, "start": 541.6, "text": " to to make that. We're rather going to the specifics of how these capsules work" }, { "end": 553.8000000000001, "start": 547.64, "text": " and how their specific suggestions for them. Note that they say this is in no" }, { "end": 558.92, "start": 553.8, "text": " way the only implementation of capsules. It's just kind of an example to show how" }, { "end": 565.56, "start": 558.92, "text": " one could do it. Alright so first of all they present their what you might call" }, { "end": 570.68, "start": 565.56, "text": " non-linearity. So their non-linearity what it needs to do is if you look at" }, { "end": 575.04, "start": 570.68, "text": " these capsule networks the outputs here the length of the outputs of these" }, { "end": 580.3199999999999, "start": 575.04, "text": " vectors right they're supposed to represent probabilities and as such they" }, { "end": 587, "start": 580.32, "text": " they need to be so here it roof this door maybe a vector like this wall maybe" }, { "end": 592.2, "start": 587, "text": " a vector like that. So initially we simply specify the output is a vector" }, { "end": 597, "start": 592.2, "text": " and in essence these capsules are implemented in much the same way like" }, { "end": 604.6800000000001, "start": 597, "text": " your classic neural network layer would be implemented. So each of these" }, { "end": 613.28, "start": 604.68, "text": " capsules will be in essence a neural network layer by itself that outputs a" }, { "end": 619.3599999999999, "start": 613.28, "text": " vector. There's nothing constraining the length of the vector initially so" }, { "end": 626.8, "start": 619.3599999999999, "text": " their non-linearity does constrain the vector to be of maximum length 1 and of" }, { "end": 631.3599999999999, "start": 626.8, "text": " minimum length 0. That's this non-linearity here. So S here is the" }, { "end": 638.6800000000001, "start": 631.36, "text": " unscaled output of the capsule and you can see here if the length of S gets" }, { "end": 646.2, "start": 638.6800000000001, "text": " close to 1 or sorry gets really large then this here becomes irrelevant." }, { "end": 653.8000000000001, "start": 646.2, "text": " This whole term will be 1 and then the length of the final output of V here" }, { "end": 661.12, "start": 653.8000000000001, "text": " will be 1. 
Right so if this is very large then the the length of the scaled" }, { "end": 666.92, "start": 661.12, "text": " output will be 1 however if the if the length is really small of the original" }, { "end": 672.92, "start": 666.92, "text": " output so if this goes towards 0 then this becomes irrelevant this becomes" }, { "end": 680, "start": 672.92, "text": " irrelevant this will go towards 0 and the entire length will go towards 0." }, { "end": 689.2, "start": 680, "text": " So this is kind of a nice way to scale these outputs always to be between length 0" }, { "end": 702.5200000000001, "start": 689.2, "text": " and 1. Then next thing is so how this I find I find the the most complicated" }, { "end": 710.48, "start": 702.5200000000001, "text": " part right so we'll jump ahead actually to how a capsule's network is implemented" }, { "end": 716.76, "start": 710.48, "text": " and this is the the capsule network they implement so first it's an MNIST" }, { "end": 721.84, "start": 716.76, "text": " classifier you have an MNIST image here and it first goes through a simple" }, { "end": 726.4, "start": 721.84, "text": " convolutional layer that's that's nothing new this is a classic" }, { "end": 734.84, "start": 726.4, "text": " convolutional layer is there's 256 channels it has a 9 by 9 filters and" }, { "end": 747.1600000000001, "start": 734.84, "text": " stride 1 so it will output a 20 by 20 time by 256 tensor then each of these" }, { "end": 752.6, "start": 747.1600000000001, "text": " so each of the outputs here is sent to each of these capsules and now they're" }, { "end": 758.2800000000001, "start": 752.6, "text": " convolutional capsules so that makes it a bit more complicated but don't you" }, { "end": 762.1600000000001, "start": 758.2800000000001, "text": " know don't worry primarily about them being convolutional capsules the" }, { "end": 765.28, "start": 762.16, "text": " analogy is exactly as in a classic neural network you can implement these" }, { "end": 772.4399999999999, "start": 765.28, "text": " capsules as void-feed-forward capsules or as convolutional capsules and maybe also" }, { "end": 777.3199999999999, "start": 772.4399999999999, "text": " as transformer capsules I don't think anyone's done that all right there's a" }, { "end": 785, "start": 777.3199999999999, "text": " paper for you the so you'll send you'll send the output of this convolution" }, { "end": 790.04, "start": 785, "text": " layer to each capsule and then you have basically just two layer of capsules" }, { "end": 797.64, "start": 790.04, "text": " here the first layer consists of 32 what they call primary caps sorry the these" }, { "end": 805.24, "start": 797.64, "text": " 32 capsules each will output an eight dimensional vector and I'm simplifying" }, { "end": 809.48, "start": 805.24, "text": " here it's it's convolutional but they will just for simplest they will each" }, { "end": 816.68, "start": 809.48, "text": " output an eight dimensional vector right and these are exactly as we said before" }, { "end": 821.8, "start": 816.68, "text": " so each of these will be responsible ultimately for a given entity or part of" }, { "end": 828.06, "start": 821.8, "text": " entity being there like in MNIST this could be is there a little curve on the" }, { "end": 831.64, "start": 828.06, "text": " bottom left side right this might indicate the presence of a six or an" }, { "end": 838.8399999999999, "start": 831.64, "text": " eight something like this and then the these capsules here each is they" }, { "end": 
844.1999999999999, "start": 838.8399999999999, "text": " represented as a row so each of these rows here is a capsule and we have ten" }, { "end": 848.88, "start": 844.2, "text": " of these and these are your simply your final classification capsules so each" }, { "end": 854.76, "start": 848.88, "text": " capsule is responsible for indicating the presence or absence of one particular" }, { "end": 859.5600000000001, "start": 854.76, "text": " class of digits so this will be of a one of a two of a three of a four and so on" }, { "end": 865.9200000000001, "start": 859.5600000000001, "text": " of a zero I guess somewhere as well so these are ten capsules and the question" }, { "end": 871.5200000000001, "start": 865.9200000000001, "text": " is how does information go from a capsule here from the output of a" }, { "end": 877, "start": 871.52, "text": " capsule or to any of capsule here and the easy way to do this is simply to say" }, { "end": 884.12, "start": 877, "text": " as in a classical neural network the output here simply goes to the input" }, { "end": 891.92, "start": 884.12, "text": " here just you just put it there basically on on unchanged now there is a" }, { "end": 897.4, "start": 891.92, "text": " bit of an issue here with the dimensions but you can simply say well we simply" }, { "end": 903.88, "start": 897.4, "text": " put a weight matrix in to route into the capsules but the idea of these capsules" }, { "end": 912.28, "start": 903.88, "text": " and this paper is to say wait wait these capsules actually we want to make them" }, { "end": 920.84, "start": 912.28, "text": " decide to which capsule in the next layer will they send their input right" }, { "end": 926.84, "start": 920.84, "text": " so the capsules can kind of decide where they want to send their output to like" }, { "end": 932.48, "start": 926.84, "text": " where is this where is the capsule that detects the maybe this one detects is" }, { "end": 937.08, "start": 932.48, "text": " there a line in the right side of the image right indicating maybe a seven or" }, { "end": 945.4, "start": 937.08, "text": " a one this is probably most relevant for the one class and for the seven class so" }, { "end": 951.52, "start": 945.4, "text": " it might decide to route its output there and the idea of how this routing" }, { "end": 959.4399999999999, "start": 951.52, "text": " happens is basically the topic of this paper so the the capsules route their" }, { "end": 967, "start": 959.4399999999999, "text": " output to the appropriate next layers capsules how is this done all right this" }, { "end": 972.1999999999999, "start": 967, "text": " is done via the what's called the routing mechanism that I find it quite" }, { "end": 981.12, "start": 972.1999999999999, "text": " poorly described here so I will simply draw it I will simply try to make it up" }, { "end": 990.88, "start": 981.12, "text": " all right so we have capsules and as I've drawn them before right we have one" }, { "end": 1000.32, "start": 990.88, "text": " two three capsules and we maybe have two parent capsules each of these capsules" }, { "end": 1006.16, "start": 1000.32, "text": " here will output a vector as we said and we'll only do it for this this one sorry" }, { "end": 1012.92, "start": 1006.16, "text": " vector here so this will output this vector and needs to decide where to here" }, { "end": 1020.04, "start": 1012.92, "text": " or to here do I send to this output now what it does is there is an iterative" }, { "end": 1027.68, "start": 1020.04, "text": " procedure 
that has multiple steps and this is I think this is at least the way" }, { "end": 1032.52, "start": 1027.68, "text": " I understand I think the important part to understand is that if we forward pass" }, { "end": 1037.24, "start": 1032.52, "text": " data through this network it actually doesn't go forward in a straight line" }, { "end": 1042.04, "start": 1037.24, "text": " what it actually does is it goes through a layer and then it does multiple steps" }, { "end": 1047.76, "start": 1042.04, "text": " in between layers until it has decided where it wants to go in the next layer" }, { "end": 1051.96, "start": 1047.76, "text": " and then it goes on to the next layer and if there's another capsule layers it" }, { "end": 1058.32, "start": 1051.96, "text": " does again multiple steps before it goes on so that's that's my take on it and" }, { "end": 1064.6, "start": 1058.32, "text": " the multiple steps are as follows first I'll send my output vector to to all of" }, { "end": 1070.12, "start": 1064.6, "text": " the all of the layers like equally all of the parent capsules and so will will" }, { "end": 1078.08, "start": 1070.12, "text": " everyone else right everyone will send theirs equally to the parent now this" }, { "end": 1082.8999999999999, "start": 1078.08, "text": " isn't just done and this may be here this isn't just done just by sending it" }, { "end": 1087.32, "start": 1082.8999999999999, "text": " but this is actually done by modulation of weight matrices so each thing here if" }, { "end": 1093.3999999999999, "start": 1087.32, "text": " this is capsule I and this is capsule J there is a weight matrix in between W I J" }, { "end": 1098.1599999999999, "start": 1093.3999999999999, "text": " that is learned right this is a static weight matrix and each one of these red" }, { "end": 1104.36, "start": 1098.1599999999999, "text": " red arrows you see here has such a weight matrix attached to it so each" }, { "end": 1108.76, "start": 1104.36, "text": " each line you see here is actually modulated by such a weight matrix so" }, { "end": 1113.9199999999998, "start": 1108.76, "text": " there is an a quadratic number of these weight matrices flying around and this" }, { "end": 1118.24, "start": 1113.92, "text": " will also then allow you that maybe this vector is eight dimensional but the" }, { "end": 1124.16, "start": 1118.24, "text": " input vector here is 16 dimensional what we saw before all right so the out the" }, { "end": 1129.48, "start": 1124.16, "text": " input of capsule J here it will receive let's see what it receives it will" }, { "end": 1140.5600000000002, "start": 1129.48, "text": " receive the output of capsule will the output of capsule 1 V 1 modulated by the" }, { "end": 1148.8, "start": 1140.56, "text": " let's let's call this yeah let's call this J modulated by 1 J W 1 J and it" }, { "end": 1155.6, "start": 1148.8, "text": " will also receive this is a set the output of capsule 2 modulated by the" }, { "end": 1162.8799999999999, "start": 1155.6, "text": " weight matrix for sorry weight matrix for capsule 2 and so on now what it does" }, { "end": 1174.4, "start": 1162.88, "text": " is it adds this these all up into a soft max so sorry let's write this so soft it" }, { "end": 1180.24, "start": 1174.4, "text": " will add those all up in a soft max weighted fashion so it will actually" }, { "end": 1188.5600000000002, "start": 1180.24, "text": " compute a a weighted average of those now the weights at the beginning are are" }, { "end": 1195.56, "start": 1188.56, "text": " just one 
because it gets each from each lower capsule it gets equal amount of" }, { "end": 1200.6, "start": 1195.56, "text": " this vector but then this will give you an output so this will give you some" }, { "end": 1207.84, "start": 1200.6, "text": " output let's put this in green this will give you an output that's I don't know" }, { "end": 1215.72, "start": 1207.84, "text": " how they call it in the paper let's just call it O J right and then what you do" }, { "end": 1224.08, "start": 1215.72, "text": " is all right you compare how much do each of the individual contributions" }, { "end": 1230.68, "start": 1224.08, "text": " agree with OJ so you actually compute for each of these you would compute the" }, { "end": 1239.48, "start": 1230.68, "text": " inner product so you would compute the inner product of W 1 J V 1 with OJ and" }, { "end": 1249.24, "start": 1239.48, "text": " you would compute the inner product of W 2 J V 2 with OJ all right the inner" }, { "end": 1254.76, "start": 1249.24, "text": " product and then these inner products here will become the weighting" }, { "end": 1261.2, "start": 1254.76, "text": " coefficients for the soft max in the next iteration all right so this I mean" }, { "end": 1265.44, "start": 1261.2, "text": " this this is a bit convoluted but ultimately what you're saying is if" }, { "end": 1273.0800000000002, "start": 1265.44, "text": " you're a capsule here you'll send your output forward you have an output you" }, { "end": 1280.0800000000002, "start": 1273.0800000000002, "text": " send it forward right to the other capsule and the other capsule will so" }, { "end": 1283.56, "start": 1280.0800000000002, "text": " this is this is your output and we'll forget about this weight matrix 6 for" }, { "end": 1290.24, "start": 1283.56, "text": " now this is your up the other capsule will output its own its own output" }, { "end": 1297.88, "start": 1290.24, "text": " computed from the lower layers now we do an iteration again if your output now" }, { "end": 1305, "start": 1297.88, "text": " aligns with this you will send more of it and these these two that I've drawn" }, { "end": 1309.36, "start": 1305, "text": " here actually align pretty well right so you'll send more of it is more more" }, { "end": 1316.04, "start": 1309.36, "text": " more right and now maybe the output that next computed output of the same capsule" }, { "end": 1319.4, "start": 1316.04, "text": " will be even more in that direction because you've contributed more right" }, { "end": 1323.3200000000002, "start": 1319.4, "text": " you'll send more and then you're like in the next iteration wow these two are" }, { "end": 1328.5600000000002, "start": 1323.3200000000002, "text": " really equal sorry this should be red here your ears just keeps being the same" }, { "end": 1333.0400000000002, "start": 1328.5600000000002, "text": " and then you say well I'm gonna send even more to that one right whereas" }, { "end": 1340.76, "start": 1333.0400000000002, "text": " another capsule that it's whose initial output was basically whose initial" }, { "end": 1348.6000000000001, "start": 1340.76, "text": " output was basically like this it will by itself compute the inner product with" }, { "end": 1353.36, "start": 1348.6, "text": " the original this original it will send it here right it will compute the inner" }, { "end": 1358.48, "start": 1353.36, "text": " product with the original output and it will realize well these do not align" }, { "end": 1363.48, "start": 1358.48, "text": " very much and then it will send less 
right it will send less to the next step" }, { "end": 1369.08, "start": 1363.48, "text": " and because it sends less in the next step of course the output will then" }, { "end": 1374.4399999999998, "start": 1369.08, "text": " probably align even less with that vector and then it will send less and" }, { "end": 1380.2, "start": 1374.44, "text": " less and less so this is called dynamic routing the the idea behind it is kind" }, { "end": 1388.24, "start": 1380.2, "text": " of that you route by agreement so you will route to the parent capsules that" }, { "end": 1393.8400000000001, "start": 1388.24, "text": " agree with your output and by agreement we mean kind of the inner product is" }, { "end": 1400.3200000000002, "start": 1393.8400000000001, "text": " high after modulating by this weight matrix and that sort of so that" }, { "end": 1405.6799999999998, "start": 1400.32, "text": " basically means this weight matrix is responsible for deciding which" }, { "end": 1411.08, "start": 1405.6799999999998, "text": " information is relevant together whenever you have two vectors that align" }, { "end": 1417.48, "start": 1411.08, "text": " in the same layer then the in the sense of the capsule networks those represent" }, { "end": 1423.8799999999999, "start": 1417.48, "text": " the same kind of information and those will be routed together to the same" }, { "end": 1429.8, "start": 1423.8799999999999, "text": " capsule in terms of the examples we made maybe if a door and a roof is" }, { "end": 1436.9199999999998, "start": 1429.8, "text": " present then these these these weight matrices that connect door and roof to" }, { "end": 1442.84, "start": 1436.9199999999998, "text": " the house class they will transform a high vector in door and roof into" }, { "end": 1449.76, "start": 1442.84, "text": " aligning vectors for the house class and thereby saying look these two if I look" }, { "end": 1457.28, "start": 1449.76, "text": " at them through if I look at a door and a roof through the perspective of trying" }, { "end": 1464.6, "start": 1457.28, "text": " to be a house right then they are in much agreement on the presence of a" }, { "end": 1476.12, "start": 1464.6, "text": " house so if I am a house right I am a house and I look at a door and I look at" }, { "end": 1482.72, "start": 1476.12, "text": " a roof through the kind of from the perspective of being a house right this" }, { "end": 1486.6399999999999, "start": 1482.72, "text": " is this is what these weight matrices do they always have a perspective of the" }, { "end": 1492.72, "start": 1486.64, "text": " parent capsule then these two things they make a lot of sense together and" }, { "end": 1500.16, "start": 1492.72, "text": " thus I will route them to the same place so they can both contribute to their" }, { "end": 1506.1200000000001, "start": 1500.16, "text": " being a house now from the perspective of a house if I look at a little beach" }, { "end": 1512.8000000000002, "start": 1506.1200000000001, "text": " with a tree on it right then that does not that is not the same that does not" }, { "end": 1521.36, "start": 1512.8, "text": " really is not the same information as a door or a roof so I will not route this" }, { "end": 1530.6399999999999, "start": 1521.36, "text": " to the house in the in the same strength that is sort of the best way I have of" }, { "end": 1535.48, "start": 1530.6399999999999, "text": " explaining it how these capsules work basically the lower entities will always" }, { "end": 1543.08, "start": 1535.48, "text": " be routed 
for the relevance of the higher entities that are trying to are" }, { "end": 1549.84, "start": 1543.08, "text": " trying to combine the lower entities if that wasn't it's not entirely clear to" }, { "end": 1557, "start": 1549.84, "text": " me either yet but it's the best shot I I can give and the routing is here" }, { "end": 1563.84, "start": 1557, "text": " formalized I find it hard to follow the important thing is that there is an" }, { "end": 1570.8799999999999, "start": 1563.84, "text": " inner loop in all of this so there is an like kind of an an inner iteration and" }, { "end": 1578.04, "start": 1570.8799999999999, "text": " this inner iteration is computed in every forward pass and so these routing" }, { "end": 1584.48, "start": 1578.04, "text": " where the information goes in the next layer that is only the prior probability" }, { "end": 1591.1599999999999, "start": 1584.48, "text": " for that is learned but the actual routing coefficients those are" }, { "end": 1597.88, "start": 1591.16, "text": " dynamically computed in every forward pass so every forward pass goes it goes" }, { "end": 1602.28, "start": 1597.88, "text": " information goes through a layer then it goes multiple steps between two layers" }, { "end": 1606.1200000000001, "start": 1602.28, "text": " until it decides exactly what the distribution for the next layer is and" }, { "end": 1610.64, "start": 1606.1200000000001, "text": " then the next layer computes its outputs and that goes again multiple steps" }, { "end": 1616.48, "start": 1610.64, "text": " between these layers and the next layer so that's the the basic thing to" }, { "end": 1621.76, "start": 1616.48, "text": " remember there's also some normalization involved the squash is the non-linearity" }, { "end": 1629.1200000000001, "start": 1621.76, "text": " we discussed so what do they actually train now at the end here they have a" }, { "end": 1634.56, "start": 1629.1200000000001, "text": " they have these ten capsules and each capsule will be responsible for" }, { "end": 1640.4, "start": 1634.56, "text": " recognizing one the presence of one digit in the MNIST data set of course" }, { "end": 1646.04, "start": 1640.4, "text": " and so what they do is they take the length of these vectors that are output" }, { "end": 1650, "start": 1646.04, "text": " by these capsules these capsules are feed-forward capsules as opposed to the" }, { "end": 1655.3999999999999, "start": 1650, "text": " convolutional capsules here so the feed-forward capsules output again a" }, { "end": 1661.1599999999999, "start": 1655.3999999999999, "text": " vector the length of this vector is taken and then it's basically trained" }, { "end": 1666.52, "start": 1661.1599999999999, "text": " like you would train a regression problem and the loss here is specified" }, { "end": 1673.52, "start": 1666.52, "text": " up here so if the if the image actually does contain this if the training label" }, { "end": 1683.28, "start": 1673.52, "text": " actually has this digit present this T here encodes that so if if K let's say K" }, { "end": 1691.92, "start": 1683.28, "text": " is 2 right so if K 2 if there is a 2 in the image when we know that because it's" }, { "end": 1698.24, "start": 1691.92, "text": " a training image then the length of the output of capsule number 2 should be" }, { "end": 1705.56, "start": 1698.24, "text": " high and this simply encodes that it should be very close to this M plus an" }, { "end": 1710.52, "start": 1705.56, "text": " M plus here is that I think they said it to 0.9 so 
they say you should be the" }, { "end": 1717.04, "start": 1710.52, "text": " length should be as close as possible to 0.9 whereas if the 2 is not present then" }, { "end": 1723.44, "start": 1717.04, "text": " TK will be 0 then this part will be active so it's only one of these two" }, { "end": 1730.04, "start": 1723.44, "text": " parts will be active then the length of the vector so of capsule number 2 should" }, { "end": 1735.48, "start": 1730.04, "text": " be close to this M negative which is 0.1 it's basically a regression problem" }, { "end": 1742.44, "start": 1735.48, "text": " saying if if there if the given entity is in the image then please make the" }, { "end": 1746.3600000000001, "start": 1742.44, "text": " length as close as possible to 0.9 and if it's not make it as close as possible" }, { "end": 1755.04, "start": 1746.36, "text": " to 0.1 so this this is a classic say regression loss on the length of the" }, { "end": 1761.7199999999998, "start": 1755.04, "text": " output vectors the the lambda is just a factor to to dampen the contribution for" }, { "end": 1768.1599999999999, "start": 1761.7199999999998, "text": " all the negative classes with respect to the one positive class of course per" }, { "end": 1776.76, "start": 1768.16, "text": " capsule it turns out this is actually not enough so this will be the" }, { "end": 1781.44, "start": 1776.76, "text": " classification output but it's it seems not enough they don't say it's not" }, { "end": 1786.0400000000002, "start": 1781.44, "text": " enough but they simply say we additionally do the following so they" }, { "end": 1791.8000000000002, "start": 1786.0400000000002, "text": " also do is they introduce a reconstruction loss now if this model is" }, { "end": 1796.68, "start": 1791.8000000000002, "text": " trained correctly then these capsules here these last capsules especially" }, { "end": 1800, "start": 1796.68, "text": " this one maybe that's the capsule corresponding to the class of the digit" }, { "end": 1808.02, "start": 1800, "text": " 8 will not only encode if an 8 is there or not as in the length of the vector" }, { "end": 1812.72, "start": 1808.02, "text": " output but it will also encode the properties of dates it is a 16" }, { "end": 1818.8400000000001, "start": 1812.72, "text": " dimensional vector so it will encode hopefully things like the stroke width" }, { "end": 1829.3999999999999, "start": 1818.84, "text": " so then it might encode the maybe the rotation of the digit then it might be" }, { "end": 1836.28, "start": 1829.3999999999999, "text": " controlled the tightness of the of the loop so you can have an 8 with very" }, { "end": 1841.08, "start": 1836.28, "text": " large loops or it can have an 8 sorry this is a smaller rate I can have an 8" }, { "end": 1846.8, "start": 1841.08, "text": " with very tight loops so it might you know encode things like this so" }, { "end": 1853.48, "start": 1846.8, "text": " technically it is it will be possible to reconstruct from this description" }, { "end": 1859.44, "start": 1853.48, "text": " reconstruct say the width is high the rotation is zero and the tightness is" }, { "end": 1870.3999999999999, "start": 1859.44, "text": " low then maybe I have a wide widely stroked not tight 8 that is not rotated" }, { "end": 1875.12, "start": 1870.3999999999999, "text": " right so it should be possible to reconstruct this and they they do exactly" }, { "end": 1880.9599999999998, "start": 1875.12, "text": " that so they take this last capsule of the class that is the actual training" 
}, { "end": 1888.08, "start": 1880.9599999999998, "text": " label that's called the reconstruction target and they feed this to a simple" }, { "end": 1893.1599999999999, "start": 1888.08, "text": " feed-forward neural network that at the end you see this is exactly the MNIST" }, { "end": 1899.84, "start": 1893.1599999999999, "text": " size will try to reconstruct the the image so if the image here this image" }, { "end": 1907.24, "start": 1899.84, "text": " goes in then it goes all through here it will take the class for here feed it" }, { "end": 1912.56, "start": 1907.24, "text": " through this network reshape it to an image again and hopefully what will come" }, { "end": 1920.56, "start": 1912.56, "text": " out is again this for here and it will then have an auxiliary auxiliary loss in" }, { "end": 1926.36, "start": 1920.56, "text": " addition to the loss of this of this classification loss here will auxiliary" }, { "end": 1932.8799999999999, "start": 1926.36, "text": " loss that tries to reconstruct the original image right and that's simply a" }, { "end": 1941.52, "start": 1932.8799999999999, "text": " I believe it's just an L2 reconstruction loss that is that is scaled down that it" }, { "end": 1947.1999999999998, "start": 1941.52, "text": " doesn't dominate so they also train the network basically to reconstruct this" }, { "end": 1952.28, "start": 1947.1999999999998, "text": " and I believe they do this because the length isn't quite enough to make it do" }, { "end": 1959.12, "start": 1952.28, "text": " what they want it to do thus they by having this reconstruction here they" }, { "end": 1964.36, "start": 1959.12, "text": " really kind of enforce that the individual capsules the individual" }, { "end": 1971.52, "start": 1964.36, "text": " dimensions must encode some kind of information about the original image" }, { "end": 1976.44, "start": 1971.52, "text": " and since the original images in the MNIST data set at least vary by those" }, { "end": 1983.2, "start": 1976.44, "text": " things by stroke width by rotation by tightness that by this loss will be" }, { "end": 1996.16, "start": 1983.2, "text": " reflected in the in the reconstruction all right so how are they doing here you" }, { "end": 2003.1200000000001, "start": 1996.16, "text": " see different examples of inputs and then reconstructed outputs and this you" }, { "end": 2009.1999999999998, "start": 2003.12, "text": " know seems pretty good actually so you see here all of these the input image is" }, { "end": 2016.9599999999998, "start": 2009.1999999999998, "text": " reconstructed fairly well so the numbers up here in the fall so the right are the" }, { "end": 2023, "start": 2016.9599999999998, "text": " failure cases here it the input image is a five labeled in the training data but" }, { "end": 2029.32, "start": 2023, "text": " the network actually classifies it as a three but then if you now you have two" }, { "end": 2032.6399999999999, "start": 2029.32, "text": " choices right this this is the same sample I have two choices for" }, { "end": 2038.8000000000002, "start": 2032.64, "text": " reconstruction either you reconstruct the capsule that is actually the is that" }, { "end": 2042.76, "start": 2038.8000000000002, "text": " you know is the true capsule that should be activated and you reconstruct from" }, { "end": 2049.2000000000003, "start": 2042.76, "text": " that or you reconstruct from the capsule that the network says the it classifies" }, { "end": 2054, "start": 2049.2000000000003, "text": " it as so here it mixed 
up a five four three if you still take the five the" }, { "end": 2058.96, "start": 2054, "text": " capsule and reconstructed you see it actually looks like the original image" }, { "end": 2064.32, "start": 2058.96, "text": " but it looks much more like a five and if you take the three capsule to" }, { "end": 2068.2400000000002, "start": 2064.32, "text": " reconstruct which is what the network classified this as it's still it looks" }, { "end": 2073.28, "start": 2068.2400000000002, "text": " like the original image but it looks much more like an actual three right it's" }, { "end": 2078.68, "start": 2073.28, "text": " it's missing the the part up here whereas over here it's it's missing this" }, { "end": 2083.76, "start": 2078.68, "text": " part here so that the network really seems to kind of learn the different" }, { "end": 2089.92, "start": 2083.76, "text": " variations of these digits and in an ambiguous case such as this one it you" }, { "end": 2094.48, "start": 2089.92, "text": " know it can it can actually go either way and it can actually reconstruct the" }, { "end": 2101, "start": 2094.48, "text": " original output in either interpretations once as a three and once" }, { "end": 2105.44, "start": 2101, "text": " as a five it will be interesting to see what the actual lengths of the vector of" }, { "end": 2112.6400000000003, "start": 2105.44, "text": " both of these classes were that were mixed up and here they compare their" }, { "end": 2118.48, "start": 2112.64, "text": " accuracies so they have a baseline model which I believe is just a CNN" }, { "end": 2125.92, "start": 2118.48, "text": " where they get a decent kind of error and then the capsule networks they get a" }, { "end": 2130.72, "start": 2125.92, "text": " lower error and here you see as you add the reconstruction loss and as you add" }, { "end": 2135.64, "start": 2130.72, "text": " routing more so one step of routing simply means the first step is where you" }, { "end": 2142.44, "start": 2135.64, "text": " send your output equally to each parent that is as in the classical neural" }, { "end": 2148.88, "start": 2142.44, "text": " network case but if you introduce three steps of routing then your error drops" }, { "end": 2159.96, "start": 2148.88, "text": " even lower so they they kind of are on par with baseline CNNs on MNIST here" }, { "end": 2167.04, "start": 2162.2400000000002, "text": " they also explore what their capsules learn so as I said the individual capsules" }, { "end": 2174.32, "start": 2167.04, "text": " the dimensions should encode kind of properties of the variations of the of" }, { "end": 2180.4, "start": 2174.32, "text": " the class class samples and here they explore this in the different capsules so" }, { "end": 2184.32, "start": 2180.4, "text": " they change some dimensions and they run it through their reconstruction networks" }, { "end": 2189.96, "start": 2184.32, "text": " and indeed they discover that there is like a scale and thickness dimension" }, { "end": 2196.04, "start": 2189.96, "text": " stroke thickness dimension there's a skew dimension and so on width and" }, { "end": 2204.44, "start": 2196.04, "text": " translation so that this is pretty remarkable these networks really if you" }, { "end": 2209.2, "start": 2204.44, "text": " train them in this way they really seem to learn about the entities and about" }, { "end": 2214.72, "start": 2209.2, "text": " the properties of the entities and that seems to be quite interesting you see" }, { "end": 2219.96, "start": 2214.72, "text": " that 
there's everything here stays well within the class that the capsule is" }, { "end": 2227.92, "start": 2219.96, "text": " assigned to they also yeah this robustness to affine transformations" }, { "end": 2232.92, "start": 2227.92, "text": " where they improve over the baseline it's kind of an auxiliary experiment the" }, { "end": 2238.44, "start": 2232.92, "text": " next interesting experiment is what they call the multi MNIST experiment the" }, { "end": 2245.44, "start": 2238.44, "text": " multi MNIST experiment is done by taking two different MNIST digits and basically" }, { "end": 2251.32, "start": 2245.44, "text": " just overlapping them so that they have you know shift them slightly but as you" }, { "end": 2257.8, "start": 2251.32, "text": " see here or here they are overlapped heavily and the task of the network is" }, { "end": 2265.12, "start": 2257.8, "text": " to figure out which two overlapping digits are in the image and the the" }, { "end": 2272.56, "start": 2265.12, "text": " network is very very good at doing this the capsule network that is and better" }, { "end": 2276.96, "start": 2272.56, "text": " than the the baselines because the capsule network simply encodes the" }, { "end": 2282.92, "start": 2276.96, "text": " presence and properties of a particular instance in the image if you simply take" }, { "end": 2288.7999999999997, "start": 2282.92, "text": " the top two length capsules and then reconstruct those independently then" }, { "end": 2296.6, "start": 2288.7999999999997, "text": " you're you can you can you can basically segment the image and you see this here" }, { "end": 2302.12, "start": 2296.6, "text": " so the different colorations come from two different reconstructions of the" }, { "end": 2306.7999999999997, "start": 2302.12, "text": " image from two different capsules so green is from one capsule and red from" }, { "end": 2311, "start": 2306.7999999999997, "text": " the other capsule so the network correctly identifies that it's a 6 and" }, { "end": 2316.04, "start": 2311, "text": " the zero right and it also correctly identifies not only which pixels belong" }, { "end": 2321.24, "start": 2316.04, "text": " to the 6 and which belong to 0 but also pixels that belong to both so that's not" }, { "end": 2325.2799999999997, "start": 2321.24, "text": " a not a problem if you use capsule networks as they are" }, { "end": 2330.2, "start": 2325.2799999999997, "text": " are notable to say here they the way they train is is they train the actual" }, { "end": 2336.2799999999997, "start": 2330.2, "text": " reconstruction by only reconstructing one at a time so the kind of the premise" }, { "end": 2340.12, "start": 2336.2799999999997, "text": " of the data set is that you actually have access to the underlying individual" }, { "end": 2345.9199999999996, "start": 2340.12, "text": " digits while training so like the images of the individual digits you don't" }, { "end": 2352.8799999999997, "start": 2345.9199999999996, "text": " only have this label here but that's a detail here are some kind of failure" }, { "end": 2359.68, "start": 2352.8799999999997, "text": " cases where it it misclassified or you miss specify the capsules and it's kind" }, { "end": 2367.8799999999997, "start": 2359.68, "text": " of unable use here you see to to assign the digits of the misclassified or the" }, { "end": 2372.8799999999997, "start": 2367.8799999999997, "text": " pixels of the misclassified thing it's quite interesting to look at the failure" }, { "end": 2378.3999999999996, "start": 
2372.8799999999997, "text": " cases but I find it more interesting to look actually the success cases and the" }, { "end": 2384.8199999999997, "start": 2378.3999999999996, "text": " kind of ease at which the at which the capsule networks can do this simply by" }, { "end": 2392.04, "start": 2384.82, "text": " how they're structured alright so then lastly they also experiment on C for 10" }, { "end": 2397.4, "start": 2392.04, "text": " and interestingly the C for 10 experiments show that the capsule" }, { "end": 2404, "start": 2397.4, "text": " networks don't perform as well there and as you know C for 10 is a data set that" }, { "end": 2407.44, "start": 2404, "text": " is about the same size as MNIST but it's first of all color and second of all is" }, { "end": 2413.32, "start": 2407.44, "text": " natural images and so they have quite a bit of clutter it's not black and white" }, { "end": 2418.8, "start": 2413.32, "text": " black background white digits it's actually there's a sky like on an" }, { "end": 2425.2400000000002, "start": 2418.8, "text": " image there's lots of things going on and right there's my tree and there's" }, { "end": 2429.76, "start": 2425.2400000000002, "text": " stuff here and there's stuff here and the the capsule networks they like to" }, { "end": 2434.96, "start": 2429.76, "text": " account for things in the image so they like to have a capsule corresponding to" }, { "end": 2438.84, "start": 2434.96, "text": " everything that's going on here and here and here and here and here if the whole" }, { "end": 2442.52, "start": 2438.84, "text": " background is black that is not a problem you can account for simply the" }, { "end": 2447, "start": 2442.52, "text": " background but if there's lots of things going on then these capsule networks" }, { "end": 2455, "start": 2447, "text": " get they get they get a bit over explanatory they want to explain" }, { "end": 2459.6, "start": 2455, "text": " everything and that degrades the performance now this paper basically" }, { "end": 2465.12, "start": 2459.6, "text": " says yeah you can have a something like a none of the above category and they" }, { "end": 2473.92, "start": 2465.12, "text": " found that it helped to introduce that in my opinion that it I think the the" }, { "end": 2478.88, "start": 2473.92, "text": " the solution will be more towards introduction of a better loss function" }, { "end": 2486.24, "start": 2478.88, "text": " for this because like such that you don't need kind of to explain the entire" }, { "end": 2490.8199999999997, "start": 2486.24, "text": " thing rather than here we'll hear what you do is you simply explain it by" }, { "end": 2494.4, "start": 2490.8199999999997, "text": " saying it's none of the above but it's incredibly hard to balance that my" }, { "end": 2504.48, "start": 2494.4, "text": " opinion yeah all right so that is basically the end of this they say they" }, { "end": 2510.32, "start": 2504.48, "text": " have a discussion here where they compare capsules against other related" }, { "end": 2519.84, "start": 2510.32, "text": " work but I hope that you kind of got an overview of how this works now and as" }, { "end": 2525.48, "start": 2519.84, "text": " much as possible and with that that was it for me and thanks for watching bye" }, { "end": 2551.48, "start": 2525.48, "text": " bye" } ]
-MCYbmU9kfg
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
RoBERTa: A Robustly Optimized BERT Pretraining Approach
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "machine translation", "arxiv", "google", "attention mechanism", "attention", "transformer", "tensor2tensor", "rnn", "recurrent", "seq2seq", "bert", "unsupervised", "squad", "wordpiece", "embeddings", "language", "language modeling", "attention layers", "bidirectional", "elmo", "word vectors", "pretrained", "fine tuning" ]
This paper shows that the original BERT model, if trained correctly, can outperform all of the improvements that have been proposed lately, raising questions about the necessity and reasoning behind these. Abstract: Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code. Authors: Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov https://arxiv.org/abs/1907.11692 YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Minds: https://www.minds.com/ykilcher BitChute: https://www.bitchute.com/channel/10a5ui845DOJ/
Hello everyone, today we're looking at RoBERTa, a robustly optimized BERT pretraining approach, by Yinhan Liu et al., mainly of Facebook Research. So this paper is a pretty short, pretty simple paper, and the main premise is: we've seen a number of improvements over the initial BERT paper where different pre-training of the transformer architecture or extensions of the architecture have been shown to have better performance than the original BERT model. And this paper basically says if you get the design choices right, then BERT is able to be on par with or exceed all of these other methods so far. So they're basically exploring design choices in the pre-training and training of BERT. Alright, so if you don't know what BERT is, by the way, I have made a video about BERT, and I've also made a video about transformers. In very quick terms, BERT is a neural network architecture for language that takes as input text, such as this kind of thing you see here, and it will kind of encode it, and it can do various things, for example classify it into certain categories, or kind of segment it, extract answers from questions and so on. The whole thing is pre-trained with what's called a masked language model objective, where you don't need labels to train it. So in a masked language model objective, you basically mask out certain words during training and then you ask BERT to reconstruct these words from the surrounding information. And that has given some improvements in the original BERT paper, but subsequent papers have claimed that you can improve even more by using different pre-training objectives and so on, such as XLNet. But here, these researchers basically explore different things. So they use a regular BERT architecture, that's what they describe here, so they use both BERT base, the 12-layer model, as well as the 24-layer BERT that was originally described. They use masked language modeling as a pre-training objective and they explore the necessity of this next sentence prediction loss that has been part of BERT. So along with the masked language modeling, BERT has also had an objective where you input two pieces of text, two sentences such as these, and BERT has to decide if the second sentence follows the first sentence in the corpus, or whether, in 50% of the cases, the second sentence is sampled from a different document. The original paper argued this is necessary to incorporate long-distance relationships between text; the NSP objective was designed to improve performance on downstream tasks such as natural language inference. And this paper kind of explores the necessity of that loss. In terms of optimization, there is of course a pre-training scheme and then a training scheme using Adam with certain parameters, and this paper also explores the use of these parameters. Lastly you have data, and of course these models are sometimes trained on different data, and that's what makes it a bit harder to compare them, because the pre-training is done on differently sized and differently structured data. This paper also tries to investigate the influence of the training data, and especially what happens if we keep the training data constant. So, all right, they re-implement BERT and then they fix some hyperparameters while they tune others. First of all, the data set. So they use different data sets.
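Since the masked language model objective is central to everything that follows, here is a minimal sketch of the idea — my own illustration, not code from the paper: pick roughly 15% of the tokens, record them as labels, and corrupt them following BERT's 80/10/10 mask/random/keep recipe. The toy vocabulary and the plain-list representation are assumptions made just for this example.

```python
import random

MASK = "[MASK]"
TOY_VOCAB = ["the", "dog", "barks", "loudly", "cat", "runs"]  # toy vocabulary (assumption)

def mask_tokens(tokens, mask_prob=0.15):
    """Return (corrupted_tokens, labels) for a BERT-style masked-LM objective.

    A selected position is replaced by [MASK] 80% of the time, by a random token
    10% of the time, and kept unchanged 10% of the time; labels store the original
    token at selected positions and None elsewhere.
    """
    corrupted, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if random.random() < mask_prob:
            labels[i] = tok
            r = random.random()
            if r < 0.8:
                corrupted[i] = MASK
            elif r < 0.9:
                corrupted[i] = random.choice(TOY_VOCAB)
            # otherwise keep the original token
    return corrupted, labels

print(mask_tokens("the dog barks loudly at the cat".split()))
```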
The original BERT has been trained on the BookCorpus and English Wikipedia data set, which is 16 gigabytes large. Now this paper collects this CC-News data set, which is the English subset of the Common Crawl News data set, and that's 76 gigabytes, which is on par with, for example, what GPT-2 used, I believe. So this is a very large training set, and comparing this original data to the large corpus should make very clear what the influence of more pre-training data is. They also have a number of other corpora, OpenWebText as well as, I believe, one more called Stories. So these are also pretty sizable, but they have very specific schemas to them. Then the evaluation happens on several different kinds of downstream tasks. So the idea is you first pre-train this BERT model with the masked language modeling and so on, and then you have this GLUE benchmark, which is actually a collection of nine tasks, and you have some other tasks such as SQuAD, which is a question answering task, and RACE, I don't even know what that is in particular, but suffice to say these are downstream NLP tasks. The paper isn't about these downstream tasks; they're just a way to measure how well your pre-training worked: if you then fine-tune on such a task, you get a good performance. But what the tasks are in particular isn't too important. Alright, so here we get into the meat of the paper. First they decide on what they call static versus dynamic masking. So in the original BERT paper, whenever they do masked language modeling, they take a piece of text and they basically replicate it a bunch of times, because they want to iterate through the training data a bunch of times, and then in each replicate they mask out different tokens. They compare this to what's called dynamic masking. So the first one is static masking. Dynamic masking would be where you generate your mask on the fly. You don't pre-compute it and save it, you generate it on the fly. This allows you to go through as much or as little of the data as you want, and when you encounter the same sample twice — even though you replicate it in the original BERT model, you could still encounter it twice if you train for longer than the number of replications, and then you basically see the exact same mask again — the dynamic masking is actually much more useful. It's much more ad hoc: each time you see a sample you generate the mask on the fly. So they compare this, and they see that there is a marginal improvement — here higher is better — in two tasks, and a marginal decrease in performance in one task. So they decide that this dynamic masking is of use. The second thing they investigate is the input format and this next sentence prediction. So as I already said, the original BERT training objective always gets two sentences next to each other and has to decide if the second one follows from the first one. Actually it doesn't: it observes two concatenated document segments, which are either sampled contiguously from the same document or from distinct documents, and this is half and half. So in addition to the masked language modeling, the model is trained to predict whether the observed document segments come from the same or distinct documents via an auxiliary next sentence prediction loss. They investigate different ways of including or excluding this loss.
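To make the static-versus-dynamic distinction just described concrete, here is a toy sketch — again my own illustration rather than the authors' code: static masking precomputes a fixed number of masked copies (the original BERT duplicated each sequence), while dynamic masking draws a fresh mask every time a sample is served.

```python
import random

def mask_once(tokens, mask_prob=0.15):
    # simplified masking: no 80/10/10 split, just replace with [MASK]
    return [("[MASK]" if random.random() < mask_prob else t) for t in tokens]

def static_masking(dataset, n_copies=10):
    # precompute a fixed set of masked copies; training beyond n_copies epochs reuses them
    return {idx: [mask_once(toks) for _ in range(n_copies)] for idx, toks in enumerate(dataset)}

def dynamic_masking(tokens):
    # draw a brand-new mask every time the sample is requested
    return mask_once(tokens)

data = ["the quick brown fox jumps over the lazy dog".split()]
cached = static_masking(data)        # static: masks are fixed before training starts
fresh = dynamic_masking(data[0])     # dynamic: a new mask on every call
print(cached[0][0])
print(fresh)
```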
So first, what they define: if a variant says plus NSP, that means it includes the next sentence (or next segment) prediction loss. So they have segment-pair plus NSP, which means that each input has a pair of segments. The distinction between a segment and a sentence is important here: a sentence is really a natural sentence, while a segment can actually be multiple natural sentences, which is what the original BERT does. So as long as the combined length is less than 512 tokens, there can be multiple sentences, but there are clearly two segments, and you have to decide whether they follow after each other or not. The second thing they try is the same next segment prediction, but now it's just two sentences, just natural sentences: it must be one sentence, a period, and then the next sentence, a period, and you have to decide whether these two follow each other or not. Then they investigate full-sentences, where they leave away this next segment prediction loss and they simply fill up the 512 tokens with text from the corpus. So each input is packed with full sentences sampled contiguously from one or more documents, and the "one or more documents" means: if you sample text and you put all of it in the input and you reach the end of a document, you simply continue with the next one and go on until you have the 512 tokens. So you basically fill and fill until you have 512 tokens, and that's this variant. And then in the last variant, called doc-sentences, you do the same thing, but you stop at the end of the document. So you put all of this in your input, and when you reach the end you stop, and then you have to be content with simply padding the rest of the 512 tokens or something like this. So you don't have as much data, but all the text that you have in one sample is actually contiguous text from the same document. So they pit these four things against each other. This is this table here, and as you can see, the best thing is this doc-sentences variant on these tasks, followed by the full-sentences encoding. So there are some ambiguities here, but in general you can rank them as best, second best, third best and fourth best, and they conclude that this next segment or next sentence prediction loss is more hurtful than helpful, in the way we see here. And they say that even though doc-sentences is most effective, in their case they'd rather go with full-sentences, because it's, well, I guess easier to implement, you get more data through the model in the same time, and the performance decrease isn't that much. So it's pretty interesting to see that this next sentence prediction isn't super helpful in actuality. So, removing the NSP loss matches or slightly improves the downstream task performance. This is in contrast to what the original BERT authors found, but you have to keep in mind this setup also has a bunch of other changes in it.
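Here is a rough sketch of the difference between the full-sentences and doc-sentences packing strategies just described. The 512-token budget follows the paper; the whitespace "tokenizer" and the helper names are my own simplifications.

```python
MAX_LEN = 512

def pack_full_sentences(documents):
    """Greedily fill inputs up to MAX_LEN tokens, crossing document boundaries."""
    inputs, current = [], []
    for doc in documents:                      # doc = list of sentence strings
        for sentence in doc:
            tokens = sentence.split()          # stand-in for a real tokenizer (assumption)
            if current and len(current) + len(tokens) > MAX_LEN:
                inputs.append(current)
                current = []
            current += tokens
    if current:
        inputs.append(current)
    return inputs

def pack_doc_sentences(documents):
    """Same greedy filling, but never mix sentences from different documents."""
    inputs = []
    for doc in documents:
        current = []
        for sentence in doc:
            tokens = sentence.split()
            if current and len(current) + len(tokens) > MAX_LEN:
                inputs.append(current)
                current = []
            current += tokens
        if current:
            inputs.append(current)             # inputs near a document end stay shorter
    return inputs
```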
Then the next thing they investigate is batch size. Batch size seems to be pretty interesting for these large models, in that they love large batch sizes, and they actually explore batch sizes from 512, as the smallest one, up to 8,000. They do this in a data-parallel way, where they have many machines with many GPUs, they parallelize the data and then they accumulate the gradients of all of these different samples, and so they can go up to a batch size of about 8k. And they find generally that the 2,000 batch size — as you can see, for perplexity lower is better and for the other numbers higher is better — helps to improve the performance if you control for data set size, so the number of times you go through the data set is the same. Going with a larger batch size seems to help up to a point; the 2,000 seems to be the best they found. So again, a marginal improvement you can make by training with larger batch sizes. And then the last thing they've looked at is the text encoding, so how do you encode text. The comparison here is basically between byte-pair encoding and WordPiece encoding, which decide how large your vocabulary is. And as I understand it, they didn't find much of a difference between the different implementations of the text encoding, so they decide — I think — to go with byte-pair encoding instead of word pieces. All right, so they combine all of this into RoBERTa, which is a robustly optimized BERT approach, and they say RoBERTa is trained with dynamic masking (what they showed first), full-sentences without the next segment prediction loss, large mini-batches, and a larger byte-level byte-pair encoding, as well as, of course, their collection of training data. And then here they also investigate how long to pre-train. So if you look at the original BERT models or the XLNet models and then compare them to RoBERTa: with the original data they already beat BERT, yet they do not yet beat XLNet with that. If they add data, they get even better, actually mostly on par with XLNet. If they pre-train longer, they get even better. And if they pre-train even longer — here are the numbers of steps — so if your number of steps matches the number of steps that XLNet does, with their additional data, then you outperform XLNet as well. So this is kind of just an overview of this. And they evaluate on other downstream tasks, and they basically show that in most of them they can reach state-of-the-art performance, or exceed it, with their approach. And in conclusion they basically say, well, this only shows that the gains that these other models make, and the reasons why they make gains, may be questionable: if you simply pre-train BERT in a better way, you can reach the same performances. So I think the end is not reached yet. Most of all, they publish their code and their data, I believe. I have not looked into this, but definitely check out their repository where this is implemented. Seems pretty easy, seems pretty straightforward. And that was it for me. Bye bye.
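As a side note on the batch sizes mentioned above: effective batches of thousands of sequences are typically reached by accumulating gradients over many smaller forward/backward passes, per GPU and across GPUs. Below is a minimal runnable PyTorch sketch of gradient accumulation with a toy model; the model, data and hyperparameters are placeholders of mine, not anything from the paper.

```python
import torch

# Toy stand-ins; in RoBERTa this would be the transformer, the masked-LM loss
# and the real data loader (all assumptions here, just to make the loop runnable).
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()
loader = [(torch.randn(4, 10), torch.randint(0, 2, (4,))) for _ in range(32)]

accumulation_steps = 8                 # 8 micro-batches of 4 -> effective batch size 32

optimizer.zero_grad()
for step, (inputs, labels) in enumerate(loader):
    loss = loss_fn(model(inputs), labels) / accumulation_steps  # scale so gradients average out
    loss.backward()                                             # gradients accumulate in .grad
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                                        # one update per effective batch
        optimizer.zero_grad()
```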
[ { "end": 6.84, "start": 0, "text": " Hello everyone, today we're looking at Roberta, a robustly optimized BERT pre-training approach" }, { "end": 11.96, "start": 6.84, "text": " by Yin-Han Liu at AL, mainly of Facebook research." }, { "end": 18.84, "start": 11.96, "text": " So this paper is a pretty short, pretty simple paper and the main premise is we've seen a" }, { "end": 28.44, "start": 18.84, "text": " number of improvements over the initial BERT paper where different pre-training of the" }, { "end": 35.92, "start": 28.44, "text": " transformer architecture or extensions of the architecture have been shown to have better" }, { "end": 38.8, "start": 35.92, "text": " performance than the original BERT model." }, { "end": 48.56, "start": 38.8, "text": " And this paper basically says if you get the design choices right, then BERT is able to" }, { "end": 53.28, "start": 48.56, "text": " basically be on par or exceed all of these other methods so far." }, { "end": 60.28, "start": 53.28, "text": " So they're basically exploring design choices in the pre-training and training of BERT." }, { "end": 67.84, "start": 60.28, "text": " Alright, so if you don't know what BERT is, by the way, I have made a video about BERT," }, { "end": 72.08, "start": 67.84, "text": " I've also made a video about transformers." }, { "end": 81.44, "start": 72.08, "text": " In very quick terms, BERT is a language neural network architecture that takes as input text" }, { "end": 90.4, "start": 81.44, "text": " such as this kind of thing you see here, text such as that, and it will kind of encode it" }, { "end": 99.12, "start": 90.4, "text": " out and it can do various things, for example, classify it into certain categories or kind" }, { "end": 106.03999999999999, "start": 99.12, "text": " of segment it, extract answers from questions and so on." }, { "end": 111.92, "start": 106.04, "text": " The whole thing is pre-trained with what's called a masked language model objective where" }, { "end": 113.52000000000001, "start": 111.92, "text": " you don't need labels to train it." }, { "end": 118.96000000000001, "start": 113.52000000000001, "text": " So in a masked language model objective, you basically mask out certain words during training" }, { "end": 126.32000000000001, "start": 118.96000000000001, "text": " and then you ask BERT to reconstruct these words from the surrounding information." }, { "end": 133.56, "start": 126.32000000000001, "text": " And that kind of has given some improvements in the original BERT paper, but subsequent" }, { "end": 138.88, "start": 133.56, "text": " papers have claimed that you can improve even more by using different pre-training objectives" }, { "end": 142.6, "start": 138.88, "text": " and so on such as Excel, NET." }, { "end": 150.52, "start": 142.6, "text": " But here, these researchers basically explore different things." }, { "end": 156.48000000000002, "start": 150.52, "text": " So they use a regular BERT architecture, that's what they describe here, so they use both" }, { "end": 167.07999999999998, "start": 156.48, "text": " the BERT base, the 12-layer, as well as the 24-layer BERT that has originally been described." }, { "end": 176.83999999999997, "start": 167.07999999999998, "text": " They use masked language modeling as a pre-training objective and they explore the necessity of" }, { "end": 180.79999999999998, "start": 176.83999999999997, "text": " this next sentence prediction loss that has been part of BERT." 
}, { "end": 187.36, "start": 180.8, "text": " So along with the masked sentence modeling, BERT has also had an objective where if you" }, { "end": 194.10000000000002, "start": 187.36, "text": " input a piece of, actually you input two pieces of text, two sentences such as this, these" }, { "end": 199.92000000000002, "start": 194.10000000000002, "text": " are two sentences, and BERT has to decide if the second sentence follows the first sentence" }, { "end": 205.04000000000002, "start": 199.92000000000002, "text": " in the corpus or in 50% of the cases, the second sentence is sampled from a different" }, { "end": 206.12, "start": 205.04000000000002, "text": " document." }, { "end": 212.76, "start": 206.12, "text": " This kind of is, so the original paper argued this is necessary to incorporate long-distance" }, { "end": 215.8, "start": 212.76, "text": " relationships between text." }, { "end": 222.6, "start": 215.8, "text": " Yeah, here the NSP objective was designed to improve performance on downstream tasks" }, { "end": 227.36, "start": 222.6, "text": " such as natural language inference." }, { "end": 231.24, "start": 227.36, "text": " And this paper kind of explores the necessity of that loss." }, { "end": 237.44, "start": 231.24, "text": " In terms of optimization, there is of course kind of a pre-training scheme and then a training" }, { "end": 245.32000000000002, "start": 237.44, "text": " scheme using Adam here with certain parameters and also this paper explores the use of these" }, { "end": 247.28, "start": 245.32000000000002, "text": " parameters." }, { "end": 254.56, "start": 247.28, "text": " Lastly you have data and of course these models sometimes they're trained on different data" }, { "end": 259.76, "start": 254.56, "text": " and that's why comparing them makes it a bit harder to compare them because the pre-training" }, { "end": 265.64, "start": 259.76, "text": " is done on differently sized and differently structured data." }, { "end": 271.4, "start": 265.64, "text": " This paper also tries to investigate the influence of the training data and especially what happens" }, { "end": 275.28, "start": 271.4, "text": " if we keep the training data constant." }, { "end": 287.8, "start": 275.28, "text": " So all right, so they implement BERT, they re-implement BERT and then they fix some hyperparameters" }, { "end": 291.88, "start": 287.8, "text": " while they tune others and first of all the data set." }, { "end": 295.28000000000003, "start": 291.88, "text": " So they use different data sets." }, { "end": 301.44, "start": 295.28000000000003, "text": " The original BERT has been trained on this Book Corpus and Wikipedia, English Wikipedia" }, { "end": 304.52, "start": 301.44, "text": " data set which is 16 gigabytes large." }, { "end": 311.92, "start": 304.52, "text": " Now this paper here collects a, what's this CC News data set which is the subset of the" }, { "end": 316.36, "start": 311.92, "text": " Common Crawl News data set which is all in." }, { "end": 326.2, "start": 316.36, "text": " So the subset is the English portion and that's 76 gigabytes which is on par with for example" }, { "end": 330.16, "start": 326.2, "text": " what GPT-2 used I believe." 
}, { "end": 338.8, "start": 330.16, "text": " So this is a very large training set and kind of comparing this original data to the large" }, { "end": 344.40000000000003, "start": 338.8, "text": " corpus, kind of what influence that is should make very clear what the influence of more" }, { "end": 347.64, "start": 344.4, "text": " training of more pre-training data is." }, { "end": 356.03999999999996, "start": 347.64, "text": " They also have a number of other corpora open web text as well as here I believe there's" }, { "end": 358.12, "start": 356.03999999999996, "text": " one more stories, yes." }, { "end": 366, "start": 358.12, "text": " So these are also pretty sizable but these are like, yeah these are like, have very specific" }, { "end": 369.79999999999995, "start": 366, "text": " schemas to them." }, { "end": 377.28000000000003, "start": 369.8, "text": " Then the evaluation here happens on several different kind of downstream tasks." }, { "end": 383.6, "start": 377.28000000000003, "text": " So the idea is you first you pre-train this BERT model on with the masked language modeling" }, { "end": 392.64, "start": 383.6, "text": " and so on and then you have this GLU task which is actually a collection of nine tasks" }, { "end": 402.24, "start": 392.64, "text": " and you have some other tasks such as SQUAD which is a question answering task and here" }, { "end": 408.4, "start": 402.24, "text": " RACE I don't even know what that is in particular but suffice to say these are kind of downstream" }, { "end": 410.08, "start": 408.4, "text": " NLP tasks." }, { "end": 417.47999999999996, "start": 410.08, "text": " The paper isn't about these downstream tasks but it's just a way to measure how well your" }, { "end": 425, "start": 417.48, "text": " pre-training worked if then you can fine tune on such a task and you get a good performance." }, { "end": 429.72, "start": 425, "text": " But what the tasks are in particular isn't too important." }, { "end": 433.88, "start": 429.72, "text": " Alright so here we get into the meat of the paper." }, { "end": 440.16, "start": 433.88, "text": " First they decide on what they call static versus dynamic masking." }, { "end": 446.16, "start": 440.16, "text": " So in the original BERT paper whenever they do masked language modeling they take a piece" }, { "end": 451.40000000000003, "start": 446.16, "text": " of text and they basically replicate it a bunch of times because they want to iterate" }, { "end": 457.6, "start": 451.40000000000003, "text": " through training data a bunch of times and then in each iteration they mask out different" }, { "end": 461.24, "start": 457.6, "text": " tokens." }, { "end": 468.40000000000003, "start": 461.24, "text": " They compare this to what's called dynamic masking." }, { "end": 471.28000000000003, "start": 468.40000000000003, "text": " So this is static masking." }, { "end": 480.96, "start": 471.28, "text": " Dynamic masking would be where you basically on the fly generate your mask." }, { "end": 484.41999999999996, "start": 480.96, "text": " You don't pre-compute it and save it you on the fly generate it." 
}, { "end": 490.91999999999996, "start": 484.41999999999996, "text": " This allows you to go through kind of more or less of the data as you want and when you" }, { "end": 498.67999999999995, "start": 490.91999999999996, "text": " encounter the same sample twice even though you replicate it in the original BERT model" }, { "end": 503.56, "start": 498.68, "text": " you could still encounter it twice if you train for longer than the number of replications." }, { "end": 511.08, "start": 503.56, "text": " Then you basically see the exact same mask again and the dynamic masking is actually" }, { "end": 513.2, "start": 511.08, "text": " much more useful." }, { "end": 514.32, "start": 513.2, "text": " It's much more ad hoc." }, { "end": 517.62, "start": 514.32, "text": " Each time you see a sample you generate the mask on the fly." }, { "end": 522.24, "start": 517.62, "text": " So they compare this here and they see that there is a marginal improvement so here higher" }, { "end": 533.04, "start": 522.24, "text": " is better marginal improvement in two tasks and a less marginal decrease in performance" }, { "end": 534.04, "start": 533.04, "text": " in one task." }, { "end": 542.94, "start": 534.04, "text": " So they decide that this dynamic masking is of use." }, { "end": 549.74, "start": 542.94, "text": " Second thing they investigate is the kind of input format and this next sentence prediction." }, { "end": 555.92, "start": 549.74, "text": " So as I already said the original BERT training objective always gets two sentences next to" }, { "end": 561.86, "start": 555.92, "text": " each other and has to decide if the second one follows from the first one." }, { "end": 569.16, "start": 561.86, "text": " Actually it doesn't it observes two concatenated document segments which are either sampled" }, { "end": 577.58, "start": 569.16, "text": " contiguously from the same document or from distinct documents and this is half and half." }, { "end": 581.62, "start": 577.58, "text": " So in addition to the masked language modeling the model is trained to predict whether the" }, { "end": 588.9000000000001, "start": 581.62, "text": " observed document segments come from the same or distinct document via an auxiliary next" }, { "end": 592.48, "start": 588.9000000000001, "text": " sentence prediction loss." }, { "end": 598.26, "start": 592.48, "text": " They investigate different ways of including or excluding this loss." }, { "end": 606.08, "start": 598.26, "text": " So first is what they define if here if it's plus NSP that means that this particular thing" }, { "end": 610.84, "start": 606.08, "text": " includes the next sentence or next segment prediction loss." }, { "end": 620.72, "start": 610.84, "text": " So they have segment pair plus NSP which means that each input has a pair of segments and" }, { "end": 628.5200000000001, "start": 620.72, "text": " these segments now the difference the distinction between a segment and a sentence is important" }, { "end": 635.36, "start": 628.5200000000001, "text": " where the sentence is really a natural sentence a segment can actually be multiple natural" }, { "end": 641.44, "start": 635.36, "text": " sentences which is what the original BERT does." 
}, { "end": 648.6800000000001, "start": 641.44, "text": " So as long as the combined length is less than 512 tokens there can also be multiple" }, { "end": 654.5600000000001, "start": 648.6800000000001, "text": " sentences but there's clearly two segments and you have to decide if they follow after" }, { "end": 656.6800000000001, "start": 654.5600000000001, "text": " each other or not." }, { "end": 661.96, "start": 656.6800000000001, "text": " The second thing they try is the same thing so the next segment prediction but now it's" }, { "end": 673, "start": 661.96, "text": " just two sentences it's just natural sentences so it must be one sentence a period and then" }, { "end": 678.72, "start": 673, "text": " the next sentence a period and you have to distinguish these two if they follow or not." }, { "end": 687, "start": 678.72, "text": " Then they investigate full sentences which is they leave away this next segment prediction" }, { "end": 695.04, "start": 687, "text": " loss and they simply fill up the 512 tokens with text from the corpus." }, { "end": 700.68, "start": 695.04, "text": " So each input is packed with full sentences sampled continuously from one or more documents" }, { "end": 706.48, "start": 700.68, "text": " and the one or more document means if you so if you sample text right you sample here" }, { "end": 711.82, "start": 706.48, "text": " text you put all of this in the thing and you are at the end of a document you simply" }, { "end": 717.4000000000001, "start": 711.82, "text": " continue with the next one and go on until you have the 512 tokens." }, { "end": 725.2800000000001, "start": 717.4000000000001, "text": " So you basically fill fill fill until you have 512 tokens and that's this variant here." }, { "end": 729.96, "start": 725.2800000000001, "text": " And then in the last variant you do the same thing this called dock sentences but you basically" }, { "end": 731.5200000000001, "start": 729.96, "text": " you stop at the end." }, { "end": 738.44, "start": 731.5200000000001, "text": " So even so you put all of this in your state and if you here you stop and then you have" }, { "end": 745.5200000000001, "start": 738.44, "text": " to be content by simply padding the rest of the 512 tokens or something like this so you" }, { "end": 752.6800000000001, "start": 745.5200000000001, "text": " don't have as much data but the all the text that you have in one sample is actually continuous" }, { "end": 755.1800000000001, "start": 752.6800000000001, "text": " text from the same document." }, { "end": 760.1, "start": 755.1800000000001, "text": " So they pit these four things against each other." }, { "end": 776.8000000000001, "start": 760.1, "text": " This is this table here and as you can see here the best thing is this dock sentences" }, { "end": 785.52, "start": 776.8000000000001, "text": " thing so on these things followed by the full sentences encoding." 
}, { "end": 794.68, "start": 785.52, "text": " So there's some some ambiguities here but in general you can kind of rank them as best" }, { "end": 803.92, "start": 794.68, "text": " second best and then here third best and fourth best and they conclude that this next segment" }, { "end": 812.8, "start": 803.92, "text": " or next sentence prediction loss here is more hurtful than helpful in the ways we see here" }, { "end": 819.8599999999999, "start": 812.8, "text": " and they say even though this is most most effective they in their case they'd rather" }, { "end": 824.28, "start": 819.8599999999999, "text": " go with this one because it's well I guess easier to implement you get more data through" }, { "end": 832, "start": 824.28, "text": " the model in the same time and the performance decrease isn't that much." }, { "end": 837.18, "start": 832, "text": " So but it's pretty interesting to see that this next next segment next sentence prediction" }, { "end": 847.0799999999999, "start": 837.18, "text": " isn't super super helpful in actuality." }, { "end": 855.56, "start": 847.0799999999999, "text": " Here so removing the NSP loss matches or slightly improves the downstream task performance." }, { "end": 859.68, "start": 855.56, "text": " This is yeah in contrast to what the original BERT authors found but you have to keep in" }, { "end": 868.04, "start": 859.68, "text": " mind this is also on hasn't a bunch of other changes in." }, { "end": 875.8, "start": 868.04, "text": " Then next thing they investigate batch size so batch size sorry batch size pretty seems" }, { "end": 882.4, "start": 875.8, "text": " to be pretty interesting for these large models in that they love large batch sizes and they" }, { "end": 891.68, "start": 882.4, "text": " actually explore batch sizes 512 here as a smallest one and they go up to 8000 so this" }, { "end": 895.88, "start": 891.68, "text": " they do this actually in a in a data parallel way where they have many many machines with" }, { "end": 904.3199999999999, "start": 895.88, "text": " many GPUs and they parallelize the data and then they accumulate the gradient of all of" }, { "end": 909.0799999999999, "start": 904.3199999999999, "text": " these different samples and so they can go up to a batch size of about 8k and they find" }, { "end": 916.88, "start": 909.08, "text": " generally that the 2000 batch size here as you can see helps to improve the so perplexity" }, { "end": 925.2, "start": 916.88, "text": " lower is better and the other numbers higher is better helps to to improve the performances" }, { "end": 929.5200000000001, "start": 925.2, "text": " if you control the control for data set size so the number of times you go through the" }, { "end": 936.44, "start": 929.5200000000001, "text": " data set is the same but if you go with a larger batch size that seems to help up to" }, { "end": 943.6800000000001, "start": 936.44, "text": " a point here the 2000 seems to be the best they found so again marginal improvement you" }, { "end": 951, "start": 943.6800000000001, "text": " can make by training with larger batch sizes and then this the last thing they've looked" }, { "end": 957.32, "start": 951, "text": " at is actually is text encoding so how do you encode text and the the pit here is basically" }, { "end": 968.84, "start": 957.32, "text": " between byte pair encoding or word piece encoding to that to to decide how large your vocabulary" }, { "end": 975.96, "start": 968.84, "text": " is basically and as I understand it they didn't find a much of 
a difference between the different" }, { "end": 984.6800000000001, "start": 975.96, "text": " implementations of the text encoding so they decide they go with they decide to go with" }, { "end": 991.04, "start": 984.68, "text": " one I don't even remember which one I think they go decide to go with byte pair encoding" }, { "end": 998.4, "start": 991.04, "text": " instead of word pieces all right so they combine all of this into Roberta which is a robustly" }, { "end": 1009.12, "start": 998.4, "text": " optimized Bert approach and they say Roberta is trained with dynamic masking so what they" }, { "end": 1016.96, "start": 1009.12, "text": " showed first full sentence without the next segment prediction loss large mini batches" }, { "end": 1024.08, "start": 1016.96, "text": " a larger byte level byte pair encoding as well as of course their collection of training" }, { "end": 1038.28, "start": 1024.08, "text": " data and then here they also investigate how long to pre train so if you look at the original" }, { "end": 1045.2, "start": 1038.28, "text": " Bert models or the XL net models and then compare it to Roberta so Roberta this is the" }, { "end": 1053.3999999999999, "start": 1045.2, "text": " original data and they already beat Bert yet they do not they do not yet beat Excel net" }, { "end": 1062.78, "start": 1053.3999999999999, "text": " with that so if they add data they get even better actually on par mostly with the with" }, { "end": 1069.28, "start": 1062.78, "text": " Excel net if they pre train longer they get even better and if they want to say pre train" }, { "end": 1075.96, "start": 1069.28, "text": " even longer right so that here's the the number of steps if your number of steps then match" }, { "end": 1085.8799999999999, "start": 1075.96, "text": " the number of steps that the Excel net does with the same additional data then or with" }, { "end": 1095.64, "start": 1085.88, "text": " their additional data then you outperform Excel net as well so this this kind of just" }, { "end": 1104.7600000000002, "start": 1095.64, "text": " an an overview of this and they evaluate on other downstream tasks and they basically" }, { "end": 1115.8600000000001, "start": 1104.7600000000002, "text": " show that in most of them they can reach state-of-the-art performance or exceed it with their approach" }, { "end": 1123.6, "start": 1115.86, "text": " and in conclusion they basically say well this only shows that kind of the the gains" }, { "end": 1128.4799999999998, "start": 1123.6, "text": " that these other models make and the reasons why they make gains may be questionable if" }, { "end": 1135.1999999999998, "start": 1128.4799999999998, "text": " you simply pre train Bert in a better way you can reach the same performances so I think" }, { "end": 1142.8, "start": 1135.1999999999998, "text": " the end is not reached yet most of all they publish their code their data I believe I" }, { "end": 1148.8799999999999, "start": 1142.8, "text": " have not looked into this but definitely check out their repository where this is implemented" }, { "end": 1176.88, "start": 1148.88, "text": " seems pretty easy seems pretty straightforward and that was it for me bye bye" } ]
AR3W-nfcDe4
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Auditing Radicalization Pathways on YouTube
[ "Science & Technology" ]
[ "machine learning", "data science", "empirical", "study", "youtube", "radicalization", "alt-right", "alt-lite", "idw", "intellectual dark web", "alt right", "alt lite", "jordan peterson", "joe rogan", "pipeline", "recommended", "network", "diffusion", "social graph", "infected", "ideology", "radical", "analysis", "suggested", "filter bubble", "fringe" ]
This paper claims that there is a radicalization pipeline on YouTube pushing people towards the Alt-Right, backing up their claims with empirical analysis of channel recommendations and commenting behavior. I suggest that there is a much simpler explanation of this data: A basic diffusion process. Abstract: Non-profits and the media claim there is a radicalization pipeline on YouTube. Its content creators would sponsor fringe ideas, and its recommender system would steer users towards edgier content. Yet, the supporting evidence for this claim is mostly anecdotal, and there are no proper measurements of the influence of YouTube's recommender system. In this work, we conduct a large scale audit of user radicalization on YouTube. We analyze 331,849 videos of 360 channels which we broadly classify into: control, the Alt-lite, the Intellectual Dark Web (I.D.W.), and the Alt-right ---channels in the I.D.W. and the Alt-lite would be gateways to fringe far-right ideology, here represented by Alt-right channels. Processing more than 79M comments, we show that the three communities increasingly share the same user base; that users consistently migrate from milder to more extreme content; and that a large percentage of users who consume Alt-right content now consumed Alt-lite and I.D.W. content in the past. We also probe YouTube's recommendation algorithm, looking at more than 2M million recommendations for videos and channels between May and July 2019. We find that Alt-lite content is easily reachable from I.D.W. channels via recommendations and that Alt-right channels may be reached from both I.D.W. and Alt-lite channels. Overall, we paint a comprehensive picture of user radicalization on YouTube and provide methods to transparently audit the platform and its recommender system. Authors: Manoel Horta Ribeiro, Raphael Ottoni, Robert West, Virgílio A. F. Almeida, Wagner Meira https://arxiv.org/abs/1908.08313 YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Minds: https://www.minds.com/ykilcher BitChute: https://www.bitchute.com/channel/10a5ui845DOJ/
Hi there! Today we're going to look at Auditing Radicalization Pathways on YouTube by Manoel Horta Ribeiro et al. So this paper is a bit different than the ones we're usually looking at, but since I'm a YouTuber and this is in the data science realm, I thought it fits neatly. So yeah, we'll have a look. And this is mostly going to be an analysis and my opinion on it, so take that for what it is. This is, in my opinion, a paper where you can see very well what it looks like when you deceive yourself: you have a hypothesis of something and then only collect data that matches it, you don't think of simpler solutions that explain the data, and therefore you don't think of experiments that could differentiate the simpler solutions from what you propose. So it's a good example of how you can trick yourself into believing you found something. And this isn't specifically about YouTube or anything; this has happened to me so many times. It always pays off to take a step back and say, is there a simpler explanation for what's happening? And that is what I think is exactly happening here. So I'll present to you their hypothesis, and then I'll present what I think is going on and a model that explains the data much more easily and simply, and actually better. So let's dive in. This paper basically claims the following. On YouTube there are channels, and channels are, you know, independent channels. They make videos, and you can actually arrange these channels. So each dot here is a channel. You can arrange these channels in kind of a network. And you can claim two channels are connected, possibly with some connection strength. For simplicity, they can be connected if, for example, their topics are similar, if they reference each other, if YouTube recommends one from the other, or if the same users watch those channels or the videos of those channels. There are a number of metrics by which you could connect channels, but all of them will turn out similar, i.e. they will give you a similar structure of connected channels. Oh, that's connected twice. So you can build a graph of how these channels are connected, and what you can do then is cluster them. You don't have to build a graph to cluster them, but you can cluster the channels, and what will emerge are parts of the graph that are very well connected. Right here, this might be connected with this and with this: parts of the graph that are well connected within and more sparsely connected to others, which also have a larger distance in between them. So if you start out from one channel and you're watching recommended videos and recommended channels and so on, you'll stroll along here, and you will get to these things much faster than to the other things. These are usually called communities in this kind of social network analysis. So on YouTube, you know, there is a community for makeup, there's a community for sports, within sports there is a community for soccer, there's one for basketball and so on. So these are all communities that you can discover by clustering. This paper mainly deals with three communities. The first of all is the IDW, which is the intellectual dark web. They discuss this here.
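As a toy illustration of the community clustering just described — my own sketch, not the methodology of the paper — one could build a channel graph from shared commenters and extract communities with an off-the-shelf modularity method. The channel names, the edge rule, and the choice of networkx's greedy modularity algorithm are all assumptions for illustration.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy data: channel -> set of commenting users (made up, not the paper's dataset)
commenters = {
    "channel_a": {"u1", "u2", "u3"},
    "channel_b": {"u2", "u3", "u4"},
    "channel_c": {"u7", "u8"},
    "channel_d": {"u8", "u9"},
}

G = nx.Graph()
G.add_nodes_from(commenters)
channels = list(commenters)
for i, a in enumerate(channels):
    for b in channels[i + 1:]:
        overlap = len(commenters[a] & commenters[b])
        if overlap > 0:                      # connect channels sharing at least one commenter
            G.add_edge(a, b, weight=overlap)

# Communities = densely connected groups of channels
print([sorted(c) for c in greedy_modularity_communities(G)])
```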
So the intellectual dark web they describe as a group of individuals that are in a rolling conversation with each other about topics that are, let's say, usually kind of difficult to talk about, such as gender differences, or intelligence research in certain areas, or even, you know, regular politics. The intellectual dark web are a wide variety of people that are basically conversing with each other about topics. The description is a bit vague, but the main aspect is conversation, and maybe topics that are on the edge of what's acceptable to talk about. But the opinions on these topics range widely. The second group is the alt-right. And the alt-right here is defined as ethno-nationalists; for example — here is an example — fringe ideas such as a white ethno-state, white supremacist ideology and so on. So specifically ethno-nationalists: nationalists who think nations should be organized along the lines of ethnicity. And the goal of the paper is actually to show that there is a kind of dangerous pipeline on YouTube that will drive people to the alt-right and into these radical ideas of the alt-right. Kind of in between is the alt-light, which is here defined as civic nationalists, which, as I understand it, simply means that people should be organized into nations not along ethnicity, but should just organize themselves into sovereign communities. And it would be more of your libertarian, classically liberal people, whereas the alt-right would be more of your, let's say, authoritarian right-wing person. So these are the three communities, and they have a fourth community, which is what they call a control group. The control group consists of what they say are kind of mainstream channels on YouTube, simply to differentiate them from these three and to see what's going on with them and whether there is a difference. So this is the setup, and as I said, the hypothesis is the following. People go on YouTube — so YouTube is here — people come on YouTube, they go around, they explore a bit, and all of a sudden they find IDW videos. These are recommended by YouTube on a fairly regular basis. That may mean they're interesting; people find them interesting and so on. And then from the IDW there are recommendations and links to the alt-light. So as I read this paper, there is kind of an undertone that the IDW and the alt-light are still okay: they discuss ideas that are sometimes political and so on, but the real worry is the alt-right, the kind of radical right-wing ethnic nationalists. And I mean, yes, that formulation I can agree with. And then they claim: so you find the IDW, and they have links — I mean recommendations and so on — to the alt-light. And from the alt-light, and to a certain degree also from the IDW, you can then find the alt-right. So even though a user that goes on YouTube at first isn't likely to find alt-right videos, because it's fringe, it's extreme and so on, through the YouTube recommendation algorithm — basically by going to the IDW, finding this, then from there finding the alt-light, and from there and from the IDW finding the alt-right — they claim that there is this pathway of radicalization that pushes people towards the alt-right. And that's their hypothesis. And they claim that they have evidence to support this, and I claim that there is a simpler solution, namely...
So first of all, let me state I don't like the alt-right. I think their ideas are despicable. It should go without saying, though I have said it now, so you know, just as a disclaimer: I'm not defending anyone here. I'm simply saying this paper has a simpler explanation for their data. Namely, what I think is happening here is: YouTube again is channels. Each dot here is a channel. Channels can be clustered as such, right there, as we saw before. I'm just drawing more of them right now: channels, channels, channels. So what I think is happening is there is a control group, what they call the control group. It's over here, it's large — control, right? It's a bunch of channels, which is kind of mainstream media. Then over here there is, let's say, alternative media, which all of these three groups belong to. So at some point you will have the IDW; then maybe a bit further away from the control group, but very close to the IDW, you would have the alt-light; and very close to the two, maybe here, you would have the alt-right. So notably, in my model, the IDW and the alt-light are kind of close together in terms of comparative distance. So if you cluster these channels by, let's say, audience or topics and so on, it will turn out that all of these three are far, far away from the control group. Those two are very close to each other, and then here there is some distance — how much distance is a question, but of course it's going to be a smaller distance than the distance to the control group. I mean, I could draw the alt-right differently; maybe a more accurate picture would be something like this. So whatever, the details don't matter, but the distance here is smaller than the distance to the control group. In this model a second thing is also important, namely that the alt-right, as you can see here, is much, much smaller than the IDW and the alt-light. And these again are much smaller than the control group. And this, I think, accounts for most of it: the distance relations between these and the size of the clusters. With size I mean mainly number of channels and also audience. This accounts for the data better than their model. So just keep this in mind. And my model of course doesn't include any kind of pipeline like they suggest. So first of all they go ahead and they collect channels. They collect data for this, and, you know, we could go over how they collect the data and criticize that and so on. They do human annotation and they start from already published reports and so on, which themselves can be criticized. I'm not gonna go into their data collection methodology. It can have mistakes, but then any collection methodology can have mistakes. What they end up with is a number of channels, and here are the top channels from each category: alt-right, alt-light, intellectual dark web, and control. So already here you can see pretty clearly the model I have in mind. They acknowledge all of this, by the way. Look at the size of the alt-right channels, the biggest ones, compared to the size of the alt-light and the intellectual dark web: they're much, much smaller in number of views. And then compare this to the size of the control group. The control group is again larger than the other groups. So just keep that in mind. Second thing to keep in mind: look at these channels. Maybe you know some of them. Joe Rogan, Sargon of Akkad, Paul Joseph Watson, Styxhexenhammer.
These are YouTubers. These are individuals making YouTube clips, creating content for YouTube, being on this platform. Whereas if you compare it with the control group, what's there? Vox, GQ, Wired, Business Insider. These aren't YouTubers. These are websites or traditional media companies, or their own kind of blogs and so on, that have a YouTube channel, where YouTube is one of the outlets of the media company. So I think there's a giant discrepancy here in the control group that can also explain some of the data that you see. So keep that in mind. They say they don't try to capture the user dynamic with the control group, but I think that there are many problems with this control group, including the fact that these are kind of traditional mainstream media that just have YouTube as an outlet. Moreover, a lot of these, like Vox or Vice, are clickbait media and rage-bait media; that has worked as a business model for a number of years, but the algorithms are becoming more attuned to clickbait, and these are crashing fast. Whereas the more YouTuber-type people are not as susceptible to the abolishment of clickbait. Alright, so this is the data. They have all these channels, they have all these videos, and they first of all give some stats on it. Here you see, on the bottom is always the year. So they do this over time, and you see the active channels, which are channels that have uploaded videos in some time window. See, the control group again is larger, but has started to flatten out in the last few years, whereas these communities are relatively flourishing. Another interesting point is that the paper somehow tries to tie this to the election of Donald Trump in 2016, but I think this is just kind of in there to gain relevance. A lot of these trends, you'll see, already start before that. So the start of the rise here, if you see these bumps here and so on — a lot of them start before 2016. So as we go through this, make up your own mind on how much this is actually tied to the election or not. I think it's much more the years when clickbait started to go down as a business model. Never mind though. So the active channels are growing, though the control group is not growing as much. Videos published: even though the control group isn't growing so much, they still publish the most videos. But you can see, generally the site is growing, generally YouTube is growing. Like counts: and here you see something interesting starting to happen, namely these communities, especially the alt-light and the intellectual dark web, are starting to catch up. And this is one of the things that the paper also states: if you look at, for example, comments per video, the alt-light and the intellectual dark web outperform the control group vastly. Also if you look at views per video and likes per video: the control group simply doesn't have an engaged audience. Which I think, first of all, is because they produce clickbait; second of all, they're just not that interesting; and third of all, they're not YouTubers. Like, this isn't their thing. They're simply an outlet. But yeah, that's just a bunch of metrics that they show here. The next table is a bit more interesting. In the next table they do a user intersection. So what they do is they collect all these videos and then they collect all the comments on these videos. And a comment of course always comes with a username; you need to be logged into YouTube to make a comment.
And they see which users comment on multiple videos, or on videos of multiple categories. And then they can look at how many users of category A also comment in category B, and vice versa. So they have two metrics here. The Jaccard similarity, which for two communities A and B is the number of users commenting on A and B divided by the number of users commenting on A or B. And the second, the overlap coefficient, is the number of users commenting on A and B divided by the minimum size of A and B. They say that the overlap coefficient is more useful to compare communities of different sizes. So we'll look at that. The top graphs are always the Jaccard similarity, and the bottom ones are the overlap coefficient. The first graphs, though, are the number of commenting users per year. And you already see that even though the control group has many more views and probably many more videos — it's much larger — the comments don't reflect that: the users of the alt-light and the intellectual dark web are much more engaged. Also comments per user — this is the cumulative distribution function: most people that comment on control group videos maybe comment once, but in these other communities they comment more. Self-similarity means year after year, so always compared to the year before, how many users are the same — so, how well do these communities retain users? And you can already see here, the control group is actually very bad at retaining users. It does have a high overlap coefficient, but it has a low Jaccard self-similarity, which, if you think of the formula of the Jaccard similarity, means that the intersection is small while the union is large, which means that last year's users aren't this year's users, basically. So they constantly have to appeal to new users because they're losing old users, because, well, I guess they're boring. Whereas the alt-light and intellectual dark web are much better at retaining users. Interestingly, the alt-right is not as good at retaining users as the other two. This could also be an effect of size: if your community is smaller, the users might wander away more quickly. But I think this already speaks against the radicalization pipeline. If YouTube were radicalizing people towards the alt-right, I think we would see the alt-right being on top of user retention. Then here they have intersections between communities. So green here is alt-light and IDW, while one blue is alt-right and alt-light and the other blue is alt-right and IDW. So basically the green is alt-light and IDW, and the blues are the other two. And we see that the overlap in terms of the overlap coefficient is similar. In terms of the Jaccard similarity, the alt-light and the IDW share users much more, which, in the picture I painted, makes sense if you think my model is valid. My model explains this very well, in that these two communities are quite close together and therefore share a similar user base. The alt-right is smaller and a bit further apart, therefore not as similar, though more similar than to the control group, which is the last graph. The last graph — sorry — the last graph is how similar these communities are to the control group, and here we see the IDW and the alt-light are kind of similar; the alt-right is not as similar, though in the overlap coefficient they're about the same. So the paper here claims: oh, look at the similarity, this is definitely radicalization.
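For reference, here are the two similarity metrics from this section written out as code; the inputs are the sets of commenting users of two communities. This mirrors the formulas as stated (Jaccard = |A ∩ B| / |A ∪ B|, overlap coefficient = |A ∩ B| / min(|A|, |B|)); the toy user sets are made up.

```python
def jaccard_similarity(a, b):
    """|A ∩ B| / |A ∪ B| over two sets of commenting users."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def overlap_coefficient(a, b):
    """|A ∩ B| / min(|A|, |B|); less sensitive to very different community sizes."""
    return len(a & b) / min(len(a), len(b)) if (a and b) else 0.0

idw_users = {"u1", "u2", "u3", "u4"}
alt_light_users = {"u3", "u4", "u5"}
print(jaccard_similarity(idw_users, alt_light_users))   # 0.4
print(overlap_coefficient(idw_users, alt_light_users))  # ~0.67
```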
So they don't claim yet that this is a radicalization pipeline, but they claim there's a higher similarity. If you actually look at the numbers, it's not so clear. I mean, here you're around 50% similarity, and here at the end you're also around 50% similarity with the control group. So this is within these groups, and this is with the control group. Also here, if I look at the rough mean, you're at whatever, 18 to 20 percent, and here you're maybe a bit lower, but you're also heading towards that. What it looks like to me, rather than there being a radicalization pipeline: if you look at the shape of this curve and where it starts, in 2013-2014 it starts to go up, and if you look at the shape of the other one, it's simply the same shape, delayed. And I mean, there's no reason why this graph wouldn't go up in the future and reach the exact same numbers as here. It seems the graph is simply shifted, which makes total sense if you think about how these communities are laid out. I'm gonna draw the same picture here: IDW, alt-light, and over here the control group. If you think they're arranged like that, and you simply think, well, YouTube is growing, users are growing, users are starting somewhere and then spreading out pretty much randomly: users start here and spread out, users start there and spread out, everywhere. There's a diffusion process going on, not in a particular direction like they claim. If there is just a diffusion process going on, what would you expect? You would expect users that started here to reach the IDW and the alt-right much sooner than they reach the control group, but ultimately, as the diffusion continues, all users will have commented on most videos if you run YouTube infinitely, and these numbers would go up. That's why the numbers go up: if you just let it run, the diffusion process goes along, and it simply takes longer to get from here all the way over there than it takes between these nearby communities. So to me, we're looking at a simple diffusion process here that is shifted in time, and that explains very well the discrepancy in the numbers, but also the shape of the curve, which is exactly the same but shifted. Their model does not explain the shape of the curve. They simply say, well, here it's 75% and here it's only 50%, which means these communities are kind of shipping users towards each other. So I think my explanation is the easier one. They concede this alone does not show that there is a pipeline; what they do next, however, is what they claim really shows that there is a pipeline. So what they do is define what they call an infection. They say, okay, for example in this row here, we're taking users that are alt-light users at the beginning, in this time window. So basically they only comment on alt-light videos during this time: discard all users that comment on anything else, just retain the ones that only comment on alt-light videos during this time. Then we're going to follow them over time and see how many of them have at least one comment on an alt-right video. So this is only directed from the community over here towards the alt-right. And then they call a user infected: specifically, if they comment on one or two alt-right videos, they're lightly infected, if they comment on three to five, they're mildly infected, and if they comment on more, they're severely infected.
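And just as a sketch of how this infection labelling could be computed: here is a minimal version, assuming the comment data comes as dictionaries mapping a username to (video, community) pairs; that layout and all names are my own assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch of the infection labelling: keep users whose comments in an initial
# window fall only on the chosen starting communities, then count how many distinct
# alt-right videos they comment on afterwards. Data layout is my own assumption.

def infection_level(n_alt_right_videos: int) -> str:
    # thresholds as described: 1-2 lightly, 3-5 mildly, more than 5 severely infected
    if n_alt_right_videos == 0:
        return "not infected"
    if n_alt_right_videos <= 2:
        return "lightly infected"
    if n_alt_right_videos <= 5:
        return "mildly infected"
    return "severely infected"

def classify_users(initial_comments, later_comments,
                   start_communities=frozenset({"alt-light"})):
    """Both comment arguments map a username to a set of (video_id, community) pairs."""
    results = {}
    for user, videos in initial_comments.items():
        communities = {community for _, community in videos}
        # keep only users who commented exclusively on the starting communities at first
        if not videos or not communities <= start_communities:
            continue
        later = later_comments.get(user, set())
        n_alt_right = len({vid for vid, community in later if community == "alt-right"})
        results[user] = infection_level(n_alt_right)
    return results
```

Depending on the row of their table, start_communities would be the alt-light, the IDW, or both.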
So as you can see, users starting from the alt-light, or from the IDW, or from both, will in some cases become infected over time. And since the tendencies between the groups are similar, I propose we simply look at the light infections here. So they say, okay, by 2018 about 8 to 10 percent of the users have become infected in these groups; you see about the same trajectories here, whereas in the control group it's less. Though honestly, I don't think it's that much less. Again, I think there's a normal diffusion process here. They do this similarly with the other groups, and to them this makes total sense: oh yeah, users that start in these communities migrate, get infected by the alt-right, go towards the alt-right, because you can find it so easily. To me, this simply looks like a normal diffusion process, and by the way, the control group isn't that much different. Here's what you need if you want to show that there is a pipeline in this direction: you need this exact same graph in the other direction, and you need to show that people who started in the alt-right do not go back in the same fashion towards the alt-light or the IDW, and especially do not go to the control group. You need to show this between each pair of these communities, and you need to show that the direction of infection is only in a single direction, namely towards radicalization. Otherwise you're just looking at a normal diffusion process between differently distant and differently sized groups. So they go on to analyze, and they ask, well, how much of the alt-right audience is made up of people that have been radicalized, that have been infected? This infection is their proxy for what they call radicalization. And if you become infected, then basically you're counted as part of the alt-right audience, or something, even though you might have commented something negative; you might have engaged with their ideas only to call them crap, but in any case, you're now infected. And they ask themselves how much of the alt-right audience consists of these infected users, so basically how much of the alt-right audience are people that in the past were not alt-righters, that had been exclusively commenting on alt-light or IDW videos. And they find, for example for the alt-light, that 23% of the alt-right audience are former alt-lighters that have now made at least one comment on an alt-right video. So their claim is: well, there is a sizable portion of the alt-right that at the beginning wasn't alt-right, that basically became infected, and therefore that kind of shows this radicalization pipeline, that the alt-right audience mainly consists of people that were not alt-right previously but have become so. And to me, again, this is simply a function of the size of these communities. If you think of this again, and you assume people start randomly somewhere on YouTube, what's the probability that you're going to start in the alt-right? Very small. So the natural size of the alt-right, before users go and migrate, is very tiny, so not many users are going to be what you would consider originally alt-righters. Basically, what this metric measures is where your first comment lands, and whether any of your subsequent comments land on an alt-right video.
If your first comment is not in the alt-right, then you become a potential candidate for infection, and if any later comment is on the alt-right, then you're infected. So what's the probability that your first comment is not alt-right? Well, you're gonna land somewhere on YouTube, YouTube is huge, the alt-right is very small, thus that probability is extremely high that you land elsewhere. And then you simply let people diffuse, let them diffuse, and some will end up in the alt-right. And since the alt-right is so small to begin with, most people that will comment at some point on an alt-right video will have made their first comment somewhere outside the alt-right videos. It's simply a numbers game: the alt-right is so small that this is virtually guaranteed. So what they find here is again simply evidence of a regular diffusion process between these differently sized groups, and the claims they make from it are just over the top. Again, their comparison to the control group: if you look at the numbers, they're actually not that different from the IDW numbers; they're different from the alt-light, here substantially different, but again that's simply a function of distance, in my opinion, between these clusters. Lastly, they look at the YouTube recommender system, and they say, okay, if we look at these videos and these channels, what other videos and what other channels are recommended? So if you have a video on YouTube, you have the video here and here you have the recommended videos. Similarly, when you have a channel, the channel, this person, can first of all have featured channels, where they say, look, these are channels that I find cool, go check them out, and then there are also related channels that are given by YouTube as recommendations. So here YouTube controls basically everything, here the creator controls part, and YouTube controls the other part. So they look at both. First, the channel recommendations, so these are both sections here, and they look at: if you start in the alt-light, how likely are you, if you do a random walk, to end up in the alt-right, or the intellectual dark web, or the control group after one step, two steps, three steps, four steps? The solid line is the random walker, and the dashed line is the distance if you were to go towards such a video in a targeted way, like the minimum number of clicks you need. And you can see here: if you start at the alt-light, after one or two steps the random walker has about a 2% chance to end up in the alt-right, about a 25% chance of ending up in the intellectual dark web, and about a 50% chance of ending up again in the alt-light. The scales here are really different, so it's very difficult to judge how it compares to the control group, which is basically at zero here. But to me, again, this is a reflection of the size of these communities, and I think it's a bit much to then claim, oh, these are reachable: a 2% chance of landing on an alt-right video, I'm not sure. But again, if you start from the control group, there's almost no chance you'll end up at an alt-right video, so I guess the comparison to the control group is okay. If you look at video recommendations, however, again: if you start at an alt-light video, after one step you are approximately 25% likely to be on an IDW video and a bit over 50% likely to stay on an alt-light video.
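The random-walker experiment itself is easy to picture in code. Here is a toy sketch, assuming a recommendation graph where each community's outgoing recommendations follow a fixed distribution; the transition numbers below are invented for illustration and are not the paper's measurements.

```python
import random

# Toy sketch of the random-walker experiment: start in one community, repeatedly pick
# one of its recommendations uniformly at random, and record where you are after k steps.
# The recommendation distributions below are invented for illustration only.

recommendations = {
    "alt-light": ["alt-light"] * 50 + ["IDW"] * 25 + ["control"] * 23 + ["alt-right"] * 2,
    "IDW":       ["IDW"] * 50 + ["alt-light"] * 25 + ["control"] * 24 + ["alt-right"] * 1,
    "alt-right": ["alt-right"] * 40 + ["alt-light"] * 30 + ["IDW"] * 25 + ["control"] * 5,
    "control":   ["control"] * 95 + ["IDW"] * 3 + ["alt-light"] * 2,
}

def random_walk(start: str, steps: int) -> str:
    node = start
    for _ in range(steps):
        node = random.choice(recommendations[node])
    return node

def landing_probabilities(start: str, steps: int, n_walks: int = 10_000) -> dict:
    counts = {}
    for _ in range(n_walks):
        end = random_walk(start, steps)
        counts[end] = counts.get(end, 0) + 1
    return {community: count / n_walks for community, count in counts.items()}

print(landing_probabilities("alt-light", steps=2))
```

After a couple of steps, the landing probabilities in this toy graph mostly reflect how large and how close the communities are, which is exactly the kind of effect I think we are seeing in the real numbers.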
However, compare this to the channels: you're super unlikely to end up at a control channel if you start at an alt-light channel, but in the video recommendations you actually also have about a 25% chance of ending up at a control group video, whereas, look at the scale here, you're only about 0.03% likely to end up at an alt-right video. And also here, look at this: if you start at an IDW video, the chance that you're going to end up in the control group is again super high, much higher than for an alt-light video, whereas with the channel recommendations this was completely turned around. So we see the alt-right completely loses when it comes to video recommendations, and mainly the control group gains compared to the channel recommendations. Here's what I think: I think this is due to this section here, where the creators have power, and also this section here, where YouTube is recommending. I think they're putting a lot of work into the video recommendations and not that much work into these channel recommendations, and by work I mean actually manually intervening and deciding what are good videos and bad videos. And the control group, there's probably big advertisement money in that, so they might be pushed up a bit in the video recommendations, since most people go by video recommendations; I've actually never used the channel recommendations feature. And with the channel recommendations, first of all the creator has power over part of it, and then YouTube may also not put as much work into these related channels. So both have the effect that, I would say, the data here, first of all, doesn't convince me of a radicalization pipeline; it simply convinces me that some communities are larger, some are smaller, and some are closer together than others. But second of all, this part down here, if you forget about the alt-right for a moment (they're irrelevant here), this part down here, compared to up here, maybe shows a bit of evidence of an algorithmic promotion of these mainstream media channels compared to how the communities are actually clustering, which I think this picture up here might capture much more accurately. So it's just kind of a funky thing in the data; and yeah, the alt-right is irrelevant to this part because they're just too small. So this is kind of my take on the recommendations and on whether this is a pipeline and so on, and I don't think so. You've now heard my idea and you've heard their idea; decide for yourself. But I think it's a good example of how, if you are convinced of an underlying mechanism, you're going to collect evidence in support of that mechanism. And if you catch yourself doing that, really, really think: isn't there an easier explanation for this? Alright, that was it for me. Have fun.
part because they're they're just" }, { "end": 2556.08, "start": 2546.96, "text": " too small so this is this is kind of my take on this they didn't give" }, { "end": 2562.96, "start": 2556.08, "text": " recommendations and is this a pipeline and so on and I don't think so you've" }, { "end": 2571.24, "start": 2562.96, "text": " now heard my idea and you've heard their idea decide for yourself but I think" }, { "end": 2578.64, "start": 2571.24, "text": " it's a good example of how if you are convinced of an underlying mechanism" }, { "end": 2584.48, "start": 2578.64, "text": " you're going to collect evidence in support of that mechanism and if you" }, { "end": 2588.76, "start": 2584.48, "text": " catch yourself doing that really really think isn't there an easier explanation" }, { "end": 2618.5200000000004, "start": 2588.76, "text": " for this all right that was it for me have fun" } ]
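To make the random-walker measurement described in the transcript above concrete, here is a minimal sketch of that kind of reachability analysis. The recommendation graph, the node names and the community labels below are invented purely for illustration; the paper runs the equivalent simulation on the actual video and channel recommendation graphs.

```python
import random

# Hypothetical toy recommendation graph; edges and labels are made up for illustration only.
RECOMMENDATIONS = {
    "control_1":  ["control_2", "idw_1"],
    "control_2":  ["control_1", "idw_1"],
    "idw_1":      ["control_1", "altlight_1"],
    "altlight_1": ["idw_1", "altlight_2", "altright_1"],
    "altlight_2": ["altlight_1", "control_2"],
    "altright_1": ["altlight_1", "altright_1"],
}

def community(video_id: str) -> str:
    """Community label is everything before the final underscore, e.g. 'altlight_1' -> 'altlight'."""
    return video_id.rsplit("_", 1)[0]

def reach_probability(start: str, target: str, steps: int, n_walks: int = 10_000) -> float:
    """Fraction of random walks from `start` that touch community `target` within `steps` hops."""
    hits = 0
    for _ in range(n_walks):
        node = start
        for _ in range(steps):
            node = random.choice(RECOMMENDATIONS[node])
            if community(node) == target:
                hits += 1
                break
    return hits / n_walks

if __name__ == "__main__":
    for k in (1, 2, 3, 4, 5):
        p = reach_probability("altlight_1", "altright", steps=k)
        print(f"P(reach alt-right within {k} steps, starting from an alt-light video) = {p:.3f}")
```

As the transcript argues, the numbers such a simulation produces depend strongly on how large each community is and how densely it is connected, which is why raw reachability alone does not establish a one-directional "pipeline".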
wZWn7Hm8osA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Gauge Equivariant Convolutional Networks and the Icosahedral CNN
[ "Science & Technology" ]
[ "machine learning", "deep learning", "artificial intelligence", "ai", "data science", "convolution", "convolutional neural networks", "cnn", "manifolds", "curvature", "parallel transport", "gauge", "gauge transformation", "icosahedron", "weight sharing", "coordinate frame", "invariant", "coordinate system", "equivariance", "sphere", "spherical" ]
Ever wanted to do a convolution on a Klein Bottle? This paper defines CNNs over manifolds such that they are independent of which coordinate frame you choose. Amazingly, this then results in an efficient practical method to achieve state-of-the-art in several tasks! https://arxiv.org/abs/1902.04615 Abstract: The principle of equivariance to symmetry transformations enables a theoretically grounded approach to neural network architecture design. Equivariant networks have shown excellent performance and data efficiency on vision and medical imaging problems that exhibit symmetries. Here we show how this principle can be extended beyond global symmetries to local gauge transformations. This enables the development of a very general class of convolutional neural networks on manifolds that depend only on the intrinsic geometry, and which includes many popular methods from equivariant and geometric deep learning. We implement gauge equivariant CNNs for signals defined on the surface of the icosahedron, which provides a reasonable approximation of the sphere. By choosing to work with this very regular manifold, we are able to implement the gauge equivariant convolution using a single conv2d call, making it a highly scalable and practical alternative to Spherical CNNs. Using this method, we demonstrate substantial improvements over previous methods on the task of segmenting omnidirectional images and global climate patterns. Authors: Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, Max Welling
What you're looking at here are manifolds. Specifically you're looking at 2D manifolds embedded in a 3D space. So naturally these are some kind of bodies that have a surface and one of the things you might want to do with a manifold like this is to define a convolutional neural network to work on this surface. So usually we have convolutional neural networks working on flat surfaces such as images. But what if you could actually work on a manifold like this? An easy example is a sphere. You might want to work on a sphere. Why is that useful? Maybe you want to predict the climate and then you actually want to work on the Earth's surface which is approximated by a sphere. So today we'll look at the following paper. Gauge-equivariant convolutional networks and the icosahedral CNN by Taco Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. So as I already said this paper tries to define convolutional neural networks on any kind of manifold. So what's the problem inherently when you're doing this? Can't you just, you know, place a filter and move it around like you do in a regular CNN? That's exactly the problem actually. So if you have a picture, and let me draw a picture of a cat, right? Cat here, here, here, here, eye, eye. Alright, cat smiling. This is a terrible cat. What you do is you have your filter, right, and that's a little patch in the image. You're just going to move this filter, move it around, move it around, and at each point you convolve the filter. If this is larger, you convolve each of the elements of the filter. Here maybe you have nine elements. So each of these elements here is convolved with the underlying image. At the end you aggregate all of them into a single point, usually by adding them up. And there you, from this image, you produce a new image that is a different thing. So if this kernel here, for example, is a specific kernel that detects lines, or that detects specifically up-down lines, you might end up with just the lines that go up and down in this. So the eyes here, here, right. So this might be the result of this convolution. Of course in CNNs these convolutional kernels are then learned as parameters. So it seems pretty easy, right? You just simply take a kernel and kind of shift it around. At each point you convolve the underlying image and that's it. Well it's not so easy if you work on a manifold. And why is that? It's illustrated here on a sphere. So if you have a sphere and you place a kernel, it really matters which direction you place the kernel in. Of course I mean it does on an image, but bear with me. So here you place a kernel in the direction of this arrow, right? You place the kernel maybe like this here, you place your little kernel on it, and you say up. Basically up is here, right? And then you move that kernel around and ultimately you want to move it all the way to the other side of the sphere. So back here you want to move it over there, you want to move it all around the sphere, right? Now what happens if you move it this way, right? You convolve here, you move it this way, you convolve here. You see already by the red arrows where is up. Up is where the red arrows point, right? If you move it along here the red arrows will always point up up up up up. Okay so you arrive back here with your kernel. I'm gonna try to draw this dashed with the up in the kernel being this direction, because you've moved it around like so. 
But if you for some reason choose to move your kernel in another direction, namely in this direction up here, then as you can see if you place it here and then you place it here, you place it here, you place it back here and ultimately here. Where is up? If you just keep track of where up is in your kernel it's always going to be to the front of the sphere. So on one hand you have up being to the back here and on the other hand you have one up being to the front here. So this doesn't match. So it actually depends on which path you take from this original point to any other point. It depends which path you take, how your kernel is gonna end up there. And that's of course very unfortunate because we're not used to this on this on this 2d thing. Because if I you know move it down first and then up here, over here sorry, where is up in my... so if up is here, if it's down here, up is here and over here up is here. And if I kind of move it straight over here and then down and then here and then here, you see up is always the same direction. There is no problem in a flat surface. That's why we can simply define it as we do. But in a sphere or any kind of manifold it's called parallel transport is path dependent in technical terms. The way you transport a thing from one place to another really depends on the path you take. So this paper is trying to address this problem and define a convolution on any manifold. So how is this done? First of all to define a convolution on the curved surface what they do is they say okay we have a convolutional filter and the convolutional filter is actually some sort of a flat object and it works in what's called the tangent space of the manifold. The tangent space is a flat space that you can define at any point on the manifold. So here the manifold is the sphere. At some point P you define the tangent space as simply the tangent kind of a sheet, a straight sheet touching the surface at point P. So this is now a flat space where we can define a let's say a regular convolutional kernel as we did laying it up here. The question is how do you map points from the sphere to this tangent space and back and that's happening via this exponential map. The exponential map in this sense is not the same as the exponential map that you are used to by simply you know exponentiating things. The exponential map here basically means if I want to go from a point in the tangent space to a point on the manifold what I do is I take this vector here which is a straight vector in the tangent space and I go on the manifold in this direction for a predefined length. So this is usually a length of one on the manifold. For a predefined length I walk into this direction along the geodesic. It's along the shortest path into this direction and then I stop and where I end up that's where I basically end up. So that's the corresponding point to this point here on the tangent space. So to define a convolution fully it means that first you lay your kernel and then for each element in the kernel you will multiply that kernel entry, let me use a blue here, multiply that kernel entry by the corresponding point on the manifold itself. So by mapping this point in the tangent space to the manifold. You can also say you basically back project from the manifold to the tangent space and there you do your regular convolution. 
So that's how you define a convolution in the classic sense if you have for example a sphere and what the authors here of course noticed already is that this is dependent on how you get there and in technical terms it's called this is dependent on your gauge. So the gauge basically is defining this coordinate frame in the tangent space. So this tangent vector here is an abstract object, it's just a vector, but in order to do something with it, in order to do something with a kernel and convolution and so on, you have to express it in numbers and numbers are expressed with respect to a base usually. If you have a vector v here you can express it with respect to this two basis vectors. So maybe v is here is 2 and here is 3. So v can be represented as the vector 2, 3 with respect to the base e1, e2. And so this choice of base basically is what's called a gauge. Now I'm probably butchering this topic completely for any physicists or mathematicians listening but just kind of give you an impression. So this choice of bases is called a gauge and we can imagine a different choice of bases. So let me draw another basis here. So another basis might be 1, 2. So e1 is here, e2 is here. So the new coordinates here would be something like v can also be expressed in this new basis as say 1, here's maybe 1 and this is very far so this is maybe 5. So 5 in this direction. And to transform between the two there is formulas basically from from you know them from linear algebra from vector spaces. In general they're called gauge transformations and if we want our convolution to be invariant to the basically chosen coordinate frames we have to say in technical terms what we mean is the convolution should be gauge-equivariant. That means no matter which base we choose. If we choose this base or if we choose this the result should basically be the same. So within the computation of the convolution we must account for the fact of which gauge is chosen and then basically have the result be invariant. And with the result we don't mean the numbers of the result because these will change but we mean the the actual object that is resulting, the geometric object that is resulting should be equivalent under gauge transformations. So this is a it sounds very technical but the way I understand it is basically you want to define a convolution on these manifolds such that you it's such that the result is not dependent on exactly how you shift the kernel around as long as you account for the fact that you shifted it around this way should give you the same the same result. So for this they define a condition and the condition is that the kernel must behave as such. So the V is the input here and G minus 1 is a a transformation of the of the gauge as I understand it. And so basically if you transform the input by a different coordinate frame then at the kernel applied to that different input must behave exactly as the kernel applied to the original input and then perturbed by these two operations. So this is this you might notice this you might know things like this from discussions maybe of what it means for a function to be linear or something where the function applied to a transformed version must correspond to the function applied to the original version of the input transformed so the result transformed by some some operation. 
So if this holds so this is a condition on the kernel of the convolution and if you so if you define your convolution in this way this is a modification to the convolution on the tangent space that we had then your result will be gauge equivalent. What is this transformation and what is this new convolution they define they say if you do the convolution this way then these things will hold. So what is this this way basically again you convolve the kernel with the input but you the f here is the input k is the kernel but what you do if we come up here again what you do you have to do a slight modification your kernel here if you want to convolve it let's say this point here you would not combine this point with the point along the exponential map corresponding to it right this point here but what you would do is you would transport this point back along the geodesic to here and then you would and then you would compute your regular convolution. So this means sorry this is what this term here means technically. If you don't understand it don't worry I don't either I guess this is simply saying that if you perform convolutions in on manifolds in this way and you have the appropriate kernel then they will be gauge equivalent. So this is pretty cool because what they do next is they define the convolution on an icosahedron and an icosahedron is a shape a 3d geometric shape that's made of like triangles and I can try to maybe they have drawn it yes so all right this is an icosahedron and so they can now define a convolution on this with where a filter is basically the filter looks like this it's this kind of hexagon I yes and the and the filter is kind of shifted around and of course it's the problem is whenever it shifts over one of these boundaries here or whenever it shifts over the these corners here what do you do what do you do then because if you look at it you can't basically flatten the corner if you try to flatten the corner you're gonna have this wedge sticking out that's terrible you're gonna have a wedge here sticking out if you try to flatten the corner so you have to define basically the convolution on this they do it in their framework and specifically what they do is they flatten and pad the icosahedron to this representation so they put it into five pieces they have to pad a bit you see here each colored edge here this colored edge corresponds to this colored edge so that would be padded from here to nicely define this convolution and then they put this into a regular 2d image with the color things they are sometimes repeated in this image and then they define the filters in this following way so this these are the filters for basically for a six channel input image and what they have to do is they have to do a weight sharing between the filters in a very specific way and in order for the kernel to have these properties they need to see replicate these filters down here and if you look the different colors in these different let's call them channels they each have different intensities and if you look down here they're all slightly different which means they're all slightly different linear combinations of the of the filter up here or rotations basically they're all differently arranged but they're basically this blue field here is this blue field but is also let's see this one and this one and this one and this one so the the weights here are these original filters are basically arranged such that the weights are shared in this form down here but if you do this if you arrange 
them like this, when you replicate each filter basically six times, because you also want six output channels, then the filter will have the desired properties and your convolution will be gauge equivariant. So they apply this to IcoMNIST. So the complete algorithm is actually down here: if they pad the image in the correct way to the 2D image and expand the kernel to arrange it as we just saw, they can use a regular 2D convolution to compute their result, and that's pretty cool, and this means this is also very very efficient on this icosahedron. So what they do is they apply this to IcoMNIST, where they project basically they project MNIST onto an icosahedron, so they take the MNIST image and they project it onto this, and then they try to classify it on that, and they can actually show that their method outperforms other methods and learns these invariances, so it learns the symmetries of the icosahedron, or sorry, is invariant to them. Being invariant to the symmetries means you don't have to learn them anymore: if you're not invariant to symmetries it means you have to learn each one of them separately, right, but if you're invariant to symmetries then you only have to learn one thing once, and then if the icosahedron is rotated you're just like, ah, that's just the same thing as this other thing. They also apply this, interestingly, to climate pattern segmentation and also a kind of 2D or 3D omnidirectional segmentation, where you're in a 3D room and you have an omnidirectional picture, sorry, from everywhere you have a picture, a 3D sphere picture from everywhere, and you're asked to segment things in the room, and they actually outperform all other methods on these data sets. So I find this extremely cool that this kind of ultra theoretical work, starting out as ultra theoretical, then gets implemented into something that beats state-of-the-art methods on relevant tasks. Alright, so that was just a brief overview and a very dirty look at these things, but I hope you got something out of it, and thus far that was it for me. Bye bye.
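Two small sketches may help make the transcript above more concrete. First, the exponential map on the unit sphere, which is how a flat kernel defined in the tangent plane gets laid onto the manifold; the particular frame (e1, e2) chosen below is exactly the kind of gauge choice the transcript talks about. This is only an illustration of the construction, not code from the paper.

```python
import numpy as np

def exp_map(p: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Exponential map on the unit sphere: walk from p along the geodesic in the direction
    of the tangent vector v (v is orthogonal to p) for a distance of |v|."""
    length = np.linalg.norm(v)
    if length < 1e-12:
        return p
    return np.cos(length) * p + np.sin(length) * (v / length)

# Tangent plane at the north pole with an orthonormal frame -- this choice of frame is a gauge.
p  = np.array([0.0, 0.0, 1.0])
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

# Lay a 3x3 grid of kernel taps in the tangent plane and map each tap onto the sphere.
step = 0.2
for i in (-1, 0, 1):
    for j in (-1, 0, 1):
        q = exp_map(p, step * (i * e1 + j * e2))
        print(f"tap ({i:+d},{j:+d}) -> {np.round(q, 3)} (norm {np.linalg.norm(q):.3f})")
```

Second, a very rough sketch of the practical trick for the icosahedron: a small set of shared hexagonal weights is expanded into a dense kernel, and the whole layer is computed with one standard conv2d call on the flattened, padded charts. The exact padding and the orientation-dependent permutation of the shared weights that actually make the layer gauge equivariant are more involved than shown here; the channel counts and the hexagonal mask are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

C_IN, C_OUT = 6, 6                         # illustrative channel counts
shared = torch.randn(C_OUT, C_IN, 7)       # 7 shared weights: hexagonal neighbourhood + centre

# A hexagonal stencil embedded in a 3x3 grid (two opposite corners are masked out).
HEX_TAPS = [(0, 0), (0, 1), (1, 0), (1, 1), (1, 2), (2, 1), (2, 2)]

def expand_kernel(w: torch.Tensor) -> torch.Tensor:
    """Scatter the shared hexagonal weights into a dense 3x3 conv kernel."""
    k = w.new_zeros(w.shape[0], w.shape[1], 3, 3)
    for idx, (r, c) in enumerate(HEX_TAPS):
        k[:, :, r, c] = w[:, :, idx]
    return k

def ico_conv(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """x: the flattened, padded icosahedron charts as an ordinary 2D feature map (B, C_IN, H, W).
    In the paper the kernel is additionally replicated per orientation with permuted weights;
    that bookkeeping is omitted in this sketch."""
    return F.conv2d(x, expand_kernel(w), padding=1)

x = torch.randn(2, C_IN, 64, 320)          # e.g. five charts laid out side by side
print(ico_conv(x, shared).shape)           # torch.Size([2, 6, 64, 320])
```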
[ { "end": 5.68, "start": 0, "text": " What you're looking at here are manifolds. Specifically you're looking at" }, { "end": 13.36, "start": 5.68, "text": " 2D manifolds embedded in a 3D space. So naturally these are some kind of bodies" }, { "end": 17.96, "start": 13.36, "text": " that have a surface and one of the things you might want to do with a" }, { "end": 25.16, "start": 17.96, "text": " manifold like this is to define a convolutional neural network to work on" }, { "end": 29.48, "start": 25.16, "text": " this surface. So usually we have convolutional neural network working on" }, { "end": 36.120000000000005, "start": 29.48, "text": " flat surfaces such as images. But what if you could actually work on a manifold" }, { "end": 42.88, "start": 36.120000000000005, "text": " like this? An easy example is a sphere. You might want to work on a sphere. Why is" }, { "end": 47.6, "start": 42.88, "text": " that useful? Maybe you want to predict the climate and then you actually want" }, { "end": 53.400000000000006, "start": 47.6, "text": " to work on the Earth's surface which is approximated by a sphere. So today we'll" }, { "end": 58.2, "start": 53.400000000000006, "text": " look at the following paper. Gauge-equivariant convolutional networks" }, { "end": 67.12, "start": 58.2, "text": " and the icosahedral CNN by Tachokohen, Maurice Weiler, Burkai Kichang," }, { "end": 75.64, "start": 67.12, "text": " and Max Welling. So as I already said this paper tries to define" }, { "end": 82.32000000000001, "start": 75.64, "text": " convolutional neural networks on any kind of manifold. So what's the problem" }, { "end": 87.80000000000001, "start": 82.32000000000001, "text": " inherently when you're doing this? Can't you just, you know, place a filter" }, { "end": 92.75999999999999, "start": 87.8, "text": " move it around like you do in a regular CNN? That's exactly the problem actually." }, { "end": 103.44, "start": 92.75999999999999, "text": " So if you have a picture, and let me draw a picture of a cat, right? Cat here, here," }, { "end": 109.75999999999999, "start": 103.44, "text": " here, here, eye, eye. Alright, cat smiling. This is a terrible cat. What you do is you" }, { "end": 115.16, "start": 109.75999999999999, "text": " have your filter, right, and that's a little patch in the image. You're just" }, { "end": 121, "start": 115.16, "text": " going to move this filter, move it around, move it around, and at each point you" }, { "end": 125.6, "start": 121, "text": " convolve the filter. If this is larger, you convolve each of the elements of the" }, { "end": 129.76, "start": 125.6, "text": " filter. Here maybe you have nine elements. So each of these elements here is" }, { "end": 136.51999999999998, "start": 129.76, "text": " convolved with the underlying image. At the end you aggregate all of them into a" }, { "end": 142.4, "start": 136.51999999999998, "text": " single point, usually by adding them up. And there you, from this image, you" }, { "end": 149.88, "start": 142.4, "text": " produce a new image that is a different thing. So if this kernel here, for example," }, { "end": 155.68, "start": 149.88, "text": " is a specific kernel that detects lines, you might end up with, or that detects" }, { "end": 163.16, "start": 155.68, "text": " specifically up-down lines, you might end up with just the lines that go up and" }, { "end": 171.12, "start": 163.16, "text": " down in this. So the eyes here, here, right. 
So this might be the result of this" }, { "end": 175.36, "start": 171.12, "text": " convolution. Of course in CNN these convolutional kernels then are learned" }, { "end": 182.08, "start": 175.36, "text": " as parameters. So it seems pretty easy, right? You just simply take a kernel and" }, { "end": 187.28, "start": 182.08, "text": " kind of shift it around. At each point you convolve the underlying image" }, { "end": 193.16, "start": 187.28, "text": " and that's it. Well it's not so easy if you work on a manifold. And why is that?" }, { "end": 199.84, "start": 193.16, "text": " It's illustrated here on a sphere. So if you have a sphere and you place a kernel," }, { "end": 204.36, "start": 199.84, "text": " it really matters which direction you place the kernel in. Of course I mean it" }, { "end": 209.04, "start": 204.36, "text": " does on an image, but bear with me. So here you place a kernel in the direction" }, { "end": 213.76, "start": 209.04, "text": " of this arrow, right? You place the kernel maybe like this here, you place your little" }, { "end": 221.4, "start": 213.76, "text": " kernel on it, and you say up. Basically up is here, right? And then you move that" }, { "end": 225.32, "start": 221.4, "text": " kernel around and ultimately you want to move it all the way to the other side of" }, { "end": 229.4, "start": 225.32, "text": " the sphere. So back here you want to move it over there, you want to move it all" }, { "end": 236.08, "start": 229.4, "text": " around the sphere, right? Now what happens if you move it this way, right? You" }, { "end": 240.8, "start": 236.08, "text": " convolve here, you move it this way, you convolve here. You see already by the red" }, { "end": 246.92000000000002, "start": 240.8, "text": " arrows where is up. Up is where the red arrows point, right? If you move it along" }, { "end": 254.16, "start": 246.92000000000002, "text": " here the red arrows will always point up up up up up. Okay so you arrive back here" }, { "end": 261.92, "start": 254.16, "text": " with your kernel. I'm gonna try to draw this dashed with the up in the" }, { "end": 267.04, "start": 261.92, "text": " kernel being this direction, because you've moved it around like so. But if" }, { "end": 273.04, "start": 267.04, "text": " you for some reason choose to move your kernel in another direction, namely in" }, { "end": 278.64, "start": 273.04, "text": " this direction up here, then as you can see if you place it here and then you" }, { "end": 284.8, "start": 278.64, "text": " place it here, you place it here, you place it back here and ultimately here." }, { "end": 291.2, "start": 284.8, "text": " Where is up? If you just keep track of where up is in your kernel it's always" }, { "end": 297.41999999999996, "start": 291.2, "text": " going to be to the front of the sphere. So on one hand you have up being to the" }, { "end": 302.52, "start": 297.41999999999996, "text": " back here and on the other hand you have one up being to the front here. So this" }, { "end": 309.64, "start": 302.52, "text": " doesn't match. So it actually depends on which path you take from this original" }, { "end": 317.47999999999996, "start": 309.64, "text": " point to any other point. It depends which path you take, how your kernel is" }, { "end": 321.76, "start": 317.47999999999996, "text": " gonna end up there. And that's of course very unfortunate because we're not used" }, { "end": 327.4, "start": 321.76, "text": " to this on this on this 2d thing. 
Because if I you know move it down first and then" }, { "end": 334.84, "start": 327.4, "text": " up here, over here sorry, where is up in my... so if up is here, if it's down here, up" }, { "end": 341.56, "start": 334.84, "text": " is here and over here up is here. And if I kind of move it straight over here and" }, { "end": 346, "start": 341.56, "text": " then down and then here and then here, you see up is always the same direction." }, { "end": 353.4, "start": 346, "text": " There is no problem in a flat surface. That's why we can simply define it" }, { "end": 358.44, "start": 353.4, "text": " as we do. But in a sphere or any kind of manifold it's called parallel" }, { "end": 366.79999999999995, "start": 358.44, "text": " transport is path dependent in technical terms. The way you transport a thing from" }, { "end": 371.88, "start": 366.79999999999995, "text": " one place to another really depends on the path you take. So this paper is" }, { "end": 379.47999999999996, "start": 371.88, "text": " trying to address this problem and define a convolution on any manifold. So" }, { "end": 387.96000000000004, "start": 379.48, "text": " how is this done? First of all to define a convolution on the curved surface what" }, { "end": 391.44, "start": 387.96000000000004, "text": " they do is they say okay we have a convolutional filter and the" }, { "end": 397.04, "start": 391.44, "text": " convolutional filter is actually some sort of a flat object and it works in" }, { "end": 401.84000000000003, "start": 397.04, "text": " what's called the tangent space of the manifold. The tangent space is a flat" }, { "end": 406.92, "start": 401.84000000000003, "text": " space that you can define at any point on the manifold. So here the manifold is" }, { "end": 413.72, "start": 406.92, "text": " the sphere. At some point P you define the tangent space as simply the tangent" }, { "end": 421.36, "start": 413.72, "text": " kind of a sheet, a straight sheet touching the surface at point P. So this" }, { "end": 426.16, "start": 421.36, "text": " is now a flat space where we can define a let's say a regular convolutional" }, { "end": 434.28000000000003, "start": 426.16, "text": " kernel as we did laying it up here. The question is how do you map" }, { "end": 438.67999999999995, "start": 434.28, "text": " points from the sphere to this tangent space and back and that's happening via" }, { "end": 444.71999999999997, "start": 438.67999999999995, "text": " this exponential map. The exponential map in this sense is not the same as the" }, { "end": 450.76, "start": 444.71999999999997, "text": " exponential map that you are used to by simply you know exponentiating things." }, { "end": 458.28, "start": 450.76, "text": " The exponential map here basically means if I want to go from a point in" }, { "end": 463.64, "start": 458.28, "text": " the tangent space to a point on the manifold what I do is I take this vector" }, { "end": 469.64, "start": 463.64, "text": " here which is a straight vector in the tangent space and I go on the manifold in" }, { "end": 480, "start": 469.64, "text": " this direction for a predefined length. So this is usually a length of one on" }, { "end": 485.36, "start": 480, "text": " the manifold. For a predefined length I walk into this direction along the" }, { "end": 490.71999999999997, "start": 485.36, "text": " geodesic. It's along the shortest path into this direction and then I stop and" }, { "end": 496.56, "start": 490.72, "text": " where I end up that's where I basically end up. 
So that's the corresponding point" }, { "end": 502.72, "start": 496.56, "text": " to this point here on the tangent space. So to define a convolution fully it means" }, { "end": 509.08000000000004, "start": 502.72, "text": " that first you lay your kernel and then for each element in the kernel you will" }, { "end": 515.64, "start": 509.08000000000004, "text": " multiply that kernel entry, let me use a blue here, multiply that kernel entry by" }, { "end": 525.12, "start": 515.64, "text": " the corresponding point on the manifold itself. So by mapping this" }, { "end": 529.8, "start": 525.12, "text": " point in the tangent space to the manifold. You can also say you basically" }, { "end": 534.04, "start": 529.8, "text": " back project from the manifold to the tangent space and there you do your" }, { "end": 540.6, "start": 534.04, "text": " regular convolution. So that's how you define a convolution in the classic sense" }, { "end": 549, "start": 540.6, "text": " if you have for example a sphere and what the authors here of course noticed" }, { "end": 555.64, "start": 549, "text": " already is that this is dependent on how you get there and in technical terms" }, { "end": 561.64, "start": 555.64, "text": " it's called this is dependent on your gauge. So the gauge basically is defining" }, { "end": 566.84, "start": 561.64, "text": " this coordinate frame in the tangent space. So this tangent vector here is an" }, { "end": 571.5600000000001, "start": 566.84, "text": " abstract object, it's just a vector, but in order to do something with it, in" }, { "end": 574.4, "start": 571.5600000000001, "text": " order to do something with a kernel and convolution and so on, you have to" }, { "end": 580.44, "start": 574.4, "text": " express it in numbers and numbers are expressed with respect to a base" }, { "end": 587.6800000000001, "start": 580.44, "text": " usually. If you have a vector v here you can express it with respect to this two" }, { "end": 596.4000000000001, "start": 587.6800000000001, "text": " basis vectors. So maybe v is here is 2 and here is 3. So v can be represented" }, { "end": 605.56, "start": 596.4, "text": " as the vector 2, 3 with respect to the base e1, e2. And so this choice of base" }, { "end": 612.16, "start": 605.56, "text": " basically is what's called a gauge. Now I'm probably butchering this topic" }, { "end": 617.0799999999999, "start": 612.16, "text": " completely for any physicists or mathematicians listening but just kind" }, { "end": 625.76, "start": 617.0799999999999, "text": " of give you an impression. So this choice of bases is called a gauge and we" }, { "end": 630.6, "start": 625.76, "text": " can imagine a different choice of bases. So let me draw another basis here. So" }, { "end": 642.4399999999999, "start": 630.6, "text": " another basis might be 1, 2. So e1 is here, e2 is here. So the new" }, { "end": 648.3199999999999, "start": 642.4399999999999, "text": " coordinates here would be something like v can also be expressed in this new" }, { "end": 655.16, "start": 648.3199999999999, "text": " basis as say 1, here's maybe 1 and this is very far so this is maybe 5. So 5 in" }, { "end": 662.56, "start": 655.16, "text": " this direction. And to transform between the two there is formulas basically from" }, { "end": 666.8399999999999, "start": 662.56, "text": " from you know them from linear algebra from vector spaces. 
In general they're" }, { "end": 674.06, "start": 666.8399999999999, "text": " called gauge transformations and if we want our convolution to be invariant to" }, { "end": 681.12, "start": 674.06, "text": " the basically chosen coordinate frames we have to say in technical terms what" }, { "end": 687.08, "start": 681.12, "text": " we mean is the convolution should be gauge-equivariant. That means no matter" }, { "end": 694.44, "start": 687.08, "text": " which base we choose. If we choose this base or if we choose this the result" }, { "end": 701.08, "start": 694.44, "text": " should basically be the same. So within the computation of the convolution we" }, { "end": 707.04, "start": 701.08, "text": " must account for the fact of which gauge is chosen and then basically have the" }, { "end": 711.56, "start": 707.04, "text": " result be invariant. And with the result we don't mean the numbers of the result" }, { "end": 717.48, "start": 711.56, "text": " because these will change but we mean the the actual object that is resulting," }, { "end": 723.48, "start": 717.48, "text": " the geometric object that is resulting should be equivalent under gauge" }, { "end": 733.64, "start": 723.48, "text": " transformations. So this is a it sounds very technical but the way I understand" }, { "end": 740.4, "start": 733.64, "text": " it is basically you want to define a convolution on these manifolds such that" }, { "end": 748.6, "start": 740.4, "text": " you it's such that the result is not dependent on exactly how you shift the" }, { "end": 754.16, "start": 748.6, "text": " kernel around as long as you account for the fact that you shifted it around this" }, { "end": 764.56, "start": 754.16, "text": " way should give you the same the same result. So for this they define a" }, { "end": 772.64, "start": 764.56, "text": " condition and the condition is that the kernel must behave as such. So the V is" }, { "end": 783.38, "start": 772.64, "text": " the input here and G minus 1 is a a transformation of the of the gauge as I" }, { "end": 789.62, "start": 783.38, "text": " understand it. And so basically if you transform the input by a different" }, { "end": 795.08, "start": 789.62, "text": " coordinate frame then at the kernel applied to that different input must" }, { "end": 805.2, "start": 795.08, "text": " behave exactly as the kernel applied to the original input and then perturbed by" }, { "end": 812.12, "start": 805.2, "text": " these two operations. So this is this you might notice this you might know things" }, { "end": 818.4, "start": 812.12, "text": " like this from discussions maybe of what it means for a function to be linear or" }, { "end": 825.08, "start": 818.4, "text": " something where the function applied to a transformed version must correspond" }, { "end": 830.8, "start": 825.08, "text": " to the function applied to the original version of the input transformed so the" }, { "end": 838.84, "start": 830.8, "text": " result transformed by some some operation. So if this holds so this is a" }, { "end": 843.4, "start": 838.84, "text": " condition on the kernel of the convolution and if you so if you define" }, { "end": 850.8000000000001, "start": 843.4, "text": " your convolution in this way this is a modification to the convolution on the" }, { "end": 856.0400000000001, "start": 850.8000000000001, "text": " tangent space that we had then your result will be gauge" }, { "end": 860.6800000000001, "start": 856.0400000000001, "text": " equivalent. 
What is this transformation and what is this new" }, { "end": 865.24, "start": 860.6800000000001, "text": " convolution they define they say if you do the convolution this way then these" }, { "end": 871.04, "start": 865.24, "text": " things will hold. So what is this this way basically again you convolve the" }, { "end": 878.84, "start": 871.04, "text": " kernel with the input but you the f here is the input k is the kernel but what" }, { "end": 885.84, "start": 878.84, "text": " you do if we come up here again what you do you have to do a slight" }, { "end": 892.28, "start": 885.84, "text": " modification your kernel here if you want to convolve it let's say this point" }, { "end": 900.16, "start": 892.28, "text": " here you would not combine this point with the point along the exponential map" }, { "end": 905.4, "start": 900.16, "text": " corresponding to it right this point here but what you would do is you would" }, { "end": 915.04, "start": 905.4, "text": " transport this point back along the geodesic to here and then you would and" }, { "end": 924.7199999999999, "start": 915.04, "text": " then you would compute your regular convolution. So this means sorry this is" }, { "end": 933.16, "start": 924.7199999999999, "text": " what this term here means technically. If you don't understand it don't worry I" }, { "end": 939.8, "start": 933.16, "text": " don't either I guess this is simply saying that if you perform convolutions" }, { "end": 947.12, "start": 939.8, "text": " in on manifolds in this way and you have the appropriate kernel then they will be" }, { "end": 953.4399999999999, "start": 947.12, "text": " gauge equivalent. So this is pretty cool because what they do next is they define" }, { "end": 964.92, "start": 953.4399999999999, "text": " the convolution on an icosahedron and an icosahedron is a shape a 3d geometric" }, { "end": 970.52, "start": 964.92, "text": " shape that's made of like triangles and I can try to maybe they have drawn it" }, { "end": 977.5999999999999, "start": 970.52, "text": " yes so all right this is an icosahedron and so they can now define a" }, { "end": 984.52, "start": 977.5999999999999, "text": " convolution on this with where a filter is basically the filter looks like this" }, { "end": 994.5999999999999, "start": 984.52, "text": " it's this kind of hexagon I yes and the and the filter is kind of shifted around" }, { "end": 999.72, "start": 994.6, "text": " and of course it's the problem is whenever it shifts over one of these" }, { "end": 1006.16, "start": 999.72, "text": " boundaries here or whenever it shifts over the these corners here what do you" }, { "end": 1011.6, "start": 1006.16, "text": " do what do you do then because if you look at it you can't basically flatten" }, { "end": 1016.84, "start": 1011.6, "text": " the corner if you try to flatten the corner you're gonna have this wedge" }, { "end": 1024.92, "start": 1016.84, "text": " sticking out that's terrible you're gonna have a wedge here sticking out if" }, { "end": 1031.08, "start": 1024.92, "text": " you try to flatten the corner so you have to define basically the convolution" }, { "end": 1035.76, "start": 1031.08, "text": " on this they do it in their framework and specifically what they do is they" }, { "end": 1043.28, "start": 1035.76, "text": " flatten and pad the icosahedron to this representation so they put it into five" }, { "end": 1049.08, "start": 1043.28, "text": " pieces they have to pad a bit you see here each colored edge here this colored" }, { "end": 1055.92, 
"start": 1049.08, "text": " edge corresponds to this colored edge so that would be padded from here to nicely" }, { "end": 1063.32, "start": 1055.92, "text": " define this convolution and then they put this into a regular 2d image with" }, { "end": 1069.56, "start": 1063.32, "text": " the color things they are sometimes repeated in this image and then they" }, { "end": 1078.24, "start": 1069.56, "text": " define the filters in this following way so this these are the filters for" }, { "end": 1086.04, "start": 1078.24, "text": " basically for a six channel input image and what they have to do is they have to" }, { "end": 1092.48, "start": 1086.04, "text": " do a weight sharing between the filters in a very specific way and in order for" }, { "end": 1097.72, "start": 1092.48, "text": " the kernel to have these properties they need to see replicate these filters down" }, { "end": 1104.16, "start": 1097.72, "text": " here and if you look the different colors in these different let's call" }, { "end": 1111.28, "start": 1104.16, "text": " them channels they each have different intensities and if you look down here" }, { "end": 1114.72, "start": 1111.28, "text": " they're all slightly different which means they're all slightly different" }, { "end": 1120.48, "start": 1114.72, "text": " linear combinations of the of the filter up here or rotations basically" }, { "end": 1126, "start": 1120.48, "text": " they're all differently arranged but they're basically this blue field here" }, { "end": 1134.84, "start": 1126, "text": " is this blue field but is also let's see this one and this one and this one and" }, { "end": 1142.76, "start": 1134.84, "text": " this one so the the weights here are these original filters are basically" }, { "end": 1150.52, "start": 1142.76, "text": " arranged such that the weights are shared in this form down here but if you" }, { "end": 1155, "start": 1150.52, "text": " do this if you arrange them like this when you replicate each filter basically" }, { "end": 1160.68, "start": 1155, "text": " six times because you also want six output channels then the filter will have" }, { "end": 1165.44, "start": 1160.68, "text": " the desired properties and your convolution will be gauge equivalent so" }, { "end": 1173.92, "start": 1165.44, "text": " they apply this to to ICO M this so the complete algorithm is actually down here" }, { "end": 1178.64, "start": 1173.92, "text": " they can actually use if they pad the image in the correct way to the 2d image" }, { "end": 1183.88, "start": 1178.64, "text": " and expand the kernel to arrange it as we just saw they can use a regular 2d" }, { "end": 1189.8000000000002, "start": 1183.88, "text": " convolution to compute their result and that's pretty cool and this means this" }, { "end": 1198.48, "start": 1189.8000000000002, "text": " also is very very very efficient on this Ico Sahedron so what they do is they" }, { "end": 1204.5600000000002, "start": 1198.48, "text": " apply this to Ico M NIST where they project basically they project M NIST on" }, { "end": 1210.4, "start": 1204.5600000000002, "text": " an Ico Sahedron so they take the image M NIST and they project it onto this and" }, { "end": 1215.8400000000001, "start": 1210.4, "text": " then they try to classify it on that I can actually show that their method" }, { "end": 1222.64, "start": 1215.8400000000001, "text": " outperforms other method and learns these invariances so learns the the" }, { "end": 1229.1200000000001, "start": 1222.64, "text": " symmetries of the Ico Sahedron or 
basic sorry is invariant to them being" }, { "end": 1233, "start": 1229.1200000000001, "text": " invariant to the symmetries means you don't have to learn them anymore if" }, { "end": 1237.48, "start": 1233, "text": " you're not invariant to symmetries it means you have to learn each one of them" }, { "end": 1242.2, "start": 1237.48, "text": " separately right but if you're invariant to symmetries then you have only have to" }, { "end": 1246.56, "start": 1242.2, "text": " learn one thing once and then if the Ico Sahedron is rotated you're just like" }, { "end": 1250.4, "start": 1246.56, "text": " ma that's just the same thing as this other thing they also do this" }, { "end": 1258, "start": 1250.4, "text": " interestingly to climate pattern segmentation and also a kind of 2d or 3d" }, { "end": 1264.68, "start": 1258, "text": " omni-directional segmentation where you're in a room a 3d room and you have" }, { "end": 1270.5600000000002, "start": 1264.68, "text": " an omni-directional picture sorry from everywhere you have a picture a 3d" }, { "end": 1275.48, "start": 1270.5600000000002, "text": " sphere picture from everywhere you're asked to segment things in the room and" }, { "end": 1283.4, "start": 1275.48, "text": " actually outperform all other methods on these data sets so I find this extremely" }, { "end": 1289.72, "start": 1283.4, "text": " cool that kind of this ultra theoretical work starting out as ultra theoretical" }, { "end": 1294.96, "start": 1289.72, "text": " then gets implemented into something that beats state-of-the-art methods on" }, { "end": 1301.64, "start": 1294.96, "text": " relevant tasks alright so that was just a brief overview and a very dirty look" }, { "end": 1308, "start": 1301.64, "text": " at these things but I hope you got something out of it and thus far that was" }, { "end": 1320.56, "start": 1308, "text": " it for me bye bye" } ]
H6Qiegq_36c
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Processing Megapixel Images with Deep Attention-Sampling Models
[ "Science & Technology" ]
[ "machine learning", "deep learning", "research", "attention", "attention sampling", "attention model", "attention distribution", "megapixel images", "large images", "artificial intelligence", "megapixel mnist", "street sign dataset", "monte carlo", "speed", "memory", "cnn", "convolutional neural networks", "limited resources", "ai", "image recognition", "image classifier" ]
Current CNNs have to downsample large images before processing them, which can lose a lot of detail information. This paper proposes attention sampling, which learns to selectively process parts of any large image in full resolution, while discarding uninteresting bits. This leads to enormous gains in speed and memory consumption. https://arxiv.org/abs/1905.03711 Abstract: Existing deep architectures cannot operate on very large signals such as megapixel images due to computational and memory constraints. To tackle this limitation, we propose a fully differentiable end-to-end trainable model that samples and processes only a fraction of the full resolution input image. The locations to process are sampled from an attention distribution computed from a low resolution view of the input. We refer to our method as attention sampling and it can process images of several megapixels with a standard single GPU setup. We show that sampling from the attention distribution results in an unbiased estimator of the full model with minimal variance, and we derive an unbiased estimator of the gradient that we use to train our model end-to-end with a normal SGD procedure. This new method is evaluated on three classification tasks, where we show that it allows to reduce computation and memory footprint by an order of magnitude for the same accuracy as classical architectures. We also show the consistency of the sampling that indeed focuses on informative parts of the input images. Authors: Angelos Katharopoulos, François Fleuret
Hi there, today we're looking at processing megapixel images with deep attention sampling models by Angelos Katharopoulos and François Fleuret. This is another paper that I saw the talk of at ICML and it's a pretty cool idea, it's pretty simple and apparently it works very well. So consider the following image here of a street situation and ask yourself if a self-driving car sees this, what are the kind of things it needs to be aware of? So of course one of the things it needs to be aware of is like the road, the cars and so on, but also what's encircled in red here, the street sign, and the street sign especially is important because there's a number on it and you want to see what the number is, otherwise you won't be able to adjust your speed. So if this is now a really large image, so if the camera is really good and the dimensions of this image are really large, then current machine learning methods have a problem, because current machine learning methods kind of go up to maybe something like 200 by 200 pixels, or the current ImageNet models, some downsample and so on. So if this is much larger than this, what current machine learning models would do is they would simply downsample, like compress the size, just compress it a bit and so on. And by that, as you see here on the right, if you could cut out the original patch in the image and enlarge it, it would look like this. If you compress the whole image, the same patch would now look like this, blurred. So in the bottom half you'd be able to recognize the number, in the top half you wouldn't. So a standard CNN might be able to recognize the road and the car still at the lower resolution, but not the speed sign. What we want is a method that can selectively pay attention to parts of the image that it finds interesting and then look at those parts in full detail, while basically deciding to discard other parts completely, such as the sky here. So this paper is one that does this and does so in a very efficient manner. So the basic premise is very simple. All right, I'm going to show you on this on the same image. So what you do is first you actually compress the image. So this image will become a smaller image, right? So here maybe this is 1000 by 2000, you compress it down to maybe 100 by 200. Still the same image but compressed. Here's the road, here's a bunch of trees. I'm very good at drawing trees. And here's this street sign and here is a car and here is another car. All right, so and there is a sky up here. So now what you do is on this smaller version you classify every location. I guess you could subsample, but you want to classify every single location on it on how interesting it is. And what they do is they take this and just put it through what they call an attention network, which is just a neural network. In their case it's a CNN that for each location here, for each blue location, outputs a value, let's call it a at coordinates x and y of this image x. Okay, this is stupid notation. Let's say it's a of x at coordinates i, j. Right, so all of these blue things here are i's and j's. Different i's and j's. And then what does this give you? Now if you normalize correctly, so if you normalize over all the a i j's, this gives you a distribution over this image. So if we look at it in like 1D this gives you like a distribution, not a continuous one, in this case a discrete one. 
This distribution tells you how interesting each patch is. Once you have it, you want to pick out the most interesting locations: this one's pretty high, and these are very high, so they might correspond to some region over here. Only at those locations do you take out patches, and only those do you process in full resolution. So you might have extracted, let's say, four patches, and each of them individually you run through a second neural network, another CNN, which they call f, the feature network. The feature network takes a patch and outputs a vector of features. The final output of the whole model, which they call g but let's call it o, is then a sum over the extracted patches p, where each patch sits at some location i, j: you sum the features f(patch_p) and you weigh each feature by how much attention a_{ij} that location got. It looks more complicated than it is: you simply compute features with the feature network only at the positions the attention network says are interesting, and then you weigh those features by how much attention they got in the attention distribution, and that is the final output of the network. It makes intuitive sense: one network decides what is interesting, the other network decides what we are going to do with the interesting things in this image.
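To make the two-network pipeline concrete, here is a minimal sketch of the forward pass, assuming PyTorch-style modules. The names attention_net and feature_net, the patch size, the downscaling factor, and the deterministic top-k selection are my own illustrative choices, not the authors' implementation; the paper actually samples patch locations from the attention distribution, which is what makes the estimator unbiased.

```python
import torch
import torch.nn.functional as F

def attention_sampling_forward(image, attention_net, feature_net,
                               n_patches=4, patch_size=100, scale=10):
    # image: (1, C, H, W) full-resolution input
    # 1) build a low-resolution view and score every location on it
    low_res = F.interpolate(image, scale_factor=1.0 / scale,
                            mode="bilinear", align_corners=False)
    scores = attention_net(low_res)                      # assumed shape (1, 1, h, w)
    h, w = scores.shape[-2:]
    attention = torch.softmax(scores.flatten(1), dim=1)  # distribution over locations

    # 2) keep only the most interesting locations (the paper samples from this
    #    distribution; top-k is a simpler deterministic stand-in)
    top_att, top_idx = attention.topk(n_patches, dim=1)
    features = []
    for idx in top_idx[0]:
        i, j = (idx // w).item(), (idx % w).item()
        # map the low-res location back to full resolution and crop a patch there
        top = min(i * scale, image.shape[-2] - patch_size)
        left = min(j * scale, image.shape[-1] - patch_size)
        patch = image[..., top:top + patch_size, left:left + patch_size]
        features.append(feature_net(patch))              # assumed shape (1, d)

    # 3) attention-weighted sum of the patch features: o = sum_p a_ij * f(patch_p)
    features = torch.cat(features, dim=0)                # (n_patches, d)
    weights = top_att[0].unsqueeze(1)                     # (n_patches, 1)
    return (weights * features).sum(dim=0)                # (d,) final representation
```

The weighted sum at the end is exactly the o described above: features only from the interesting locations, weighted by their attention.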
And the cool thing about this is that you can decide how many of these patches you want to extract and at what resolution you want to process them; all of these are parameters you set according to how much time and how much memory you have for your computation. So it's pretty modular, you can scale it up or down. Another cool thing is the theoretical guarantees they give. They prove that the way they do it, in particular by sampling patch locations without replacement and weighting things correctly, the resulting output is an unbiased estimator of the full model, the one you would get if you evaluated every patch of the image in full resolution. So only looking at the places where the attention focuses still gives you an unbiased estimate, and not only that, it is in fact the estimator with the smallest variance, the minimum variance estimator, which is pretty interesting and works pretty well. They also show how to derive the gradient update when you train with this attention sampling: you now train your machine learning system not on the whole image but only on a subset of the image patches, yet in expectation it behaves as if you were training on the entire image. Pretty neat. Here they show how this compares to a full CNN, where the picture is simply downsampled and then classified, on what's called megapixel MNIST. In megapixel MNIST you have a large image into which you put three copies of the same digit from the MNIST data set, for example three fives, plus two other random digits, like a two and a three, and also a bunch of noise patches. The task is to recognize the dominant digit, in this case the five. If you give this to a regular CNN, you see it does about this well, this is the training loss and this is the test loss, and it takes this much time per epoch and this much time to evaluate. If you now use attention sampling, and as I said you can modulate how many patches you take, then as you go down and take more patches, you would expect it to take more time, and that is exactly what happens. You see, for example, down here in the test error: if you take five patches per image it takes very little time, but the error is still better than the CNN's, simply because you can now pay attention to details much more. As you use more patches, your test error and your training loss drop, so using more patches gives you a better and better performing model; you sacrifice a little bit of time, but it is still never as slow as the full, downsampled CNN. So that is very interesting and very cool: not only do they beat the baseline in terms of error, but also by a lot in terms of speed. If you look at what the model does as it learns, they show a given image, always the same image from the data set, where they have marked the three relevant digits with red circles.
If you look at how the attention distribution evolves over the course of training, it's pretty interesting. Yellow basically means high attention. At the beginning you have high attention everywhere in the image, and then as training goes on you see it pays attention to all the locations where there is something in the image, which could be one of the three true digits but could also be one of the distractors, the false digits or the noise patches, and as you go further it really learns to pay attention only to the relevant digits and then classify those at full resolution. So this attention distribution really learns something meaningful. They do more experiments on two data sets. One is a histopathology data set where the goal is, I think, to recognize epithelial cells, this type of cell. This here is the baseline and this here is the new method; the baseline does a similar thing, namely it processes the image in patches, but it processes every single patch, maybe in succession, whereas attention sampling only processes the patches that the attention distribution suggests. The other data set is a street sign data set like the one you saw at the beginning. Again, both the baseline and the attention sampling learn to pay attention to the street signs, but attention sampling is much more efficient. You see the baseline performance and the attention sampling performance are similar in terms of test error, but if you look at how much time the baseline uses per sample and how much memory, and compare that to attention sampling, they save at least an order of magnitude in both time and memory. The same goes for the street sign data set: test error is similar, but time and memory are much, much lower. So attention sampling is faster and more memory efficient than the baseline, and that makes it easy to process these megapixel images; they say you can process megapixel images on a single CPU or GPU, and I really like this because it kind of brings this research back to, let's say, regular people, or universities that don't have as much money as large companies. All in all a very cool paper with very neat experiments; there is a lot in the appendix, check it out, where they show their attention distributions on these images, and their theoretical analysis is pretty easy to follow if you want to check that out. And with that, thanks for listening and bye bye.
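As a small addendum to the unbiased-estimator discussion above, here is a tiny numerical check (my own illustration, not code from the paper): if you draw patch locations from the attention distribution and average their features, you get, in expectation, exactly the full attention-weighted sum over all locations. This shows only the simple with-replacement case; the paper's sampling without replacement needs an extra correction to the weights, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

n_locations, dim = 50, 8
attention = rng.random(n_locations)
attention /= attention.sum()                      # normalized attention distribution
features = rng.normal(size=(n_locations, dim))    # one feature vector per location

# "full" model: attention-weighted sum over every location (expensive in practice)
full_output = attention @ features                # (dim,)

# sampled model: draw a few locations from the attention distribution and
# average their features -- an unbiased Monte Carlo estimate of full_output
def sampled_output(n_patches):
    idx = rng.choice(n_locations, size=n_patches, p=attention, replace=True)
    return features[idx].mean(axis=0)

estimates = np.stack([sampled_output(n_patches=5) for _ in range(20000)])
print(np.abs(estimates.mean(axis=0) - full_output).max())  # ~0 up to Monte Carlo noise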
[ { "end": 4.92, "start": 0, "text": " Hi there, today we're looking at processing megapixel images with deep" }, { "end": 12.72, "start": 4.92, "text": " attention sampling models by Angelos Kateropoulos and François Fleuret." }, { "end": 20.88, "start": 12.72, "text": " This is another paper that I saw the talk of at ICML and it's a pretty cool idea," }, { "end": 26.52, "start": 20.88, "text": " it's pretty simple and apparently it works very well. So consider the" }, { "end": 35.72, "start": 26.52, "text": " following image here of a street situation and ask yourself if a" }, { "end": 42.760000000000005, "start": 35.72, "text": " self-driving car sees this, what are the kind of things it needs to be aware of?" }, { "end": 48.28, "start": 42.760000000000005, "text": " So of course one of the things it needs to be aware of is like the road, the cars" }, { "end": 54.36, "start": 48.28, "text": " and so on but also what's encircled in red here, the street sign and the street" }, { "end": 59.88, "start": 54.36, "text": " sign especially is important because there's a number on it and you want to" }, { "end": 65.64, "start": 59.88, "text": " see what the number is otherwise you won't be able to adjust your speed. So if" }, { "end": 70.36, "start": 65.64, "text": " this is now a really large image, so if the camera is really good and the" }, { "end": 75.08, "start": 70.36, "text": " dimensions of this image are really large, then current machine learning" }, { "end": 81.88, "start": 75.08, "text": " methods have a problem because current machine learning methods kind of go up" }, { "end": 88.72, "start": 81.88, "text": " to maybe something like 200 by 200 pixels or the current image net models," }, { "end": 93.92, "start": 88.72, "text": " some down sample and so on. So if this is much larger than this, what current" }, { "end": 98.72, "start": 93.92, "text": " machine learning models would do is they would simply down sample, like compress" }, { "end": 105.46, "start": 98.72, "text": " the size, just compress it a bit and so on. And by that, as you see here on the" }, { "end": 110.32, "start": 105.46, "text": " right, if the original patch in the image you could cut it" }, { "end": 115.8, "start": 110.32, "text": " out and enlarge it, it would look like this. If you compress the whole image, the" }, { "end": 121.72, "start": 115.8, "text": " same patch would now look like this, blurred. So in the bottom half you'd be" }, { "end": 128, "start": 121.72, "text": " able to recognize the number, in the top half you wouldn't. So a standard CNN might" }, { "end": 132.16, "start": 128, "text": " be able to recognize the road and the car still at the lower resolution but" }, { "end": 138.35999999999999, "start": 132.16, "text": " not the speed sign. What we want is a method that can selectively pay" }, { "end": 145.04000000000002, "start": 138.36, "text": " attention to parts of the image that it finds interesting and then look at those" }, { "end": 150.60000000000002, "start": 145.04000000000002, "text": " parts in full detail while basically deciding to discard other parts" }, { "end": 158.04000000000002, "start": 150.60000000000002, "text": " completely such as the sky here. So this paper is one that does this and does so" }, { "end": 166.12, "start": 158.04000000000002, "text": " in a very efficient manner. So the basic premise is very simple. All right, I'm" }, { "end": 172.04, "start": 166.12, "text": " going to show you on this on the same image. 
So what you do is first you" }, { "end": 177.88, "start": 172.04, "text": " actually compress the image. So this image will become a smaller image, right?" }, { "end": 187.24, "start": 177.88, "text": " So here maybe this is 1000 by 2000, you compress it down to maybe 100 by 200." }, { "end": 191.68, "start": 187.24, "text": " Still the same image but compressed. Here's the road, here's a bunch of" }, { "end": 198.56, "start": 191.68, "text": " trees. I'm very good at drawing trees. And here's this street sign and here is a" }, { "end": 207.6, "start": 198.56, "text": " car and here is another car. All right, so and there is a sky up here. So now" }, { "end": 215.56, "start": 207.6, "text": " what you do is on this smaller version you classify every location. I guess" }, { "end": 220.44, "start": 215.56, "text": " you could classify, you could subsample but you want to classify every single" }, { "end": 229.4, "start": 220.44, "text": " location on it on how interesting is it. And what they do is they take this and" }, { "end": 234.44, "start": 229.4, "text": " just put it through what they call an attention network which is just this it" }, { "end": 242.16, "start": 234.44, "text": " just a neural network. In their case it's a CNN that for each location here for" }, { "end": 254.48, "start": 242.16, "text": " each blue location outputs a function a of a and let's call it a x y at" }, { "end": 264.8, "start": 254.48, "text": " coordinates x and y of this image x. Okay, this is stupid notation. That's a of x" }, { "end": 272.12, "start": 264.8, "text": " so the image is x at coordinates i, j. Right, so all of these blue things here" }, { "end": 279.40000000000003, "start": 272.12, "text": " are i's and j's. Different i's and j's. And then what does this gives you now if" }, { "end": 286.8, "start": 279.40000000000003, "text": " you normalize correctly, so if you normalize over all the a's and i, j, a, i, j. If you" }, { "end": 292.08000000000004, "start": 286.8, "text": " normalize this gives you a distribution over this image. So if we look at it in" }, { "end": 299.56, "start": 292.08, "text": " like 1D this gives you like a distribution not a continuous one in" }, { "end": 310.68, "start": 299.56, "text": " this case a discrete one. How interesting is each patch and at the end if you have" }, { "end": 315.71999999999997, "start": 310.68, "text": " this distribution, so let's finish here, what you want to do is you want to say" }, { "end": 320.68, "start": 315.71999999999997, "text": " which are the most interesting locations. So this one's pretty high and these are" }, { "end": 328.6, "start": 320.68, "text": " very high so that might correspond to over here that might correspond to some" }, { "end": 334.12, "start": 328.6, "text": " location. So this location is very high and these locations are very interesting" }, { "end": 341.92, "start": 334.12, "text": " and only in these locations you take them out and then only those you process" }, { "end": 347.16, "start": 341.92, "text": " in full resolution. So you might have extracted let's say four patches so now" }, { "end": 357.36, "start": 347.16, "text": " you have four of these patches and each of them individually you run through a" }, { "end": 364.24, "start": 357.36, "text": " second neural network which is called another CNN which is called F the" }, { "end": 370.56, "start": 364.24, "text": " feature network. So the feature network will take a patch and output a vector of" }, { "end": 379.8, "start": 370.56, "text": " features. 
So it will feed those in and output the vector of features and" }, { "end": 391.8, "start": 379.8, "text": " then what you do is you simply your final output which they call G, let me" }, { "end": 406.56, "start": 391.8, "text": " colorize this so G which is G is now the final output let's not call it G let's" }, { "end": 419.88, "start": 406.56, "text": " call it O. Output is you sum over all the patches you have extracted down here so" }, { "end": 432.04, "start": 419.88, "text": " the patch number P over all your patches and you sum these features F of patch P" }, { "end": 444.15999999999997, "start": 432.04, "text": " right and P might be at location IJ let's put IJ here so IJ in the extracted" }, { "end": 451.32000000000005, "start": 444.16, "text": " patches and you weigh each feature by how much attention it got at that" }, { "end": 457.56, "start": 451.32000000000005, "text": " location. So it looks more complicated than it is what you do is you" }, { "end": 463.56, "start": 457.56, "text": " simply determine these features by using this neural network only at the position" }, { "end": 467.64000000000004, "start": 463.56, "text": " where this neural network says are interesting then you get the features" }, { "end": 474.24, "start": 467.64, "text": " from the interesting positions and you basically just weigh them by how much" }, { "end": 479.36, "start": 474.24, "text": " attention they got in the attention distribution and that will be your final" }, { "end": 484.59999999999997, "start": 479.36, "text": " output of the network and it makes intuitive sense like one network decides" }, { "end": 489.84, "start": 484.59999999999997, "text": " what is interesting the other network decides what are we going to do with the" }, { "end": 497.52, "start": 489.84, "text": " interesting things in this image. 
And the cool thing about this is you" }, { "end": 503.35999999999996, "start": 497.52, "text": " can basically decide how many of these patches here how many you want to" }, { "end": 508.35999999999996, "start": 503.35999999999996, "text": " extract you can decide at what resolution you want to process this" }, { "end": 516.48, "start": 508.35999999999996, "text": " image and all of this are parameters that you set by how much time you have" }, { "end": 522, "start": 516.48, "text": " for computation and how much memory you have for your computation so that's" }, { "end": 526.64, "start": 522, "text": " pretty cool pretty module we can scale up we can scale down and the another cool" }, { "end": 531.52, "start": 526.64, "text": " thing is the theoretical guarantees that they give so basically here they prove" }, { "end": 540.52, "start": 531.52, "text": " that the way they do it especially by extracting the patch especially if they" }, { "end": 545.28, "start": 540.52, "text": " have an unbiased sorry especially have if they have sampling without replacement" }, { "end": 553.52, "start": 545.28, "text": " is that if they weigh the things correctly and if they do the things" }, { "end": 558.6, "start": 553.52, "text": " correctly they show that this is actually an unbiased estimator of the" }, { "end": 566.0799999999999, "start": 558.6, "text": " true neural network if you were to evaluate on the full image basically on" }, { "end": 575.36, "start": 566.0799999999999, "text": " each patch in full resolution so only taking the ones where the attention" }, { "end": 582.88, "start": 575.36, "text": " focuses is an unbiased estimator and not only is it an unbiased estimator it is" }, { "end": 587.52, "start": 582.88, "text": " in fact the estimator with the smallest variance and that's what they prove" }, { "end": 598.32, "start": 587.52, "text": " here so the minimum variance estimator and this is this is pretty pretty" }, { "end": 603.56, "start": 598.32, "text": " interesting pretty cool and works pretty well they also show how to derive the" }, { "end": 609.52, "start": 603.56, "text": " gradient update when you train with this attention sampling so now you train your" }, { "end": 614.28, "start": 609.52, "text": " neural you train your machine learning system not on the whole image but only" }, { "end": 621.4399999999999, "start": 614.28, "text": " on a subset of the image patches but it still behaves in expectation as if you" }, { "end": 626.8, "start": 621.4399999999999, "text": " were to train on the entire image so pretty neat so here they show how this" }, { "end": 635.64, "start": 626.8, "text": " compares to full CNN in this case we have the full CNN where the picture is" }, { "end": 641.6, "start": 635.64, "text": " simply down sampled and then classified and this is what's called megapixel" }, { "end": 647.04, "start": 641.6, "text": " amnest so in megapixel amnest you have a large image and you put three digits in" }, { "end": 652.3199999999999, "start": 647.04, "text": " there there are the same for example five five five from the amnest data set" }, { "end": 658.84, "start": 652.3199999999999, "text": " you put two random digits others like two three and you put also a bunch of" }, { "end": 665.6, "start": 658.84, "text": " noise noise patches somewhere so the task is to recognize which is the" }, { "end": 671.4, "start": 665.6, "text": " dominant digit here in this case it would be five right five five where was" }, { "end": 678.5600000000001, "start": 671.4, "text": " the 
other one five here so if you give this to a regular CNN you see it does" }, { "end": 683.84, "start": 678.5600000000001, "text": " about this well this is the training loss here training loss and this is the" }, { "end": 690.96, "start": 683.84, "text": " test loss and it takes this much time right time per epoch here and this much" }, { "end": 698.84, "start": 690.96, "text": " time to evaluate sorry if you now use this attention sampling and as I said" }, { "end": 702.64, "start": 698.84, "text": " you can actually modulate how many patches you want to take so as you go" }, { "end": 708.44, "start": 702.64, "text": " down you take more patches we would expect it to take more time this is" }, { "end": 712.48, "start": 708.44, "text": " exactly what happens you see for example down here in the test error if you take" }, { "end": 719.4, "start": 712.48, "text": " five patches per image it takes very little time but the error I mean the" }, { "end": 724.44, "start": 719.4, "text": " error is still better than the if you use the CNN simply because you can now" }, { "end": 732.28, "start": 724.44, "text": " pay attention to details much more as you use more patches your test error" }, { "end": 737.28, "start": 732.28, "text": " drops the also your training loss they drop so using more patches will be" }, { "end": 742.28, "start": 737.28, "text": " actually give you a better and better and better performing model but you" }, { "end": 749.3199999999999, "start": 742.28, "text": " sacrifice a little bit of time but still not never as as slow as with the full" }, { "end": 757.16, "start": 749.3199999999999, "text": " with that with the CNN so even though it's a down sampled CNN right so that" }, { "end": 762.64, "start": 757.16, "text": " is very interesting and very cool that not only do they beat the the baseline" }, { "end": 768.92, "start": 762.64, "text": " in terms of error but also a lot in terms of speed if you look at what the" }, { "end": 774.92, "start": 768.92, "text": " model does as it learns here you see for a given image this is always the same" }, { "end": 779.5999999999999, "start": 774.92, "text": " image from the data set at the beginning they have actually marked where the" }, { "end": 785.8399999999999, "start": 779.5999999999999, "text": " relevant the three relevant digits are in the picture with the red circle so if" }, { "end": 793.64, "start": 785.8399999999999, "text": " you look at how over the training of this model how this distribution evolves" }, { "end": 798.76, "start": 793.64, "text": " is pretty interesting yellow basically means high attention so at the beginning" }, { "end": 806.8, "start": 798.76, "text": " you have high attention everywhere in the image right and then as you go on and" }, { "end": 812.24, "start": 806.8, "text": " on and on you see for example here it pays attention to all the locations" }, { "end": 818.8, "start": 812.24, "text": " where basically where there is something in the image right this could be one of" }, { "end": 823, "start": 818.8, "text": " these three digits but it could also be one of the digits that it's trying to" }, { "end": 827.4399999999999, "start": 823, "text": " that is trying to distract the model like the false digits or the noise" }, { "end": 834.2, "start": 827.44, "text": " patches and as you go more and more and more it really learns to only pay" }, { "end": 839.6800000000001, "start": 834.2, "text": " attention to the relevant digits and then classify those at full resolution" }, { "end": 845.2800000000001, 
"start": 839.6800000000001, "text": " so this really shows the this this kind of attention distribution learns" }, { "end": 855.08, "start": 845.2800000000001, "text": " something very meaningful they do more experiments on two data sets namely this" }, { "end": 861.76, "start": 855.08, "text": " is a histopathology data set right here where the goal is I think to recognize" }, { "end": 873.44, "start": 861.76, "text": " this epithelial cells this type of cell and you can see that this here is the" }, { "end": 882.6800000000001, "start": 873.44, "text": " baseline and this here is the new method and the baseline basically what it does" }, { "end": 887.4399999999999, "start": 882.68, "text": " is it does similar thing namely it processes the image in patches but it" }, { "end": 895.12, "start": 887.4399999999999, "text": " processes every single patch maybe in succession but it still processes every" }, { "end": 899.64, "start": 895.12, "text": " single patch where the attention sampling only processes the patches that" }, { "end": 906.88, "start": 899.64, "text": " the attention sampling distribution suggests and this other data set here" }, { "end": 912.5999999999999, "start": 906.88, "text": " is a street sign data set that you saw at the beginning right here and the" }, { "end": 920.6, "start": 912.6, "text": " the again I think this is the baseline and this is the attention sample so both" }, { "end": 925.44, "start": 920.6, "text": " learn to pay attention to the street signs but again the attention sampling" }, { "end": 933.52, "start": 925.44, "text": " much more efficient so here you see the baseline performance the attention" }, { "end": 939.6, "start": 933.52, "text": " sampling performance is similar in terms of test error but if you look at how" }, { "end": 945.6, "start": 939.6, "text": " much time the baseline uses per sample and how much memory and then compare" }, { "end": 951.48, "start": 945.6, "text": " this to the attention sampling you see that they save at least an order of" }, { "end": 956.6, "start": 951.48, "text": " magnitude in time and memory and the same thing goes for the street sign" }, { "end": 964.24, "start": 956.6, "text": " data set you see test error here and then test error is similar for the" }, { "end": 973.48, "start": 964.24, "text": " attention sampling but again time memory much much lower so the attention" }, { "end": 982, "start": 973.48, "text": " sampling is faster and is more memory efficient than the baseline and that" }, { "end": 988.6, "start": 982, "text": " makes it makes it easy to process these megapixel images even on here they say" }, { "end": 995.5600000000001, "start": 988.6, "text": " process megapixel images in a single CPU or GPU and that really I like this" }, { "end": 1001.44, "start": 995.5600000000001, "text": " because it kind of brings their research back to let's say regular people or" }, { "end": 1009.8000000000001, "start": 1001.44, "text": " maybe universities that don't have as much money as large companies and so all" }, { "end": 1014.6800000000001, "start": 1009.8000000000001, "text": " in all very cool paper very neat experiments to have a lot in the" }, { "end": 1020, "start": 1014.68, "text": " appendix check it out where they show their attention distribution in these" }, { "end": 1025.32, "start": 1020, "text": " images their theoretical analysis is pretty easy to follow if you want to" }, { "end": 1045.12, "start": 1025.32, "text": " check that out and with that thanks for listening and bye bye" } ]
1L83tM8nwHU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Manifold Mixup: Better Representations by Interpolating Hidden States
[ "Science & Technology" ]
[ "deep learning", "neural networks", "adversarial examples", "machine learning", "bengio", "classification", "smooth", "flat representations", "ai", "artificial intelligence", "supervised learning", "regluarization", "regularizer", "hidden representations", "overconfidence" ]
Standard neural networks suffer from problems such as un-smooth classification boundaries and overconfidence. Manifold Mixup is an easy regularization technique that rectifies these problems. It works by interpolating hidden representations of different data points and then train them to predict equally interpolated labels. https://arxiv.org/abs/1806.05236 Abstract: Deep neural networks excel at learning the training data, but often provide incorrect and confident predictions when evaluated on slightly different test examples. This includes distribution shifts, outliers, and adversarial examples. To address these issues, we propose Manifold Mixup, a simple regularizer that encourages neural networks to predict less confidently on interpolations of hidden representations. Manifold Mixup leverages semantic interpolations as additional training signal, obtaining neural networks with smoother decision boundaries at multiple levels of representation. As a result, neural networks trained with Manifold Mixup learn class-representations with fewer directions of variance. We prove theory on why this flattening happens under ideal conditions, validate it on practical situations, and connect it to previous works on information theory and generalization. In spite of incurring no significant computation and being implemented in a few lines of code, Manifold Mixup improves strong baselines in supervised learning, robustness to single-step adversarial attacks, and test log-likelihood. Authors: Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, Aaron Courville, David Lopez-Paz, Yoshua Bengio
Hi there, today we're looking at manifold mixup, better representations by interpolating hidden states by Vikas Verma et al. A number of big names on this paper as you can see and I also saw this at ICML so I was intrigued by it. They propose manifold mixup which is sort of a regularizer of neural networks is specifically of supervised learning and it's actually a pretty simple concept and they kind of show that it has some nice properties and outperforms other regularizers. So what's the problem? The problem is that if you look at this spiral problem here which is often kind of used to to show properties of neural networks, what you have are blue points and the blue points are one class and the red points are another class. You see the two classes here are in this kind of spiral pattern. The data space is just two-dimensional. You see here this is one class, this is the other class. This is pretty difficult for a model to learn because of course the easy models would be like linear classifiers but there's no way to put a line through this such that one class is on one side mostly. So neural networks, if you train them, they will give you something like you see here. They will try to kind of bound the regions with the red points from the blue points but then there's some weird things like here is a weird thing, here is a weird thing. So you'd imagine a correct model would actually classify this area as blue but the neural network has no concept of let's say that the spiral should continue that thus it simply sees here's blue, here's blue, here's a bit of a gap in the training data. So in this case it assigns a red class to it. So this is one problem that the decision boundaries are rather squiggly and irregular and the second one if you look at the actual colors, full blue means very confident blue class, full red means very confident red class and in between you kind of see going into the the white so if you look very closely I can't actually zoom in more here. If you look very closely you'll see that the blue gets lighter and lighter until it reaches white and from here the red goes lighter and lighter until it reaches white and white means not confident, white means like 50-50. So you see the area of not confident is actually very small right. If you consider a point here is actually still very confident that it's a blue point and the area of non-confidence is very small even though maybe as as humans we would judge like a relatively large band in the middle to be not confident like if we get a point like this. And the third problem is that you can see in multiple locations like here or here or here that the decision boundary is very close to the data points unnecessarily close. So especially if you look here the decision boundary could be much more optimally placed probably something like this right given the training data but the neural networks because they only see training data they they have no basically no incentive to do this. Alright one might think of you know something like a support vector machine that actually has an incentive to to put the decision boundary away from the from the training data but the neural networks currently they're not SVMs they're basically logistic regressions and as such have no no incentive to do this. So this these are the problems the other problems are this is the input space. 
If you look at the hidden space so they build neural networks specifically they have like the 2d input and then that goes through a bunch of layers and then at one point there's a bottleneck layer with just two hidden nodes and then I guess that goes again and then it goes into a classifier. So in this bottleneck layer they analyze the hidden representations of the data points and in this case for this spiral data set what happens is so in red you see again the red classes in blue the blue class it's 2d so you can plot it what it does is it bunches up the hidden representations fairly fairly so it bunches them kind of up it spreads them out in directions here here here most are bunched up here and it does these kind of weird arrangements here with the pockets of those and of course the neural network is powerful enough such that it can actually you know separate all of this from each other but it's not ideal and the black dots they represent kind of points in between or points from the input space that are not part of the training data so they say they sample uniformly in the range of the input space you see that the black dots are all over the place right some are confident blue some are confident red some are like somewhere all right what you would expect from a good model is that if you input something that's kind of in between or not really sure not even part of the input distribution that it assigns like a low confidence to it that it says well I'm not sure about this this must be somewhere in the middle so just to jump in jump forward to the results what does manifold mixup do without knowing what it is in the same data set it gives you a picture like this you see the decision boundaries are much more smooth right the region of no confidence or of low confidence indicated by the light colors here is much larger and also the decision boundary here we had specifically this data point here you see the decision boundary is pushed away though you could argue about that particular point but the decision boundary is generally pushed away from the data points you also see no more kind of these squiggles here it doesn't happen in in here also if you look at the hidden representations the hidden representations now are spread out the classes are bunched up so not all the points are bunched up but the the points of individual classes are bunched up together and the randomly sampled points are in the middle as they should be you say only confident red is down here confident blue is up here and everything in between is on confident and third if you look at the singular value decompositions of the hidden player and that's kind of a measure of how spread out in the different dimensions a data set is you see that the manifold mix up here in green it concentrates or it it lowers the singular values of the kind of lower indexes so the first singular value is large which means that there is like a dominant direction in the in the data and this is done for each class separately as I understand it it puts a lot of weight on the first singular vector and then it pushes down the contributions of the other singular vector which means that the data set that is analyzed is is concentrated into fewer directions of variance this is layer one and here is layer three means so you see it happens in both that the manifold mix up compared to the baseline model does this so now you might ask what is manifold mix up it's actually pretty pretty simple concept all right here is another comparing it to other kind of 
regularization techniques and showing that none of them really does this so manifold mix up is this basically what you do is when you train a neural network you have input data and you take many batches of input data specifically you take two many batches X and Y and X prime Y prime right and then what you do is if I have the draw the neural network here so here is the inputs like a picture of a cat it goes through layers right and then what you do is you say at some particular you say stop stop right you take the representation out you and you do this with two different many batches so here is this is cat one and I'm down back here is cat two whatever or dog that's a cat you pass it in right here you take it out here you pass it through the network and you take it out so you now have two different forward paths of two different many batches and then you define a lambda and I guess they randomly sample a lambda in zero one right in the range of zero one so this is a mixing coefficient and then you mix you say lambda times hidden representation of batch one plus one minus lambda of hidden representation of batch two and that is what you pass through the rest of the network right so basically you forward propagate two different batches until a certain layer here then you mix them with a random coefficient and then you pass it through the rest and then the only thing you also have to do is then at the end if you think of the labels of these two things you want to mix the labels in the same fashion so you want to mix lambda times y of batch one plus one minus lambda of y of batch two and then this is your training signal for whatever comes out here right so it's it's um these are these are one hot labels so if it's class three it's zero zero one zero and if y2 is class five it's zero zero zero zero one and then you simply mix the two right and that becomes your training signal so in a practical example if let's just have a mini batch size of one so just one sample if this is cat and this is dog you would pass them forward right you would mix so in the hidden representation it would kind of become a cat dog maybe you do it 50 50 but then you would also mix the labels of cat and dog 50 50 and tell the network this is a mixture of 50% cat 50% dog and then you would train the network to predict that 50 50 coefficient so they do this the question is at which layer do you do this and they simply I think for each mini batch sample one hidden layer at random they might have some weighting or something but the way they describe it is they simply sample one layer for me per mini batch and then do the mixing there and then you can actually back prop through everything everything is differentiable this mixing is differentiable so you can back prop through any everything and there's even you know kind of an engineering trick to only use a single mini batch by mixing it with itself so that's that's pretty neat so this manifold mix up as you can see here is the that's kind of the description you mix the hidden representations with lambda and you mix the labels with the same lambda and that will become your actual training signal all right so they give some theory to it that it flattens representations and specifically they say under some conditions namely if the network is large enough so if the dimension of the hidden representation is of a certain size then if you optimize this manifold mix up like if you optimize over every lambda and over the entire training data set what you will end up is actually a 
linear function of the input this is not too surprising that if you because what you do is you mix linearly this mixture happens in a linear fashion so if you optimize for and you not only optimize for the training set but you optimize for every possible mixture of the training set linear mixture your minimization your minimizer function will actually become a linear function it's not surprising but they have a formal proof of this and they also have a proof that if certain assumptions are given then the minimizers if you apply the minimizers the hidden representations will actually fall on a low dimensional subspace which is also not surprising but it's kind of the theoretical analog to what they show with with the singular value distribution that it basically suppresses low singular values that means the data set is much more into a single direction the hidden representations sorry all right so this the theory part is you can you can read it if you if you want to it's yeah it's it's to the results are to be expected I would say from what they do and the last thing they give a pictorial example of why manifold mix up flattened representations so both of these things the fact that the minimizers will become linear functions and the fact that the singular value spectrum is more concentrated on the first singular value means basically that representations are flattened and here is a pictorial representation so in this case what happens if you if you basically have these four data points a 1a 2b 1 and b 2 where a 1 and a 2 are blue class and b 1 and b 2 are red class and if you now look at an interpolation point between the two so if you look at this interpolation point between a 1 and b 2 what happens is that in this case this should be 50 50 blue and red but if you now look at the points that it where it's not interpolated on this is very close to a 2 in this case it's probably should be more like 95 blue and 5 red do they say here well if you use manifold mix up to learn the network what you'll actually do is you say okay actually this hidden representation needs to be pushed outward and you will achieve something over here where any mixture of two points of the opposite class will actually give you a 50 50 so all the mid points here will give you a 50 50 mixture between the labels which basically means what you end up with is a line between this data and this data and it means that basically the network becomes more linear and the representations become more flat because flat is the optimal if your distributions are flat all the distances to the line are the same and this objective is optimized and this is basically my my kind of biggest problem with the method is that it it kind of mixes the input with a linear function where we know that that is kind of not the shape of the true data manifold the input manifolds as you can see here the input manifold here isn't linear or flat it's actually very very tangled and we know that neural networks as you continue in the layers will flatten those representations because ultimately at the end it needs to classify the data set linearly because the last layer is a softmax layer but the the idea that you could apply this to any layer seems a bit shady to me of course it works and they show it works and it's really nice that it works but applying this to low layers in neural networks seems a bit not principled to me so I think this is not the end of the story of this line of work and there is kind of more that can be done in a more principled fashion 
But in any case, they show that this actually works in terms of generalization performance on standard data sets. They have results on CIFAR-10 and CIFAR-100, the famous image data sets, and show that their regularizer outperforms the others. They also show that networks trained with it withstand single-step adversarial attacks better, so they get better performance against single-step attacks after regularizing. The intuition is again the same: if you have two points X1 and X2 of different classes and the decision boundary is really close to X2, an adversarial attack can move the point across the boundary with a very small step; but if the decision boundary is pushed away from both data points, the attack has to go a long way to reach it, and since you usually limit the size of adversarial perturbations, it may simply not reach the boundary, which mitigates some of the problem. So it's pretty cool. I think there is still work to be done, but this is pretty neat, it's easy to implement, I've seen there are already a lot of libraries with it available, and it won't hurt to add this to your code to make your network better and more robust. All right, that was it from me, bye bye.
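As a small addendum, here is a minimal sketch of the manifold mixup training step described above, written against a generic model whose layers are exposed as a list of PyTorch blocks. The Beta(alpha, alpha) sampling of lambda and the single-batch trick of mixing a batch with a shuffled copy of itself are commonly used choices; treat the function and its details as an illustrative sketch, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def manifold_mixup_step(blocks, classifier, x, y, num_classes, alpha=2.0):
    """One manifold-mixup training step on a single mini-batch.

    blocks:     list of modules; running them in order (then classifier) is the
                full network; the last block is assumed to output a flat feature
                vector that classifier can consume
    classifier: final layer mapping features to class logits
    x, y:       input batch and integer class labels
    """
    # mixing coefficient and the layer at which to mix, both drawn at random
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    k = torch.randint(0, len(blocks) + 1, (1,)).item()   # k == 0 mixes the raw inputs

    # single-batch trick: the "second" mini-batch is a shuffled copy of the first
    perm = torch.randperm(x.size(0))

    h = x
    for i, block in enumerate(blocks):
        if i == k:
            h = lam * h + (1 - lam) * h[perm]             # mix hidden representations
        h = block(h)
    if k == len(blocks):                                  # mix after the last block
        h = lam * h + (1 - lam) * h[perm]

    logits = classifier(h)

    # mix the one-hot labels with the same coefficient and train on the soft targets
    y_soft = F.one_hot(y, num_classes).float()
    y_soft = lam * y_soft + (1 - lam) * y_soft[perm]
    loss = torch.sum(-y_soft * F.log_softmax(logits, dim=1), dim=1).mean()
    return loss
```

An equivalent way to write the loss is lam * cross_entropy(logits, y) + (1 - lam) * cross_entropy(logits, y[perm]), which avoids building one-hot targets; in either case you just call loss.backward() and step the optimizer as usual.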
[ { "end": 5.5200000000000005, "start": 0, "text": " Hi there, today we're looking at manifold mixup, better representations by" }, { "end": 11.48, "start": 5.5200000000000005, "text": " interpolating hidden states by Vikas Verma et al. A number of big names on" }, { "end": 18, "start": 11.48, "text": " this paper as you can see and I also saw this at ICML so I was intrigued by it." }, { "end": 26.34, "start": 18, "text": " They propose manifold mixup which is sort of a regularizer of neural networks" }, { "end": 32.56, "start": 26.34, "text": " is specifically of supervised learning and it's actually a pretty simple concept" }, { "end": 37.96, "start": 32.56, "text": " and they kind of show that it has some nice properties and outperforms other" }, { "end": 45, "start": 37.96, "text": " regularizers. So what's the problem? The problem is that if you look at this" }, { "end": 51.400000000000006, "start": 45, "text": " spiral problem here which is often kind of used to to show properties of neural" }, { "end": 57.68, "start": 51.4, "text": " networks, what you have are blue points and the blue points are one class and" }, { "end": 62.2, "start": 57.68, "text": " the red points are another class. You see the two classes here are in this kind" }, { "end": 66.68, "start": 62.2, "text": " of spiral pattern. The data space is just two-dimensional. You see here" }, { "end": 71.92, "start": 66.68, "text": " this is one class, this is the other class. This is pretty difficult for a" }, { "end": 77.52, "start": 71.92, "text": " model to learn because of course the easy models would be like linear" }, { "end": 82.75999999999999, "start": 77.52, "text": " classifiers but there's no way to put a line through this such that one" }, { "end": 88.75999999999999, "start": 82.75999999999999, "text": " class is on one side mostly. So neural networks, if you train them, they will" }, { "end": 93.47999999999999, "start": 88.75999999999999, "text": " give you something like you see here. They will try to kind of bound the" }, { "end": 99.84, "start": 93.47999999999999, "text": " regions with the red points from the blue points but then there's" }, { "end": 104.56, "start": 99.84, "text": " some weird things like here is a weird thing, here is a weird thing. So you'd" }, { "end": 110.28, "start": 104.56, "text": " imagine a correct model would actually classify this area as blue but the" }, { "end": 117.10000000000001, "start": 110.28, "text": " neural network has no concept of let's say that the spiral should continue" }, { "end": 121, "start": 117.10000000000001, "text": " that thus it simply sees here's blue, here's blue, here's a bit of a gap in" }, { "end": 128.32, "start": 121, "text": " the training data. So in this case it assigns a red class to it. So this is" }, { "end": 133.12, "start": 128.32, "text": " one problem that the decision boundaries are rather squiggly and" }, { "end": 139.24, "start": 133.12, "text": " irregular and the second one if you look at the actual colors, full blue means" }, { "end": 145.08, "start": 139.24, "text": " very confident blue class, full red means very confident red class and in between" }, { "end": 150.56, "start": 145.08, "text": " you kind of see going into the the white so if you look very closely I can't" }, { "end": 154.76, "start": 150.56, "text": " actually zoom in more here. 
If you look very closely you'll see that the blue" }, { "end": 160.08, "start": 154.76, "text": " gets lighter and lighter until it reaches white and from here the red goes" }, { "end": 164.96, "start": 160.08, "text": " lighter and lighter until it reaches white and white means not confident," }, { "end": 172.08, "start": 164.96, "text": " white means like 50-50. So you see the area of not confident is actually very" }, { "end": 178.88000000000002, "start": 172.08, "text": " small right. If you consider a point here is actually still very confident that" }, { "end": 184.28, "start": 178.88000000000002, "text": " it's a blue point and the area of non-confidence is very small even though" }, { "end": 190.96, "start": 184.28, "text": " maybe as as humans we would judge like a relatively large band in the middle to" }, { "end": 197.08, "start": 190.96, "text": " be not confident like if we get a point like this. And the third problem is that" }, { "end": 203.12, "start": 197.08, "text": " you can see in multiple locations like here or here or here that the decision" }, { "end": 211.08, "start": 203.12, "text": " boundary is very close to the data points unnecessarily close. So especially" }, { "end": 215.96, "start": 211.08, "text": " if you look here the decision boundary could be much more optimally placed" }, { "end": 221.88000000000002, "start": 215.96, "text": " probably something like this right given the training data but the neural" }, { "end": 228, "start": 221.88000000000002, "text": " networks because they only see training data they they have no basically no" }, { "end": 234.52, "start": 228, "text": " incentive to do this. Alright one might think of you know something like a" }, { "end": 238.8, "start": 234.52, "text": " support vector machine that actually has an incentive to to put the decision" }, { "end": 245.84, "start": 238.8, "text": " boundary away from the from the training data but the neural networks currently" }, { "end": 252.28, "start": 245.84, "text": " they're not SVMs they're basically logistic regressions and as such have" }, { "end": 258.44, "start": 252.28, "text": " no no incentive to do this. So this these are the problems the other problems are" }, { "end": 263.36, "start": 258.44, "text": " this is the input space. If you look at the hidden space so they build neural" }, { "end": 268.2, "start": 263.36, "text": " networks specifically they have like the 2d input and then that goes through a" }, { "end": 271.8, "start": 268.2, "text": " bunch of layers and then at one point there's a bottleneck layer with just two" }, { "end": 276.71999999999997, "start": 271.8, "text": " hidden nodes and then I guess that goes again and then it goes into a classifier." 
}, { "end": 283.71999999999997, "start": 276.71999999999997, "text": " So in this bottleneck layer they analyze the hidden representations of the data" }, { "end": 290.44, "start": 283.71999999999997, "text": " points and in this case for this spiral data set what happens is so in red you" }, { "end": 294.4, "start": 290.44, "text": " see again the red classes in blue the blue class it's 2d so you can plot it" }, { "end": 300.67999999999995, "start": 294.4, "text": " what it does is it bunches up the hidden representations fairly fairly so it" }, { "end": 306.32, "start": 300.67999999999995, "text": " bunches them kind of up it spreads them out in directions here here here most" }, { "end": 311.47999999999996, "start": 306.32, "text": " are bunched up here and it does these kind of weird arrangements here with the" }, { "end": 316.79999999999995, "start": 311.47999999999996, "text": " pockets of those and of course the neural network is powerful enough such" }, { "end": 321.84, "start": 316.79999999999995, "text": " that it can actually you know separate all of this from each other but it's not" }, { "end": 327.44, "start": 321.84, "text": " ideal and the black dots they represent kind of points in between or points from" }, { "end": 331.28, "start": 327.44, "text": " the input space that are not part of the training data so they say they sample" }, { "end": 337.2, "start": 331.28, "text": " uniformly in the range of the input space you see that the black dots are" }, { "end": 342.03999999999996, "start": 337.2, "text": " all over the place right some are confident blue some are confident red" }, { "end": 348.03999999999996, "start": 342.03999999999996, "text": " some are like somewhere all right what you would expect from a good model is" }, { "end": 352.16, "start": 348.04, "text": " that if you input something that's kind of in between or not really sure not" }, { "end": 358.08000000000004, "start": 352.16, "text": " even part of the input distribution that it assigns like a low confidence to it" }, { "end": 361.40000000000003, "start": 358.08000000000004, "text": " that it says well I'm not sure about this this must be somewhere in the" }, { "end": 368.52000000000004, "start": 361.40000000000003, "text": " middle so just to jump in jump forward to the results what does manifold mixup" }, { "end": 373.24, "start": 368.52000000000004, "text": " do without knowing what it is in the same data set it gives you a picture like" }, { "end": 379.44, "start": 373.24, "text": " this you see the decision boundaries are much more smooth right the region of no" }, { "end": 384.32, "start": 379.44, "text": " confidence or of low confidence indicated by the light colors here is" }, { "end": 391.6, "start": 384.32, "text": " much larger and also the decision boundary here we had specifically this" }, { "end": 396.88, "start": 391.6, "text": " data point here you see the decision boundary is pushed away though you could" }, { "end": 401.04, "start": 396.88, "text": " argue about that particular point but the decision boundary is generally" }, { "end": 406.24, "start": 401.04, "text": " pushed away from the data points you also see no more kind of these squiggles" }, { "end": 414.24, "start": 406.24, "text": " here it doesn't happen in in here also if you look at the hidden representations" }, { "end": 422.20000000000005, "start": 414.24, "text": " the hidden representations now are spread out the classes are bunched up so" }, { "end": 426.76, "start": 422.20000000000005, "text": " not all the points are 
bunched up but the the points of individual classes are" }, { "end": 432.68, "start": 426.76, "text": " bunched up together and the randomly sampled points are in the middle as" }, { "end": 439.2, "start": 432.68, "text": " they should be you say only confident red is down here confident blue is up" }, { "end": 447.34, "start": 439.2, "text": " here and everything in between is on confident and third if you look at the" }, { "end": 452.59999999999997, "start": 447.34, "text": " singular value decompositions of the hidden player and that's kind of a" }, { "end": 458.96000000000004, "start": 452.6, "text": " measure of how spread out in the different dimensions a data set is you" }, { "end": 466.52000000000004, "start": 458.96000000000004, "text": " see that the manifold mix up here in green it concentrates or it it lowers" }, { "end": 474.44, "start": 466.52000000000004, "text": " the singular values of the kind of lower indexes so the first singular value is" }, { "end": 480.16, "start": 474.44, "text": " large which means that there is like a dominant direction in the in the data" }, { "end": 487.6, "start": 480.16, "text": " and this is done for each class separately as I understand it it puts a" }, { "end": 490.96000000000004, "start": 487.6, "text": " lot of weight on the first singular vector and then it pushes down the" }, { "end": 494.64000000000004, "start": 490.96000000000004, "text": " contributions of the other singular vector which means that the data set" }, { "end": 504.02000000000004, "start": 494.64000000000004, "text": " that is analyzed is is concentrated into fewer directions of variance this is" }, { "end": 511.76, "start": 504.02, "text": " layer one and here is layer three means so you see it happens in both that the" }, { "end": 518.84, "start": 511.76, "text": " manifold mix up compared to the baseline model does this so now you might ask" }, { "end": 523.52, "start": 518.84, "text": " what is manifold mix up it's actually pretty pretty simple concept all right" }, { "end": 529.16, "start": 523.52, "text": " here is another comparing it to other kind of regularization techniques and" }, { "end": 538.3199999999999, "start": 529.16, "text": " showing that none of them really does this so manifold mix up is this" }, { "end": 546.24, "start": 538.3199999999999, "text": " basically what you do is when you train a neural network you have input data" }, { "end": 552.24, "start": 546.24, "text": " and you take many batches of input data specifically you take two many batches X" }, { "end": 559.76, "start": 552.24, "text": " and Y and X prime Y prime right and then what you do is if I have the draw the" }, { "end": 567.72, "start": 559.76, "text": " neural network here so here is the inputs like a picture of a cat it goes" }, { "end": 573.8, "start": 567.72, "text": " through layers right and then what you do is you say at some particular you say" }, { "end": 581.36, "start": 573.8, "text": " stop stop right you take the representation out you and you do this" }, { "end": 587.24, "start": 581.36, "text": " with two different many batches so here is this is cat one and I'm down back" }, { "end": 596.92, "start": 587.24, "text": " here is cat two whatever or dog that's a cat you pass it in right here you take" }, { "end": 602.88, "start": 596.92, "text": " it out here you pass it through the network and you take it out so you now" }, { "end": 608, "start": 602.88, "text": " have two different forward paths of two different many batches and then you" }, { "end": 616.36, 
"start": 608, "text": " define a lambda and I guess they randomly sample a lambda in zero one" }, { "end": 621.68, "start": 616.36, "text": " right in the range of zero one so this is a mixing coefficient and then you" }, { "end": 631.16, "start": 621.68, "text": " mix you say lambda times hidden representation of batch one plus one" }, { "end": 637, "start": 631.16, "text": " minus lambda of hidden representation of batch two and that is what you pass" }, { "end": 642.16, "start": 637, "text": " through the rest of the network right so basically you forward propagate two" }, { "end": 650.04, "start": 642.16, "text": " different batches until a certain layer here then you mix them with a random" }, { "end": 655.56, "start": 650.04, "text": " coefficient and then you pass it through the rest and then the only thing you" }, { "end": 662.92, "start": 655.56, "text": " also have to do is then at the end if you think of the labels of these two" }, { "end": 669.28, "start": 662.92, "text": " things you want to mix the labels in the same fashion so you want to mix lambda" }, { "end": 678.3199999999999, "start": 669.28, "text": " times y of batch one plus one minus lambda of y of batch two and then this" }, { "end": 685.56, "start": 678.3199999999999, "text": " is your training signal for whatever comes out here right so it's it's um" }, { "end": 692.88, "start": 685.56, "text": " these are these are one hot labels so if it's class three it's zero zero one zero" }, { "end": 698.2399999999999, "start": 692.88, "text": " and if y2 is class five it's zero zero zero zero one and then you simply mix" }, { "end": 704.5999999999999, "start": 698.2399999999999, "text": " the two right and that becomes your training signal so in a practical" }, { "end": 710.8399999999999, "start": 704.5999999999999, "text": " example if let's just have a mini batch size of one so just one sample if this" }, { "end": 717.08, "start": 710.84, "text": " is cat and this is dog you would pass them forward right you would mix so in" }, { "end": 721.6800000000001, "start": 717.08, "text": " the hidden representation it would kind of become a cat dog maybe you do it 50" }, { "end": 726.44, "start": 721.6800000000001, "text": " 50 but then you would also mix the labels of cat and dog 50 50 and tell the" }, { "end": 732.72, "start": 726.44, "text": " network this is a mixture of 50% cat 50% dog and then you would train the" }, { "end": 739.36, "start": 732.72, "text": " network to predict that 50 50 coefficient so they do this the question" }, { "end": 744.76, "start": 739.36, "text": " is at which layer do you do this and they simply I think for each mini batch" }, { "end": 750.8000000000001, "start": 744.76, "text": " sample one hidden layer at random they might have some weighting or something" }, { "end": 756.44, "start": 750.8000000000001, "text": " but the way they describe it is they simply sample one layer for me per mini" }, { "end": 761.4, "start": 756.44, "text": " batch and then do the mixing there and then you can actually back prop through" }, { "end": 764.6800000000001, "start": 761.4, "text": " everything everything is differentiable this mixing is differentiable so you" }, { "end": 768.62, "start": 764.6800000000001, "text": " can back prop through any everything and there's even you know kind of an" }, { "end": 774.04, "start": 768.62, "text": " engineering trick to only use a single mini batch by mixing it with itself so" }, { "end": 778.32, "start": 774.04, "text": " that's that's pretty neat so this manifold mix 
up as you can see here is" }, { "end": 783.24, "start": 778.32, "text": " the that's kind of the description you mix the hidden representations with" }, { "end": 787.88, "start": 783.24, "text": " lambda and you mix the labels with the same lambda and that will become your" }, { "end": 798.08, "start": 787.88, "text": " actual training signal all right so they give some theory to it that it flattens" }, { "end": 805.12, "start": 798.08, "text": " representations and specifically they say under some conditions namely if the" }, { "end": 810.0400000000001, "start": 805.12, "text": " network is large enough so if the dimension of the hidden representation" }, { "end": 816.8000000000001, "start": 810.0400000000001, "text": " is of a certain size then if you optimize this manifold mix up like if" }, { "end": 822.2800000000001, "start": 816.8000000000001, "text": " you optimize over every lambda and over the entire training data set what you" }, { "end": 832.12, "start": 822.28, "text": " will end up is actually a linear function of the input this is not" }, { "end": 838.8399999999999, "start": 832.12, "text": " too surprising that if you because what you do is you mix linearly this mixture" }, { "end": 846.56, "start": 838.8399999999999, "text": " happens in a linear fashion so if you optimize for and you not only optimize" }, { "end": 849.92, "start": 846.56, "text": " for the training set but you optimize for every possible mixture of the" }, { "end": 855.12, "start": 849.92, "text": " training set linear mixture your minimization your minimizer function" }, { "end": 860.18, "start": 855.12, "text": " will actually become a linear function it's not surprising but they have a" }, { "end": 870, "start": 860.18, "text": " formal proof of this and they also have a proof that if certain assumptions are" }, { "end": 876.28, "start": 870, "text": " given then the minimizers if you apply the minimizers the hidden representations" }, { "end": 882.24, "start": 876.28, "text": " will actually fall on a low dimensional subspace which is also not surprising" }, { "end": 889.12, "start": 882.24, "text": " but it's kind of the theoretical analog to what they show with with the singular" }, { "end": 894.24, "start": 889.12, "text": " value distribution that it basically suppresses low singular values that" }, { "end": 898.66, "start": 894.24, "text": " means the data set is much more into a single direction the hidden" }, { "end": 908.16, "start": 898.66, "text": " representations sorry all right so this the theory part is you can you can read" }, { "end": 914.36, "start": 908.16, "text": " it if you if you want to it's yeah it's it's to the results are to be expected I" }, { "end": 922.9599999999999, "start": 914.36, "text": " would say from what they do and the last thing they give a pictorial example of" }, { "end": 928.72, "start": 922.96, "text": " why manifold mix up flattened representations so both of these things" }, { "end": 934.12, "start": 928.72, "text": " the fact that the minimizers will become linear functions and the fact that the" }, { "end": 938.2, "start": 934.12, "text": " singular value spectrum is more concentrated on the first singular value" }, { "end": 945.52, "start": 938.2, "text": " means basically that representations are flattened and here is a pictorial" }, { "end": 957.28, "start": 945.52, "text": " representation so in this case what happens if you if you basically have" }, { "end": 964.72, "start": 957.28, "text": " these four data points a 1a 2b 1 and b 2 where a 1 and a 2 
are blue class and b 1" }, { "end": 973.24, "start": 964.72, "text": " and b 2 are red class and if you now look at an interpolation point between" }, { "end": 980.16, "start": 973.24, "text": " the two so if you look at this interpolation point between a 1 and b 2" }, { "end": 989.52, "start": 980.16, "text": " what happens is that in this case this should be 50 50 blue and red but if you" }, { "end": 994.16, "start": 989.52, "text": " now look at the points that it where it's not interpolated on this is very" }, { "end": 1001.5600000000001, "start": 994.16, "text": " close to a 2 in this case it's probably should be more like 95 blue and 5 red" }, { "end": 1009.28, "start": 1001.56, "text": " do they say here well if you use manifold mix up to learn the network what" }, { "end": 1014.88, "start": 1009.28, "text": " you'll actually do is you say okay actually this hidden representation" }, { "end": 1022.1199999999999, "start": 1014.88, "text": " needs to be pushed outward and you will achieve something over here where any" }, { "end": 1031.84, "start": 1022.12, "text": " mixture of two points of the opposite class will actually give you a 50 50 so" }, { "end": 1039.84, "start": 1031.84, "text": " all the mid points here will give you a 50 50 mixture between the labels which" }, { "end": 1046.36, "start": 1039.84, "text": " basically means what you end up with is a line between this data and this data" }, { "end": 1052.08, "start": 1046.36, "text": " and it means that basically the network becomes more linear and the" }, { "end": 1057.6, "start": 1052.08, "text": " representations become more flat because flat is the optimal if your" }, { "end": 1063.6, "start": 1057.6, "text": " distributions are flat all the distances to the line are the same and this" }, { "end": 1071.12, "start": 1063.6, "text": " objective is optimized and this is basically my my kind of biggest problem" }, { "end": 1081.04, "start": 1071.12, "text": " with the method is that it it kind of mixes the input with a linear function" }, { "end": 1089.52, "start": 1081.04, "text": " where we know that that is kind of not the shape of the true data manifold the" }, { "end": 1097.8, "start": 1089.52, "text": " input manifolds as you can see here the input manifold here isn't linear or flat" }, { "end": 1104.08, "start": 1097.8, "text": " it's actually very very tangled and we know that neural networks as you" }, { "end": 1108.6399999999999, "start": 1104.08, "text": " continue in the layers will flatten those representations because ultimately" }, { "end": 1114.76, "start": 1108.6399999999999, "text": " at the end it needs to classify the data set linearly because the last layer is a" }, { "end": 1121.08, "start": 1114.76, "text": " softmax layer but the the idea that you could apply this to any layer seems a" }, { "end": 1126.24, "start": 1121.08, "text": " bit shady to me of course it works and they show it works and it's really nice" }, { "end": 1132.72, "start": 1126.24, "text": " that it works but applying this to low layers in neural networks seems a bit" }, { "end": 1141.4, "start": 1132.72, "text": " not principled to me so I think this is not the end of the story of this line of" }, { "end": 1147.76, "start": 1141.4, "text": " work and there is kind of more that can be done in a more principled fashion but" }, { "end": 1153.72, "start": 1147.76, "text": " in any case they show that this actually works in terms of performance on" }, { "end": 1161.1200000000001, "start": 1153.72, "text": " generalization on kind of 
standard data sets so they have results on CIFAR-10" }, { "end": 1166.4, "start": 1161.1200000000001, "text": " and CIFAR-100 which are famous image data sets and they show that the" }, { "end": 1175.3600000000001, "start": 1166.4, "text": " hair regularizer outperforms others and they also show that they can withstand" }, { "end": 1184.24, "start": 1175.36, "text": " one step single step adversarial attacks more kind of better so they have a" }, { "end": 1189.12, "start": 1184.24, "text": " better performance against single step adversarial attacks after" }, { "end": 1199.04, "start": 1189.12, "text": " regularizing mostly again giving kind of an idea that the if you push if you" }, { "end": 1205.32, "start": 1199.04, "text": " push it if you have a two points this is X this is X X 1 X 2 there are different" }, { "end": 1212.76, "start": 1205.32, "text": " classes if you put the decision boundary really close to X 2 then an adversarial" }, { "end": 1217.8, "start": 1212.76, "text": " attack can simply move the point across the decision boundary with a very small" }, { "end": 1225.06, "start": 1217.8, "text": " step but if you actually have the decision boundary pushed away from both" }, { "end": 1231.36, "start": 1225.06, "text": " data points then the an adversarial attack must go a very long way to the" }, { "end": 1237.12, "start": 1231.36, "text": " decision boundary and thus if you limit the size of adversarial attacks which is" }, { "end": 1242.4399999999998, "start": 1237.12, "text": " what you usually do you can maybe not reach this decision boundary and thus" }, { "end": 1249.12, "start": 1242.4399999999998, "text": " you mitigate some of the problem so it's pretty cool I think yeah there's work to" }, { "end": 1253.6799999999998, "start": 1249.12, "text": " be done but I think this is pretty cool it's implemented pretty easy I've seen" }, { "end": 1260.6, "start": 1253.6799999999998, "text": " there's a lot of libraries already available with it in and yeah won't hurt" }, { "end": 1265.08, "start": 1260.6, "text": " to add this to your code make your network better and more robust all right" }, { "end": 1292.32, "start": 1265.08, "text": " that was it from me bye bye" } ]
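To make the mixing step described in the transcript segments above concrete, here is a minimal PyTorch-style sketch. It is an illustration, not the authors' code: the network, its fixed split point, and the uniform sampling of lambda are assumptions (the transcript describes sampling lambda in [0, 1] and picking the mixing layer at random per mini-batch; the paper itself draws lambda from a Beta distribution).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative two-stage network; the split stands in for the hidden layer
# at which manifold mixup is applied (chosen at random in the paper).
class TwoStageNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.lower = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.upper = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, n_classes))

    def forward(self, x):
        return self.upper(self.lower(x))

def manifold_mixup_loss(net, x1, y1, x2, y2, n_classes=10):
    """Mix hidden representations of two mini-batches and mix their one-hot
    labels with the same coefficient, as described in the transcript."""
    lam = torch.rand(1).item()            # lambda sampled uniformly in [0, 1]
    h1, h2 = net.lower(x1), net.lower(x2)
    h_mix = lam * h1 + (1 - lam) * h2     # mix at the hidden layer
    logits = net.upper(h_mix)
    t1 = F.one_hot(y1, n_classes).float()
    t2 = F.one_hot(y2, n_classes).float()
    t_mix = lam * t1 + (1 - lam) * t2     # mix the labels identically
    log_p = F.log_softmax(logits, dim=1)
    return -(t_mix * log_p).sum(dim=1).mean()
```

Note that if the split were placed at the input instead of a hidden layer, this would reduce to plain input mixup; deeper splits give the manifold variant discussed above.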
Qk4lJdp7ZAs
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Learning World Graphs to Accelerate Hierarchical Reinforcement Learning
[ "Science & Technology" ]
[ "deep learning", "reinforcement learning", "deep reinforcement learning", "world model", "hierarchical reinforcement learning", "planning", "salesforce", "research", "machine learning", "navigation", "pivot states", "ai", "artificial intelligence" ]
The goal of hierarchical reinforcement learning is to divide a task into different levels of coarseness with the top-level agent planning only over a high-level view of the world and each subsequent layer having a more detailed view. This paper proposes to learn a set of important states as well as their connections to each other as a high-level abstraction. https://arxiv.org/abs/1907.00664 Abstract: In many real-world scenarios, an autonomous agent often encounters various tasks within a single complex environment. We propose to build a graph abstraction over the environment structure to accelerate the learning of these tasks. Here, nodes are important points of interest (pivotal states) and edges represent feasible traversals between them. Our approach has two stages. First, we jointly train a latent pivotal state model and a curiosity-driven goal-conditioned policy in a task-agnostic manner. Second, provided with the information from the world graph, a high-level Manager quickly finds solution to new tasks and expresses subgoals in reference to pivotal states to a low-level Worker. The Worker can then also leverage the graph to easily traverse to the pivotal states of interest, even across long distance, and explore non-locally. We perform a thorough ablation study to evaluate our approach on a suite of challenging maze tasks, demonstrating significant advantages from the proposed framework over baselines that lack world graph knowledge in terms of performance and efficiency. Authors: Wenling Shang, Alex Trott, Stephan Zheng, Caiming Xiong, Richard Socher
Hi there. Today we're looking at learning world graphs to accelerate hierarchical reinforcement learning by Wenling Shang et al from Salesforce Research. This work is based in the world of reinforcement learning and especially hierarchical reinforcement learning. So in hierarchical reinforcement learning the idea is that you want to perform a task; in this case they perform all of their experiments on mazes like this. So imagine you have this maze, and this red thing here is the agent, the goal is the green square, the gray things obviously are walls, and the black things are everywhere the agent can move. The agent can always move one step in any direction that it wants and that isn't blocked by a wall. So in order to fulfill such a task the agent needs to take many many steps, like go here, here, here, here; each one of those is a step. In addition this specific maze has an additional property, namely that there's a locked door here, and first you need to pick up the key to open the locked door. So in order to reach the goal the agent needs first to pick up the key, then open the door, then go to the goal, and for each one of these it has to traverse many many steps. So the idea in hierarchical reinforcement learning is that you have two parts to the agent. Your agent, which is this entire box here, is divided into what's called a manager and a worker. So what does the manager see? The manager sees, basically, and I'm doing an example here, they do it differently, but the manager could see the world basically only in these large chunks. It cares what is in the chunks, but it doesn't distinguish points within the chunks, it just knows about these chunks. And what the manager will say is: oh, first I need to go to this chunk here, because there's the key in this chunk, and then I need to go to this chunk here, because there is the door, and then I need to go to this chunk here, because there's the goal. So in the view of the manager, which has a very high level view of the world, the action sequence is: down here, over here, then over here. Those are like three actions, so that's pretty simple. And then the manager would pass this information to the worker and it would say: hey worker, please go to this state here, please go to the first state. And then the worker would be tasked with basically taking the individual steps to go, not to the final goal, but only to that chunk, and then in that chunk the worker would go to the key. And then once it has the key, the manager would say: good job, now please perform the second action, which is go to this chunk here. So you basically get the idea that the worker and the manager work together, and that the manager has a high level view of the world, and then the worker can basically execute the actual actions that the manager has decided on in a fine-grained way. So this gives you several advantages, namely the manager can plan high level and far away things, and then the worker really only has to care about its close neighborhood, because each step the manager proposes is a fairly short range, so the worker can implement it.
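As a rough illustration of this manager/worker division, a control loop might look like the sketch below. Manager, Worker and env are hypothetical interfaces, not classes from the paper; the point is only that the manager emits coarse subgoals and the worker turns each subgoal into low-level steps.

```python
# Minimal two-level control loop (illustrative; all interfaces are assumed).
def run_episode(env, manager, worker, max_manager_steps=20, horizon=50):
    obs = env.reset()
    done = False
    for _ in range(max_manager_steps):
        subgoal = manager.propose_subgoal(obs)      # high level: "go to this chunk/state"
        for _ in range(horizon):                    # low level: individual moves
            action = worker.act(obs, subgoal)
            obs, reward, done, _ = env.step(action)
            manager.observe(obs, reward)
            if done or worker.reached(obs, subgoal):
                break
        if done:
            break
```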
They do this in a kind of different way, so let's actually start from the back of this paper, which I find is a bit more explanatory and makes a bit more sense to look at. What they propose is to learn a world graph. So what is a world graph? A world graph consists of two things. First, a set of states, which are the blue states here. All these blue states are so-called pivotal states or important states, so these are states in the world that are very important, determined by some measure. These are basically states that, if you look at where they are, are often at narrow passes; you see here, they're at these narrow passes. So basically if you reach those states as an intermediary goal, then you can go a lot of places from there, so these are very, let's say, powerful states. And these states are connected by a neighborhood graph, so basically which of these states are close to each other. For example here you would of course connect those two because they're neighbors, you would probably connect those, I'm attempting to kind of draw the world graph here, you might connect those; it doesn't need to be a tree, it can be like such. So you see that the graph kind of takes shape. These are fairly reachable, so whenever a node in the graph, whenever one of these important states, is fairly easily reachable from some other state, it's designated as a neighbor. So with this world graph, this is what you get: an abstraction, basically a set of states with connections between them that say how easy or hard it is to reach from one state to the other. If you have these things then you can really easily imagine a hierarchical reinforcement learning algorithm that incorporates this information, namely the manager will only use the important states to plan. So for example, the goal isn't drawn in here, but let's say the goal is here, and then the door, a locked door, is here, and let's draw in the key somewhere further away, say here. All right, so what would the manager do? The manager would then say: ah okay, the key is here, so this would be a good important state to reach, since the manager is only allowed to go to important states. So the manager, because it has the graph, says: aha, this state is easily reachable from, let's say, this state, and this state is easily reachable from this state, so it plans: go here, and go here, and then go here, then get the key, which is a kind of micro action that is not one of the important states. Then: I need to go here, this is reachable from this state, that's reachable from this state and from this state, and that's reachable from my origin. So from the key: next go here, go here, go here, go here, and then open the door, and then of course go here and solve the task. The worker then would only ever need to implement the following: it starts here and it says, aha, I need to go here, what do I need to do? I need to go, for example, down and over; and now once I've done this I need to go here, so I need to go right, down, right. So you see the worker only ever has to care about going from one hop to the next hop, making it really easy for the worker, while the manager only has these blue states available, which makes its search space
much more condensed and much more much more overviewable especially with the nodes in between the world graph so that's if you have the world graph right if you have this set of states and how important are how easily they reachable reachable they are between each other you can very easily do a reinforcement learning approach that that is a hierarchical has the manager plan on the world graph has and then has the worker implement the fine-grained actions and there is already a method that does this this paper here uses feudal networks so we won't go into that later just saying it's pretty easy if you have those things so the real question is how do they learn the world graph and what they do is the following and they describe it here in kind of this sorry this way what they want to to finally learn is a prior that tells them for a given state how important it is it and that's a beta prior a beta distribution is a continuous approximation on a on a kind of a binary zero one variable so how do they do it they use an LSTM to encode trajectories so these are trajectories from kind of rollouts of policy and then the the LSTM encodes it and for each step it outputs this posterior over the what's called these latent variables here they say how important is a state so these are the posteriors whereas this over here is the prior and the posterior of course only make sense in context of a trajectory that's why the ultimate decision happens for the prior because the state needs to be important or not important to any trajectory so what they do is they roll out policies and they have certain methods of of doing this so they have they have random exploration of curiosity goals but they also train this continuously so they updated continuously via this what's called a goal condition policy and what a goal condition policy is basically is you put the agent somewhere in the maze actually let's use this maze over here you put the agent somewhere in the maze let's say here you for example make a bunch of ran make a random exploration let's say here so you know these two things are reachable and then you train the agency go from here to here right this is your goal now the agent tries to kind of reconstruct this random walk to there and you can riff so so this is how you train an agent to go it basically go from any two well reachable states to each other right from here to here and so on now you won't train it to go directly from here to over here because a random walk would be very hard for a random walk to find its way over there but what you end up with is is somehow an agent that is able to reach close by states and that's exactly what the worker is supposed to do right here and so of of these trajectories you can then unroll them and decide on the kind of on these on these pivotal states so how do you do that and this is where this top part here comes in so down here you input the trajectory and you output how important is each state all right and now you see in this example here the light color means that the LSTM decides this state isn't important and the darker orange color means the LSTM decides this state is important so what you do next is the states where it decides it is important and notice the beginning at the end are always important it feeds to a second LSTM as an input you see here here here so in this case of these two of these six states in the trajectory three are important namely the start the end and this one here where the LSTM decides hey that's important that goes into a second 
LSTM which is generator so this here is an encoder and this here is a decoder and what it does is it decodes the sequence of actions right here given nothing just given this it decodes a sequence of actions and at the end what you want is that the actions output here reconstruct the actions input this might sound a little confusing but the core value of this is what you want is to reconstruct the actions of the trajectory taken given only the important states what does this mean in our example in our example here this means if I have to go from here to here right and for example I took the following path this is this so right right down down right this is these were my action sequence now if I only have the start the end and one state in between let's say this one right then can I reconstruct what actions were taken and if I erase the blue thing and I tell you I went from here via here to here then you could very much reconstruct the actions here so this state here is a good candidate for being an important state whereas if it were a different state if it were for example if I told you I went from over here to here and then to here you'd say well this could be either something like this or it could be a path like this right it could be many many paths or like this could be many paths leading from here to here so this state here is not probably not very important so that's kind of how they how they learn which one are the important state via this encoding trajectories in an LSTM and trying to reconstruct the state the actions taken in the trajectory given only the states that were deemed important by the LSTM so that's how you train the LSTM to recognize important states and once you've recognized the important states in a trajectory you can then use those to learn prior so basically you ask over all possible trajectories which of the states are generally important and that's how you end up with these blue states all right and then the last part is to connect the blue states and that is fairly easily done in their approach what they say is all right we have blue states we should be pick one and we do a random walk from it right random walk random walk random walk if we hit another blue state like this one here in the random walk we simply say well there are probably neighbors so we do this a bunch of times if you hit the blue states of course without hitting another blue state first then you connect the two in a graph so these would be connected these would probably be connected what we ended up at the beginning right you have this graph maybe these two are connected and so on so this gives you this world graph and now you end up with a set of important states and connections between them that tell you which ones are easily reachable from each other so you can train the manager on that you can train the worker as we said before to simply select two close by states train it to go from one to the other that by the worker will learn that so in essence that's how they they do it you can look at the experiments themselves they show that this basically transfers so if you train like this pre train then you can give more specific and more complicated tasks and this will this will rapidly accelerate the learning of this yeah look at the experiments if you have time that was it for me thank you for listening
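The edge-building step described here (random walks that stop as soon as they hit another pivotal state) is simple enough to sketch. This is a guess at a minimal implementation, assuming a hypothetical env.random_step helper; the walk counts and lengths are made-up parameters, not values from the paper.

```python
import random

def build_world_graph(env, pivotal_states, walks_per_state=100, max_steps=30):
    """Connect pivotal states that a short random walk reaches from one
    another. `pivotal_states` is a set of states; `env.random_step(state)`
    is a hypothetical helper returning the next state under a random action."""
    edges = set()
    for s in pivotal_states:
        for _ in range(walks_per_state):
            state = s
            for _ in range(max_steps):
                state = env.random_step(state)
                if state in pivotal_states and state != s:
                    edges.add((s, state))   # reached another pivot first -> neighbours
                    break
    return edges
```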
[ { "end": 4.62, "start": 0, "text": " Hi there. Today we're looking at learning world graphs to accelerate" }, { "end": 9.86, "start": 4.62, "text": " hierarchical reinforcement learning by Wenling Sheng et al from Salesforce" }, { "end": 16.62, "start": 9.86, "text": " Research. This work is based in the world of reinforcement learning and" }, { "end": 21.36, "start": 16.62, "text": " especially hierarchical reinforcement learning. So in hierarchical reinforcement" }, { "end": 29, "start": 21.36, "text": " learning the idea is that in order to perform a task like in this case they" }, { "end": 34, "start": 29, "text": " perform all of their experiments on mazes like this. So imagine you have this" }, { "end": 42.08, "start": 34, "text": " maze and this red thing here is the agent and the goal is the green square" }, { "end": 47.36, "start": 42.08, "text": " and the gray things obviously are walls and the black things are everywhere the" }, { "end": 53.8, "start": 47.36, "text": " agent can move. The agent can always move one step in any direction that it" }, { "end": 61.519999999999996, "start": 53.8, "text": " wants and that isn't blocked by a wall. So in order to fulfill such a task the" }, { "end": 66.52, "start": 61.519999999999996, "text": " agent needs to take many many steps like go here here here here here here each" }, { "end": 73.56, "start": 66.52, "text": " one of those is a step. In addition this specific maze has an additional property" }, { "end": 78.44, "start": 73.56, "text": " namely that there's a locked door here and first you need to pick up the key to" }, { "end": 85.39999999999999, "start": 78.44, "text": " basically to open the locked door. So in order to reach the goal the agent needs" }, { "end": 90.4, "start": 85.39999999999999, "text": " first to pick up the key then open the door then go to the goal and each one of" }, { "end": 97.12, "start": 90.4, "text": " these it has to traverse many many steps. So the idea in hierarchical reinforcement" }, { "end": 104.1, "start": 97.12, "text": " learning is that you have two parts to it to the agent. So your agent which is" }, { "end": 110.72, "start": 104.1, "text": " this entire box here is divided into what's called a manager and a" }, { "end": 118.83999999999999, "start": 110.72, "text": " worker and this is a divide. So what the manager sees the manager sees basically" }, { "end": 122.96, "start": 118.83999999999999, "text": " I do an example here they do it differently but the manager could see" }, { "end": 131.07999999999998, "start": 122.96, "text": " large could see the world basically only in these large chunks right and it" }, { "end": 136.60000000000002, "start": 131.08, "text": " doesn't really care what is in or it cares what is in the chunks but it" }, { "end": 142.08, "start": 136.60000000000002, "text": " doesn't distinguish points within the chunks it just knows about these these" }, { "end": 148.84, "start": 142.08, "text": " chunks basically and what the manager will say oh first I need to go to this" }, { "end": 153.72000000000003, "start": 148.84, "text": " chunk here then because there's the key in this chunk and then I need to go to" }, { "end": 158.16000000000003, "start": 153.72000000000003, "text": " this chunk here because there is the door and then I need to go to this chunk" }, { "end": 163.35999999999999, "start": 158.16, "text": " here because there's the goal. 
So the in the view of the manager which has a very" }, { "end": 170, "start": 163.35999999999999, "text": " high level view of the world is the the action sequence is down here over here" }, { "end": 174.84, "start": 170, "text": " then over here. Those are like three actions that's a pretty simple and then" }, { "end": 179.96, "start": 174.84, "text": " the manager would pass this information to the worker and it would say hey worker" }, { "end": 186.72, "start": 179.96, "text": " please go to this state here please go to the first state and then the worker" }, { "end": 195, "start": 186.72, "text": " would be tasked with basically moving the individual steps to go not to the" }, { "end": 200.64, "start": 195, "text": " final goal but only to go to that chunk and then in that chunk the worker would" }, { "end": 205.56, "start": 200.64, "text": " go to the key and then once it has the key the manager would say good job now" }, { "end": 210.48, "start": 205.56, "text": " please perform the second action which is go to to this chunk here so the" }, { "end": 216.16, "start": 210.48, "text": " second action that the worker would so you basically get the idea whoops I am" }, { "end": 222.92, "start": 216.16, "text": " doing something here you get the idea that the I'm creating text boxes that" }, { "end": 227.07999999999998, "start": 222.92, "text": " the worker and the manager work together and that the manager has a high level" }, { "end": 233.92, "start": 227.07999999999998, "text": " view of the world and then the worker can basically execute the actual actions" }, { "end": 240.64, "start": 233.92, "text": " that the manager has decided on in a fine-grained way. So this is gives you" }, { "end": 246.04, "start": 240.64, "text": " several advantages namely the manager can plan high level and far away things" }, { "end": 251.12, "start": 246.04, "text": " and then the worker really only has to care about its close neighborhood" }, { "end": 256.03999999999996, "start": 251.12, "text": " because each step the manager proposes is a fairly short range so the worker" }, { "end": 264.36, "start": 256.03999999999996, "text": " can implement it. 
They do this in a kind of different way so let's actually start" }, { "end": 271.76, "start": 264.36, "text": " from the back from of this paper which is I find is a bit more explanatory and" }, { "end": 277.44, "start": 271.76, "text": " it makes a bit more sense to look at it what they propose is to learn a world" }, { "end": 284.03999999999996, "start": 277.44, "text": " graph so in a world graph what is a world graph a world graph consists of" }, { "end": 291.03999999999996, "start": 284.03999999999996, "text": " two things first a set of states which is the are the blue states here so all" }, { "end": 298.4, "start": 291.03999999999996, "text": " these blue states which are so-called pivot states or important states so" }, { "end": 305.84, "start": 298.4, "text": " these are states in the world that are very important determined by some measure" }, { "end": 313.79999999999995, "start": 305.84, "text": " right so these are basically states that look at look at where they are they're" }, { "end": 319.35999999999996, "start": 313.79999999999995, "text": " often at like narrow passes you see here here they're at these narrow passes so" }, { "end": 325.64, "start": 319.35999999999996, "text": " basically if you if you reach those states as an intermediary goal then you" }, { "end": 329.68, "start": 325.64, "text": " can basically go a lot of places from here so these are very let's say" }, { "end": 336.56, "start": 329.68, "text": " powerful states and these states are connected by a neighborhood graph so" }, { "end": 342.47999999999996, "start": 336.56, "text": " basically which states of these are close to each other and for example here" }, { "end": 346.08, "start": 342.47999999999996, "text": " you would connect of course those two because they're neighbors those you" }, { "end": 352.71999999999997, "start": 346.08, "text": " would probably connect those some I'm attempting to to kind of draw the world" }, { "end": 358.48, "start": 352.72, "text": " graph you could you might connect those doesn't need to be like a tree it can be" }, { "end": 367.64000000000004, "start": 358.48, "text": " like such so you see that the graph kind of takes shape these are fairly" }, { "end": 373.20000000000005, "start": 367.64000000000004, "text": " reachable so whenever a node in the graph whenever one of these important" }, { "end": 378.6, "start": 373.20000000000005, "text": " states is fairly easily reachable by some other state it's designated as a" }, { "end": 386.52000000000004, "start": 378.6, "text": " neighbor so with that with this world graph here this is what you get you get" }, { "end": 391.16, "start": 386.52000000000004, "text": " an abstraction basically you get a set of states with connections between them" }, { "end": 396.72, "start": 391.16, "text": " that says how easy or hard is it to reach from one state to the other if you" }, { "end": 403.32000000000005, "start": 396.72, "text": " have these things then you can really easily imagine a hierarchical" }, { "end": 408.12, "start": 403.32000000000005, "text": " reinforcement learning algorithm that now in let incorporates this information" }, { "end": 414.44, "start": 408.12, "text": " namely the manager will only use the important states to plan so for example" }, { "end": 420.8, "start": 414.44, "text": " if the goal the goal isn't drawn in here but let's say the goal is here and then" }, { "end": 431, "start": 420.8, "text": " the door the door is here it's a locked door here and then the key let's draw in" }, { "end": 438.64, 
"start": 431, "text": " the key come on okay this doesn't want to all right the key is somewhere let's" }, { "end": 445.16, "start": 438.64, "text": " say here there's the key he is this all right then the no let's put the key" }, { "end": 457.24, "start": 445.16, "text": " further away come on door here I'm off with the colors and key here all right" }, { "end": 465.36, "start": 457.24, "text": " so what would the manager do the manager would then say ah okay the keys here so" }, { "end": 470.16, "start": 465.36, "text": " this would be a good state to reach of my importance if the manager is only" }, { "end": 475.6, "start": 470.16, "text": " allowed to go important states right so the manager says because it has the" }, { "end": 481.6, "start": 475.6, "text": " graph right it says aha this state is easily reachable from let's say this" }, { "end": 486.40000000000003, "start": 481.6, "text": " state and this state is easily reachable from this state so it plans go here and" }, { "end": 492.2, "start": 486.4, "text": " go here and then go here then get the key right this is a kind of a micro" }, { "end": 497.28, "start": 492.2, "text": " action that is not in the importance they then I need to you know go here" }, { "end": 505.64, "start": 497.28, "text": " this is reachable from this state that's reachable from this state and from this" }, { "end": 510.71999999999997, "start": 505.64, "text": " state and that's reachable from my origin so from the key then next go here" }, { "end": 517.2, "start": 510.72, "text": " go here go here go here and then open the door and then of course go here and" }, { "end": 528.76, "start": 517.2, "text": " solve the the task the worker then would only ever need to implement the" }, { "end": 535.64, "start": 528.76, "text": " following it starts here and it says aha I need to go here what do I need to do" }, { "end": 540.7, "start": 535.64, "text": " I need to go for example down and over and now once I've done this I need to" }, { "end": 547.2800000000001, "start": 540.7, "text": " go here so I need to go right down right so you see the worker only ever has to" }, { "end": 552.6400000000001, "start": 547.2800000000001, "text": " care about going from one hop to the next hop making it really easy for the" }, { "end": 557.6, "start": 552.6400000000001, "text": " worker while the manager only has these blue states available which makes its" }, { "end": 566.76, "start": 557.6, "text": " search space much more much more condensed and much more much more" }, { "end": 574.8, "start": 566.76, "text": " overviewable especially with the nodes in between the world graph so that's if" }, { "end": 579.88, "start": 574.8, "text": " you have the world graph right if you have this set of states and how important" }, { "end": 585.72, "start": 579.88, "text": " are how easily they reachable reachable they are between each other you can very" }, { "end": 590.8, "start": 585.72, "text": " easily do a reinforcement learning approach that that is a hierarchical has" }, { "end": 595.2, "start": 590.8, "text": " the manager plan on the world graph has and then has the worker implement the" }, { "end": 600.2800000000001, "start": 595.2, "text": " fine-grained actions and there is already a method that does this this" }, { "end": 605.0400000000001, "start": 600.2800000000001, "text": " paper here uses feudal networks so we won't go into that later just saying" }, { "end": 608.84, "start": 605.0400000000001, "text": " it's pretty easy if you have those things so the real question is 
how do" }, { "end": 617.12, "start": 608.84, "text": " they learn the world graph and what they do is the following and they describe it" }, { "end": 629.88, "start": 617.12, "text": " here in kind of this sorry this way what they want to to finally learn is a prior" }, { "end": 636.68, "start": 629.88, "text": " that tells them for a given state how important it is it and that's a beta" }, { "end": 641.84, "start": 636.68, "text": " prior a beta distribution is a continuous approximation on a on a kind" }, { "end": 652.5600000000001, "start": 641.84, "text": " of a binary zero one variable so how do they do it they use an LSTM to encode" }, { "end": 660.44, "start": 652.5600000000001, "text": " trajectories so these are trajectories from kind of rollouts of policy and then" }, { "end": 668.6, "start": 660.44, "text": " the the LSTM encodes it and for each step it outputs this posterior over the" }, { "end": 674.52, "start": 668.6, "text": " what's called these latent variables here they say how important is a state" }, { "end": 679.72, "start": 674.52, "text": " so these are the posteriors whereas this over here is the prior and the posterior" }, { "end": 686.08, "start": 679.72, "text": " of course only make sense in context of a trajectory that's why the ultimate" }, { "end": 690.2, "start": 686.08, "text": " decision happens for the prior because the state needs to be important or not" }, { "end": 698.48, "start": 690.2, "text": " important to any trajectory so what they do is they roll out policies and they" }, { "end": 707.12, "start": 698.48, "text": " have certain methods of of doing this so they have they have random" }, { "end": 711.84, "start": 707.12, "text": " exploration of curiosity goals but they also train this continuously so they" }, { "end": 716.84, "start": 711.84, "text": " updated continuously via this what's called a goal condition policy and what" }, { "end": 723.12, "start": 716.84, "text": " a goal condition policy is basically is you put the agent somewhere in the maze" }, { "end": 729.04, "start": 723.12, "text": " actually let's use this maze over here you put the agent somewhere in the maze" }, { "end": 738.16, "start": 729.04, "text": " let's say here you for example make a bunch of ran make a random exploration" }, { "end": 743.84, "start": 738.16, "text": " let's say here so you know these two things are reachable and then you train" }, { "end": 749, "start": 743.84, "text": " the agency go from here to here right this is your goal now the agent tries to" }, { "end": 755.84, "start": 749, "text": " kind of reconstruct this random walk to there and you can riff so so this is how" }, { "end": 761.28, "start": 755.84, "text": " you train an agent to go it basically go from any two well reachable states to" }, { "end": 765.54, "start": 761.28, "text": " each other right from here to here and so on now you won't train it to go" }, { "end": 770.64, "start": 765.54, "text": " directly from here to over here because a random walk would be very hard for a" }, { "end": 776.2, "start": 770.64, "text": " random walk to find its way over there but what you end up with is is somehow an" }, { "end": 781.4200000000001, "start": 776.2, "text": " agent that is able to reach close by states and that's exactly what the" }, { "end": 791.1600000000001, "start": 781.4200000000001, "text": " worker is supposed to do right here and so of of these trajectories you can then" }, { "end": 799.76, "start": 791.1600000000001, "text": " unroll them and decide on the kind of on these on 
these pivotal states so how do" }, { "end": 805.76, "start": 799.76, "text": " you do that and this is where this top part here comes in so down here you" }, { "end": 811.28, "start": 805.76, "text": " input the trajectory and you output how important is each state all right and" }, { "end": 818.84, "start": 811.28, "text": " now you see in this example here the light color means that the LSTM decides" }, { "end": 823.6, "start": 818.84, "text": " this state isn't important and the darker orange color means the LSTM decides" }, { "end": 830.68, "start": 823.6, "text": " this state is important so what you do next is the states where it decides it" }, { "end": 837.1999999999999, "start": 830.68, "text": " is important and notice the beginning at the end are always important it feeds to" }, { "end": 844.04, "start": 837.1999999999999, "text": " a second LSTM as an input you see here here here so in this case of these two" }, { "end": 849.4799999999999, "start": 844.04, "text": " of these six states in the trajectory three are important namely the start" }, { "end": 856.3599999999999, "start": 849.4799999999999, "text": " the end and this one here where the LSTM decides hey that's important that goes" }, { "end": 862.5600000000001, "start": 856.36, "text": " into a second LSTM which is generator so this here is an encoder and this here is" }, { "end": 869.28, "start": 862.5600000000001, "text": " a decoder and what it does is it decodes the sequence of actions right here given" }, { "end": 875.28, "start": 869.28, "text": " nothing just given this it decodes a sequence of actions and at the end what" }, { "end": 880.96, "start": 875.28, "text": " you want is that the actions output here reconstruct the actions input this might" }, { "end": 887.52, "start": 880.96, "text": " sound a little confusing but the core value of this is what you want is to" }, { "end": 894.32, "start": 887.52, "text": " reconstruct the actions of the trajectory taken given only the important" }, { "end": 900.4000000000001, "start": 894.32, "text": " states what does this mean in our example in our example here this means" }, { "end": 907.12, "start": 900.4000000000001, "text": " if I have to go from here to here right and for example I took the following" }, { "end": 912.52, "start": 907.12, "text": " path this is this so right right down down right this is these were my action" }, { "end": 920.16, "start": 912.52, "text": " sequence now if I only have the start the end and one state in between let's" }, { "end": 927.76, "start": 920.16, "text": " say this one right then can I reconstruct what actions were taken and" }, { "end": 936.88, "start": 927.76, "text": " if I erase the blue thing and I tell you I went from here via here to here then" }, { "end": 943.36, "start": 936.88, "text": " you could very much reconstruct the actions here so this state here is a" }, { "end": 947.88, "start": 943.36, "text": " good candidate for being an important state whereas if it were a different" }, { "end": 953.48, "start": 947.88, "text": " state if it were for example if I told you I went from over here to here and" }, { "end": 958.36, "start": 953.48, "text": " then to here you'd say well this could be either something like this or it" }, { "end": 963.04, "start": 958.36, "text": " could be a path like this right it could be many many paths or like this" }, { "end": 969.8399999999999, "start": 963.04, "text": " could be many paths leading from here to here so this state here is not probably" }, { "end": 977.16, "start": 
969.8399999999999, "text": " not very important so that's kind of how they how they learn which one are the" }, { "end": 983.56, "start": 977.16, "text": " important state via this encoding trajectories in an LSTM and trying to" }, { "end": 991.48, "start": 983.56, "text": " reconstruct the state the actions taken in the trajectory given only the states" }, { "end": 995.96, "start": 991.48, "text": " that were deemed important by the LSTM so that's how you train the LSTM to" }, { "end": 1001, "start": 995.96, "text": " recognize important states and once you've recognized the important states" }, { "end": 1008.8000000000001, "start": 1001, "text": " in a trajectory you can then use those to learn prior so basically you ask over" }, { "end": 1015.8000000000001, "start": 1008.8000000000001, "text": " all possible trajectories which of the states are generally important and" }, { "end": 1022.28, "start": 1015.8, "text": " that's how you end up with these blue states all right and then the last part" }, { "end": 1028.1599999999999, "start": 1022.28, "text": " is to connect the blue states and that is fairly easily done in their approach" }, { "end": 1034.04, "start": 1028.1599999999999, "text": " what they say is all right we have blue states we should be pick one and we do a" }, { "end": 1039.24, "start": 1034.04, "text": " random walk from it right random walk random walk random walk if we hit another" }, { "end": 1044.6, "start": 1039.24, "text": " blue state like this one here in the random walk we simply say well there are" }, { "end": 1048.7199999999998, "start": 1044.6, "text": " probably neighbors so we do this a bunch of times if you hit the blue states of" }, { "end": 1053.9599999999998, "start": 1048.7199999999998, "text": " course without hitting another blue state first then you connect the two in a" }, { "end": 1057.9599999999998, "start": 1053.9599999999998, "text": " graph so these would be connected these would probably be connected what we" }, { "end": 1064.9199999999998, "start": 1057.9599999999998, "text": " ended up at the beginning right you have this graph maybe these two are connected" }, { "end": 1069.52, "start": 1064.9199999999998, "text": " and so on so this gives you this world graph and now you end up with a set of" }, { "end": 1075.76, "start": 1069.52, "text": " important states and connections between them that tell you which ones are easily" }, { "end": 1081.8799999999999, "start": 1075.76, "text": " reachable from each other so you can train the manager on that you can train" }, { "end": 1087.32, "start": 1081.8799999999999, "text": " the worker as we said before to simply select two close by states train it to" }, { "end": 1093.6399999999999, "start": 1087.32, "text": " go from one to the other that by the worker will learn that so in essence" }, { "end": 1099.16, "start": 1093.6399999999999, "text": " that's how they they do it you can look at the experiments themselves they show" }, { "end": 1105.3200000000002, "start": 1099.16, "text": " that this basically transfers so if you train like this pre train then you can" }, { "end": 1110.76, "start": 1105.3200000000002, "text": " give more specific and more complicated tasks and this will this will rapidly" }, { "end": 1115.52, "start": 1110.76, "text": " accelerate the learning of this yeah look at the experiments if you have time" }, { "end": 1129.92, "start": 1115.52, "text": " that was it for me thank you for listening" } ]
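For the goal-conditioned worker training mentioned in this transcript (random exploration, then learning to reach the end point of the walk), a data-collection step could look like the sketch below. All helper names (sample_state, sample_action, step_from) are hypothetical, and the paper additionally uses curiosity-driven goals and continual updates, which this sketch omits.

```python
def collect_goal_conditioned_data(env, n_episodes=1000, walk_len=10):
    """Generate (start, goal, actions) tuples by short random walks: the end
    point of each walk is relabelled as the goal the worker should learn to
    reach. Illustrative sketch only, not the paper's code."""
    data = []
    for _ in range(n_episodes):
        start = env.sample_state()
        state, actions = start, []
        for _ in range(walk_len):
            a = env.sample_action()
            state = env.step_from(state, a)   # hypothetical helpers
            actions.append(a)
        data.append((start, state, actions))  # goal := end of the walk
    return data
```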
ZAW9EyNo2fw
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Reconciling modern machine learning and the bias-variance trade-off
[ "Science & Technology" ]
[ "machine learning", "bias", "variance", "tradeoff", "generalization", "overfitting", "interpolation", "parameters", "model class", "complexity", "deep learning", "neural networks", "overparameterization", "erm", "random fourier features" ]
It turns out that the classic view of generalization and overfitting is incomplete! If you add parameters beyond the number of points in your dataset, generalization performance might increase again due to the increased smoothness of overparameterized functions. Abstract: The question of generalization in machine learning---how algorithms are able to learn predictors from a training sample to make accurate predictions out-of-sample---is revisited in light of the recent breakthroughs in modern machine learning technology. The classical approach to understanding generalization is based on bias-variance trade-offs, where model complexity is carefully calibrated so that the fit on the training sample reflects performance out-of-sample. However, it is now common practice to fit highly complex models like deep neural networks to data with (nearly) zero training error, and yet these interpolating predictors are observed to have good out-of-sample accuracy even for noisy data. How can the classical understanding of generalization be reconciled with these observations from modern machine learning practice? In this paper, we bridge the two regimes by exhibiting a new "double descent" risk curve that extends the traditional U-shaped bias-variance curve beyond the point of interpolation. Specifically, the curve shows that as soon as the model complexity is high enough to achieve interpolation on the training sample---a point that we call the "interpolation threshold"---the risk of suitably chosen interpolating predictors from these models can, in fact, be decreasing as the model complexity increases, often below the risk achieved using non-interpolating models. The double descent risk curve is demonstrated for a broad range of models, including neural networks and random forests, and a mechanism for producing this behavior is posited. Authors: Mikhail Belkin, Daniel Hsu, Siyuan Ma, Soumik Mandal https://arxiv.org/abs/1812.11118
Hi there! Today we're looking at reconciling modern machine learning and the bias-variance trade-off by Mikhail Belkin et al. This paper struck me as interesting at ICML when I heard a talk by Mikhail Belkin. The paper is very interesting in terms of what it proposes about modern machine learning. What's the problem? The problem is they contrast what they call classical machine learning, and how to understand machine learning, namely in terms of bias-variance trade-offs, with modern machine learning, for example deep neural networks, which have very different properties. Basically the best way to describe it is probably with an example. Let's say we have four data points. Here is a coordinate system in two dimensions. One, two, three, four. Four data points. These four data points we want to fit with a function from X to Y. Y is our target, so it's kind of a regression problem. Let's say we have just one parameter which we can use to describe our function. Probably the best thing we could do is something like this, which is a line. The only parameter here is the slope of that line. Our model would be this one line, and it would pass basically through the data and describe the data fairly well, as you can see. If we have two parameters, now we can introduce for example a bias term and not have the line go through the origin. This line here, now we have the bias, which is the distance to this point, as well as the slope of this line as parameters. So two parameters, and if you look at this line here it describes the data a bit better than before. It passes kind of through the center of the data. If we go to three or four parameters, it's well known that if I have the same number of parameters as I have data points, I can actually fit the data perfectly. How to do this? It would be something like an order-four polynomial. Let's see if I can draw an order-four polynomial... okay, roughly something like that. In any case, I can actually fit the data perfectly. Now if you think about all of these functions, let's contrast them and let's look at what the data distribution probably is. The data distribution is probably, if I fill in the rest of the data that is not in our training set, maybe something like this. So which of these functions generalizes well to this general data, the unseen data? The first function is actually doing okay. The second function is doing even better, as we saw. If we add a parameter to the first function it gets better, but if we then add more parameters it gets worse. This is kind of taught in current machine learning classes as the phenomenon of overfitting. Whereas here, the function that has the most parameters actually doesn't generalize well. What is troubling now is that if you think of things like neural networks, modern architectures, they actually have even more: they oftentimes have more parameters than there are data points in the data set. So they can fit the training data perfectly and still have kind of spare room, spare capacity. And these models actually generalize fairly well. This paper asks what's going on here, and what they propose is the following picture. Here we have a classical view of machine learning. On the x-axis is the complexity of H, which is the model class, the class of all the models you could fit.
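A quick worked version of this four-point example, assuming a noisy line as the true function: a one-parameter slope-only model, a two-parameter line, and a four-parameter cubic that interpolates all four training points (four coefficients exactly fit four points). The data and noise level are made up; the point is only that training error falls to zero while test error can grow.

```python
import numpy as np

rng = np.random.default_rng(0)
x_tr = np.array([0.2, 0.4, 0.6, 0.8])
y_tr = 2 * x_tr + rng.normal(0, 0.2, size=4)      # four noisy points near a line
x_te = np.linspace(0.0, 1.0, 50)
y_te = 2 * x_te

# 1 parameter: slope-only line through the origin (closed-form least squares)
a = (x_tr @ y_tr) / (x_tr @ x_tr)
models = {1: lambda x: a * x}
# 2 parameters: slope + bias;  4 parameters: interpolating cubic
for n_params, deg in [(2, 1), (4, 3)]:
    coef = np.polyfit(x_tr, y_tr, deg)
    models[n_params] = lambda x, c=coef: np.polyval(c, x)

for n_params, f in models.items():
    tr = np.mean((f(x_tr) - y_tr) ** 2)
    te = np.mean((f(x_te) - y_te) ** 2)
    print(f"{n_params} params: train MSE {tr:.4f}, test MSE {te:.4f}")
```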
For example, it would be every linear model with one parameter. This was our first model; the first model would be somewhere here at one, its complexity is one. Then here we'd have complexity two, where we added a parameter, then three parameters and then four parameters. This is what we saw. At the beginning, with one parameter, we had some training risk; risk here is simply another term for loss, so we had some training loss. Then as we added a parameter, the training loss decreased, it got better, and also the test loss on the unseen data decreased. So it got better on the test set as well as we added a parameter. Then as we added more parameters, the model was able to fit the training data better and better, going to almost zero risk here, but on unseen data the performance actually got worse again. Again, this is what we teach as overfitting. These authors propose that this picture is incomplete: the picture actually looks like this, and all we've done so far is look at the left-hand side here. Namely, there is a peak here, and this is called the interpolation threshold. The interpolation threshold is roughly at the point where you have as many parameters as you have data points. After the interpolation threshold, if you add even more parameters, the training risk of course stays low, because you can fit the training data perfectly from the interpolation threshold onward. But the test risk actually decreases again. This is really interesting. Let me just preempt this and say that this is not due to regularization. It's not because people regularize their models or anything like this; if anything, regularization would actually move you to less complexity in your model class, because if you regularize you're no longer able to fit certain models as easily or converge to them. They propose that this is happening, they give some reasons why it might be happening, and they give some evidence that it is happening. Here is the evidence, and they show it, for example, with a random Fourier features classifier. What are random Fourier features? They describe them here. If you have a data point X, you push it through a function, or rather through many of them: you sample capital N of these vectors v, and for each vector v you take its inner product with x and apply the exponential function to it, and then you aggregate them. These are the random Fourier features, and these coefficients are the weights that you learn. So this is basically a linear classifier, but not on the original features; it acts on intermediary features which are fixed for a given random seed. The good thing is that you can decide how many of these intermediary features you want. The other good thing is that if you let N go to infinity, this actually becomes an infinite-dimensional kernel machine: it becomes a kernel SVM with the Gaussian kernel, which operates in an infinite-dimensional space. If you don't go that far, it's just an approximation to that. So it's a cool model where you can choose how many parameters you want, a perfect model to explore this phenomenon. What are they doing? They take MNIST and they just apply this model. On the x-axis here is the number of parameters, the number of random Fourier features that they construct, and here you can see the mean squared error on the test set. As you can see, at the beginning the error goes down, as proposed. Then here is probably the sweet spot of classical machine learning.
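As a rough illustration of the feature map just described, here is a small sketch of random Fourier features with a linear fit on top. The real-valued cosine and sine variant and the scaling constants used here are one common construction, an assumption rather than the paper's exact setup:

```python
import numpy as np

def random_fourier_features(X, n_features, bandwidth=1.0, seed=0):
    """Map X (n_samples, d) to random Fourier features that approximate an RBF kernel.

    V holds the sampled random directions v_k; the cos/sin pair for each <v_k, x>
    is the common real-valued variant of the complex exponential described above.
    """
    rng = np.random.default_rng(seed)
    V = rng.normal(scale=1.0 / bandwidth, size=(X.shape[1], n_features))
    Z = X @ V                                   # inner products <v_k, x>
    return np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(n_features)

# A linear model on top of these fixed random features is the classifier;
# the weights are fit, e.g., by least squares (which returns the minimum-norm
# solution once there are more features than training points).
X, y = np.random.randn(100, 20), np.random.randn(100)
Phi = random_fourier_features(X, n_features=500)
weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```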
After that you start to overfit and it goes up again. There's a giant peak, and then it goes down again. Here at 10,000: I think they do it with a subset of MNIST, if I remember correctly, and around 10,000 is exactly the number of data points they use, or that multiplied by the number of classes, I don't remember exactly, but in any case at this number you have the same amount of parameters as data points. After that the test error decreases again. As you add more and more features, every single classifier on this line is able to fit the training data perfectly, but they successively get less and less error on the test set. You can see it approaches this dotted line here, which is what you get if you solve the infinite-dimensional problem exactly, that is, if you actually use a kernel SVM on this problem. So that gives you a lower bound, and it shows really nicely that the random Fourier features classifier approximates the kernel SVM as you go higher and higher with capital N. It's really interesting that this actually happens in practice. What they also look at is the norm of the solution. The norm of the solution would ideally be the norm in the Hilbert space, but that's hard to compute, so a proxy for it is simply the norm of the weight vector that you learn. As you add more parameters, the norm of the solution at first goes up, because you add more parameters and fit each of them with some value, and it peaks at the interpolation threshold. There you have a really high-norm solution, and after that the norm of the solution goes down again, and again it approaches the norm of the exactly solved kernel machine. That's extremely interesting, and it is part of the explanation they give for why this is happening, namely the following: if you have many parameters, what you might do, with the correct inductive bias, is find a low-norm solution. What does a low-norm solution mean? A low-norm solution means a relatively simple function. As you add parameters, your model is better and better able to find a simple function that describes the training data; simple not in the sense of fewer parameters, but simple in terms of how it moves between the training data points. Imagine the training data from before, perfectly fit by the polynomial we drew with four parameters. If I have many, many more parameters, I can do something different: the function can have many little squiggles, but it can move smoothly between the training data points. It has many parameters, because it has many squiggles, but it's a low-norm solution. The low norm causes the solution to be smooth, whereas a high-norm solution that perfectly interpolates the training data would look something like this, swinging wildly between the points. So the authors say: if your inductive bias is able to find a low-norm solution that perfectly fits the training data, then that will generalize well. It turns out that modern architectures tend to find low-norm solutions if you train them, for example, with SGD. The combination of many parameters and low-norm solutions gives you a smooth function, and the smoothness of the function is the thing that generalizes to unseen data, because the smoothness ensures that everything in between the data points is nicely interpolated. So that's the perspective.
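A hedged sketch of the kind of sweep described above, on synthetic data rather than their MNIST subset: the number of random features is varied, the minimum-norm least-squares fit is used (so every model past the threshold interpolates the training data), and test error and the norm of the learned weights are tracked. This is only meant to show the shape of the experiment, not to reproduce the paper's figures; sizes and noise level are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, d = 200, 10
X_tr, X_te = rng.normal(size=(n_train, d)), rng.normal(size=(1000, d))
w_true = rng.normal(size=d)
y_tr = X_tr @ w_true + 0.5 * rng.normal(size=n_train)   # noisy training labels
y_te = X_te @ w_true

def rff(X, V):
    # fixed random directions V, shared between train and test
    return np.hstack([np.cos(X @ V), np.sin(X @ V)]) / np.sqrt(V.shape[1])

# With 2 * n_feat feature columns, the interpolation threshold sits near
# 2 * n_feat == n_train (here, n_feat around 100).
for n_feat in [10, 25, 50, 100, 200, 500, 2000]:
    V = np.random.default_rng(1).normal(size=(d, n_feat))
    Phi_tr, Phi_te = rff(X_tr, V), rff(X_te, V)
    a, *_ = np.linalg.lstsq(Phi_tr, y_tr, rcond=None)    # minimum-norm fit
    print(f"N={n_feat:5d}  train MSE={np.mean((Phi_tr @ a - y_tr) ** 2):.3f}  "
          f"test MSE={np.mean((Phi_te @ a - y_te) ** 2):.3f}  "
          f"weight norm={np.linalg.norm(a):.1f}")
```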
They go on from these random Fourier features to neural networks, and what they do is they train a neural network on MNIST with one hidden layer, so there are two weight layers now. Again, as the number of parameters grows, meaning they increase the number of hidden nodes in the hidden layer, the training and test error first go down. The training error continues to go down, the test error goes up until the interpolation threshold again, and then the test error drops again while the training error stays at almost zero. They do the same thing with decision trees and random forests and show exactly the same thing: there is this interpolation threshold after which the test error drops, even though the training error is almost zero. To me this is really remarkable, and in the appendix they show many more experiments where this phenomenon happens on different datasets and with different architectures, random ReLU features and so on. It gives a new perspective on generalization and why our models generalize so well. They finally conclude with why this has not been seen yet, and they give some nice reasons. For example, models where you can choose the complexity, such as random Fourier features, were originally proposed as an approximation to kernel machines for when you have too many data points and don't want to compute as many features, so they are basically only ever used in the regime where the classical paradigm holds. Neural networks, on the other hand, are often simply made super large, and they say the peak they show is very localized: if you grow your neural network, maybe you try one at this size, this size, this size and this size, and all you then see is a downward trajectory; you miss the peak, which leads to the impression that simply bigger neural networks perform better. Yeah, so I found this interesting, I hope you did as well, and definitely check out more of this group's work. That was it for now, have a nice day.
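As an addendum, a rough sketch in the same spirit as the one-hidden-layer experiment mentioned above, using scikit-learn's MLPClassifier on the small digits dataset; the dataset, training-set size and hyperparameters are assumptions, not the paper's setup.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X / 16.0, y, train_size=500, random_state=0)

# Sweep the hidden-layer width; wider networks have more parameters.
for width in [2, 8, 32, 128, 512]:
    clf = MLPClassifier(hidden_layer_sizes=(width,), max_iter=2000, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"hidden units={width:4d}  "
          f"train acc={clf.score(X_tr, y_tr):.3f}  test acc={clf.score(X_te, y_te):.3f}")
```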
[ { "end": 4.5600000000000005, "start": 0, "text": " Hi there! Today we're looking at reconciling modern machine learning and" }, { "end": 11.64, "start": 4.5600000000000005, "text": " the bias-variance trade-off by Mikhail Belkin et al. This paper struck me as" }, { "end": 19.92, "start": 11.64, "text": " interesting at ICML when I heard a talk by Mikhail Belkin. The" }, { "end": 26.28, "start": 19.92, "text": " paper is very interesting in terms of what it proposes about modern machine" }, { "end": 31.400000000000002, "start": 26.28, "text": " learning. What's the problem? The problem is they contrast what they call" }, { "end": 38.760000000000005, "start": 31.400000000000002, "text": " classical machine learning and how to understand machine learning, namely in" }, { "end": 45.32, "start": 38.760000000000005, "text": " terms of bias-variance trade-offs, and modern machine learning where it's for" }, { "end": 52.24, "start": 45.32, "text": " example deep neural networks which have very different properties. Basically" }, { "end": 56.72, "start": 52.24, "text": " the best way to describe it is probably with an example. Let's say we have" }, { "end": 62.28, "start": 56.72, "text": " four data points. Here is a coordinate system in two dimensions." }, { "end": 73.2, "start": 62.28, "text": " One, two, three, four. Four data points. Why not?" }, { "end": 83.60000000000001, "start": 73.2, "text": " These four data points we want to fit a function from X to Y. Y is our" }, { "end": 90, "start": 83.60000000000001, "text": " target. It's kind of a regression problem. Let's say we have just one" }, { "end": 95.64, "start": 90, "text": " parameter which we can use to describe our function. Probably the best" }, { "end": 103.72, "start": 95.64, "text": " thing we could do is to do something like this, which is a line. The" }, { "end": 111.72, "start": 103.72, "text": " only parameter here is the slope of that line. Our model would be" }, { "end": 117.28, "start": 111.72, "text": " this one line and it would pass basically through the data and would" }, { "end": 122.52, "start": 117.28, "text": " describe the data fairly well as you can see. If we have two parameters now we can" }, { "end": 128.48, "start": 122.52, "text": " introduce for example a bias term and not have the line at the origin. This" }, { "end": 136.07999999999998, "start": 128.48, "text": " line here, now we have the bias which is the distance to this point to describe" }, { "end": 141.24, "start": 136.07999999999998, "text": " it as well as the slope of this line as parameters. So two parameters and if you" }, { "end": 146.92, "start": 141.24, "text": " look at this line here it describes the data a bit better than" }, { "end": 152.48, "start": 146.92, "text": " before. It passes kind of through the center of the data. If we" }, { "end": 157.44, "start": 152.48, "text": " go to three or four parameters, it's well known that if I" }, { "end": 164.35999999999999, "start": 157.44, "text": " have the same number of parameters as I have data points, I can" }, { "end": 169.28, "start": 164.35999999999999, "text": " actually fit the data perfectly. How to do this? It would be like an order" }, { "end": 177.56, "start": 169.28, "text": " for polynomial which... Let's see if I can draw an order for polynomial. It" }, { "end": 195.44, "start": 177.56, "text": " needs to go... It needs to rip and then... Okay well... No that's... Okay that's more than" }, { "end": 202.28, "start": 195.44, "text": " order for. 
In any case I can fit actually the data perfectly. Now if you think" }, { "end": 207, "start": 202.28, "text": " about all of these functions, let's contrast these. Alright let's contrast" }, { "end": 214.2, "start": 207, "text": " them and let's look at what is the data distribution probably." }, { "end": 219.64, "start": 214.2, "text": " Data distribution is probably, if I fill in the rest of the data that is not in" }, { "end": 227.48, "start": 219.64, "text": " our training set, maybe something like this. So which of these functions" }, { "end": 234.6, "start": 227.48, "text": " generalize as well to this general data, the unseen data? Probably the first" }, { "end": 240.68, "start": 234.6, "text": " function not doing very poorly. The first function actually doing okay. The second" }, { "end": 247.16, "start": 240.68, "text": " function doing even better as we saw. If we add a parameter to the" }, { "end": 251.76, "start": 247.16, "text": " first function it gets better, but if we then add more parameters it gets worse." }, { "end": 255.88, "start": 251.76, "text": " This is kind of taught in current machine learning classes as the" }, { "end": 261.84, "start": 255.88, "text": " phenomenon of overfitting. Whereas here the function that has the most" }, { "end": 267.91999999999996, "start": 261.84, "text": " parameters actually doesn't fit well. What is troubling now is that if you" }, { "end": 272.47999999999996, "start": 267.91999999999996, "text": " think of things like neural networks, modern architectures, they actually have" }, { "end": 278.35999999999996, "start": 272.47999999999996, "text": " even more... They have oftentimes more parameters than their data points in the" }, { "end": 285.12, "start": 278.35999999999996, "text": " data set. So they can fit the training data perfectly and still have kind of" }, { "end": 292.64, "start": 285.12, "text": " spare room, spare capacity. These models actually generalize fairly well." }, { "end": 299.32, "start": 292.64, "text": " This paper asks what's going on here and what they propose is the following" }, { "end": 305.76, "start": 299.32, "text": " picture. Here we have a classical view of machine learning. On the x-axis is" }, { "end": 312.88, "start": 305.76, "text": " the complexity of H. You can think of the complexity of the... This is H is" }, { "end": 320.6, "start": 312.88, "text": " the model class. H is the class of all the models you could fit. For" }, { "end": 325.92, "start": 320.6, "text": " example it would be every linear model with one parameter. This was our" }, { "end": 330.08, "start": 325.92, "text": " first model. The first model would be somewhere here one. The" }, { "end": 334.84, "start": 330.08, "text": " complexity is one. Then here we'd have the complexity of two where we added a" }, { "end": 340.32, "start": 334.84, "text": " parameter, three parameters and then four parameters. This is what we saw." }, { "end": 346.32, "start": 340.32, "text": " At the beginning one parameter we had some training risk." }, { "end": 351.52, "start": 346.32, "text": " Here simply another term for loss. We had some training loss. Then as" }, { "end": 358.03999999999996, "start": 351.52, "text": " we added a parameter the training loss decreased. It got better and also" }, { "end": 364.52, "start": 358.03999999999996, "text": " the test loss on the unseen data decreased. So it got better on the" }, { "end": 369.38, "start": 364.52, "text": " test set as well as we added parameter. 
Then as we added more parameters it was" }, { "end": 374.12, "start": 369.38, "text": " able to fit the training data better and better going to almost zero risk here." }, { "end": 382.8, "start": 374.12, "text": " But on unseen data the performance actually got worse again." }, { "end": 387.36, "start": 382.8, "text": " Again this is what we teach as overfitting. These authors propose this" }, { "end": 392.52, "start": 387.36, "text": " is incomplete. Namely the picture actually looks like this and all we've" }, { "end": 399.2, "start": 392.52, "text": " done so far is look at this left hand side here. Namely that there is a peak" }, { "end": 403.92, "start": 399.2, "text": " here and this is called the interpolation threshold. The interpolation" }, { "end": 408.84, "start": 403.92, "text": " threshold is roughly at the point where you have as many parameters as you have" }, { "end": 415.15999999999997, "start": 408.84, "text": " data points. After the interpolation threshold if you give even more" }, { "end": 419.41999999999996, "start": 415.15999999999997, "text": " parameters the training risk of course stays low because you can fit the" }, { "end": 425.24, "start": 419.41999999999996, "text": " training data perfectly from the interpolation threshold forward. But the" }, { "end": 431.56, "start": 425.24, "text": " test risk actually decreases again. This is really interesting." }, { "end": 439.2, "start": 431.56, "text": " Let me just preempt this and say this is not due to regularization. It's" }, { "end": 443.88, "start": 439.2, "text": " not because people regularize their models or anything like this. In any" }, { "end": 449.40000000000003, "start": 443.88, "text": " case regularization would actually move you to less of a complexity of your" }, { "end": 454.68, "start": 449.40000000000003, "text": " model class. Because now if you regularize you're no longer able to fit" }, { "end": 464.40000000000003, "start": 454.68, "text": " certain models as easily or converge to them. They propose that this is" }, { "end": 468.08, "start": 464.40000000000003, "text": " happening and they give some reason why this might happening and they give some" }, { "end": 473.56, "start": 468.08, "text": " evidence that this is happening. Here is the evidence that this is happening" }, { "end": 481.64, "start": 473.56, "text": " and they do this here for example. This is a random Fourier features classifier." }, { "end": 486.24, "start": 481.64, "text": " What are random Fourier features? They describe them here. If you have a" }, { "end": 498.24, "start": 486.24, "text": " data point X what you do is you push this through a function which or you" }, { "end": 504.15999999999997, "start": 498.24, "text": " push this through many of them. You sample capital N of these vectors v and" }, { "end": 510.91999999999996, "start": 504.15999999999997, "text": " of each of the vectors v you take the inner product and raise it." }, { "end": 518.9200000000001, "start": 510.92, "text": " Take the exponential function of it and then aggregate them. These" }, { "end": 522.32, "start": 518.9200000000001, "text": " random Fourier features are the random Fourier features and these" }, { "end": 528.44, "start": 522.32, "text": " are the weights that you learn. This is basically a linear classifier but" }, { "end": 535.5600000000001, "start": 528.44, "text": " not of the original features but of intermediary features which are fixed" }, { "end": 540.88, "start": 535.5600000000001, "text": " for a given random seed. 
The good thing is here you can sample, you can decide" }, { "end": 546.12, "start": 540.88, "text": " how many intermediary features that you want. The other good thing is if you let" }, { "end": 553.4, "start": 546.12, "text": " n go to infinity this actually becomes an infinite dimensional kernel machine." }, { "end": 559.84, "start": 553.4, "text": " It becomes a kernel SVM with the Gaussian kernel which is operating in" }, { "end": 567.12, "start": 559.84, "text": " an infinite dimensional space. If you don't go as far then it's just an" }, { "end": 571.5600000000001, "start": 567.12, "text": " approximation to that. It's a cool model where you can choose how" }, { "end": 578.88, "start": 571.5600000000001, "text": " many parameters you want. It's a perfect model to explore this phenomenon." }, { "end": 585.72, "start": 578.88, "text": " What are they doing? They are doing the following. They take MNIST and they just" }, { "end": 592.48, "start": 585.72, "text": " apply this model. On the x-axis here are the number of parameters and" }, { "end": 600.32, "start": 592.48, "text": " the number of random Fourier features that they construct. Here you can see" }, { "end": 609.4, "start": 600.32, "text": " the mean squared error on the test set. As you can see at the beginning" }, { "end": 616.5600000000001, "start": 609.4, "text": " the error goes down as proposed. Then here is probably this sweet spot" }, { "end": 621.5600000000001, "start": 616.5600000000001, "text": " of classical machine learning. After that you start to overfit, it goes up again." }, { "end": 628.56, "start": 621.56, "text": " There's a giant peak and then it goes down again." }, { "end": 635.88, "start": 628.56, "text": " Here 10,000 I think they do it with a subset of MNIST if I remember correctly." }, { "end": 642.04, "start": 635.88, "text": " Around 10,000 is exactly the number of data points they use or" }, { "end": 648.3599999999999, "start": 642.04, "text": " multiplied by the classes. I don't remember correctly but in any case at" }, { "end": 658, "start": 648.36, "text": " this number you have the same amount of parameters as data points." }, { "end": 665.04, "start": 658, "text": " After that the test error decreases again. As you give more and more and" }, { "end": 670.4, "start": 665.04, "text": " more features every single classifier on this line is able to fit the" }, { "end": 675.96, "start": 670.4, "text": " training data perfectly but they successfully get less and less error on" }, { "end": 683.32, "start": 675.96, "text": " the test set. You can see it approaches this dotted line here which is if" }, { "end": 687.6800000000001, "start": 683.32, "text": " you perfectly solve the infinite dimensional problem. If you actually" }, { "end": 694.8000000000001, "start": 687.6800000000001, "text": " use a kernel SVM to solve this problem, you can see this" }, { "end": 701.1600000000001, "start": 694.8000000000001, "text": " gives you a lower bound. It really shows nicely that the" }, { "end": 706.4399999999999, "start": 701.16, "text": " random Fourier features classifier approximates this as you go higher and" }, { "end": 713.9599999999999, "start": 706.4399999999999, "text": " higher with capital N. It actually approximates the kernel SVM." }, { "end": 718.76, "start": 713.9599999999999, "text": " This is really interesting that this actually happens in practice. What" }, { "end": 724.64, "start": 718.76, "text": " they also see here is when they look at the norm of the solution. 
The norm" }, { "end": 733.88, "start": 724.64, "text": " of the solution they calculate as basically the norm in the" }, { "end": 739.04, "start": 733.88, "text": " Hilbert space but they can't because it's hard to compute. A proxy for this" }, { "end": 746.68, "start": 739.04, "text": " is simply the norm of the weight vector that you learn. The norm of the" }, { "end": 752.6, "start": 746.68, "text": " solution as you add more parameters of course for first it goes up because you" }, { "end": 759.24, "start": 752.6, "text": " add more parameters, you fit each of them, they have some value and then" }, { "end": 767.96, "start": 759.24, "text": " it goes up. It peaks at this interpolation threshold. There you have a" }, { "end": 773.84, "start": 767.96, "text": " really high norm solution and after that the norm goes down again of the solution." }, { "end": 782.72, "start": 773.84, "text": " Again it approximates the norm of the perfectly solved kernel" }, { "end": 788.36, "start": 782.72, "text": " machine. That's extremely interesting and is a part of an explanation they" }, { "end": 796.0400000000001, "start": 788.36, "text": " give why this is happening. Namely the following. If you have too many" }, { "end": 802.76, "start": 796.0400000000001, "text": " parameters what you might do with the correct inductive bias is find a low" }, { "end": 807.3199999999999, "start": 802.76, "text": " norm solution. What does a low norm solution mean? A low norm solution" }, { "end": 813.28, "start": 807.3199999999999, "text": " means a relatively simple function. As you add parameters your model is" }, { "end": 819.64, "start": 813.28, "text": " better and better able to find a simple function that describes the training" }, { "end": 827.12, "start": 819.64, "text": " data. Not in terms of simple of less parameters but simple in terms" }, { "end": 833.48, "start": 827.12, "text": " of how it moves between the training data. If you imagine the training" }, { "end": 844.2, "start": 833.48, "text": " data again from before and you imagine it perfectly fit this polynomial" }, { "end": 848.48, "start": 844.2, "text": " here that we drew with four parameters. If I have many many many more" }, { "end": 855.32, "start": 848.48, "text": " parameters I can do something like... I have many parameters but I can be" }, { "end": 862.6400000000001, "start": 855.32, "text": " kind of squeaky but they have... right? So this something like this here I grab" }, { "end": 868.12, "start": 862.6400000000001, "text": " this here I grab this something like this and this moves smoothly between the" }, { "end": 871.6800000000001, "start": 868.12, "text": " training data. It has many parameters because it has many many squiggles here" }, { "end": 876.6, "start": 871.6800000000001, "text": " but it's a low norm solution. The low norm will cause the solution to kind of" }, { "end": 883.5600000000001, "start": 876.6, "text": " be smooth whereas a high norm solution that perfectly interpolates the training" }, { "end": 893.64, "start": 883.56, "text": " data would look something like this. So the authors here say if your" }, { "end": 900.0799999999999, "start": 893.64, "text": " inductive bias is able to find a low norm solution that perfectly fits the" }, { "end": 907.16, "start": 900.0799999999999, "text": " training data then that will generalize well. 
It turns out that modern" }, { "end": 912.9599999999999, "start": 907.16, "text": " architectures tend to find low norm solutions if you train them for example" }, { "end": 921.1600000000001, "start": 912.96, "text": " with SGD. The combination of many parameters and low norm" }, { "end": 925.96, "start": 921.1600000000001, "text": " solutions will give you a smooth function and the smoothness of the" }, { "end": 932.44, "start": 925.96, "text": " function will be the thing that generalizes to unseen data because the" }, { "end": 940.1600000000001, "start": 932.44, "text": " smoothness kind of ensures that everything in between the data will be" }, { "end": 948.7199999999999, "start": 940.16, "text": " nicely kind of interpolated here. So that's the the perspective." }, { "end": 955, "start": 948.7199999999999, "text": " They go on from these random Fourier features to neural networks and what" }, { "end": 961.16, "start": 955, "text": " they do here is they train a neural network on MNIST with a one hidden" }, { "end": 968.88, "start": 961.16, "text": " layer. So there's two weight layers now and again you can see as the as the" }, { "end": 973.36, "start": 968.88, "text": " number of parameters so this means basically the number of hidden nodes" }, { "end": 978.04, "start": 973.36, "text": " they increase the number of hidden nodes in the hidden layer and as they increase" }, { "end": 982.96, "start": 978.04, "text": " this the training and test error go down. The training error continues to go down" }, { "end": 987.88, "start": 982.96, "text": " test error goes up until the interpolation threshold again and then" }, { "end": 994.48, "start": 987.88, "text": " the test error drops again while the training error continues to be almost" }, { "end": 1005.32, "start": 994.48, "text": " zero. They do the same thing with decision trees and random forests and" }, { "end": 1011.28, "start": 1005.32, "text": " show the exact same thing that there is this interpolation threshold after which" }, { "end": 1021.16, "start": 1011.28, "text": " the test error drops even though the training error is almost zero. To me" }, { "end": 1026.12, "start": 1021.16, "text": " this is really remarkable and they show this in the appendix of many many more" }, { "end": 1031.68, "start": 1026.12, "text": " experiments where they show this phenomenon happening on different" }, { "end": 1040.32, "start": 1031.68, "text": " datasets and on different architectures here random ReLU features and so on and" }, { "end": 1046.6399999999999, "start": 1040.32, "text": " it kind of gives a new perspective on generalization and why our models" }, { "end": 1055.8400000000001, "start": 1046.64, "text": " generalize so well. 
They finally conclude with why has this not been seen yet and" }, { "end": 1065.1200000000001, "start": 1055.8400000000001, "text": " they give some nice reasons basically that for example models where you can" }, { "end": 1072.8400000000001, "start": 1065.1200000000001, "text": " choose the models where you can choose the the complexity for example random" }, { "end": 1079.1999999999998, "start": 1072.84, "text": " Fourier features are originally proposed as an approximation to kernel machines" }, { "end": 1082.9199999999998, "start": 1079.1999999999998, "text": " if you have too many data points and don't want to compute as many features" }, { "end": 1088.08, "start": 1082.9199999999998, "text": " so they they're basically only ever used in this regime where the classical" }, { "end": 1094.6399999999999, "start": 1088.08, "text": " paradigm holds and the neural networks in the other hand often are simply made" }, { "end": 1102.1599999999999, "start": 1094.6399999999999, "text": " super large and they say this peak here that they show is very localized and you" }, { "end": 1105.6000000000001, "start": 1102.16, "text": " might if you increase your neural network maybe you try one at this size" }, { "end": 1111.2, "start": 1105.6000000000001, "text": " this size this size and this size and all you then see is kind of a downward" }, { "end": 1116.0400000000002, "start": 1111.2, "text": " trajectory you kind of miss this peak so it leads to the impression that simply" }, { "end": 1124.48, "start": 1116.0400000000002, "text": " oh bigger neural networks perform better. Yeah so I found this interesting I hope" }, { "end": 1130.8400000000001, "start": 1124.48, "text": " you did as well and definitely check out more of this group's work. That was it" }, { "end": 1134.52, "start": 1130.84, "text": " for now have a nice day" } ]
l8JeokY5NsU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Conversation about Population-Based Methods (Re-upload)
[ "Science & Technology" ]
[ "machine learning", "ai", "artificial intelligence", "open ended learning", "quality diversity", "conference", "icml", "icml2019", "tutorial", "population-based search", "goal switching", "serendipidy", "evolution", "interview", "podcast" ]
Being interviewed by Connor Shorten of Henry AI Labs (https://www.youtube.com/channel/UCHB9VepY6kYvZjj0Bgxnpbw) on the topic of population-based methods and open-ended learning. Tutorial: https://www.facebook.com/icml.imls/videos/481758745967365/ Book: https://www.amazon.com/dp/B00X57B4JG/
Hi there, I've recently been interviewed by the YouTube channel Henry AI Labs by Connor Shorten, and what follows is the resulting conversation we had about population-based methods and open-ended learning, basically the topics of the ICML tutorial that we both saw. It's important to note that neither of us is really an expert on the topic, but we are trying to make sense of it and mainly just talking about the ideas. So please enjoy the conversation with Connor Shorten, definitely check out the Henry AI Labs channel, and now have a good time. Thanks for watching the Henry AI Labs deep learning podcast. Today I'm joined by Yannic Kilcher. Yannic works in the data analytics lab at ETH. He has a great YouTube channel; I really enjoy watching his paper summary videos. If you like any of the videos that I'm making, you'll definitely also like checking out his channel; I'm going to put the link in the description at the end of the talk. So Yannic, thanks for doing this with me, I really appreciate it. Thanks for having me, it's cool. So what we're going to talk about is population-based search and the presentation at ICML that I really thought was interesting, about emphasizing diversity and novelty in search. For the first question, I just wanted to start by generally talking about your opinion on population-based search and the differences between population-based search and my gradient descent going straight for one solution. Yeah, so the main difference is that in population-based search, as the name implies, you maintain a kind of large population of solutions. You don't want to limit yourself to just one trajectory, say I start here and then I run towards my goal; instead you maintain a lot of hypotheses of what the solution could be, and then you want to update all of them at the same time. There are many different variants of population-based search, but they all have this in common: you maintain many solutions, and you kind of bet on one of them becoming a good one. Yes, so one other thing they present is where they have the robot walking, and if it breaks one of its legs, for example, it can go back to the MAP-Elites table and say, okay, well, I've lost this leg, but maybe one of these other solutions still works. I wasn't too clear on how that would really be related, so I was wondering if you had more insight on that. Yes, so maybe the context is: you want to teach a robot to walk, and the robot has six legs, I believe. If you think of what the solution to the problem is, a solution is kind of an algorithm that takes the current sensor input and outputs how to move the motors. Now suppose you just have, say, your gradient descent algorithm converging on the single best solution of how to move the robot.
It's just going to learn: okay, these are the sensors, I'm going to move like this and this and this. But if one leg breaks, you're lost, because you only know this one way of moving, and that's it. In population-based search, if you think of a solution as a way to move, you maintain many, many ways to move. So the objective, if you can call it that, is basically: algorithm, find me a lot of different ways to move with my six legs. Now if one of my legs breaks, I can still evaluate all of the solutions, I can still find out which one is best. If one of them stops working, I have all these other solutions that I can try. So what they would do is: this leg falls away, now they just re-evaluate all of those solutions while only having five legs, and the best of those is much more likely to work than if you had just your single solution. That's why it's population-based: you maintain many different ways of solving the problem.
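To make the re-evaluation idea concrete, here is a schematic sketch. The names are hypothetical stand-ins, and the actual robot work discussed in the tutorial is more sophisticated than this; the point is only the brute-force intuition of keeping many diverse gaits around and re-scoring them when the robot changes.

```python
def recover(archive, evaluate_on_damaged_robot):
    """Pick the best of the previously discovered controllers on the changed robot.

    archive: list of controllers found before the damage (many diverse ways to move).
    evaluate_on_damaged_robot: hypothetical scoring function for the robot as it is now.
    """
    scored = [(evaluate_on_damaged_robot(c), c) for c in archive]
    best_score, best_controller = max(scored, key=lambda sc: sc[0])
    return best_controller, best_score

# Toy usage with stand-in "controllers" (just numbers) and a made-up score function:
archive = [0.1 * i for i in range(20)]
best, score = recover(archive, evaluate_on_damaged_robot=lambda c: -(c - 1.3) ** 2)
```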
Yes. I was also thinking about using these kinds of search algorithms for neural architecture search and things like that, so I'm trying to think of how you might extend these ideas from the robot walking with six legs to the RNN controller designing the convolutional network. Maybe I have more of a storage constraint and more of a latency constraint, and I could jump to a different solution like that. I'm just wondering how you think these ideas of population-based search translate to neural architecture search, and specifically whether it really is that important, because I feel like in neural architecture search you have such a direct signal with the classification accuracy; I don't see as much variance there in the objective function. Yeah, I really think these population-based approaches shine in multiple different areas, but one area where they definitely shine is when the environment changes, when something about your input changes, like the robot losing a leg. So in neural architecture search you might find these methods working if you go for, let's say, transfer learning: you train your network on one task, but you actually want to use it on another task, right? And then if you maintain many solutions and you can evaluate all of them in this transfer setting, it's much more likely that one of them is going to be fine. So you're right, I also believe that directly in architecture search these methods maybe don't yield that many great results. The other area where these methods shine, and this is with respect to algorithms like novelty search, which can be implemented as a population-based method, is what they illustrated with this really good example of deception in a search problem. A deception would be: you have a robot walking a maze, and the robot just wants to get to the goal, so you program the robot to be rewarded the closer it gets to the goal. But if there's a wall in between and you actually need to go around the wall, then for a while you would need to move away from the goal in order to reach it. If you have a purely objective-driven approach, you just go straight to the goal and you always get stuck at the wall. But if you do what is called novelty search, where you basically reward the robot for things it has never done before, it actually finds its way around the wall. So you can maintain a population of solutions that all explore the space. In neural architecture search, maybe there is a benefit there too: I probably always benefit from adding more layers or neurons or something like this, but maybe I actually want to prune some stuff first and then add some more stuff, so maybe I want to get worse first before I can get even better. So that is an area where I can imagine it happening, but I don't know. Yeah, I was thinking about the changing environment. I definitely think that when you deploy a model and then you're getting new data, you could frame that as a changing environment. And then also I was thinking about it in the context of GANs, which is something I think is really interesting: the discriminator classifying the generator's samples is a changing environment, because of the generator's updates. So maybe having some kind of population-based GAN or discriminator model might help it avoid that continual-learning problem, I guess. Yeah, that might very well be. There are approaches to GANs, I believe, where you have many discriminators and each one only has, let's say, its own limited view on the data, and you're trying to fool a lot of them at the same time. It's not the same thing, but yes, I think that might make sense. Yeah, I've seen that multiple-generator, multiple-discriminator model too; I think that's really interesting as well. So then one other thing I was curious about is this idea of goal switching and how it might relate to AutoML on our existing, more heavily studied things like classification, localization, semantic segmentation. How do you think goal switching could be important there? One idea I had is: maybe if you've got multi-class classification and it's got a really low false-positive rate or something on one class, you might say, well, you've somehow learned a decision boundary on that class. Or do you think that wouldn't generalize, and there's no sense in goal switching in a multi-class classification problem?
So yeah, when you think of goal switching in general, the way they introduced it was also in the context of this population-based search with MAP-Elites. What the MAP-Elites algorithm basically does is say: okay, I have a number of dimensions along which I could solve the problem. They introduce it like this: take life on earth, which needs to, whatever, survive. I can either be a super tall creature, to reach food that no one else can reach, I can be a super fast creature, to run away from everything, or I can be a super heavy creature, so that no one can attack me. These are the kinds of dimensions along which you can solve the problem of reproduction and survival. So what MAP-Elites does is segment this space, let's say size and speed, into a grid, and in each grid cell it maintains the best solution found so far that falls within that cell. What they see, when they evolve this over time and improve each cell, is that inventions, let's say algorithmic discoveries, in one cell, say for a very fast creature, can then be adapted to, let's say, the very heavy creatures. So the fast creature discovers that longer legs make it even faster, and the longer legs can then be combined into the heavy creature to do something else. That's this kind of goal switching: think of feathers first being developed, or evolved, for warmth, for temperature regulation, and then being goal-switched over and adapted for flight. Now, in terms of multi-class classification, I guess it's a bit of a different problem if you just have one classifier. You can definitely make the argument that since you're learning to classify one class really well, with the low false-positive rate, you have learned very good features for that class, and some other class, kind of like the zebra being a horse with stripes and the horse being a horse with the stripes feature really low, can then probably be classified better, or something; I'm making stuff up here. But it's a bit of a different context, I feel, if you have a single classifier doing multi-class classification. Definitely, though, the logic applies in feature space, I would say, where you learn features for one class and they might become useful for another class.
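Here is a minimal sketch of the MAP-Elites loop just described. The function names, the binning and the iteration counts are assumptions; the point is only that the archive keeps the best solution per behavior cell, and that mutating one cell's elite can produce offspring that land in, and improve, other cells, which is where the goal switching happens.

```python
import random

def map_elites(evaluate, behavior_descriptor, random_solution, mutate,
               n_random=100, n_iterations=10_000, n_bins=10):
    """Minimal MAP-Elites loop.

    evaluate(x)            -> fitness, higher is better
    behavior_descriptor(x) -> tuple of values in [0, 1), e.g. (size, speed)
    Returns a dict mapping grid cells to their best (fitness, solution) so far.
    """
    archive = {}

    def cell(x):
        return tuple(int(b * n_bins) for b in behavior_descriptor(x))

    def try_insert(x):
        c, f = cell(x), evaluate(x)
        if c not in archive or f > archive[c][0]:
            archive[c] = (f, x)

    for _ in range(n_random):            # seed the archive with random solutions
        try_insert(random_solution())
    for _ in range(n_iterations):        # mutate a random elite; its offspring may
        _, parent = random.choice(list(archive.values()))   # improve a different cell
        try_insert(mutate(parent))
    return archive
```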
Yeah, I had this other thought while you were discussing that: what about multi-class, multitask learning? Maybe my intermediate features get mapped to a classifier, get mapped to a segmentation, get mapped to a GAN. Could goal switching improve multitask learning? Yeah, I would definitely say so. I think that's exactly what we're seeing when you look at, for example, pre-training. If you think of these newest big language models like BERT or something, they're really good at certain tasks; sentiment classification is the classic NLP task they evaluate on because it's so easy. So let's say BERT is really good at sentiment classification. If you were to train it outright on sentiment classification, it's probably not going to work, because there's just too little signal. What happens instead is that you pre-train it as a language model, as this masked language model, and it gets really good at simply comprehending language, and that skill can then be adapted over into the sentiment classification realm. So I think if you look at something like pre-training, or multitask as you say, then definitely the addition of one task might give rise to certain features that can then all of a sudden be adapted by another task, whereas if you had just trained the latter task by itself, it maybe would have been too difficult. So yeah, there's definitely an analogy. So then what I think about is: I'm going from my pre-trained language model into sentiment classification, and maybe I also add question answering, document summarization, named entity recognition, like this vector of tasks that it can go do. I'm then curious, when you're goal switching, how do you then combine the features later on? Or do you just take it as, if I need this task, I'll go to this model? Well, the question here is whether you implement this as a single model, and the goal switching refers to features within that model, or whether you do this as a population-based method, where you basically maintain different neural networks for different combinations of these tasks. Then you'd actually need a method to combine and reproduce the neural networks themselves, which, yeah, I can see that's going to be a bit of a difficult task. Like some cross-distillation or something crazy, yeah, I don't know how that would work exactly. I just wonder about two things: for my population-based search, could you have the weights be the population, like different sets of weights? Or would it necessarily need to be taking apart the layers and designing new internal cells, as in architecture search? Because if I just have the weights, maybe I could treat the diversity search or goal switching like stochastic weight averaging and just mesh them all together when I'm finished with my goal switching at the end. Yeah, if you wanted to implement your multitask learning as a population-based approach, it would definitely give you an easier time if you keep the architecture of your neural networks the same and simply have different weights. Then you could indeed consider something like weight averaging, or, I guess, a more modern approach would be distillation from the two teacher models into one child model. That's actually a good metaphor for reproduction, a kind of distillation from multiple teacher models; I don't know if anyone's done that yet, but I guess that might be the way to do it. If you also maintain different architectures for different problems, that might be a bit more difficult. Yeah, that's an interesting thing too, if you have the goal switching and then you distill it all into one model. Well, if you think of MAP-Elites, you'd simply distill it into the appropriate cell; I don't even know what the axes would be. I can imagine, okay, you have three tasks, so you have three axes, and then you'd mix the tasks maybe in accordance with how far up each of these axes you are, or something like this. It's not exactly MAP-Elites, because your actual objectives are on the axes, but I don't know. Yes, pretty cool. So just to backtrack one step, I want to talk about diversity-centric search, novelty. When I was thinking about that, I wondered: can't you just initialize it such that it has maximum diversity? Can't you just initialize the population such that
they're all uniformly spaced, and then search locally from there? I just wonder what you think of that and how this is different. Yeah, so in these diversity search algorithms, basically what you're doing is, your only goal, or your main goal, depending on the algorithm, but let's say your only goal, is to find diverse behaviors, diverse solutions, diverse whatever. I think the main problem with that is that the search space is so extremely large that you're going to have a hard time even defining what a uniform distribution would be. It's such a high-dimensional space that even if you sample uniformly, the space is still almost empty; you're not getting anywhere, because you have a finite computer and you need to implement an algorithm. Even if my computer can hold a hundred thousand different members of a population, in high dimensions that is nothing. So to me, the initialization might definitely be important, but I don't think you'll get around some sort of iterative procedure, and around weeding things out so that you have space for interesting things, because ultimately what you want to find is something interesting. In the robot maze example, novelty search basically says: here is a robot, you start it, and then you want it to do something that it hasn't done yet. So if the robot crashes into a wall the first time, that's a good thing, and you say, oh cool, you haven't done that yet. But if it crashes into the wall a second time, you're like, you've done that already. So you basically need a measure of how close two behaviors are. But if the robot has crashed into every wall once, the only thing it can do, if it wants to do something new, is actually go around the wall, and then you're like, oh cool, you've done something new. The space of behaviors is often so large that you can't simply enumerate all the behaviors, and I think that's the main problem with just making it diverse from the beginning. Yeah, when I think about that, I was thinking that maybe the reward function, if you're navigating the maze, needs to be more refined: if it crashes into the wall, that needs to be, I don't know, plus three, some unique signal, in order to create that kind of feedback. Because if it's just reward zero everywhere and one if you hit the finish line, and then maybe some kind of discounting for how long it takes you to get there, I don't see how it could tell that it has done a new behavior, if that's all it has. So to me it feels like it's all about the design of the reward space to implement such a thing. Yes, absolutely. If you want to do novelty search, you definitely need to implement a measure of how close two behaviors are; there's no way around that, and I think that's the crux of this method: by specifying how close two behaviors are, what constitutes novelty and what doesn't, you are already implicitly telling the robot something about the nature of the world. So the objective, in a sense, sneaks back in: they now say, oh, we don't give the robot the objective of reaching the target, we simply give it the objective of not doing the same thing twice, but the objective sneaks in through the specification of how close two behaviors are. But this is just a really simple example.
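A small sketch of the behavior-distance measure that novelty search needs, as discussed above. The k-nearest-neighbor form is the commonly used one; the behavior descriptor (here just a vector, e.g. the robot's final position in the maze), the value of k and the archive threshold are design choices you have to supply.

```python
import numpy as np

def novelty(behavior, archive, k=15):
    """Novelty score: mean distance to the k nearest behaviors seen so far."""
    if not archive:
        return float("inf")
    dists = np.sort([np.linalg.norm(behavior - b) for b in archive])
    return float(np.mean(dists[:k]))

# Inside an evolutionary loop, individuals would be selected by this score instead
# of by distance to the goal, and sufficiently novel behaviors get archived:
archive = [np.array([0.0, 0.0]), np.array([1.0, 0.5])]
b = np.array([2.0, 2.0])
if novelty(b, archive) > 1.0:   # threshold is a made-up design choice
    archive.append(b)
```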
What they really want to say is that these methods become important when you have ambitious objectives. In the maze we can all agree that if we just design the reward well, crashing into walls bad, going around walls good, and so on, so that you don't have to head straight at the goal, then it's easy. But with really ambitious objectives, like, I don't know, flying, reaching the moon in the 1960s, designing general AI, curing cancer and so on, we don't actually know how to design the reward, because we don't know which steps need to be fulfilled in order to fly to the moon. I guess now we do, in hindsight, but we couldn't have predicted them, and we don't know which steps need to be discovered in order to cure cancer. And it's very probable, if you look at history, that the fundamental discoveries that lead to us curing cancer will not come directly from cancer research. That's their entire point: if it's a really ambitious goal, it's not like you can just pick it and go straight towards it; very probably the solutions will come in part from extremely unrelated fields, and you have to make advances everywhere in order to solve the problem. So to the question of whether it's all about designing the reward: yes, but we would have to know how the reward must look, and for these really ambitious objectives we don't. And that's where they argue that the best thing you can actually do is just explore; you find interesting things along the way, and you hope that the interesting things will combine to form new interesting things, but you just don't know where you're going to end up. Yeah, I guess maybe you could just keep the trajectory of states and use that as your signal of novelty. But then I think, if you've got a robotic arm with X degrees of freedom, the state space would be far too large to really say, oh, this is a significantly different sequence of states than this other one. So then the next thing: yes, I think this is a good transition into their Picbreeder experiment. For anyone listening who hasn't watched their talk: in Picbreeder they've got these generator neural networks with sets of weights, and humans go on and pick two of the generated images to blend together and derive a new image. This repeats on and on, until it goes from just a spiral pattern into a skull-face drawing or a butterfly drawing or something like that. This idea is supposed to represent open-endedness in an environment. I just found it really interesting; it's one of the things in their talk where you look at it and go, oh, interesting, what is going on here? But the mutation is really guided by the human search, which is so complex. I was just wondering what you thought of that Picbreeder experiment.
But, so that this, they actually they kind of start out with this as a motivational example of what if, what if the only goal is to do something interesting and without any objective so all you do is kind of choose slight variations on the current picture, and you see what you end up with and I thought, I thought it illustrates their points extremely well so it illustrates, for example, goal switching is that if you were done with your sequence of image manipulations you could then save it into the database and someone else could pick up from it, and then kind of continue it. And since every human finds slightly different things interesting right, you could take someone else's final result and say, ah, you know that that kind of looks weird but then you, your modifications to it will be different than that human continued breeding the picture. So what you end up is, and they show this, for example, one picture ends up being a car, and it had been adapted from an alien face where the eyes of the alien face became the wheels of the car. And so the first person might have been like, oh, this looks more and more like an alien face, I'm going to make it more like an alien face, and then the second person is like, oh, that kind of looks nice, I'm going to modify it in a different, so they basically give this example of if you have an ambitious goal like getting to a car just from these very simple picture generation networks. Then the stepping stones to get there have nothing to do with cars, and the people that did it didn't have a car in mind while going there. And the second thing is that if you try to get a car from the beginning, I believe they've done this, if you try to, you can't. Like, it's just the sequence of things that you have to go through is so complicated and convoluted that if you were to try to end up with a result, it's basically impossible. So these kind of illustrate their points very, very nicely. And I mean, it's a cool experiment in itself, but they use it kind of as a basis metaphor for them going on, jumping off. Yeah, I just think it's so interesting, this idea that it's like you can't design a car unless you don't try it, unless you just happen to come across that. It's sort of like I think about like if I was to fire up GarageBand and start trying to make a song, it's like I don't know exactly what it's going to sound like. I'm just going to kind of explore until I come across something. So then I was thinking about like with the GANs and the way that the GANs design images. So this is sort of a design I drew up that I'm curious what you think of. It's like what if the generator just tries to make some object and then a pre-chained classifier says, oh, I think it looks like this maybe. And then you send it to like a refining network. So the GAN just sort of searches for objects and then some classifiers are like, oh, I think it looks like sort of like how the pig breeders sort of like how we're like, oh, I think this looks like a skull or whatever. So I'm going to try to refine it now. Do you think that would be an interesting thing or? You'd have like a two stage process. First you do something general and then it gets classified. And then you'd have like a special generator just for the skull class and the special discriminator just for that. Yeah, I don't see why not. It might be hard. It might be hard to get the first generator to be sufficiently diverse. So you might might need some kind of discriminator signal at the even at the beginning. 
So yes, how do you think the Picbreeder experiment could become fully automated, such that there's no human in the loop?

Yeah, that's a thought I had as well, because to me it seems that the resulting pictures, the fact that they look like recognizable human objects, is a result of them being bred by humans. The fact that a picture looks like a car or a skull comes very much from that. But I guess that could be abstracted away: we would just not expect the results to be human-recognizable objects, but maybe something else. The much deeper construction in Picbreeder is that the measure of interestingness is provided by the humans. The humans click on a picture, they get variants of that picture, and they click on the one they like most. This sense of interestingness, this "I like this one", is the fundamental core that's provided by the humans as an input to the system; that's what drives the entire thing. And it's exactly the same as before: it's like when you teach the robot which two behaviors are close enough, like, no, that's too close to before, that's not novel, or yes, that's sufficiently different from before, that is novel. This sense is something you either need to specify or you need a human in the loop to provide. I feel it's very, very hard to capture that in an algorithm as of today.

Yeah, something I think about is that maybe I'd have my thousand-class ImageNet classifier, and then maybe a style classifier, like a neural style transfer network where I've chopped it off at some intermediate feature and I take that as my style. So maybe I'm classifying: I think it's an airplane, and I kind of like this style for it. That's how I would think about trying to automate that. I don't know, I guess it's interesting. But I also feel like when you're doing Picbreeder, you're thinking: now that I see this vision, I'm going to try to make it look like that. Like, I think I could mold this into a skull, and then you start doing that.

Yes, very much so. They're not advocating random exploration. What they're advocating is basically: if you have an ambitious goal, then you don't know the stepping stones, but from stepping stone to stepping stone, that's where objectives are very handy. So when you want to say, this already kind of looks like something, I want to make it more like that, I want to make it more into a skull, it already has two circles and kind of the shape, but I'm going to drive it there, that can be very objective driven. But in the grand scheme of things, you don't know. Then once you have the skull, someone else can develop that into yet another new thing. So indeed, if you're doing a kind of local search in this space, then objective-driven behavior like what you're saying, I want to make it as much this as possible, is actually a thing they advocate for. But then from that end result you would need to restart again and do the same thing with something else.

Huh, yeah, it's really interesting.
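One way to sketch the kind of automated "interestingness" signal being speculated about here is the usual novelty-search recipe: embed each candidate with some fixed feature extractor, keep an archive of past behaviors, and score a candidate by its distance to its nearest neighbors in that archive. The embedding function, the threshold, and the toy candidates below are all assumptions for illustration, not anything from the talk; in practice the embedding might come from a pretrained classifier or style network as discussed above.

```python
import numpy as np

# Minimal novelty-score sketch: "interestingness" as distance to the
# k nearest previously-seen behaviors in some embedding space.

def embed(candidate):
    # Placeholder embedding: in practice this would be a feature vector
    # from a pretrained network, not a hash-seeded random vector.
    rng = np.random.default_rng(abs(hash(candidate)) % (2**32))
    return rng.standard_normal(16)

def novelty_score(candidate_embedding, archive, k=5):
    """Mean distance to the k nearest archive members (higher means more novel)."""
    if len(archive) == 0:
        return float("inf")
    dists = np.linalg.norm(np.stack(archive) - candidate_embedding, axis=1)
    return float(np.sort(dists)[:k].mean())

# Toy loop: keep only candidates that are sufficiently novel.
archive, threshold = [], 4.0
for name in ["spiral", "spiral_v2", "alien_face", "car", "skull"]:
    e = embed(name)
    score = novelty_score(e, archive)
    if score > threshold:
        archive.append(e)  # novel enough: becomes a new stepping stone
    print(name, round(score, 2) if np.isfinite(score) else "first")
```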
Just thinking about the stepping stones: how would you define the space of stepping stones for any kind of problem? I guess you could still design some kind of space, maybe it's discrete, or maybe there's some kind of signal you can get back from it. It's just a lot to think about.

I think they give a great analogy for this. If you have a really ambitious objective, it's like crossing a lake, but the lake is covered in fog. You basically can't see very far, but you can always see the next stepping stones. You can try to go from stepping stone to stepping stone, but you don't know which one to take if there's a fork; two ways are possible and you don't know which one. So all you can do is take the most interesting one. And they relate this to scientific research. If we want to accomplish some really great research goal, like artificial general intelligence, we don't know how to get there, but we can see the next stepping stones. We can see, from what we have right now, what interesting combination we could make that still makes sense, that's not total garbage. So in the local search I can say, I want to do this, I want to do multiple generators and multiple stages and then this thing. That is kind of a stepping stone, and maybe it will then lead to something more interesting, and so on. That's how they relate it. I like this metaphor of the lake.

Yeah. Could a meta-controller try to put the stones down, guided by the objective, or is the space so enormous that the idea of having a meta-controller guide the stepping stone placement is absurd and there's no way it would work? That's sort of where my thinking is going now.

So that's exactly the question. I believe you need such a meta-something, because the space is too large; you somehow need a way to choose the stepping stones in the first place. Now, what they're saying is that if your goal is really ambitious, then a meta-controller that simply wants to reach the goal is bad, because, as we discussed before, you might need a lot of inventions from other fields in order to make that goal happen, and if you simply push with maximum power towards your goal within your own field, that's not going to happen. But if your meta-controller is actually just something that wants to produce interesting things, then that's exactly what they advocate for; that is exactly what their algorithms are trying to capture. Locally, yes, we want to get better at a particular thing, but what those particular things are, and in what order, should be novelty driven instead of goal driven.

Yeah, the interesting component. I guess I'm sort of biased towards liking objective design, and now I'm thinking, OK, let's abstract those meta-controllers one level up and have a meta-meta-controller, and just repeat this, and hierarchy makes sense.
And if you're a bit cynical, that is what you will also hear out of this, and they have to argue against it a lot in their book: isn't the implementation of a meta-controller that just searches for novelty itself an objective again? They give some good reasons why it actually isn't; it is different. It's more like a constraint on your search. Think of natural evolution, for example: it doesn't really have an objective. You might think reproduction and survival is the objective of natural evolution, but the good argument they give is that that objective had already been fulfilled by the very first organism to ever live. Why didn't it stop there? Why didn't it stop at the very first cell: okay, done, objective fulfilled? It's more of a constrained optimization, where the constraint is that you need to be able to survive; that's the minimum bar for being on this planet. And I'm saying constrained optimization, but it's not really an optimization, it's more of a constrained search.

OK, yeah. I guess I've been stuck in this world of trying to think of these constraint problems, and I haven't really thought more generally about exploration as a whole. But anyway, I just wanted to ask you generally, as a deep learning researcher: what areas of deep learning are you really interested in right now, and what do you think is promising in the near future?

So I'm currently working on adversarial examples. That is a really interesting topic; there are lots of questions still open. But I'm generally interested in pretty much anything that is not just the newest technique for getting the latest state-of-the-art numbers, even though that's probably super important for practitioners. Basically, I agree with the authors of this tutorial: let's just try to do interesting things. And to me, these areas of open-ended search and open-ended learning are very interesting. I think reinforcement learning still has a long way to go. I think NLP actually still has a long way to go as well, because I don't believe the current models are the end of it. So I think it's a really exciting time.

Yeah, I love thinking about adversarial examples because it definitely flips the CNN idea on its head. One other thing about adversarial examples I'm interested in: there's an interview between Elon Musk and the researcher Lex Fridman where he asks about adversarial examples on Tesla's self-driving cars, and Musk seems dismissive of it. He thinks you could basically just average different patches, like test-time augmentation, to overcome adversarial examples. So in your research, with the example where they add the noise mask to the panda and the network says it's a gibbon, if they just perturbed it nine more times, do you think the prediction would average out to panda?

That is a very difficult question. From experience, simply adding noise and then feeding it to the classifier, even if you average after that, usually will defend against adversarial examples to a point, but it will also degrade your classification performance.
Because, and maybe I understood it wrong, my understanding is: I have my input, I simply add noise to it and feed it through the network, and I could do this many times and then average the predictions. Usually this will help against adversarial examples, but it will also degrade the accuracy of the classifier. So it might actually make your self-driving car worse overall. Because how often is it going to be attacked with an adversarial example? Maybe once or twice a year, maybe if it drives by some hacker's house with a sticker on a stop sign or something. But the rest of the time I would like to retain the best possible classifier, and if I always have to add noise, that's not possible. So the research we're doing actually goes in the direction of: can we retain the original accuracy while still detecting these samples? You somehow have to make a trade-off somewhere, but just adding noise isn't the final solution yet.

I see. So these adversarial examples only cause misclassifications like that if the perturbation really is adversarially sought after? It's not like a random noise perturbation would stumble on it, since the space is so enormous.

Yes, you really need to try, so it's very unlikely that some random perturbation does it. Of course, these networks can be confused by random things; I think one of the self-driving cars once drove into a big white truck because it was large and white, so it thought it was sky. But other than such failures, you really have to try to find an adversarial example.

Really cool. Yannic, thanks so much for doing this. Anybody watching or listening, definitely check out Yannic's YouTube channel. He has really great paper summaries and all sorts of things.

Thank you. Thanks so much for having me.
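As a rough illustration of the test-time noise-averaging discussed in the interview above (add Gaussian noise to the input several times, average the softmax predictions), here is a minimal Python sketch. The classifier, the noise level, and the number of samples are placeholder assumptions; this shows the generic recipe, not the detection approach mentioned at the end.

```python
import torch
import torch.nn as nn

# Sketch of prediction averaging under input noise: run the same input
# through the classifier several times with fresh Gaussian noise and
# average the softmax outputs. This tends to blunt small adversarial
# perturbations but also costs clean accuracy, as discussed above.

def noisy_average_predict(model, x, n_samples=16, sigma=0.1):
    model.eval()
    with torch.no_grad():
        probs = []
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)
            probs.append(torch.softmax(model(noisy), dim=1))
        return torch.stack(probs).mean(dim=0)  # averaged class probabilities

# Toy usage with a placeholder classifier standing in for a real network.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)  # one input image
avg_probs = noisy_average_predict(model, x)
print("predicted class:", int(avg_probs.argmax(dim=1)))
```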
[ { "end": 5.74, "start": 0, "text": " Hi there, I've recently been interviewed by the YouTube channel Henry AI Labs" }, { "end": 8.3, "start": 6.3, "text": " by Connor Shorten and" }, { "end": 14.74, "start": 8.64, "text": " what follows is the resulting conversation we had about population-based methods and" }, { "end": 22.400000000000002, "start": 15.84, "text": " Open-ended learning things like that basically topics of the ICML tutorial that we both saw" }, { "end": 27, "start": 23.44, "text": " It's important to note that none of us is really an expert on the topic" }, { "end": 30.8, "start": 27, "text": " but we are trying to make sense of it and" }, { "end": 33.68, "start": 31.68, "text": " mainly just kind of talking about the ideas" }, { "end": 40.8, "start": 33.68, "text": " So please enjoy the conversation with Connor Shorten definitely check out the Henry AI Labs channel and" }, { "end": 43.64, "start": 41.64, "text": " Now have a good time" }, { "end": 48.72, "start": 43.96, "text": " Thanks for watching the Henry AI Labs deep learning podcast today. I'm joined with Janek Kilcher" }, { "end": 54.24, "start": 48.92, "text": " Janek works in the data analytics lab at ETH. He has a great YouTube channel" }, { "end": 57.24, "start": 54.24, "text": " I really enjoy watching his paper summary videos" }, { "end": 61.24, "start": 57.24, "text": " If you like any of the videos that I'm making you definitely also like checking out this channel" }, { "end": 63.52, "start": 61.24, "text": " I'm gonna put the link in the description at the end of the talk" }, { "end": 66.92, "start": 63.92, "text": " So Janek, thanks for doing this with me. I really appreciate it" }, { "end": 74.24000000000001, "start": 68.28, "text": " Thanks for having me. It's cool. So what we're gonna talk about is population-based search and" }, { "end": 78.72, "start": 75.8, "text": " Presentation that ICML that I really thought was interesting about" }, { "end": 81.12, "start": 79.86, "text": " emphasizing" }, { "end": 88.96000000000001, "start": 81.12, "text": " Diversity and novelty in search. So the first question I just wanted to start by generally talking about your opinion on population-based search and" }, { "end": 95.92, "start": 90.24000000000001, "text": " The differences between population-based search and my gradient descent going straight for one solution" }, { "end": 100.08000000000001, "start": 97.56, "text": " Yeah, so the the kind of main difference" }, { "end": 107.36000000000001, "start": 100.72, "text": " Is that in population-based search as the name implies you maintain kind of a large population of solutions?" 
}, { "end": 113.66, "start": 107.36, "text": " So you don't want to limit yourself to just one trajectory say I start here and then I run towards my goal" }, { "end": 120.14, "start": 113.66, "text": " but you kind of maintain a lot of hypotheses of what the solution could be and then you kind of" }, { "end": 124, "start": 120.72, "text": " want to update all of them at the same time and" }, { "end": 127.88, "start": 124.4, "text": " So there's many different variants of population-based search" }, { "end": 135.16, "start": 127.88, "text": " but they all have this this thing in common where you maintain many solutions and you kind of bet on" }, { "end": 137.82, "start": 135.16, "text": " One of them becoming a good one" }, { "end": 139.85999999999999, "start": 138.34, "text": " basically" }, { "end": 145.74, "start": 139.85999999999999, "text": " Yes, so one other thing they they present their paper where they have the robot walking" }, { "end": 151.66, "start": 145.74, "text": " And if it breaks one of its legs, for example, it can go back to the map elites table and and say okay" }, { "end": 155.42, "start": 151.66, "text": " Well, I've lost this leg, but I think maybe this solution" }, { "end": 161.78, "start": 155.42, "text": " I was I wasn't too clear on how that would really be related. So I was maybe wondering if you had more insight on that" }, { "end": 166.98, "start": 161.78, "text": " Yes, so the so the maybe the the context is yeah" }, { "end": 170.78, "start": 166.98, "text": " You want to teach a robot to walk and the robot had six legs" }, { "end": 176.5, "start": 170.78, "text": " I believe so and if you think of what's the solution to the problem a solution is kind of an" }, { "end": 183.94, "start": 177.06, "text": " Algorithm that takes the current sensor input and outputs how to move the motors, right? So and" }, { "end": 191.3, "start": 184.74, "text": " If you just have like say your gradient descent algorithm converging on the best solution of the robot" }, { "end": 195.34, "start": 191.3, "text": " Of how to move the robot. It's just going to be like, oh, these are the sensors" }, { "end": 199.02, "start": 195.34, "text": " Okay, I'm gonna move like this like this like this like this but if" }, { "end": 207.98000000000002, "start": 200.06, "text": " One leg breaks, of course, you're lost because you only know this one way of moving and now the sorry" }, { "end": 211.54000000000002, "start": 209.54000000000002, "text": " So you only know this one way of moving" }, { "end": 213.94, "start": 212.18, "text": " Basically, and that's it" }, { "end": 220.34, "start": 213.94, "text": " But in population based search if you think of the solution as a way to move you maintain many many" }, { "end": 222.34, "start": 220.34, "text": " many ways to move" }, { "end": 224.38, "start": 222.70000000000002, "text": " so you" }, { "end": 225.9, "start": 224.38, "text": " basically the" }, { "end": 227.9, "start": 225.9, "text": " objective if you can call it like this is" }, { "end": 234.06, "start": 229.9, "text": " Algorithm find me a lot of different ways to move" }, { "end": 240.3, "start": 234.5, "text": " Right with my six legs and now if one of my legs I still can evaluate all of them" }, { "end": 244.94, "start": 240.3, "text": " I still can find okay, which one's the best but if now one of them falls away" }, { "end": 247.68, "start": 244.94, "text": " I have all these other solutions that I can try" }, { "end": 254.08, "start": 247.68, "text": " Right. 
So then what they would do is like this life falls away. Now. They just reevaluate all of those solutions" }, { "end": 260.92, "start": 254.88, "text": " while only having five legs and the best of those like is much more likely to kind of" }, { "end": 265.52, "start": 262, "text": " Work then if you had just your single solution" }, { "end": 272.26, "start": 265.76, "text": " So that kind of that's the its population base because you maintain many different ways of solving the problem" }, { "end": 280.62, "start": 272.26, "text": " Yes, I was also thinking about like using the search algorithms that control neural architecture search and things like that" }, { "end": 286.02, "start": 280.9, "text": " So it's trying to think of how you might extend these ideas from the robot walking with six legs" }, { "end": 291.48, "start": 286.26, "text": " To the RNN controller designing the convolutional network, but like maybe I might have" }, { "end": 294.82, "start": 292.82, "text": " like more of a" }, { "end": 299.18, "start": 295.3, "text": " Storage constraint and more of a latency constraint and I could jump to a different solution like that" }, { "end": 306.82, "start": 299.18, "text": " I'm just wondering how you think like these ideas of population-based search translate into the neural architecture" }, { "end": 316.16, "start": 306.94, "text": " search and specifically if it really is important because like you've got I feel like in neural architecture search you have such a direct signal with the" }, { "end": 323.02, "start": 316.82, "text": " Classification accuracy like I don't see as much variance as those in the in the objective function" }, { "end": 328.34, "start": 323.02, "text": " Yeah, I really think this population based approach is they shine in" }, { "end": 334.21999999999997, "start": 328.65999999999997, "text": " So they shine in multiple different areas, but one area where they shine is definitely when the environment changes" }, { "end": 343.06, "start": 334.7, "text": " So when you know something about whatever your input changes like the robot losing a leg so in kind of neural architecture search you might" }, { "end": 349.41999999999996, "start": 343.62, "text": " You might find these methods working if you then go for let's say transfer learning" }, { "end": 355.66, "start": 349.42, "text": " So you kind of train your network on one task you want to actually do it in another task, right?" 
}, { "end": 360.3, "start": 355.66, "text": " And then if you maintain many solutions and you can evaluate all of them" }, { "end": 366.94, "start": 360.78000000000003, "text": " In a in this transfer setting it's much more likely that one of them is gonna be is gonna be fine" }, { "end": 372.02000000000004, "start": 366.94, "text": " So but you're right of I also believe that directly in architecture search" }, { "end": 374.54, "start": 372.54, "text": " Maybe it's not" }, { "end": 377.74, "start": 374.94, "text": " Maybe it doesn't yield that many grades" }, { "end": 381.1, "start": 377.74, "text": " results though the other of course the other" }, { "end": 388.62, "start": 382.46000000000004, "text": " Area where these methods shine and this is with respect to algorithms like novelty search" }, { "end": 394.06, "start": 390.46000000000004, "text": " Which can be implemented as a population based method is" }, { "end": 400.14, "start": 395.34000000000003, "text": " They gave this really good example of deception in a search problem" }, { "end": 406.14, "start": 400.14, "text": " So a deception would be like if you have a robot walking a maze and the robot just wants to get to the goal" }, { "end": 411.74, "start": 406.14, "text": " Right and you would program it the robot to be rewarded the closer it gets to the goal" }, { "end": 417.18, "start": 412.21999999999997, "text": " But if like there's a wall in between and you actually need to go around the wall" }, { "end": 422.14, "start": 417.18, "text": " Kind of then for a while you would need to move away from the goal in order to reach it" }, { "end": 427.74, "start": 422.14, "text": " So if you have like a pure objective driven approach, you just go straight to the goal" }, { "end": 429.74, "start": 427.74, "text": " You would always get stuck at the wall" }, { "end": 436.3, "start": 429.74, "text": " But if you then kind of do what is called a novelty search where you basically reward the robot for things" }, { "end": 440.62, "start": 436.3, "text": " It has never done before it would actually find its way around the wall" }, { "end": 445.26, "start": 440.62, "text": " So you can maintain population of solutions that all kind of explore the space" }, { "end": 450.7, "start": 445.26, "text": " And that in our neural architecture search, maybe it's of a benefit that actually" }, { "end": 458.22, "start": 451.42, "text": " You know if I I probably always benefit from like adding more layers or neurons or something" }, { "end": 463.02000000000004, "start": 458.22, "text": " like this, but maybe I actually want to prune some stuff first and then add some more stuff" }, { "end": 467.1, "start": 463.02000000000004, "text": " So I maybe want to get worse first before I can get even better, right?" }, { "end": 473.74, "start": 467.1, "text": " So so is this a reach where I can imagine that happening? 
But I don't know" }, { "end": 476.54, "start": 473.74, "text": " Yeah, I was thinking the changing environment" }, { "end": 482.70000000000005, "start": 476.54, "text": " I definitely think like when you deploy a model and then you're getting new data that you could frame that as a changing environment" }, { "end": 486.86, "start": 482.70000000000005, "text": " And then also I was thinking about like in the context of GAN" }, { "end": 492.78000000000003, "start": 486.86, "text": " Which is something that I think is really interesting that the discriminator classifying the GAN" }, { "end": 497.02000000000004, "start": 492.78000000000003, "text": " Sam the generator samples, it's a changing environment because of the generators updates" }, { "end": 505.34000000000003, "start": 497.02000000000004, "text": " So maybe having some kind of population based GAN or discriminator model might help it avoid that like" }, { "end": 509.26, "start": 505.34000000000003, "text": " Continual learning problem, I guess is sort of an" }, { "end": 514.22, "start": 510.7, "text": " Yeah, that could that might as might very well be" }, { "end": 520.22, "start": 514.22, "text": " There are approaches to GANs, I believe where you basically you have like many discriminators" }, { "end": 525.4200000000001, "start": 520.22, "text": " And each one kind of only has let's say has its own limited view on the data" }, { "end": 529.5, "start": 525.4200000000001, "text": " And you're trying to kind of fool a lot of them at the same time, but it's not the same thing. But yes" }, { "end": 536.38, "start": 529.5, "text": " I think that that might make sense. Yeah, I've seen that multiple generator multiple discriminator model too" }, { "end": 538.38, "start": 536.38, "text": " I think that's really interesting as well" }, { "end": 548.38, "start": 538.38, "text": " So then one other thing I was curious about is this idea of goal switching and how that might relate to the like AutoML on our existing" }, { "end": 553.42, "start": 548.38, "text": " More like heavily studied things like classification, localization, semantic segmentation" }, { "end": 556.62, "start": 553.42, "text": " Like how do you think goal switching could be important?" }, { "end": 560.62, "start": 556.62, "text": " Like one idea I had is maybe if you've got like multi-class classification" }, { "end": 565.18, "start": 560.62, "text": " And it's got like a really low false positive rate or something on like one class" }, { "end": 569.18, "start": 565.18, "text": " You might say well you've somehow learned a decision boundary on that class" }, { "end": 577.18, "start": 569.18, "text": " Or do you think that wouldn't generalize and that there's no sense in goal switching in like a multi-class classification problem?" 
}, { "end": 583.18, "start": 577.18, "text": " So yeah, in general, well when you think of goal switching in general" }, { "end": 589.18, "start": 583.18, "text": " How they introduced it was also in the context of like this population based search of these map elites" }, { "end": 595.18, "start": 589.18, "text": " Maybe it's kind of so what map elites the algorithm does basically is it says" }, { "end": 600.18, "start": 595.18, "text": " Okay, I have a number of dimensions that I could solve the problem on and they introduced" }, { "end": 605.18, "start": 600.18, "text": " Okay, let's take life on earth needs to whatever survive" }, { "end": 611.18, "start": 605.18, "text": " So I can either be like a super tall creature right to reach food that no one else can reach" }, { "end": 615.18, "start": 611.18, "text": " I could be a super fast creature right to kind of run away from everything" }, { "end": 619.18, "start": 615.18, "text": " Or it can be a super heavy creature so that no one can attack me" }, { "end": 626.18, "start": 619.18, "text": " And so these are kind of the dimensions that you can solve the problem of reproduction and survival" }, { "end": 633.18, "start": 626.18, "text": " And within so what map elites does it it would segment this area" }, { "end": 638.18, "start": 633.18, "text": " So let's say size and speed it would segment this into a grid" }, { "end": 645.18, "start": 638.18, "text": " And then in each grid it would kind of maintain the best solution so far that is within that grid" }, { "end": 654.18, "start": 645.18, "text": " And then what they see is when they then kind of evolve this over time and improve each each grid is that" }, { "end": 662.18, "start": 654.18, "text": " Inventions let's say inventions algorithm discoveries in one grid say for a very fast creature" }, { "end": 669.18, "start": 662.18, "text": " They would then kind of be adapted to the very let's say the very heavy creatures so like fast creature" }, { "end": 672.18, "start": 669.18, "text": " Kind of discovers or longer legs make me even faster" }, { "end": 677.18, "start": 672.18, "text": " Maybe the longer legs can be then be combined in the heavy creature to do something else" }, { "end": 687.18, "start": 677.18, "text": " So this kind of goal switching it's think of like feathers being first kind of developed or evolved for warmth" }, { "end": 693.18, "start": 687.18, "text": " For temperature regulation then being goal switched over to adapt it for flight" }, { "end": 702.18, "start": 693.18, "text": " So in the in terms of multi class classification I guess it's a bit of a different problem if you just have one classifier" }, { "end": 709.18, "start": 702.18, "text": " You can definitely make the argument that since you know you're learning maybe to classify one class really well" }, { "end": 714.18, "start": 709.18, "text": " The low false positive rate you have learned very good features for that class" }, { "end": 724.18, "start": 714.18, "text": " And if some other class kind of like the zebra is a horse with stripes and then the horse is a horse" }, { "end": 731.18, "start": 724.18, "text": " But with the feature stripes being really low you can probably classify that better or something making stuff up here" }, { "end": 739.18, "start": 731.18, "text": " But it's a bit of a different context I feel the if you have a single classifier do multi class classification" }, { "end": 749.18, "start": 739.18, "text": " But definitely the logic applies in the feature space I would say 
where you learn features for one class and they might become useful for another class" }, { "end": 756.18, "start": 749.18, "text": " Yeah I had this other thoughts sort of when you're discussing that is like what about like multi class multitask learning" }, { "end": 763.18, "start": 756.18, "text": " Like maybe my intermediate features get mapped to a classifier get mapped to a segmentation get mapped to again" }, { "end": 768.18, "start": 763.18, "text": " Like could goal switching improve multitask learning" }, { "end": 776.18, "start": 768.18, "text": " Yeah I would definitely say so I think that that's exactly what we're seeing when you look at for example pre training" }, { "end": 785.18, "start": 776.18, "text": " So if you think of like these wherever these newest big language models like BERT or something they're really good at tasks" }, { "end": 794.18, "start": 785.18, "text": " I don't know what it was an NLP task labeling of sentiment sentiment classification is the classic right" }, { "end": 801.18, "start": 794.18, "text": " If they evaluate on that because it's so easy but let's say BERT is really good at sentiment classification" }, { "end": 810.18, "start": 801.18, "text": " But if you were to just to train it out right on sentiment classification it's probably not going to work because there's just too little signal" }, { "end": 820.18, "start": 810.18, "text": " But then what happens is you pre train it as a language model as this masked language model and it kind of gets really good at simply comprehending language" }, { "end": 832.18, "start": 820.18, "text": " And that skill can then be kind of adapted over into the into the cement sorry into the sentiment classification realm" }, { "end": 845.18, "start": 832.18, "text": " So I think if you look at something like pre training or multitask as you say then definitely one tap what the addition of a task might give rise to certain features" }, { "end": 855.18, "start": 845.18, "text": " That then all of a sudden can be adapted by another task whereas if you just trained the latter task by itself that maybe would have been too difficult" }, { "end": 864.18, "start": 855.18, "text": " So yeah there's definitely an analogy so then what I think about is so I'm going from my pre training language model into sentiment classification" }, { "end": 872.18, "start": 864.18, "text": " And maybe I also add like question answer during document summarization named entity like this like vector of tasks that it can go do" }, { "end": 886.18, "start": 872.18, "text": " I'm then curious like when your goal switching it's like how do you then combine the features later on or do you just like take it as if I need this task I'll go to this model like yeah" }, { "end": 895.18, "start": 886.18, "text": " Well the question here is do you whether or not you implement this as a single model and kind of refer to the goal switching of features within that model" }, { "end": 907.18, "start": 895.18, "text": " Or whether you also do this now as a population based method where basically you maintain you you maintain different neural networks for different combination of these tasks" }, { "end": 919.18, "start": 907.18, "text": " Then you'd actually need a method to kind of combine and reproduce the neural networks themselves which I yeah I see that's that's going to be a bit of a difficult task" }, { "end": 927.18, "start": 919.18, "text": " Like some cross distillation or some something crazy yeah I don't know how that will work exactly" }, { 
"end": 937.18, "start": 927.18, "text": " Yeah I just wonder about two things it's like do for my population based search could you have like the weights be the population like different sets of weights" }, { "end": 945.18, "start": 937.18, "text": " Or would it necessarily need to be like taking apart the layers and designing new internal like cells as in the architecture search like" }, { "end": 957.18, "start": 945.18, "text": " Because if I just have the weights maybe I could treat the diversity search or goal switching as like stochastic weight averaging and just like mesh them all together when I'm finished with my goal switching at the end" }, { "end": 980.18, "start": 957.18, "text": " But if it's yeah it's definitely be if you wanted to if you yeah if you wanted to if you wanted to implement your multi task multi task tasking as a population based approach" }, { "end": 993.18, "start": 980.18, "text": " Where yeah you could def it would definitely give you an easier time if you keep the architecture of your neural networks the same and simply have different weights" }, { "end": 1007.18, "start": 993.18, "text": " And then you could indeed consider something like weight averaging or or yeah I guess a more modern approach will be like distillation from the two teacher models into one child model" }, { "end": 1025.1799999999998, "start": 1007.18, "text": " It's actually a good metaphor for a for reproduction kind of a distillation from multiple teacher model don't know if anyone's done that yet but yeah I guess that that might be the way to do it if you also maintain different architectures for different problems that might be a bit of a yeah" }, { "end": 1034.1799999999998, "start": 1025.1799999999998, "text": " Yeah that's an interesting thing too if you have the goal switching and then you model distill it all into one model that is yes" }, { "end": 1059.18, "start": 1034.18, "text": " Well if you think of map elites right you'd simply you'd simply distill it into the appropriate I don't even know what the what the axis would be probably I can imagine okay you have like three tasks so you have three axis and then you'd mix the task maybe in accordance on how far up your of these axes you are or something like this" }, { "end": 1067.18, "start": 1059.18, "text": " It's not exactly map elites because your actual objectives are on the axis but I don't know" }, { "end": 1087.18, "start": 1067.18, "text": " Yes pretty cool so just to backtrack one step I want to talk about like diversity centric search novelty like when I was thinking about that I was like can't you just initialize it such such that it has maximum diversity like can't you just initialize the population such that they're all like uniformly spaced and then search locally from there" }, { "end": 1092.18, "start": 1087.18, "text": " So I just wonder what you think on that and how this is different from that" }, { "end": 1121.18, "start": 1092.18, "text": " So yeah in these in these diversity search algorithms basically what you're you're doing is your your only goal is or your main goal depends on the algorithm but let's say your only goal is to find diverse behaviors or diverse solutions diverse whatever I think the main problem with that is is that the search space is so extremely large" }, { "end": 1148.18, "start": 1121.18, "text": " That you're going to have a hard time even even defining what a kind of a uniform distribution is because it's such a high dimensional space that even if you sample uniformly it's it's 
almost empty like you're almost right you're not you're not getting anywhere because you have finite finite computer you need to implement an algorithm" }, { "end": 1167.18, "start": 1148.18, "text": " Even if you even if my computer can hold a hundred thousand different members of a population in high dimensions that is nothing right so to me yet the initialization might be definitely important" }, { "end": 1186.18, "start": 1167.18, "text": " But I don't think you'll you'll get around some sort of iterative procedure and going around weeding out weeding out things such that you have space for interesting things because ultimately what you want to find is something interesting" }, { "end": 1208.18, "start": 1186.18, "text": " In the robot maze example the novelty search basically is here is a robot you started right and then you want to do something that you haven't done yet right so if the robots crashes into a wall the first time that's a good thing you say oh cool you haven't done that yet" }, { "end": 1232.18, "start": 1208.18, "text": " But if it crashes into the wall the second time you're like you've done that already right so you you you basically need a measure of saying how close to behaviors are but if the robot has crashed into every wall once the only thing it can do if it wants to do something new is actually go around the wall and then you're like oh cool you've done something new" }, { "end": 1247.18, "start": 1232.18, "text": " But the space of behaviors often is so large that you can't simply enumerate all the all the behaviors so you I think that's the main problem why you can't just make it diverse from the beginning" }, { "end": 1264.18, "start": 1247.18, "text": " Yeah when I think about that I was thinking that maybe the like reward function if you're like navigating the maze it needs to be more refined so like if it crashes into the wall that needs to be like I don't know plus three some some like unique signal I feel like in order to create that kind of because like" }, { "end": 1286.18, "start": 1264.18, "text": " Thinking of if it's just like reward zero everywhere but one if you hit that finish line and then maybe some kind of like discounting for how long it takes you to get there is like I don't see how it could interpret that it's done a new behavior if all it has is it so to me it feels like it's all about the design of the reward space now to implement such a thing" }, { "end": 1308.18, "start": 1286.18, "text": " Yes absolutely so the that the definitely if you wanted to do novelty search you would need to implement a measure of how close to behaviors are so there's no way around and I think that's kind of crux of the of this method is that by specifying how close to behaviors are so what what constitutes novelty and what doesn't" }, { "end": 1331.18, "start": 1308.18, "text": " You already implicitly kind of telling the robot something about the nature of of the world so I think that the kind of the objective because they now say oh we don't give the robot the objecting of reaching the target we simply give it the objective of not doing the same thing twice I think the kind of objective sneaks in" }, { "end": 1353.18, "start": 1331.18, "text": " Like again through the specification of how of you how close are to be a risk but definitely this is just kind of a really simple example of what they want to say is that these methods really become important when you have ambitious objectives in the maze we can all agree if we just designed the reward" }, { 
"end": 1374.18, "start": 1353.18, "text": " Crashing walls bad you don't have to actually go straight to the goal you can you know but go around walls good and so on then it's easy right but in really ambitious objectives like I don't know flying reaching the moon in the in the 1960s designing general AI" }, { "end": 1395.18, "start": 1374.18, "text": " Curing cancer and so on we don't actually know how to design the reward right because we don't know which steps need to be fulfilled in order to to fly to the moon I guess now we do in hindsight right but we couldn't have predicted we don't know which steps need to be discovered" }, { "end": 1418.18, "start": 1395.18, "text": " In order to cure cancer and it's very very probable if you look at history that the fundamental discoveries that lead to us curing cancer will not directly come from cancer research that's that's their entire point right it's not like you can have a goal go straight towards it if it's like a really ambitious goal very probably" }, { "end": 1445.18, "start": 1418.18, "text": " The solutions will come in part from extremely non related fields and they and you kind of have to make advances everywhere and in order to solve that problem so the the the question of it's all designed it's all about designing the reward yes but we would have to know how the reward must be must look and in these really ambitious objectives we don't" }, { "end": 1469.18, "start": 1445.18, "text": " And that's that's where they argue well the best thing actually you can do is to just explore and you just find interesting things along the way and you kind of hope that these interesting things will come no you know the interesting things will combine to form new interesting things right but you just don't know where you're going to end up right" }, { "end": 1495.18, "start": 1469.18, "text": " Yeah, I guess maybe you could just keep a trip like the trajectory of states and use that as your signal of novelty. But then I think like if you've got like a robotic arm with like x degrees of freedom it's like the state space would be too infinite to really like say oh this was significantly this is a significantly different sequential procedure of states and this other thing." }, { "end": 1511.18, "start": 1495.18, "text": " So then the next thing. Yes, I think this is a good transition into their pick breeder experiment. And so anyone who listens to this who hasn't watched their talk the pick breeder is like, they've got these generator neural networks with sets of weights." }, { "end": 1530.18, "start": 1511.18, "text": " And they have like humans go on and they pick two of the generated images to blend together and derive a new image. And so this repeats on and on until it goes from like just like a spiral pattern into like a skull face drawing or a butterfly drawing or something like that." }, { "end": 1546.18, "start": 1530.18, "text": " And they. So this idea is supposed to represent open endedness in an environment and not so it just generally I, I just found it to be really interesting. I think it's one of the things in their talk that you look at it and you're like oh it's interesting what what is going on here." }, { "end": 1559.18, "start": 1546.18, "text": " But it's like the, the mutation is really guided by the human search, which is so complex I feel like I was just wondering what you thought of that pick breeder experiment." }, { "end": 1576.18, "start": 1559.18, "text": " Yeah, it's really cool. 
And it's, it's, it's actually the basis for their entire books I've read the book, the white greatness cannot be planned I believe I've got the title." }, { "end": 1603.18, "start": 1576.18, "text": " But, so that this, they actually they kind of start out with this as a motivational example of what if, what if the only goal is to do something interesting and without any objective so all you do is kind of choose slight variations on the current picture, and you see what you end up with and I thought, I thought it illustrates their points" }, { "end": 1619.18, "start": 1603.18, "text": " extremely well so it illustrates, for example, goal switching is that if you were done with your sequence of image manipulations you could then save it into the database and someone else could pick up from it, and then kind of continue it." }, { "end": 1639.18, "start": 1619.18, "text": " And since every human finds slightly different things interesting right, you could take someone else's final result and say, ah, you know that that kind of looks weird but then you, your modifications to it will be different than that human continued breeding the picture." }, { "end": 1655.18, "start": 1639.18, "text": " So what you end up is, and they show this, for example, one picture ends up being a car, and it had been adapted from an alien face where the eyes of the alien face became the wheels of the car." }, { "end": 1682.18, "start": 1655.18, "text": " And so the first person might have been like, oh, this looks more and more like an alien face, I'm going to make it more like an alien face, and then the second person is like, oh, that kind of looks nice, I'm going to modify it in a different, so they basically give this example of if you have an ambitious goal like getting to a car just from these very simple picture generation networks." }, { "end": 1692.18, "start": 1682.18, "text": " Then the stepping stones to get there have nothing to do with cars, and the people that did it didn't have a car in mind while going there." }, { "end": 1701.18, "start": 1692.18, "text": " And the second thing is that if you try to get a car from the beginning, I believe they've done this, if you try to, you can't." }, { "end": 1714.18, "start": 1701.18, "text": " Like, it's just the sequence of things that you have to go through is so complicated and convoluted that if you were to try to end up with a result, it's basically impossible." }, { "end": 1720.18, "start": 1714.18, "text": " So these kind of illustrate their points very, very nicely." }, { "end": 1729.18, "start": 1720.18, "text": " And I mean, it's a cool experiment in itself, but they use it kind of as a basis metaphor for them going on, jumping off." }, { "end": 1739.18, "start": 1729.18, "text": " Yeah, I just think it's so interesting, this idea that it's like you can't design a car unless you don't try it, unless you just happen to come across that." }, { "end": 1747.18, "start": 1739.18, "text": " It's sort of like I think about like if I was to fire up GarageBand and start trying to make a song, it's like I don't know exactly what it's going to sound like." }, { "end": 1749.18, "start": 1747.18, "text": " I'm just going to kind of explore until I come across something." }, { "end": 1754.18, "start": 1749.18, "text": " So then I was thinking about like with the GANs and the way that the GANs design images." }, { "end": 1759.18, "start": 1754.18, "text": " So this is sort of a design I drew up that I'm curious what you think of." 
}, { "end": 1767.18, "start": 1759.18, "text": " It's like what if the generator just tries to make some object and then a pre-chained classifier says, oh, I think it looks like this maybe." }, { "end": 1770.18, "start": 1767.18, "text": " And then you send it to like a refining network." }, { "end": 1780.18, "start": 1770.18, "text": " So the GAN just sort of searches for objects and then some classifiers are like, oh, I think it looks like sort of like how the pig breeders sort of like how we're like, oh, I think this looks like a skull or whatever." }, { "end": 1783.18, "start": 1780.18, "text": " So I'm going to try to refine it now." }, { "end": 1786.18, "start": 1783.18, "text": " Do you think that would be an interesting thing or?" }, { "end": 1788.18, "start": 1786.18, "text": " You'd have like a two stage process." }, { "end": 1792.18, "start": 1788.18, "text": " First you do something general and then it gets classified." }, { "end": 1800.18, "start": 1792.18, "text": " And then you'd have like a special generator just for the skull class and the special discriminator just for that." }, { "end": 1803.18, "start": 1800.18, "text": " Yeah, I don't see why not." }, { "end": 1804.18, "start": 1803.18, "text": " It might be hard." }, { "end": 1810.18, "start": 1804.18, "text": " It might be hard to get the first generator to be sufficiently diverse." }, { "end": 1820.18, "start": 1810.18, "text": " So you might might need some kind of discriminator signal at the even at the beginning." }, { "end": 1831.18, "start": 1820.18, "text": " So yes, I mean, you're like, how do you think the pig breeder experiment could become fully automated such that there's no human in the loop?" }, { "end": 1839.18, "start": 1831.18, "text": " Yeah, that's that's a thought I had as well, because to me it seems that the kind of, of course, the resulting pictures," }, { "end": 1848.18, "start": 1839.18, "text": " the fact that they look like human objects or recognizable objects is a result from them being being bred by humans." }, { "end": 1853.18, "start": 1848.18, "text": " Like the fact that it looks like a car or a skull or something like this is is very much." }, { "end": 1858.18, "start": 1853.18, "text": " But also, I guess that that could be abstracted in." }, { "end": 1867.18, "start": 1858.18, "text": " We just not expect the results to be like human recognizable objects, but maybe something else." }, { "end": 1877.18, "start": 1867.18, "text": " The much more deeper construction in pig breeder is the fact that the measure of interestingness is provided by the humans." }, { "end": 1885.18, "start": 1877.18, "text": " Right. So the humans, they they click on a picture and then they get variants of that picture and they click on the one that they most like." }, { "end": 1896.18, "start": 1885.18, "text": " This this sense of interestingness of I like this one is that's what's that's the fundamental core that's provided by the humans as an input to the system." }, { "end": 1900.18, "start": 1896.18, "text": " That's what drives the entire thing. That's exactly the same as before." }, { "end": 1911.18, "start": 1900.18, "text": " It's when you write when you teach the robot which two behaviors are close enough, like, oh, no, that's too close to before." }, { "end": 1915.18, "start": 1911.18, "text": " That's not novel. Or yes, that's sufficiently different than before." }, { "end": 1925.18, "start": 1915.18, "text": " That is novel. Right. 
This this sense is somehow you either need to specify it or you need to have the human in the loop to provide it." }, { "end": 1933.18, "start": 1925.18, "text": " I feel it's very, very hard to capture that in an algorithm as as of today." }, { "end": 1944.18, "start": 1933.18, "text": " Yeah, like something I think about is like maybe I'd have like my thousand class image net classifier and then maybe I'd have like like a style classifier," }, { "end": 1949.18, "start": 1944.18, "text": " like a neural style transfer network that I've like chopped off the like some intermediate feature." }, { "end": 1957.18, "start": 1949.18, "text": " I'm going to take that as my style. And so maybe I'm like classifying. I think it's like an airplane. And then I kind of like this style for it." }, { "end": 1961.18, "start": 1957.18, "text": " That's sort of like my like how I would think about trying to automate that." }, { "end": 1967.18, "start": 1961.18, "text": " Like, I don't know, I guess, like, I don't know if I I guess it's interesting." }, { "end": 1970.18, "start": 1967.18, "text": " But I also feel like when you're doing the pick reader, you're kind of like, oh, I'm going to try it now." }, { "end": 1979.18, "start": 1970.18, "text": " Now that I see this vision, I'm going to try to make it like look like that now, I suppose. Like, yeah, yeah." }, { "end": 1984.18, "start": 1979.18, "text": " I think I could mold this into a skull and then you start doing." }, { "end": 1989.18, "start": 1984.18, "text": " Yes, yes, they're very much so they're not they're not advocating random exploration." }, { "end": 1998.18, "start": 1989.18, "text": " What they're advocating is basically if you have an ambitious goal, then you basically don't know the stepping stones." }, { "end": 2004.18, "start": 1998.18, "text": " But from stepping stone to stepping stone, that's where objectives are very handy." }, { "end": 2010.18, "start": 2004.18, "text": " So when you want to say I this already kind of looks like something, I want to make it more like that." }, { "end": 2012.18, "start": 2010.18, "text": " I want to make it more into a skull. Right." }, { "end": 2015.18, "start": 2012.18, "text": " It already has like two circles and kind of the shape." }, { "end": 2022.18, "start": 2015.18, "text": " But I'm going to drive it there. That that is very that can be very objective driven." }, { "end": 2030.18, "start": 2022.18, "text": " But in the grand scheme of things, you don't know. Then once you have the skull, someone else can develop that into an even new thing." }, { "end": 2045.18, "start": 2030.18, "text": " So, yeah, indeed, if if you if you are in kind of a local search in this space, then an objective driven behavior like what you're saying, like I want to make it as much this as possible." }, { "end": 2059.1800000000003, "start": 2045.18, "text": " That's very that's actually a thing they're advocating for. But then from their end result, yeah, you would need to then restart again, do the same thing with like something else." }, { "end": 2062.1800000000003, "start": 2059.1800000000003, "text": " Huh? Yeah, it's really interesting." }, { "end": 2076.18, "start": 2062.18, "text": " Just thinking about, yeah, I think about like the stepping stones and like is how would you define the space of stepping stones to such a to any kind of thing?" 
}, { "end": 2084.18, "start": 2076.18, "text": " I guess it's like you could still design some kind of maybe it's discrete or maybe you have some kind of signal you can get back from it." }, { "end": 2087.18, "start": 2084.18, "text": " And I guess it's just a lot to think about." }, { "end": 2092.18, "start": 2087.18, "text": " Directly, I think they give this they give this great analogy." }, { "end": 2101.18, "start": 2092.18, "text": " I feel like if you have a really ambitious objective, it's like crossing a lake, but the lake is covered in fog." }, { "end": 2108.18, "start": 2101.18, "text": " So you basically can't really see very far, but you can always kind of see the next stepping stones." }, { "end": 2117.18, "start": 2108.18, "text": " Right. And you can then you can then try to go from stepping stone to stepping stone, but you don't know which one to take if there's like a fork." }, { "end": 2123.18, "start": 2117.18, "text": " There's two ways possible. You don't know which one. Right. So all you can do is basically go the most interesting one." }, { "end": 2126.18, "start": 2123.18, "text": " And they relate this to scientific research." }, { "end": 2135.18, "start": 2126.18, "text": " So, yeah, if we want to accomplish some really great research goal, like artificial general intelligence, we don't like we don't know." }, { "end": 2149.18, "start": 2135.18, "text": " But we can see the next stepping stones. Right. We can see, oh, from what we have right now, what interesting combination could we make that still kind of it still kind of makes that's not total garbage." }, { "end": 2156.18, "start": 2149.18, "text": " Right. So in the local search, I can try to say I want to I don't know. I want to do this." }, { "end": 2168.18, "start": 2156.18, "text": " I want to do multiple generators and multi stage and then this thing. Right. This this is kind of a stepping stone and maybe that will then lead to something more interesting and so on." }, { "end": 2185.18, "start": 2168.18, "text": " So, yeah, that's that's kind of how they relate. I like this metaphor of the lake. Yeah. Yeah. I just like could like a meta controller try to put the stones down and then the objective is or is the space too enormous that that idea of having a meta controller guide the stepping stone placement is too big." }, { "end": 2192.18, "start": 2185.18, "text": " The stepping stone placement is just like absurd in that and there's no way that that would work. That's sort of where I'm thinking with this now is like." }, { "end": 2203.18, "start": 2192.18, "text": " So they actually that's that's exactly the question. Right. Of what I so I believe you need such a meta whatever because the space is too large." }, { "end": 2211.18, "start": 2203.18, "text": " You somehow need a way to choose the stepping stones in the first place. Right. You somehow need a way to do this." }, { "end": 2229.18, "start": 2211.18, "text": " Now, what they're saying is that if you're if your goal is really ambitious, then a meta controller that simply wants to reach the goal is bad because right because what we discussed before, you might need a lot of inventions from other fields in order to make goal happen." }, { "end": 2247.18, "start": 2229.18, "text": " And if you simply go your field maximum power towards your goal, that's not going to happen. Now, if your meta controller is actually just something that wants to produce interesting things, then that's actually something they advocate for." 
}, { "end": 2257.18, "start": 2247.18, "text": " That is exactly what their algorithms are trying to capture. They're trying to capture locally. Yeah, we want to get better at a particular thing." }, { "end": 2268.18, "start": 2257.18, "text": " What those particular things are and the order of these that should be novelty driven instead of goal driven." }, { "end": 2275.18, "start": 2268.18, "text": " Yeah, yeah. Yeah. The interesting component. I guess I'm sort of biased towards liking the objective design." }, { "end": 2286.18, "start": 2275.18, "text": " And now I'm thinking like, OK, well, let's abstract those meta controllers one level up and have a meta meta controller and just repeat this and hierarchy makes sense." }, { "end": 2309.18, "start": 2286.18, "text": " And that if you if you if you're if you're a bit cynical, that is what you will also hear out of here out of and they have to argue in the in their book a lot against that like isn't the question isn't the kind of isn't the implementation of a meta controller that just searches for novelty in itself." }, { "end": 2316.18, "start": 2309.18, "text": " And that's the objective again. And then they give some good reasons why actually you don't." }, { "end": 2327.18, "start": 2316.18, "text": " It is different. It's more like a constraint on your search. If you think of natural evolution, for example, it isn't really doesn't really have an objective." }, { "end": 2342.18, "start": 2327.18, "text": " You think reproduction and survival is the objective of natural evolution. It doesn't really the good the good reason they give is the objective has already been fulfilled by the very first organism to ever live." }, { "end": 2350.18, "start": 2342.18, "text": " Right. Why didn't it stop there? Why didn't it stop very first cell? OK, done. We've fulfilled the objective." }, { "end": 2357.18, "start": 2350.18, "text": " It's more of a it's more of an actually a constrained optimization where the constraint is you need to be able to survive." }, { "end": 2366.18, "start": 2357.18, "text": " That's kind of the minimum bar of to being on this planet. And then I'm saying constrained optimization, but it's it's not it's not an optimization." }, { "end": 2370.18, "start": 2366.18, "text": " It's more of like a constraint constraint search." }, { "end": 2385.18, "start": 2370.18, "text": " OK, yeah, I think, yeah, I guess it's just like I don't think I'm closed in this world of trying to think of these constraint problems. And I haven't really like thought more generally about just like exploration as a whole." }, { "end": 2394.18, "start": 2385.18, "text": " But but anyway, so I just wanted to ask you generally like your deep learning researcher, I want to ask like what areas of deep learning are you really interested in right now?" }, { "end": 2404.18, "start": 2394.18, "text": " And what do you think is promising in the near future? So I'm currently working in adversarial examples." }, { "end": 2415.18, "start": 2404.18, "text": " That is a really interesting topic. There's lots of questions still still open, but I'm generally interested in pretty much any anything that is not." }, { "end": 2432.18, "start": 2415.18, "text": " I'm not too interested in like the newest the newest fine technique on getting the latest state of the art numbers, even though that's probably super important for practitioners." }, { "end": 2439.18, "start": 2432.18, "text": " Basically, agreeing more with the authors of this tutorial of that." 
}, { "end": 2455.18, "start": 2439.18, "text": " Let's just try to do interesting things. And to me, these these actually these these areas in terms of open ended, open ended search, open ended learning are very interesting." }, { "end": 2458.18, "start": 2455.18, "text": " I think reinforcement learning still has a long way to go." }, { "end": 2466.18, "start": 2458.18, "text": " I think actually NLP still has a long way to go because I don't believe it's the current models are the end of it." }, { "end": 2469.18, "start": 2466.18, "text": " So I think it's really exciting time." }, { "end": 2476.18, "start": 2469.18, "text": " Yeah, I love thinking about adversarial examples because it definitely flips the CNN idea on its head." }, { "end": 2491.18, "start": 2476.18, "text": " And then I had one other thing about adversarial examples that I'm interested in is there is like an interview with Elon Musk and this Lex Friedman researcher where he asked him about adversarial examples on his self-driving cars." }, { "end": 2501.18, "start": 2491.18, "text": " And he seems dismissive of it. He says he thinks basically you could just average different patches of like test time augmentation to overcome adversarial examples." }, { "end": 2516.18, "start": 2501.18, "text": " So in your research, do you think that like the example where they add the noise mass to the panda and they're like, oh, it's a given now, if they just perturbed it like nine more times, do you think the prediction would average out to pandas?" }, { "end": 2533.18, "start": 2516.18, "text": " That is a very difficult question. And from experience, simply adding noise and then feeding it to the classifier, even if you average after that, usually will defend against adversarial examples to a point." }, { "end": 2538.18, "start": 2533.18, "text": " But it will also degrade your classification performance." }, { "end": 2547.18, "start": 2538.18, "text": " Because so maybe I understood it wrong, but my understanding is I have my input, right? I simply add noise to it and then feed it through the network." }, { "end": 2551.18, "start": 2547.18, "text": " And I could do this many times, right? And then average the prediction." }, { "end": 2568.18, "start": 2551.18, "text": " But usually this will help against adversarial examples, but it will also degrade the accuracy of that classifier. So it might actually make your self-driving car worse in the overall." }, { "end": 2582.18, "start": 2568.18, "text": " Because how often is it going to be attacked against a adversarial example? It's going to be attacked maybe once or twice a year, maybe if it drives by some hacker's house, right?" }, { "end": 2591.18, "start": 2582.18, "text": " Sticker on a stop sign or something. But the rest of the time, I would actually like to retain the best possible classifier." }, { "end": 2605.18, "start": 2591.18, "text": " And if I always have to add noise, then that's not possible. So the research we're doing is actually into the direction of can we retain the original accuracy while still kind of detecting these samples?" }, { "end": 2616.18, "start": 2605.18, "text": " I mean, you somehow have to get a trade off somewhere, but just adding noise isn't the final solution yet." }, { "end": 2624.18, "start": 2616.18, "text": " I was like, so with these adversarial examples, they're only going to make misclassifications like that if it really is adversarially sought after." 
}, { "end": 2632.18, "start": 2624.18, "text": " It's not just like the noise perturbation would be such an enormous space to find it otherwise." }, { "end": 2638.18, "start": 2632.18, "text": " Yes, you really need to try. So it's very unlikely that some random thing." }, { "end": 2651.18, "start": 2638.18, "text": " Of course, these networks can be confused by random noise, but I think one of the self-driving cars once drove into a big white truck because it was large and white, so it thought it was sky." }, { "end": 2660.18, "start": 2651.18, "text": " But other than these failures, you really have to try to find an adversarial example." }, { "end": 2667.18, "start": 2660.18, "text": " Really cool. Yannick, thanks so much for doing this. Anybody watching or listening, definitely check out Yannick's YouTube channel." }, { "end": 2671.18, "start": 2667.18, "text": " He has really great paper summaries and all sorts of things. Thank you." }, { "end": 2700.18, "start": 2671.18, "text": " Thanks so much for having me." } ]
H5vpBCLo74U
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
XLNet: Generalized Autoregressive Pretraining for Language Understanding
[ "Science & Technology" ]
[ "deep learning", "machine learning", "artificial intelligence", "ai", "nlp", "natural language processing", "bert", "xlnet", "transformer", "transformer xl", "attention", "attention layer", "language model", "language modeling", "pretraining", "autoregressive", "autoencoder", "permutation", "google", "carnegie mellon", "cmu", "state of the art", "masked language model" ]
Abstract: With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking. Authors: Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le https://arxiv.org/abs/1906.08237
Hi there, today we're looking at XLNet, Generalized Autoregressive Pre-Training for Language Understanding, by Jilin Yang and other people from Carnegie Mellon University as well as Google Brain. So this is kind of the elephant in the room currently as XLNet is the first model to beat BERT, which was the previous state of the art on a lot of NLP tasks, to beat BERT at a lot of these same NLP tasks. So the chief state of the art result on 18 of 20 tasks I believe, maybe they test more, they outperformed BERT on 20, the chief state of the art on 18, including things as question answering, natural language inference, sentiment analysis and so on. So those are kind of remarkable results and even more remarkable is that the architecture of the network is actually very, fairly similar to BERT. The kind of new introduction is a pre-training, a different pre-training procedure and we'll look into that. So let's actually jump into their main points straight away. What they go into is there are two kinds of currently used pre-training methods for these NLP tasks and both can be understood as kind of language modeling. So language modeling for those of you who don't know is predict the next word in a sequence. So if I give you the sequence here, unsupervised representation learning has been and then I ask you what's next and then you're supposed to say highly. That's language modeling in a nutshell. So what they differentiate are two kinds of language modeling. The first one, they say is autoregressive language modeling. Now what autoregressive language modeling does is exactly what we've looked at. I give you unsupervised learning has been, you're supposed to predict highly. And then in the next step I give you unsupervised representation learning has been highly and you're supposed to predict successful and so on. So in the next step I'm going to give you the entire sentence up until here and you're supposed to predict in. Autoregressive because each token can look at the kind of previous ones in the in the sequence. So when you, sorry you can't see that, when you predict, when you predict you can always kind of autoregressively look at what the previous ones were, including what you've previously predicted. Of course during training this is teacher forced as I said so you put the actual words there. This is autoregressive modeling in contrast to what they call auto encoding. And auto encoding is what BERT does and this is the following. So in contrast to that let's say I have the same sequence unsupervised representation learning has been highly successful in the domain of something. And then I say okay I give you the sequence but I am going to delete this and this. And now I ask you to predict these two. So you can see the task is slightly different as you now have access to all of the sequence basically except the ones that you're asked to predict but you're you kind of asked to predict them not in any order but you're asked to predict them at the same time basically. So at the same time you're asked to predict this word and this word. So the first kind of these autoregressive language modeling has been used by transformer models until BERT and then basically BERT really pushed this auto encoding language model pre training, which made it so successful. And now this paper XLNET wants to like combine the best of both of them. And in order to understand what's the best of both of them. 
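To make the two pre-training styles concrete, here is a minimal sketch on a toy token list; the whitespace tokenization, the choice of two masked positions, and the "[MASK]" placeholder are illustrative assumptions, not the exact recipes BERT or XLNet use.

```python
import random

tokens = "unsupervised representation learning has been highly successful".split()

# Autoregressive (AR) pre-training: at step t, predict token t from tokens 0..t-1
# (teacher forcing: the true previous words are fed in during training).
ar_examples = [(tokens[:t], tokens[t]) for t in range(1, len(tokens))]
# one of these pairs: (['unsupervised', 'representation', 'learning', 'has', 'been'], 'highly')

# Auto-encoding (BERT-style) pre-training: mask a few positions and predict
# all of the masked tokens at once from the corrupted sequence.
random.seed(0)
masked = sorted(random.sample(range(len(tokens)), k=2))
corrupted = ["[MASK]" if i in masked else tok for i, tok in enumerate(tokens)]
ae_example = (corrupted, [tokens[i] for i in masked])

print(ar_examples[-1])
print(ae_example)
```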
So what's good at BERT we've already seen it can actually draw information from all of the context of the words it's trying to predict. But what is the kind of pitfall of BERT and they they actually put this really nicely in an example they gave way further down where they say comparison to BERT. I don't know why that is not like also in the introduction but here they have the sentence New York is a city. Right. New York is a city. This one. And you're asked to predict these two words. And if you now compare BERT to what XLNET does. If. So the context is a city and you're asked to predict New York. What BERT does is it simply masks out the two words and says here please fill in these two words. Now this translates to the kind of objective being separated in the two words such that the prediction of York here is completely independent of the prediction of new. So if you know of any other city that is made of two words for example San Francisco or Los Angeles then these would be as valid and any mixture would be as valid. So you might BERT might end up with laws. York is a city and that will be perfectly fine for BERT because while it's predicting laws is a perfectly fine prediction for the first word of a two word city and York is a perfectly fine prediction for the last word of a two word city. Right. So these are the kind of mistakes that BERT can get into by not being autoregressive by basically predicting all of these tokens at the same time independently of each other. Whereas XLNET what it would do is it would specify an order. Let's say OK first I will predict the word new for the first word new something is a city. And then when I predict York I will actually take into account that I previously have predicted the word new. So that's the main advantage that autoregressive training has over auto encoding. Now what are the pitfalls. The pitfalls are if you have this sentence. If you look at it I'll write it down. New York is a city. If you have the sentence and let's say actually you're not you're not asked to predict New York you're asked to predict the word A. You're asked to predict that in autoregressive style or a city. It's a better example. The two words a city in autoregressive style if you predict the word A you can only ever look at what comes beforehand. Whereas if BERT were to predict A just the word A it would be able to look at all of it. Let's not predict city. So you see the kind of autoregressive model is bound to the order of the factorization of the sentence. So it's bound to the order in which it has to predict the tokens. So here if it's predicting A you can only look at stuff that comes before it because it needs to do it in order. Right. Once it gets to city you can actually look at the entire sentence here. But before that it only ever has partial information about the about the context. So actually it wouldn't be much better if I had said we're trying to predict these two words is and a right. And once I predict so BERT would actually have access to the word city here. Whereas the autoregressive models only have access to the ones before it. I hope that makes it clear. So the main idea in Excel net is where does this order dependence come from in the autoregressive model. The order dependence actually comes from the factorization of the sentence of the of the language model. So in a language model we're actually trying to assess the probability distribution of sentences here. X is a sentence. Right. 
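One hedged way to write down the independence issue from the New York example above (a paraphrase of the argument, not a formula lifted from the paper): BERT scores the two masked words independently,

$$\log p(\text{New},\text{York}\mid\text{is a city}) \approx \log p(\text{New}\mid\text{is a city}) + \log p(\text{York}\mid\text{is a city}),$$

while an autoregressive order that predicts "New" first conditions the second word on it,

$$\log p(\text{New},\text{York}\mid\text{is a city}) = \log p(\text{New}\mid\text{is a city}) + \log p(\text{York}\mid\text{New},\text{is a city}),$$

so a mixture like "Los York" is only penalized under the second form.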
And this can be naturally factorized into a product over the words where the probability of each word is only dependent on the words before it. This is a this is an equal is not an approximation. This is an equality. The probability of a sequence can be decomposed into a product of probabilities like this. Exactly. So this here is exactly what these autoregressive models implement. Each word is predicted from the words before it. Right. There are other kinds of autoregressive models that also do the other direction where here they say OK the probability of a sentence is a product and each word is predicted from the words after it. But it kind of is the same problem. You only ever have access into the one direction. Basically however you define the order of decoding you only ever have access from a given word to what was before it in the order. So the main idea of Excel that is they say hey why don't we consider all possible orderings. Right. I mean that that's kind of a. That's it's an idea. So let's go back to our thing here. They say why don't we consider all possible orderings. So basically what we will do is if this sample comes up New York is a city. All right. What I can do is I can define an ordering. Let's say I always want to predict two words. So typically masks out about 15 percent of its input to be predicted. And here let's say we'll mask out 20 percent which is two words. So of this sequence will mask two words and ask the model to predict it. That's that will be our pre training objective. The first time the sample comes up from the data set I might specify the order just classically. Right. Just one two three four five. All right. I'll predict the last two words. I'll kind of mask them out right. I give the model New York is and then I let it predict a. And then in the next step I'll give it New York is a and let it predict city. Cool. So now the pitfall is the word a here only has access to things before it and not to city itself. City has access to everything. All right. So but then I continue training and the next set time this sample right. It's in my data set. New York is a city. The next time it comes up I simply go for a different order. Let's say one two three four five. Right. So now again I'm asked I'm asking to predict the last two tokens which here are. City and York. So in the first step I would give it is a and new and I will ask it what's here. And I'll ask it to predict city. And then in the second step I'll also give it that and I'll ask it OK. Now what's here given all of that. Right. So new is a city. Right. You're asked to predict the missing word. So that that's pretty. So in the first step it's new is a. And you're asked to predict that the second and then the second step is new is the city and you're asked to predict the first. So now as you can see while predicting city here all of a sudden we didn't no longer in this ordering we don't have access to the word. York. So we'll have to learn to predict city from the rest of the context. Now even more even more if we now decide let's decide on a different ordering again. One two three four five. So now we'll actually first step is to ask. New York city please predict this thing here. Right. Yeah you might train the model to predict is and then the second step you say New York is city. Please predict this. Now you see before before when we were asked to predict the word a it only had access to things to the left of it. Then the very first example. But now it actually has access to the entire context. 
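Written out, the usual left-to-right factorization and the permutation-averaged objective that the "all possible orderings" idea leads to look roughly like this, where $T$ is the sequence length, $\mathcal{Z}_T$ is the set of all length-$T$ orderings, and $z_{<t}$ are the elements that come before step $t$ in a sampled ordering $z$ (hedged notation, but this matches the standard way the objective is stated):

$$\log p_\theta(x) = \sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})$$

$$\max_\theta \;\; \mathbb{E}_{z \sim \mathcal{Z}_T}\Big[ \sum_{t=1}^{T} \log p_\theta\big(x_{z_t} \mid x_{z_{<t}}\big) \Big]$$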
So the the idea is as we sample this data point multiple times and each time we decide on a different ordering to decode for each prediction of each token token sorry will actually have seen many many parts many different variants of the context. And in expectation will actually have seen all of the context just like Bert but will always having have done it in an order regressive way. So basically you get all the advantages of being order regressive namely that you are able to decode step by step while always referring to everything in front of you in the ordering. So the predictions are not independent but you also get the benefit of Bert that it's able to basically look at all of the rest of the context in expectation in order to make this prediction. So this is this is the main idea of of Excel net. They formalize this jump up again they formalize it in saying OK what Bert does here is it actually see it factorized log probability of a sentence into this sum. So the product in the log becomes a sum into the sum of log probabilities of no sorry this is order aggressive confused into the the words conditioned on everything in front of you. Everything in front of them. What Bert does is it actually approximately factorizes the log probability into each word and then everything in the context and everything that's not masked in the context. And this is only an approximate factorization because now you basically dropping away all these masked tokens. And what they do now is they do the same as the AR as the order aggressive models here. They decompose the log probability into a sum of log probabilities over each of the words given all the words before it but not before it in the sequence but before it in an chosen permutation Z. And Z is sampled uniformly from the set of all possible permutations. So in expectation they'll see all of the context. So this is the this is the main thing they show this here in a kind of a picture with. So here is the neural network. This is the input layer. And these are the hidden layers as the attention layers go up and up here you're asked to predict the token. So here you're always asked to predict X3. So there is no there's never going to be any weight here since if you knew X3 you would be able trivially to predict X3. All right so in the in the first example the factorization order chosen at random is 3 2 4 1. Now you're asked to predict X3 and we know OK we should only we should only do this with things that are before it in the permutation order. Well here since X3 is the first in the permutation order we actually don't we don't have anything to go on. We basically ask to predict X3 from scratch as if it were the start of the sentence. So we'll basically tell the model I have a sentence that goes hmm hmm hmm hmm please predict the third. All right it's a hard task. Yeah by the way you're always able to look at this memory thing here. Don't worry about this for now. This is just this is an augmentation they do on top of their idea. This is not the core idea. So OK but now the second time this sample comes up from the training set we decide on a different order. So the order here is 2 4 3 1. Now again we're asked to predict X3 and we're allowed to look at everything before it. So 2 and 4 as you see here there are weights from X2 and X4 into this column that finally is then a ask to predict X3. So this is also this is now an easier task right. You're allowed to look at the word to the left and to the right. 
If you have the following permutation order 1 4 2 3 you're actually allowed to look at all of the other words because X3 is at the end of the permutation order in order to produce X3. So all of these four and the fourth thing is a similar. So all of these four things will appear during training and you will learn from them. So in expectations you basically have seen all variants of different of different versions of the context which which helps a lot apparently. Right so in the in order to achieve this they had to make some architectural changes to the to the model. Namely what you want to do is in a single pass through the model here you not only want to predict one token but you want to do many predictions. This helps training a lot so BERT naturally always does like 15% of the tokens or so what was that like 40 50 tokens. So it masks them and it predicts them all at the same time. Now you would like to do this here as well you would like to predict all at the same time. The ones that you're asked to predict. But of course the problem is for here if you're asked if in this factorization order 2 4 3 1 if you're asked to predict X3 you're allowed to look at X2 and X4. If you're asked to predict X1 you're allowed to look at X2 X4 and X3. So if you only have a single pass through the model the question is do you now input X3 or do you not because the prediction of X3 is not allowed to look at X3. While the prediction of X1 is allowed to look at X3 so they do an architectural change in order to achieve both things so that you can have a single pass through the through the model. But the prediction of each token only depends on the things in front of it in the permutation order. And they do this by having these kind of two stream these masked to stream attention where they basically have not only one hidden representation like in classic transformers but they have at each step two hidden representations. One they call H and one they call G. So the H's are initialized with the embeddings of the tokens and the G's are just initialized randomly and then they get transformed. The point is the H of the next layer is always able to look at everything in front of it including its own its own H basically one layer down its own position one layer down. While the G is only allowed to look at the H's but the H's from before. Right so all the G's here are only ever able to look at the H's from before the current position whereas the H is always allowed here to look at the same but also at the H at the current position. And now at the last layer you simply ask the model to predict the token from just the G. And you can easily see that this results in these model only. Yeah only attending to things before it. The G by the way can also look at the G of the current layer so that's also the thing but it cannot look at the H. So there's never any information flowing from the current word embedding of the token you're trying to predict to the prediction layer. So basically that means the model can't just look like you're not telling the model the answer yet you're still able to feed to predict multiple things in a single pass through the model. Formally this is described here in the attention layer. So they divide how they produce the queries and how they produce the keys and values usually the queries and the keys and values are produced from the same hidden representation but here they produce the keys and values from the H's in both cases. 
But to update the G's they produce the queries from the last layer's G and to produce the H's they produce the queries from the last layer H's. And most importantly when they produce the keys and values the H's they look at here to update the G you're only allowed to look at H's before you in the permutation order. But to update the H you're allowed to look at everything before including the position you're currently at. So that's kind of the it's an engineering solution to the problem introduced by their augmentation. I think it's a pretty neat solution pretty cool. So the rest of the paper here is incorporating ideas from transformer Excel. So transformer Excel is one of these classic transformers that that is like this AR so this autoregressive style of transformer. But that has a few improvements over the classic vanilla transformer and they incorporate a number of things here namely first of all they incorporate this memory thing. So the memory thing allows you to input longer sequences. Let's say our our transformer input length is maximum of five tokens. What the transformer Excel allows you to do is you input five tokens and then you save you do your transformer thing you encode it and you save something into this memory. And then when you input the next five tokens your transformer is then allowed to look at the memory of the last sequence. Right and also update it so that that's kind of these these mem blocks you saw here. So you're always allowed to look at these mem blocks from last sequence and then the hidden representations here of this sequence. They will actually be stored in the mem block for the next sequence. This is kind of a trick to to to carry over information. It's not the the updating the memory part isn't learned with the objective to make the next prediction better but it's just some information kind of gradient free information to provide to the next step. And it apparently helps you can incorporate longer sequences into this transformer Excel. So they take this over and implement this into XL net. They also do relative positioning codings relative segment and codings. I won't go into this too much more here because it's not the main idea basically. So they do experiments and they compare to BERT architecture with the same basically same architecture the same number of parameters and or layers. And they beat BERT in all of these kind of NLP tasks or most of I think they said in 20. They reach new state of the art in 18 NLP tasks. So apparently their method works very well. So what they do here is the last thing I find important is an ablation study of the effects of their improvements. So they were because kind of my problem is I never know. Like they have this new idea. OK, we do these random permutations. But then they also say, oh, and also we include memory from XL net and we do relative positioning codings and so on. So for me, these kind of papers, of course, you reach better numbers, you get a new state of the art. So it's kind of a landmark paper. But to me, a paper should more be like a single thing. So whatever your idea is, this your idea is these orderings and whatever you need to do to make that work. OK, fine. But then why why the additional transformer Excel things? It's really then hard to estimate how much of the improvement comes from your idea and how much of the improvement simply comes from the fact that you already put these other things actually have nothing to do with it. 
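Circling back to the two-stream masking described a little earlier, here is a minimal sketch of the two attention masks for one sampled factorization order; it only builds boolean masks (True = allowed to attend), and the h / g naming follows the explanation above rather than any particular library's API.

```python
import numpy as np

def two_stream_masks(rank):
    """rank[i] = position of token i in the sampled factorization order (0-based)."""
    n = len(rank)
    content_mask = np.zeros((n, n), dtype=bool)  # for the h ("content") stream
    query_mask = np.zeros((n, n), dtype=bool)    # for the g ("query") stream
    for i in range(n):        # token whose representation is being updated
        for j in range(n):    # token it wants to attend to (keys/values come from h)
            if rank[j] < rank[i]:          # strictly earlier in the factorization order
                content_mask[i, j] = True
                query_mask[i, j] = True
            elif j == i:
                content_mask[i, j] = True  # h may see its own token, g may not
    return content_mask, query_mask

# Factorization order 3 -> 2 -> 4 -> 1 from the figure, written as the rank of x1..x4:
rank = [3, 1, 0, 2]
content_mask, query_mask = two_stream_masks(rank)
print(content_mask.astype(int))
print(query_mask.astype(int))
```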
So I appreciate these kind of analysis called ablation studies where they kind of try to take away the memory and these things and kind of look at what it's doing to the model. And you you see here kind of degrades down here as, for example, this column degrades as you take stuff away while still being more kind of more successful than BERT. So that that I would say also. Yeah, here is more unclear, but also kind of seems to degrade a bit while being more successful than BERT. So I appreciate this this kind of really trying to show that your gains really come from your new idea and not from some other stuff. All right. So the last thing I want to mention actually is this thing. So someone claiming or calculating that it costs two hundred and forty five thousand dollars to train the Excel net model the way they describe it in the paper. I'm sure it's going to be brought down because it was brought down that like the time to train was brought down with BERT as well. But this is just I mean, this is crazy. This is just training it. It kind of gives large questions about the state of research and the ability for kind of, let's say, more academic players to participate in research. On the one hand, of course, we like, of course, these companies should be able to do this. And on the other hand, if it seems like currently in some fields, just putting more money on the table will get you a better result. Not this. This actually like this paper is actually a cool idea, but it's still kind of prohibitively expensive to even reproduce it. Yeah, right. So that was that was that for this paper. I hope you enjoyed this and see you.
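As one last illustration, a hedged sketch of the Transformer-XL-style segment memory described earlier, in toy PyTorch; ToyLayer is a stand-in name (real Transformer-XL and XLNet layers add relative positional encodings and more bookkeeping), and the point is just the detached cache that lets each segment attend to the previous segment's hidden states without backpropagating through them.

```python
import torch
import torch.nn as nn

class ToyLayer(nn.Module):
    """Stand-in for one layer: the current segment attends over [memory; segment]."""
    def __init__(self, d_model=16, n_heads=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads)

    def forward(self, hidden, context):
        out, _ = self.attn(hidden, context, context)  # queries only from the current segment
        return out + hidden

def forward_with_memory(layers, segments, mem_len=5):
    mems = [None] * len(layers)          # one cached block per layer
    outputs = []
    for seg in segments:                 # seg: (seg_len, batch, d_model) embeddings
        hidden, new_mems = seg, []
        for i, layer in enumerate(layers):
            new_mems.append(hidden.detach()[-mem_len:])   # cache without gradients
            context = hidden if mems[i] is None else torch.cat([mems[i], hidden], dim=0)
            hidden = layer(hidden, context)
        mems = new_mems
        outputs.append(hidden)
    return outputs

layers = nn.ModuleList([ToyLayer(), ToyLayer()])
segments = [torch.randn(5, 1, 16) for _ in range(3)]     # three 5-token segments
print([h.shape for h in forward_with_memory(layers, segments)])
```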
[ { "end": 14, "start": 0, "text": " Hi there, today we're looking at XLNet, Generalized Autoregressive Pre-Training for Language Understanding, by Jilin Yang and other people from Carnegie Mellon University as well as Google Brain." }, { "end": 30, "start": 14, "text": " So this is kind of the elephant in the room currently as XLNet is the first model to beat BERT, which was the previous state of the art on a lot of NLP tasks, to beat BERT at a lot of these same NLP tasks." }, { "end": 49, "start": 30, "text": " So the chief state of the art result on 18 of 20 tasks I believe, maybe they test more, they outperformed BERT on 20, the chief state of the art on 18, including things as question answering, natural language inference, sentiment analysis and so on." }, { "end": 68, "start": 49, "text": " So those are kind of remarkable results and even more remarkable is that the architecture of the network is actually very, fairly similar to BERT. The kind of new introduction is a pre-training, a different pre-training procedure and we'll look into that." }, { "end": 84, "start": 68, "text": " So let's actually jump into their main points straight away. What they go into is there are two kinds of currently used pre-training methods for these NLP tasks and both can be understood as kind of language modeling." }, { "end": 102, "start": 84, "text": " So language modeling for those of you who don't know is predict the next word in a sequence. So if I give you the sequence here, unsupervised representation learning has been and then I ask you what's next and then you're supposed to say highly." }, { "end": 115, "start": 102, "text": " That's language modeling in a nutshell. So what they differentiate are two kinds of language modeling. The first one, they say is autoregressive language modeling." }, { "end": 124, "start": 115, "text": " Now what autoregressive language modeling does is exactly what we've looked at. I give you unsupervised learning has been, you're supposed to predict highly." }, { "end": 134, "start": 124, "text": " And then in the next step I give you unsupervised representation learning has been highly and you're supposed to predict successful and so on." }, { "end": 140, "start": 134, "text": " So in the next step I'm going to give you the entire sentence up until here and you're supposed to predict in." }, { "end": 164, "start": 140, "text": " Autoregressive because each token can look at the kind of previous ones in the in the sequence. So when you, sorry you can't see that, when you predict, when you predict you can always kind of autoregressively look at what the previous ones were, including what you've previously predicted." }, { "end": 178, "start": 164, "text": " Of course during training this is teacher forced as I said so you put the actual words there. This is autoregressive modeling in contrast to what they call auto encoding." }, { "end": 197, "start": 178, "text": " And auto encoding is what BERT does and this is the following. So in contrast to that let's say I have the same sequence unsupervised representation learning has been highly successful in the domain of something." }, { "end": 211, "start": 197, "text": " And then I say okay I give you the sequence but I am going to delete this and this. And now I ask you to predict these two." 
}, { "end": 229, "start": 211, "text": " So you can see the task is slightly different as you now have access to all of the sequence basically except the ones that you're asked to predict but you're you kind of asked to predict them not in any order but you're asked to predict them at the same time basically." }, { "end": 234, "start": 229, "text": " So at the same time you're asked to predict this word and this word." }, { "end": 256, "start": 234, "text": " So the first kind of these autoregressive language modeling has been used by transformer models until BERT and then basically BERT really pushed this auto encoding language model pre training, which made it so successful." }, { "end": 264, "start": 256, "text": " And now this paper XLNET wants to like combine the best of both of them." }, { "end": 278, "start": 264, "text": " And in order to understand what's the best of both of them. So what's good at BERT we've already seen it can actually draw information from all of the context of the words it's trying to predict." }, { "end": 290, "start": 278, "text": " But what is the kind of pitfall of BERT and they they actually put this really nicely in an example they gave way further down where they say comparison to BERT." }, { "end": 299, "start": 290, "text": " I don't know why that is not like also in the introduction but here they have the sentence New York is a city." }, { "end": 311, "start": 299, "text": " Right. New York is a city. This one. And you're asked to predict these two words. And if you now compare BERT to what XLNET does." }, { "end": 323, "start": 311, "text": " If. So the context is a city and you're asked to predict New York. What BERT does is it simply masks out the two words and says here please fill in these two words." }, { "end": 336, "start": 323, "text": " Now this translates to the kind of objective being separated in the two words such that the prediction of York here is completely independent of the prediction of new." }, { "end": 350, "start": 336, "text": " So if you know of any other city that is made of two words for example San Francisco or Los Angeles then these would be as valid and any mixture would be as valid." }, { "end": 369, "start": 350, "text": " So you might BERT might end up with laws. York is a city and that will be perfectly fine for BERT because while it's predicting laws is a perfectly fine prediction for the first word of a two word city and York is a perfectly fine prediction for the last word of a two word city." }, { "end": 381, "start": 369, "text": " Right. So these are the kind of mistakes that BERT can get into by not being autoregressive by basically predicting all of these tokens at the same time independently of each other." }, { "end": 392, "start": 381, "text": " Whereas XLNET what it would do is it would specify an order. Let's say OK first I will predict the word new for the first word new something is a city." }, { "end": 399, "start": 392, "text": " And then when I predict York I will actually take into account that I previously have predicted the word new." }, { "end": 408, "start": 399, "text": " So that's the main advantage that autoregressive training has over auto encoding." }, { "end": 414, "start": 408, "text": " Now what are the pitfalls. The pitfalls are if you have this sentence." }, { "end": 424, "start": 414, "text": " If you look at it I'll write it down. New York is a city." 
}, { "end": 436, "start": 424, "text": " If you have the sentence and let's say actually you're not you're not asked to predict New York you're asked to predict the word A." }, { "end": 444, "start": 436, "text": " You're asked to predict that in autoregressive style or a city. It's a better example." }, { "end": 452, "start": 444, "text": " The two words a city in autoregressive style if you predict the word A you can only ever look at what comes beforehand." }, { "end": 459, "start": 452, "text": " Whereas if BERT were to predict A just the word A it would be able to look at all of it." }, { "end": 472, "start": 459, "text": " Let's not predict city. So you see the kind of autoregressive model is bound to the order of the factorization of the sentence." }, { "end": 476, "start": 472, "text": " So it's bound to the order in which it has to predict the tokens." }, { "end": 482, "start": 476, "text": " So here if it's predicting A you can only look at stuff that comes before it because it needs to do it in order." }, { "end": 486, "start": 482, "text": " Right. Once it gets to city you can actually look at the entire sentence here." }, { "end": 494, "start": 486, "text": " But before that it only ever has partial information about the about the context." }, { "end": 504, "start": 494, "text": " So actually it wouldn't be much better if I had said we're trying to predict these two words is and a right." }, { "end": 510, "start": 504, "text": " And once I predict so BERT would actually have access to the word city here." }, { "end": 518, "start": 510, "text": " Whereas the autoregressive models only have access to the ones before it. I hope that makes it clear." }, { "end": 527, "start": 518, "text": " So the main idea in Excel net is where does this order dependence come from in the autoregressive model." }, { "end": 535, "start": 527, "text": " The order dependence actually comes from the factorization of the sentence of the of the language model." }, { "end": 544, "start": 535, "text": " So in a language model we're actually trying to assess the probability distribution of sentences here." }, { "end": 560, "start": 544, "text": " X is a sentence. Right. And this can be naturally factorized into a product over the words where the probability of each word is only dependent on the words before it." }, { "end": 571, "start": 560, "text": " This is a this is an equal is not an approximation. This is an equality. The probability of a sequence can be decomposed into a product of probabilities like this." }, { "end": 577, "start": 571, "text": " Exactly. So this here is exactly what these autoregressive models implement." }, { "end": 584, "start": 577, "text": " Each word is predicted from the words before it. Right." }, { "end": 595, "start": 584, "text": " There are other kinds of autoregressive models that also do the other direction where here they say OK the probability of a sentence is a product and each word is predicted from the words after it." }, { "end": 601, "start": 595, "text": " But it kind of is the same problem. You only ever have access into the one direction." }, { "end": 612, "start": 601, "text": " Basically however you define the order of decoding you only ever have access from a given word to what was before it in the order." }, { "end": 623, "start": 612, "text": " So the main idea of Excel that is they say hey why don't we consider all possible orderings." }, { "end": 627, "start": 623, "text": " Right. I mean that that's kind of a." 
}, { "end": 632, "start": 627, "text": " That's it's an idea. So let's go back to our thing here." }, { "end": 642, "start": 632, "text": " They say why don't we consider all possible orderings. So basically what we will do is if this sample comes up New York is a city. All right." }, { "end": 649, "start": 642, "text": " What I can do is I can define an ordering. Let's say I always want to predict two words." }, { "end": 656, "start": 649, "text": " So typically masks out about 15 percent of its input to be predicted." }, { "end": 664, "start": 656, "text": " And here let's say we'll mask out 20 percent which is two words. So of this sequence will mask two words and ask the model to predict it." }, { "end": 672, "start": 664, "text": " That's that will be our pre training objective. The first time the sample comes up from the data set I might specify the order just classically." }, { "end": 679, "start": 672, "text": " Right. Just one two three four five. All right. I'll predict the last two words." }, { "end": 688, "start": 679, "text": " I'll kind of mask them out right. I give the model New York is and then I let it predict a." }, { "end": 695, "start": 688, "text": " And then in the next step I'll give it New York is a and let it predict city. Cool." }, { "end": 703, "start": 695, "text": " So now the pitfall is the word a here only has access to things before it and not to city itself." }, { "end": 711, "start": 703, "text": " City has access to everything. All right. So but then I continue training and the next set time this sample right." }, { "end": 718, "start": 711, "text": " It's in my data set. New York is a city. The next time it comes up I simply go for a different order." }, { "end": 732, "start": 718, "text": " Let's say one two three four five. Right. So now again I'm asked I'm asking to predict the last two tokens which here are." }, { "end": 743, "start": 732, "text": " City and York. So in the first step I would give it is a and new and I will ask it what's here." }, { "end": 749, "start": 743, "text": " And I'll ask it to predict city. And then in the second step I'll also give it that and I'll ask it OK." }, { "end": 754, "start": 749, "text": " Now what's here given all of that. Right. So new is a city. Right." }, { "end": 763, "start": 754, "text": " You're asked to predict the missing word. So that that's pretty. So in the first step it's new is a." }, { "end": 774, "start": 763, "text": " And you're asked to predict that the second and then the second step is new is the city and you're asked to predict the first." }, { "end": 783, "start": 774, "text": " So now as you can see while predicting city here all of a sudden we didn't no longer in this ordering we don't have access to the word." }, { "end": 788, "start": 783, "text": " York. So we'll have to learn to predict city from the rest of the context." }, { "end": 796, "start": 788, "text": " Now even more even more if we now decide let's decide on a different ordering again." }, { "end": 808, "start": 796, "text": " One two three four five. So now we'll actually first step is to ask." }, { "end": 814, "start": 808, "text": " New York city please predict this thing here." }, { "end": 824, "start": 814, "text": " Right. Yeah you might train the model to predict is and then the second step you say New York is city." }, { "end": 833, "start": 824, "text": " Please predict this. Now you see before before when we were asked to predict the word a it only had access to things to the left of it." 
}, { "end": 840, "start": 833, "text": " Then the very first example. But now it actually has access to the entire context." }, { "end": 860, "start": 840, "text": " So the the idea is as we sample this data point multiple times and each time we decide on a different ordering to decode for each prediction of each token token sorry will actually have seen many many parts many different variants of the context." }, { "end": 870, "start": 860, "text": " And in expectation will actually have seen all of the context just like Bert but will always having have done it in an order regressive way." }, { "end": 884, "start": 870, "text": " So basically you get all the advantages of being order regressive namely that you are able to decode step by step while always referring to everything in front of you in the ordering." }, { "end": 898, "start": 884, "text": " So the predictions are not independent but you also get the benefit of Bert that it's able to basically look at all of the rest of the context in expectation in order to make this prediction." }, { "end": 903, "start": 898, "text": " So this is this is the main idea of of Excel net." }, { "end": 917, "start": 903, "text": " They formalize this jump up again they formalize it in saying OK what Bert does here is it actually see it factorized log probability of a sentence into this sum." }, { "end": 932, "start": 917, "text": " So the product in the log becomes a sum into the sum of log probabilities of no sorry this is order aggressive confused into the the words conditioned on everything in front of you." }, { "end": 934, "start": 932, "text": " Everything in front of them." }, { "end": 950, "start": 934, "text": " What Bert does is it actually approximately factorizes the log probability into each word and then everything in the context and everything that's not masked in the context." }, { "end": 958, "start": 950, "text": " And this is only an approximate factorization because now you basically dropping away all these masked tokens." }, { "end": 969, "start": 958, "text": " And what they do now is they do the same as the AR as the order aggressive models here." }, { "end": 986, "start": 969, "text": " They decompose the log probability into a sum of log probabilities over each of the words given all the words before it but not before it in the sequence but before it in an chosen permutation Z." }, { "end": 992, "start": 986, "text": " And Z is sampled uniformly from the set of all possible permutations." }, { "end": 995, "start": 992, "text": " So in expectation they'll see all of the context." }, { "end": 1004, "start": 995, "text": " So this is the this is the main thing they show this here in a kind of a picture with." }, { "end": 1006, "start": 1004, "text": " So here is the neural network." }, { "end": 1008, "start": 1006, "text": " This is the input layer." }, { "end": 1017, "start": 1008, "text": " And these are the hidden layers as the attention layers go up and up here you're asked to predict the token." }, { "end": 1020, "start": 1017, "text": " So here you're always asked to predict X3." }, { "end": 1030, "start": 1020, "text": " So there is no there's never going to be any weight here since if you knew X3 you would be able trivially to predict X3." }, { "end": 1040, "start": 1030, "text": " All right so in the in the first example the factorization order chosen at random is 3 2 4 1." 
}, { "end": 1049, "start": 1040, "text": " Now you're asked to predict X3 and we know OK we should only we should only do this with things that are before it in the permutation order." }, { "end": 1058, "start": 1049, "text": " Well here since X3 is the first in the permutation order we actually don't we don't have anything to go on." }, { "end": 1063, "start": 1058, "text": " We basically ask to predict X3 from scratch as if it were the start of the sentence." }, { "end": 1072, "start": 1063, "text": " So we'll basically tell the model I have a sentence that goes hmm hmm hmm hmm please predict the third." }, { "end": 1075, "start": 1072, "text": " All right it's a hard task." }, { "end": 1078, "start": 1075, "text": " Yeah by the way you're always able to look at this memory thing here." }, { "end": 1081, "start": 1078, "text": " Don't worry about this for now." }, { "end": 1086, "start": 1081, "text": " This is just this is an augmentation they do on top of their idea." }, { "end": 1088, "start": 1086, "text": " This is not the core idea." }, { "end": 1093, "start": 1088, "text": " So OK but now the second time this sample comes up from the training set we decide on a different order." }, { "end": 1097, "start": 1093, "text": " So the order here is 2 4 3 1." }, { "end": 1102, "start": 1097, "text": " Now again we're asked to predict X3 and we're allowed to look at everything before it." }, { "end": 1114, "start": 1102, "text": " So 2 and 4 as you see here there are weights from X2 and X4 into this column that finally is then a ask to predict X3." }, { "end": 1117, "start": 1114, "text": " So this is also this is now an easier task right." }, { "end": 1123, "start": 1117, "text": " You're allowed to look at the word to the left and to the right." }, { "end": 1131, "start": 1123, "text": " If you have the following permutation order 1 4 2 3 you're actually allowed to look at all of the other words" }, { "end": 1136, "start": 1131, "text": " because X3 is at the end of the permutation order in order to produce X3." }, { "end": 1140, "start": 1136, "text": " So all of these four and the fourth thing is a similar." }, { "end": 1145, "start": 1140, "text": " So all of these four things will appear during training and you will learn from them." }, { "end": 1157, "start": 1145, "text": " So in expectations you basically have seen all variants of different of different versions of the context which which helps a lot apparently." }, { "end": 1169, "start": 1157, "text": " Right so in the in order to achieve this they had to make some architectural changes to the to the model." }, { "end": 1178, "start": 1169, "text": " Namely what you want to do is in a single pass through the model here you not only want to predict one token but you want to do many predictions." }, { "end": 1188, "start": 1178, "text": " This helps training a lot so BERT naturally always does like 15% of the tokens or so what was that like 40 50 tokens." }, { "end": 1192, "start": 1188, "text": " So it masks them and it predicts them all at the same time." }, { "end": 1197, "start": 1192, "text": " Now you would like to do this here as well you would like to predict all at the same time." }, { "end": 1199, "start": 1197, "text": " The ones that you're asked to predict." }, { "end": 1213, "start": 1199, "text": " But of course the problem is for here if you're asked if in this factorization order 2 4 3 1 if you're asked to predict X3 you're allowed to look at X2 and X4." 
}, { "end": 1218, "start": 1213, "text": " If you're asked to predict X1 you're allowed to look at X2 X4 and X3." }, { "end": 1231, "start": 1218, "text": " So if you only have a single pass through the model the question is do you now input X3 or do you not because the prediction of X3 is not allowed to look at X3." }, { "end": 1244, "start": 1231, "text": " While the prediction of X1 is allowed to look at X3 so they do an architectural change in order to achieve both things so that you can have a single pass through the through the model." }, { "end": 1252, "start": 1244, "text": " But the prediction of each token only depends on the things in front of it in the permutation order." }, { "end": 1269, "start": 1252, "text": " And they do this by having these kind of two stream these masked to stream attention where they basically have not only one hidden representation like in classic transformers but they have at each step two hidden representations." }, { "end": 1272, "start": 1269, "text": " One they call H and one they call G." }, { "end": 1283, "start": 1272, "text": " So the H's are initialized with the embeddings of the tokens and the G's are just initialized randomly and then they get transformed." }, { "end": 1296, "start": 1283, "text": " The point is the H of the next layer is always able to look at everything in front of it including its own its own H basically one layer down its own position one layer down." }, { "end": 1307, "start": 1296, "text": " While the G is only allowed to look at the H's but the H's from before." }, { "end": 1323, "start": 1307, "text": " Right so all the G's here are only ever able to look at the H's from before the current position whereas the H is always allowed here to look at the same but also at the H at the current position." }, { "end": 1331, "start": 1323, "text": " And now at the last layer you simply ask the model to predict the token from just the G." }, { "end": 1338, "start": 1331, "text": " And you can easily see that this results in these model only." }, { "end": 1345, "start": 1338, "text": " Yeah only attending to things before it." }, { "end": 1355, "start": 1345, "text": " The G by the way can also look at the G of the current layer so that's also the thing but it cannot look at the H." }, { "end": 1368, "start": 1355, "text": " So there's never any information flowing from the current word embedding of the token you're trying to predict to the prediction layer." }, { "end": 1379, "start": 1368, "text": " So basically that means the model can't just look like you're not telling the model the answer yet you're still able to feed to predict multiple things in a single pass through the model." }, { "end": 1385, "start": 1379, "text": " Formally this is described here in the attention layer." }, { "end": 1403, "start": 1385, "text": " So they divide how they produce the queries and how they produce the keys and values usually the queries and the keys and values are produced from the same hidden representation but here they produce the keys and values from the H's in both cases." }, { "end": 1415, "start": 1403, "text": " But to update the G's they produce the queries from the last layer's G and to produce the H's they produce the queries from the last layer H's." }, { "end": 1427, "start": 1415, "text": " And most importantly when they produce the keys and values the H's they look at here to update the G you're only allowed to look at H's before you in the permutation order." 
}, { "end": 1434, "start": 1427, "text": " But to update the H you're allowed to look at everything before including the position you're currently at." }, { "end": 1442, "start": 1434, "text": " So that's kind of the it's an engineering solution to the problem introduced by their augmentation." }, { "end": 1446, "start": 1442, "text": " I think it's a pretty neat solution pretty cool." }, { "end": 1457, "start": 1446, "text": " So the rest of the paper here is incorporating ideas from transformer Excel." }, { "end": 1466, "start": 1457, "text": " So transformer Excel is one of these classic transformers that that is like this AR so this autoregressive style of transformer." }, { "end": 1478, "start": 1466, "text": " But that has a few improvements over the classic vanilla transformer and they incorporate a number of things here namely first of all they incorporate this memory thing." }, { "end": 1482, "start": 1478, "text": " So the memory thing allows you to input longer sequences." }, { "end": 1490, "start": 1482, "text": " Let's say our our transformer input length is maximum of five tokens." }, { "end": 1504, "start": 1490, "text": " What the transformer Excel allows you to do is you input five tokens and then you save you do your transformer thing you encode it and you save something into this memory." }, { "end": 1514, "start": 1504, "text": " And then when you input the next five tokens your transformer is then allowed to look at the memory of the last sequence." }, { "end": 1519, "start": 1514, "text": " Right and also update it so that that's kind of these these mem blocks you saw here." }, { "end": 1527, "start": 1519, "text": " So you're always allowed to look at these mem blocks from last sequence and then the hidden representations here of this sequence." }, { "end": 1531, "start": 1527, "text": " They will actually be stored in the mem block for the next sequence." }, { "end": 1537, "start": 1531, "text": " This is kind of a trick to to to carry over information." }, { "end": 1554, "start": 1537, "text": " It's not the the updating the memory part isn't learned with the objective to make the next prediction better but it's just some information kind of gradient free information to provide to the next step." }, { "end": 1559, "start": 1554, "text": " And it apparently helps you can incorporate longer sequences into this transformer Excel." }, { "end": 1563, "start": 1559, "text": " So they take this over and implement this into XL net." }, { "end": 1569, "start": 1563, "text": " They also do relative positioning codings relative segment and codings." }, { "end": 1577, "start": 1569, "text": " I won't go into this too much more here because it's not the main idea basically." }, { "end": 1588, "start": 1577, "text": " So they do experiments and they compare to BERT architecture with the same basically same architecture the same number of parameters and or layers." }, { "end": 1599, "start": 1588, "text": " And they beat BERT in all of these kind of NLP tasks or most of I think they said in 20." }, { "end": 1603, "start": 1599, "text": " They reach new state of the art in 18 NLP tasks." }, { "end": 1608, "start": 1603, "text": " So apparently their method works very well." }, { "end": 1618, "start": 1608, "text": " So what they do here is the last thing I find important is an ablation study of the effects of their improvements." }, { "end": 1624, "start": 1618, "text": " So they were because kind of my problem is I never know." 
}, { "end": 1627, "start": 1624, "text": " Like they have this new idea. OK, we do these random permutations." }, { "end": 1637, "start": 1627, "text": " But then they also say, oh, and also we include memory from XL net and we do relative positioning codings and so on." }, { "end": 1642, "start": 1637, "text": " So for me, these kind of papers, of course, you reach better numbers, you get a new state of the art." }, { "end": 1644, "start": 1642, "text": " So it's kind of a landmark paper." }, { "end": 1649, "start": 1644, "text": " But to me, a paper should more be like a single thing." }, { "end": 1655, "start": 1649, "text": " So whatever your idea is, this your idea is these orderings and whatever you need to do to make that work." }, { "end": 1663, "start": 1655, "text": " OK, fine. But then why why the additional transformer Excel things?" }, { "end": 1674, "start": 1663, "text": " It's really then hard to estimate how much of the improvement comes from your idea and how much of the improvement simply comes from the fact that you already put these other things actually have nothing to do with it." }, { "end": 1687, "start": 1674, "text": " So I appreciate these kind of analysis called ablation studies where they kind of try to take away the memory and these things and kind of look at what it's doing to the model." }, { "end": 1704, "start": 1687, "text": " And you you see here kind of degrades down here as, for example, this column degrades as you take stuff away while still being more kind of more successful than BERT." }, { "end": 1716, "start": 1704, "text": " So that that I would say also. Yeah, here is more unclear, but also kind of seems to degrade a bit while being more successful than BERT." }, { "end": 1727, "start": 1716, "text": " So I appreciate this this kind of really trying to show that your gains really come from your new idea and not from some other stuff." }, { "end": 1733, "start": 1727, "text": " All right. So the last thing I want to mention actually is this thing." }, { "end": 1746, "start": 1733, "text": " So someone claiming or calculating that it costs two hundred and forty five thousand dollars to train the Excel net model the way they describe it in the paper." }, { "end": 1753, "start": 1746, "text": " I'm sure it's going to be brought down because it was brought down that like the time to train was brought down with BERT as well." }, { "end": 1759, "start": 1753, "text": " But this is just I mean, this is crazy. This is just training it." }, { "end": 1771, "start": 1759, "text": " It kind of gives large questions about the state of research and the ability for kind of, let's say, more academic players to participate in research." }, { "end": 1777, "start": 1771, "text": " On the one hand, of course, we like, of course, these companies should be able to do this." }, { "end": 1788, "start": 1777, "text": " And on the other hand, if it seems like currently in some fields, just putting more money on the table will get you a better result." }, { "end": 1797, "start": 1788, "text": " Not this. This actually like this paper is actually a cool idea, but it's still kind of prohibitively expensive to even reproduce it." }, { "end": 1801, "start": 1797, "text": " Yeah, right. So that was that was that for this paper." }, { "end": 1819, "start": 1801, "text": " I hope you enjoyed this and see you." } ]
hkw-WDBipgo
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Talking to companies at ICML19
[ "Science & Technology" ]
[ "machine learning", "conference", "ai", "artificial intelligence", "industry", "academia", "deep learning", "hardware", "lidar", "graphcore" ]
A short rant on sponsor companies at ICML and how to talk to them.
All right, I quickly want to talk about kind of interaction with corporation company reps at these conferences, because to me it's still a bit of a secret or a bit of a not really clear of what to do. There's very different kinds of companies at these conferences, so some companies I feel are there to basically show off their technology, kind of wanting to use it. One example is for example Graphcore, the kind of new kid on the block for AI hardware in that they claim they have a chip specifically designed for the types of operations that machine learning applications do. So even more specialized than a GPU, and also they claim they are faster for equivalent kind of money spending than an Nvidia GPU, like a classic GPU. So basically you get much more bang for the buck. For now they just offer a cloud solution, I believe, and they're going to sell their cards through Dell. The way it works is they have kind of a low level compiler that will compile your model to these cards, and for now you can interact with it through C++, and then TensorFlow will come later, something like this. The thing about their card is that they have an extremely large memory right next to the compute unit, this would be kind of your traditional level one cache. That means that you get much faster access technically to your local variables, but then they don't have any kind of RAM, which means their entire card only has somewhat like 300 megabytes of memory, but they claim they can just basically distribute, if you have a large model you can distribute that over many cards, and then you'll get basically the speed up of the cards without having to sacrifice a model size. Another company that shows off really cool technology is a company that does LIDAR, and I forget the name right now, but when I try to look it up, they do a LIDAR sensor basically that is super tiny, and it costs a fraction of like a traditional LIDAR sensor. So I think they said theirs cost about $12,000, and it's really tiny, and has a couple of advantages compared to traditional sensors. As far as I understand, their lasers are mounted on the same chip, so they always point in the same direction, which reduces a lot of inaccuracies. I guess people would be interested in that, for self-driving cars and so on. These are kind of the hardware demonstrations that I've seen. Then there's other things, like there is a wellness center where you can get a massage, which is sponsored by the big companies, which is pretty nice, but I'm probably too much. I don't like these kinds of things too much. Maybe I'm just socially too awkward. For some companies, I feel that they're just there to recruit, and they don't really want to talk about what they do too much. So an indication of this would be a company where basically all of the reps at the booth are recruiters, so non-technical recruiters, that basically just kind of tell you what you can do as a career and not really what the company does as a whole. I never really know what to talk about then, because I feel like most people are interested and drawn towards interesting work, and if that comes with good working conditions, then that's a plus, but I don't feel for many people that that is the most important thing. So I could be wrong, and probably it's good that for some people it is, because otherwise everyone would take my jobs, the ones that I like. These companies will usually, if there is an engineer, they will not talk about too much what they do, like, oh, it's company secret and so on. 
So the funniest one was actually the NSA. Talking to the NSA was kind of painful because you kind of ask them, so what do you do? And they're like, yeah, machine learning. Because what I want to know as a researcher is, is there anything I could do there that I couldn't do anywhere else? So is there any unique problems that the NSA faces that actually demand new research, like demand new machine learning methods or some kind of change? So I ask this, and they're like, yes, there are problems like this. And you ask, like, which problems? And they're like, yeah, there are problems. We can't tell you. So everything's basically whatever. So I made it a game to ask them more specific questions and watch them, like, oh, this is classified. So yeah, if you're here, definitely check them out. It's fun. It's just fun to talk to them. Yeah, I feel to most companies, they're really interesting. I don't know more than half of them. So just going up, ask them what they do, kind of just get an overview over the landscape of what's needed currently in machine learning research. I think that's really useful, because as an academic, I tend to be very disconnected from the industry side of things and from what people actually need or want in practice. So talking to all these companies is really helpful to get an overview over that. Yeah, so but if you know a better way, I know some people are much more successful than me talking to companies at conferences. I'm definitely not the best at this. And yeah, if you have a better strategy, let me know. So I'm pretty happy so far. All right. That was that. See ya.
[ { "end": 11.76, "start": 0, "text": " All right, I quickly want to talk about kind of interaction with corporation company reps" }, { "end": 18.04, "start": 11.76, "text": " at these conferences, because to me it's still a bit of a secret or a bit of a not really" }, { "end": 20.64, "start": 18.04, "text": " clear of what to do." }, { "end": 26.92, "start": 20.64, "text": " There's very different kinds of companies at these conferences, so some companies I" }, { "end": 35, "start": 26.92, "text": " feel are there to basically show off their technology, kind of wanting to use it." }, { "end": 44.88, "start": 35, "text": " One example is for example Graphcore, the kind of new kid on the block for AI hardware" }, { "end": 51.2, "start": 44.88, "text": " in that they claim they have a chip specifically designed for the types of operations that" }, { "end": 54.760000000000005, "start": 51.2, "text": " machine learning applications do." }, { "end": 64.16, "start": 54.76, "text": " So even more specialized than a GPU, and also they claim they are faster for equivalent" }, { "end": 70, "start": 64.16, "text": " kind of money spending than an Nvidia GPU, like a classic GPU." }, { "end": 74.44, "start": 70, "text": " So basically you get much more bang for the buck." }, { "end": 80.72, "start": 74.44, "text": " For now they just offer a cloud solution, I believe, and they're going to sell their" }, { "end": 84.2, "start": 80.72, "text": " cards through Dell." }, { "end": 90.2, "start": 84.2, "text": " The way it works is they have kind of a low level compiler that will compile your model" }, { "end": 98.2, "start": 90.2, "text": " to these cards, and for now you can interact with it through C++, and then TensorFlow will" }, { "end": 100.32000000000001, "start": 98.2, "text": " come later, something like this." }, { "end": 108.96000000000001, "start": 100.32000000000001, "text": " The thing about their card is that they have an extremely large memory right next to the" }, { "end": 120.11999999999999, "start": 108.96, "text": " compute unit, this would be kind of your traditional level one cache." }, { "end": 125.08, "start": 120.11999999999999, "text": " That means that you get much faster access technically to your local variables, but then" }, { "end": 132.51999999999998, "start": 125.08, "text": " they don't have any kind of RAM, which means their entire card only has somewhat like 300" }, { "end": 137.72, "start": 132.51999999999998, "text": " megabytes of memory, but they claim they can just basically distribute, if you have a large" }, { "end": 145.6, "start": 137.72, "text": " model you can distribute that over many cards, and then you'll get basically the speed up" }, { "end": 152.2, "start": 145.6, "text": " of the cards without having to sacrifice a model size." }, { "end": 161.24, "start": 152.2, "text": " Another company that shows off really cool technology is a company that does LIDAR, and" }, { "end": 170.60000000000002, "start": 161.24, "text": " I forget the name right now, but when I try to look it up, they do a LIDAR sensor basically" }, { "end": 179.56, "start": 170.60000000000002, "text": " that is super tiny, and it costs a fraction of like a traditional LIDAR sensor." }, { "end": 188.20000000000002, "start": 179.56, "text": " So I think they said theirs cost about $12,000, and it's really tiny, and has a couple of" }, { "end": 192.16, "start": 188.2, "text": " advantages compared to traditional sensors." 
}, { "end": 197.95999999999998, "start": 192.16, "text": " As far as I understand, their lasers are mounted on the same chip, so they always point in" }, { "end": 205.6, "start": 197.95999999999998, "text": " the same direction, which reduces a lot of inaccuracies." }, { "end": 210.83999999999997, "start": 205.6, "text": " I guess people would be interested in that, for self-driving cars and so on." }, { "end": 215.67999999999998, "start": 210.83999999999997, "text": " These are kind of the hardware demonstrations that I've seen." }, { "end": 223.76000000000002, "start": 215.68, "text": " Then there's other things, like there is a wellness center where you can get a massage," }, { "end": 232.32, "start": 223.76000000000002, "text": " which is sponsored by the big companies, which is pretty nice, but I'm probably too much." }, { "end": 236.88, "start": 232.32, "text": " I don't like these kinds of things too much." }, { "end": 241.24, "start": 236.88, "text": " Maybe I'm just socially too awkward." }, { "end": 247.76000000000002, "start": 241.24, "text": " For some companies, I feel that they're just there to recruit, and they don't really want" }, { "end": 250.84, "start": 247.76000000000002, "text": " to talk about what they do too much." }, { "end": 259.76, "start": 250.84, "text": " So an indication of this would be a company where basically all of the reps at the booth" }, { "end": 267.76, "start": 259.76, "text": " are recruiters, so non-technical recruiters, that basically just kind of tell you what" }, { "end": 276.12, "start": 267.76, "text": " you can do as a career and not really what the company does as a whole." }, { "end": 284.24, "start": 276.12, "text": " I never really know what to talk about then, because I feel like most people are interested" }, { "end": 290, "start": 284.24, "text": " and drawn towards interesting work, and if that comes with good working conditions, then" }, { "end": 296.24, "start": 290, "text": " that's a plus, but I don't feel for many people that that is the most important thing." }, { "end": 302.40000000000003, "start": 296.24, "text": " So I could be wrong, and probably it's good that for some people it is, because otherwise" }, { "end": 307.8, "start": 302.40000000000003, "text": " everyone would take my jobs, the ones that I like." }, { "end": 312.32, "start": 307.8, "text": " These companies will usually, if there is an engineer, they will not talk about too" }, { "end": 315.48, "start": 312.32, "text": " much what they do, like, oh, it's company secret and so on." }, { "end": 319.32, "start": 315.48, "text": " So the funniest one was actually the NSA." }, { "end": 327.08, "start": 319.32, "text": " Talking to the NSA was kind of painful because you kind of ask them, so what do you do?" }, { "end": 331.84, "start": 327.08, "text": " And they're like, yeah, machine learning." }, { "end": 337.8, "start": 331.84, "text": " Because what I want to know as a researcher is, is there anything I could do there that" }, { "end": 339.88, "start": 337.8, "text": " I couldn't do anywhere else?" }, { "end": 348.44, "start": 339.88, "text": " So is there any unique problems that the NSA faces that actually demand new research, like" }, { "end": 354.24, "start": 348.44, "text": " demand new machine learning methods or some kind of change?" }, { "end": 358.88, "start": 354.24, "text": " So I ask this, and they're like, yes, there are problems like this." }, { "end": 360.88, "start": 358.88, "text": " And you ask, like, which problems?" 
}, { "end": 363.8, "start": 360.88, "text": " And they're like, yeah, there are problems." }, { "end": 364.8, "start": 363.8, "text": " We can't tell you." }, { "end": 366.8, "start": 364.8, "text": " So everything's basically whatever." }, { "end": 373.8, "start": 366.8, "text": " So I made it a game to ask them more specific questions and watch them, like, oh, this is" }, { "end": 374.8, "start": 373.8, "text": " classified." }, { "end": 379.16, "start": 374.8, "text": " So yeah, if you're here, definitely check them out." }, { "end": 380.16, "start": 379.16, "text": " It's fun." }, { "end": 384.08, "start": 380.16, "text": " It's just fun to talk to them." }, { "end": 389.84000000000003, "start": 384.08, "text": " Yeah, I feel to most companies, they're really interesting." }, { "end": 391.92, "start": 389.84000000000003, "text": " I don't know more than half of them." }, { "end": 398.68, "start": 391.92, "text": " So just going up, ask them what they do, kind of just get an overview over the landscape" }, { "end": 401.64, "start": 398.68, "text": " of what's needed currently in machine learning research." }, { "end": 409.88, "start": 401.64, "text": " I think that's really useful, because as an academic, I tend to be very disconnected from" }, { "end": 417.28, "start": 409.88, "text": " the industry side of things and from what people actually need or want in practice." }, { "end": 422.03999999999996, "start": 417.28, "text": " So talking to all these companies is really helpful to get an overview over that." }, { "end": 428.76, "start": 422.03999999999996, "text": " Yeah, so but if you know a better way, I know some people are much more successful than" }, { "end": 433.08, "start": 428.76, "text": " me talking to companies at conferences." }, { "end": 435.08, "start": 433.08, "text": " I'm definitely not the best at this." }, { "end": 439.28, "start": 435.08, "text": " And yeah, if you have a better strategy, let me know." }, { "end": 442.03999999999996, "start": 439.28, "text": " So I'm pretty happy so far." }, { "end": 443.03999999999996, "start": 442.03999999999996, "text": " All right." }, { "end": 444.03999999999996, "start": 443.03999999999996, "text": " That was that." }, { "end": 459.04, "start": 444.04, "text": " See ya." } ]
TFiZYA_JfJs
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Population-Based Search and Open-Ended Algorithms
[ "Science & Technology" ]
[ "machine learning", "ai", "artificial intelligence", "open ended learning", "quality diversity", "conference", "icml", "icml2019", "tutorial", "population-based search", "goal switching", "serendipidy", "evolution" ]
Comments on the ICML2019 tutorial on population-based search and open-ended learning. Talk: https://www.facebook.com/icml.imls/videos/481758745967365/ Slides: http://www.cs.uwyo.edu/~jeffclune/share/2019_06_10_ICML_Tutorial.pdf Book: https://www.amazon.com/dp/B00X57B4JG/ Event: https://icml.cc/Conferences/2019/ScheduleMultitrack?event=4336
This is huge. This is just one hall and most people I guess are still waiting for registration. Yeah, but definitely the size of these things is ginormous. The tutorials have just started. There we go. It's finding a place. Hi, so I just wanted to give a little update on a tutorial that I liked, which was the population-based search and open-ended learning tutorial which happened on Monday here. So I was pleasantly surprised by this tutorial because I knew almost nothing about these techniques and they seem really cool. It seems to be a really cool line of research. So it started out with what population-based search is, and basically in population-based search you don't want to just reach one solution of a problem but you want to maintain a population of solutions that you develop over time. So natural evolution would be an example of that. So this can have many benefits that were explored in the tutorial. So the culprit of traditional optimization, let's say you have a classification problem and you just train one classifier on it, is what they call deception. A good example is an RL problem where you need to reach some goal, but since the goal might be very hard to reach, your algorithm has basically nothing to go on. There's no stepping stone. So usually people go and construct a reward function in a very clever way. But this can be overcome with these techniques as well. So just imagine the hardest video game in the Atari suite. This would be something like Montezuma's Revenge, where you first need to collect some key and then go to some door and only then you get a score. So this reward function is too ambitious, and that is the problem they call deception. An observation they make is that if you look at nature and natural evolution, it is very successful even without a goal. So there's no goal in mind in natural evolution, except that reproduction creates more reproduction. But it's not a goal, that's simply a kind of underlying mechanism. And if you look at nature, all this variety of life was produced without a goal in mind, all this variety of life filling different niches and basically reproducing at their own pace. So it's a very interesting observation. The goal of this entire field is kind of to model this, to go into this direction of: what if we don't really go after only the cost function? So in the most extreme case, what if we build a search algorithm that only wants to create novel things? So where kind of novelty is the only goal, what happens then? And it turns out some interesting things can be achieved with that. So they introduced this notion of quality diversity, which basically means, if you again take life on earth, you want all the achievable behaviors that there are. So maybe one achievable behavior is a very fast life form that can hunt other life forms, and another achievable behavior is one that camouflages very well and so on. And for each of these behaviors, you want to find the best possible example. So that's the direction that these algorithms go into. And an algorithm that they presented was MAP-Elites, so M-A-P-Elites, which goes as follows. So let's say you have a bunch of dimensions you care about, say how fast a creature is, how tall it is, how well it is camouflaged and so on. Now you want to discretize each of those dimensions. So this will give you cells basically. Together, these discretizations will introduce a grid of cells.
And what you now do is you want to keep the best examples of each cell. So if you have a creature that's very fast but not very well camouflaged in some cell, you look at how well it's doing at the goal that you have in mind. And you want to keep the best one of those. You have a population, and whichever ones are in that cell, you keep the best. And then you go ahead and you kind of change them. You could do this via an evolutionary process, like you can mutate them, or it could be via gradient descent or something. But you mutate them and I guess they will probably end up in a different cell. So you go look at that cell. Are these new ones better than the ones that you remembered from that old cell? And if so, replace them. For each cell, keep the best one and then continue developing from those. Sort of like Dijkstra's shortest path algorithm. So what it will return is an entire landscape of possible behaviors. And for each behavior, it will give you the best result. Now it doesn't mean they all do equally well. Some will be better, some cells will be not as good with regards to your cost function. But it will give you an entire landscape, and you can see then that there are many kind of modes in this landscape. As I said, some creatures are very fast hunters, some camouflage very well but then are kind of slower. So you will be able to see these modes in that. I found this pretty interesting, and it opens the door to a lot of different applications. So a principle they employ is what is called goal switching. Namely, that means a line of development can benefit from inventions of another line. So let's say the very fast hunters, they are good at that, but then maybe they don't reach quite optimal performance. But then another line develops somewhere else and these are camouflaged, like the camouflaged life forms develop. So they invent kind of camouflage. Now, because of the way this mutation and so on works, you kind of keep the camouflaged ones around and the hunters. And now the camouflage can kind of jump over to the hunters. It's very difficult to explain like this, but they call this goal switching. And what it means is that the hunters can now adopt a little bit of camouflage through, let's say, mutating one of the camouflaged ones into the hunters or vice versa, and then can kind of benefit from that invention over there. And a good example of that, they mentioned, is that in order to discover the microwave, you first had to work on radar technology, which had nothing to do with microwaves. But because of the inventions made in radar technology, you could then invent the microwave easily. So it kind of jumped over into the space of ovens, basically. Before, all you had to make food warm was to just put it in an oven and heat it up. Now you had the microwave. So these algorithms kind of capture the spirit of this. A book that the people who gave the tutorial wrote is Why Greatness Cannot Be Planned. I'll definitely get that. I can't recommend it yet since I haven't read it, but I'm going to get it and read it. Should be fairly interesting. They gave a number of examples of this, for example robots that can recover from damage. So they had a robot with six legs. They trained it to move. Now they disabled one leg. Now, usually you have one solution, like you trained your neural network. I don't think it was even a neural network, but you trained your system to move this robot as efficiently as possible.
And now, because you only have one solution and one leg is broken, it doesn't work anymore. But since you have the entire landscape of solutions, you can easily kind of jump to other solutions, ones that were not as good when you had all legs. You can jump to other solutions in the solution space and try them out: which ones do still work if I now only have five legs? Since you have the entire landscape, you're very well able to do that. So that's pretty cool. Another algorithm they presented was GoExplore, which is an algorithm that kind of solved these really hard Atari games a while back. And what they do specifically is they kind of have an archive of states that they have reached in the past. So it's a video game and you do some things and then you are in certain states. So it's an archive of states. And you just pick one of those. Right. You pick, like, OK, this state means my little person I control is somewhere over there. And then you just explore from it. Right. You do a population-based exploration, you just kind of go around from it and so on. And then you look at the state you end up in. And if the state you end up in is a known state, like you've been there before, so it's also in your archive, then you compare the two. Did you get faster to that state via the new route or did you get faster to that state via the route that was already in your archive? And if you're faster to that state via the new route, you replace the archived one with the new one. So this again is kind of like a Dijkstra shortest path algorithm extrapolated to this kind of domain where you have to explore. You don't actually have a graph. So I think it's pretty cool. It's all kind of the same principle, but it can employ this goal switching thing. Right. So you go to a certain state, but then all of a sudden, because you explored something else, you find a much quicker way to that state, which you never intended. But it happens. So this is a basic principle: if you explore a lot, then good things might happen. So it's kind of a serendipity discovery mechanism, and you can use those good things and incorporate them into the things that already work. The last topic they covered was open-ended search. So here is the distinction from what they've already discussed to open-ended. They again give the example of life on earth. If you consider it, it's a single run of an algorithm.
Like how can they even describe those, manufacture those and then learn in those. So pretty cool. The cool experiment they showed was the Picbreeder experiment, where basically it's a human in the loop, so humans could cooperate. So as a human, you go to a website, you pick one picture, and these pictures are procedurally generated. So they start out with a very simple pattern and you just have the opportunity to pick one, and it gives you a bunch of random perturbations of the procedurally generated image. And you pick the ones that you like and then you continue exploring from there. And if you're happy, you can just save that to the database, and someone else can look through the database and then pick yours, for example, to continue. And the things that the humans came up with, the result of that, were extremely interesting. So not only could you perturb, but you could also kind of mix pictures, as far as I remember. Not sure anymore. But the thing is, you could breed pictures, right? You could kind of also put pictures together. So with the procedural generation of them, what you end up with is remarkably interesting things. And the point they made is it's really only from very few iterations. These are like tens or hundreds of iterations of development, not like a million like we're used to. And there's a real tree of phylogenies that emerges. And the crucial lesson, they say, is that people only find when they are not looking. So if you had a certain goal in mind, you would never be able to, you know, change the pictures in the way that this goal would appear. But if you have no goal in mind, you might discover all kinds of interesting things. So that is kind of all I'm going to say about this. They discussed many more things, but I think these are the main takeaways. So population-based search is interesting because it can kind of overcome the problems you get if you only have one optimizer, one optimization run of one algorithm. If you employ quality diversity, as in the algorithm MAP-Elites, this enables this kind of goal switching and gives you back an entire landscape of learned actors or systems, where each one is kind of the best performing one under that particular constraint of the dimensions you care about. And yeah, open-ended algorithms, open-ended search, is definitely a cool research direction. And I encourage you to check it out. All right. That was it so far. Thanks for listening. Bye.
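As a concrete illustration of the MAP-Elites procedure described in this talk, here is a minimal sketch in Python. It is not the tutorial authors' code; the fitness function, behavior descriptor and mutation operator below (fit, beh, mut) are toy stand-ins that you would replace for a real problem.

```python
import random

def map_elites(random_solution, mutate, fitness, behavior, iterations=10000, init=100):
    """Minimal MAP-Elites: keep the best solution found so far (the 'elite')
    for every cell of a discretized behavior space.

    behavior(x) must return a hashable cell id, e.g. a tuple of bucket indices.
    """
    archive = {}  # cell id -> (fitness, solution)

    def try_insert(x):
        cell, f = behavior(x), fitness(x)
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, x)  # x becomes the new elite of this cell

    for _ in range(init):        # bootstrap the archive with random solutions
        try_insert(random_solution())
    for _ in range(iterations):  # pick a random elite, perturb it, re-insert
        _, parent = random.choice(list(archive.values()))
        try_insert(mutate(parent))
    return archive               # a whole landscape: best solution per behavior cell

# Toy usage with made-up behavior dimensions (think "speed" and "size"):
rand = lambda: [random.uniform(-1, 1), random.uniform(-1, 1)]
mut = lambda x: [v + random.gauss(0, 0.1) for v in x]
fit = lambda x: -((x[0] - 0.3) ** 2 + (x[1] + 0.5) ** 2)
beh = lambda x: (round(x[0], 1), round(x[1], 1))  # coarse cells, 0.1 wide
elites = map_elites(rand, mut, fit, beh, iterations=2000)
print(len(elites), "behavior cells filled")
```

The mutation step here is purely random, but as the talk notes it could just as well be replaced by gradient steps or any other local improvement operator.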
[ { "end": 8, "start": 0, "text": " This is huge. This is just one hall and most people I guess are still waiting for registration." }, { "end": 14, "start": 8, "text": " Yeah, but definitely the size of these things is ginormous." }, { "end": 17, "start": 14, "text": " The tutorials have just started." }, { "end": 20, "start": 17, "text": " There we go. It's finding a place." }, { "end": 26, "start": 20, "text": " Hi, so I just wanted to give a little update on a tutorial that I liked" }, { "end": 30, "start": 26, "text": " which was the population-based search and open-ended learning tutorial" }, { "end": 34, "start": 30, "text": " which happened on Monday here." }, { "end": 40, "start": 34, "text": " So I was pleasantly surprised by this tutorial because I knew almost nothing about these techniques" }, { "end": 44, "start": 40, "text": " and they seem really cool. It seems to be a really cool line of research." }, { "end": 48, "start": 44, "text": " So I started out with what is population-based search" }, { "end": 54, "start": 48, "text": " and basically in population-based search you don't want to just reach one solution of a problem" }, { "end": 60, "start": 54, "text": " but you want to maintain a population of solutions that you develop over time." }, { "end": 66, "start": 60, "text": " So natural evolution would be an example of that." }, { "end": 73, "start": 66, "text": " So this can have many benefits that were explored in the tutorial." }, { "end": 80, "start": 73, "text": " So the culprit of traditional optimization, let's say you have a classification problem," }, { "end": 86, "start": 80, "text": " you just train one classifier on it, is what they call deception," }, { "end": 93, "start": 86, "text": " meaning that a better example is an RL problem where you need to reach some goal" }, { "end": 101, "start": 93, "text": " but since the goal might be very hard to reach, your algorithm has basically nothing to go on." }, { "end": 103, "start": 101, "text": " There's no stepping stone." }, { "end": 108, "start": 103, "text": " So usually people go and construct a reward function in a very clever way." }, { "end": 113, "start": 108, "text": " But this can be overcome with these techniques as well." }, { "end": 119, "start": 113, "text": " So just imagine the hardest video game in the Atari suite." }, { "end": 123, "start": 119, "text": " This would be something like Montezuma's Revenge where you first need to collect some key" }, { "end": 127, "start": 123, "text": " and then go to some door and only then you get a score." }, { "end": 134, "start": 127, "text": " So this reward function is too ambitious and is a problem they call your deception." }, { "end": 140, "start": 134, "text": " An observation they make is if you look at nature and natural evolution," }, { "end": 144, "start": 140, "text": " it is very successful even without a goal." }, { "end": 152, "start": 144, "text": " So there's no goal in mind to natural evolution except reproduction creates other reproduction." }, { "end": 159, "start": 152, "text": " But it's not a goal, that's simply a kind of underlying mechanism." }, { "end": 165, "start": 159, "text": " And if you look at nature, all this variety of life was produced without a goal in mind." }, { "end": 173, "start": 165, "text": " And all this variety of life filling different niches and basically reproducing at their own pace." }, { "end": 176, "start": 173, "text": " So it's a very interesting observation." 
}, { "end": 181, "start": 176, "text": " The goal of this entire field is kind of to model, to go into this direction of" }, { "end": 188, "start": 181, "text": " what if we don't really go after only the cost function, but what if we..." }, { "end": 196, "start": 188, "text": " So in the most extreme case, what if we build a search algorithm that only wants to create novel things?" }, { "end": 202, "start": 196, "text": " So where kind of novelty is the only goal, what happens then?" }, { "end": 207, "start": 202, "text": " And it turns out some interesting things can be achieved with that." }, { "end": 215, "start": 207, "text": " So they introduced this notion of quality diversity, which basically means if you look at," }, { "end": 223, "start": 215, "text": " let's again take a life on earth, you want all the achievable behaviors that there are." }, { "end": 230, "start": 223, "text": " So maybe one achievable behavior is a very fast life form that can hunt other life forms," }, { "end": 235, "start": 230, "text": " and another achievable behavior is one that camouflages very well and so on." }, { "end": 243, "start": 235, "text": " And you want to kind of find for each of these behaviors, you want to find the best possible example." }, { "end": 247, "start": 243, "text": " So that's the direction that these algorithms go into." }, { "end": 256, "start": 247, "text": " And an algorithm that they presented was MapElites, so M-A-P-Elites, which goes as follows." }, { "end": 263, "start": 256, "text": " So let's say you have a bunch of dimensions you care about, say how fast a creature is," }, { "end": 266, "start": 263, "text": " how tall it is, how well it is camouflaged and so on." }, { "end": 270, "start": 266, "text": " Now you want to discretize each of those dimensions." }, { "end": 274, "start": 270, "text": " So this will give you cells basically." }, { "end": 279, "start": 274, "text": " So each of these discretization will introduce a grid of cells." }, { "end": 285, "start": 279, "text": " And what you now do is you want to keep the best examples of each cell." }, { "end": 291, "start": 285, "text": " So if you have a creature that's very fast but not very well camouflaged at some cell," }, { "end": 297, "start": 291, "text": " you look at how well it's doing at the goal that you have in mind." }, { "end": 300, "start": 297, "text": " And you want to keep the best one of those." }, { "end": 305, "start": 300, "text": " You have a population and whichever ones are in that cell, you keep the best." }, { "end": 308, "start": 305, "text": " And then you go ahead and you kind of change them." }, { "end": 312, "start": 308, "text": " You could do this via evolutionary process, like you can mutate them," }, { "end": 317, "start": 312, "text": " or it could be via gradient descent something." }, { "end": 322, "start": 317, "text": " But you mutate them and I guess they will probably end up in a different cell." }, { "end": 329, "start": 322, "text": " So you go look at that cell. Are these new ones better than the ones that you remembered from that old cell?" }, { "end": 331, "start": 329, "text": " And if so, replace them." }, { "end": 338, "start": 331, "text": " For each cell, keep the best one and then kind of start continue developing from those." }, { "end": 342, "start": 338, "text": " Sort of like Dijkstra's shortest path algorithm." }, { "end": 350, "start": 342, "text": " So what it will return is like an entire landscape of possible behaviors." 
}, { "end": 355, "start": 350, "text": " And for each behavior, it will give you the best result." }, { "end": 358, "start": 355, "text": " Now it doesn't mean they all do equally." }, { "end": 365, "start": 358, "text": " Some will be better, some cells will be not as good with regards to your cost function." }, { "end": 372, "start": 365, "text": " But it will give you an entire landscape and you could see then that there are many kind of modes in this landscape." }, { "end": 377, "start": 372, "text": " As I said, some creatures are very fast hunters, some camouflage very well." }, { "end": 380, "start": 377, "text": " But then they are kind of slower." }, { "end": 383, "start": 380, "text": " So you will be able to see these modes in that." }, { "end": 392, "start": 383, "text": " I found this pretty interesting and opens the door to a lot of different applications." }, { "end": 397, "start": 392, "text": " So a principle they employ is what is called goal switching." }, { "end": 406, "start": 397, "text": " Namely, that means if a line of development can benefit from inventions of another line." }, { "end": 419, "start": 406, "text": " So let's say the very fast hunters, they are good at that, but then maybe they don't reach quite optimal performance." }, { "end": 427, "start": 419, "text": " But then another line develops somewhere else and these are camouflaged, like the camouflaged life forms develop." }, { "end": 429, "start": 427, "text": " So they invent kind of camouflage." }, { "end": 438, "start": 429, "text": " Now because of the way this mutation and so on is, you kind of keep the camouflaged ones around and the hunters." }, { "end": 442, "start": 438, "text": " And now the camouflage can kind of jump over to the hunters." }, { "end": 448, "start": 442, "text": " It's very difficult to explain like this, but they call this goal switching." }, { "end": 461, "start": 448, "text": " And what it means is that the hunters can now adopt a little bit of camouflage through, let's say mutating one of the camouflaged ones into the hunters or vice versa." }, { "end": 465, "start": 461, "text": " And then can kind of benefit from that invention over there." }, { "end": 478, "start": 465, "text": " And so a good example of that, they mentioned, is that in order to discover the microwave, you first had to work on radar technology, which had nothing to do with microwaves." }, { "end": 485, "start": 478, "text": " But because of the inventions made in radar technology, you could then invent the microwave easily." }, { "end": 489, "start": 485, "text": " So it kind of jumped over into the space of ovens, basically." }, { "end": 494, "start": 489, "text": " Before, all you had to make food warm was just put it in an oven and heat it up." }, { "end": 500, "start": 494, "text": " Now you had the microwave. So that kind of these algorithms capture the spirit of this." }, { "end": 508, "start": 500, "text": " A book that the people who gave the tutorial wrote is Why Greatness Cannot Be Planned." }, { "end": 516, "start": 508, "text": " I'll definitely get that. And I can't recommend it since I haven't read it yet, but I'm going to get and read it." }, { "end": 519, "start": 516, "text": " Should be fairly interesting." }, { "end": 531, "start": 519, "text": " So they give them a number. They gave a number of examples of this, for example, robots that can recover from damage because so they had a robot with six legs." }, { "end": 535, "start": 531, "text": " They trained it to move. 
Now they disabled one leg." }, { "end": 540, "start": 535, "text": " Now, usually you have one solution like you trained your neural network." }, { "end": 547, "start": 540, "text": " I don't think it was even a neural network, but you trained your like your system to move this robot as efficiently as possible." }, { "end": 552, "start": 547, "text": " And now because you only have one solution, one legs broken, it doesn't work anymore." }, { "end": 562, "start": 552, "text": " But since you have the entire landscape of solutions, you can easily kind of jump to other not as good solutions if you have all legs." }, { "end": 568, "start": 562, "text": " But you can jump to other solutions in the solution space and try them out." }, { "end": 576, "start": 568, "text": " Which ones do still work? If I only now have five legs, since you have the entire landscape, you're very well able to do that." }, { "end": 579, "start": 576, "text": " So that's pretty cool." }, { "end": 591, "start": 579, "text": " Another algorithm they presented was GoExplore, which is an algorithm that kind of solved these really hard Atari games while back." }, { "end": 600, "start": 591, "text": " And what they do in specific is they kind of have an archive of states that they have reached in the past." }, { "end": 608, "start": 600, "text": " So it's a video game and you do some things and then you are in certain states. So it's an archive of states." }, { "end": 619, "start": 608, "text": " And you just pick one of that. Right. You pick like, OK, this state means I'm like my little person I control is somewhere over there." }, { "end": 626, "start": 619, "text": " And then you just explore from it. Right. You do a population based. You just kind of go around from it and so on." }, { "end": 636, "start": 626, "text": " And then you look at the state you end up in. And if the state you end up in is a known state like you've been there before." }, { "end": 649, "start": 636, "text": " So it's also in your archive. Then you compare the two. Did you get faster to that state via the new route or did you get faster to that state via the route that was already in your archive?" }, { "end": 657, "start": 649, "text": " And if you're faster in that state via the new route, you will you replace the archived one with the new one." }, { "end": 667, "start": 657, "text": " So this again is kind of like a Dijkstra shortest path algorithm extrapolated to this to this kind of domain where you have to explore." }, { "end": 678, "start": 667, "text": " You don't actually have a graph. So I think it's it's pretty cool. It's all kind of the same principle, but it can employ this goal switching thing." }, { "end": 687, "start": 678, "text": " Right. So you go to a certain state, but then all of a sudden, because you explored something else, you find a much quicker way to that state, which you never intended." }, { "end": 699, "start": 687, "text": " But it happens. So this is a basic principle that kind of if you explore a lot, then good things might happen." }, { "end": 710, "start": 699, "text": " So kind of a serendipity discovery mechanism, and you could use those good things, incorporate them into the things that already work." }, { "end": 722, "start": 710, "text": " The last topic they covered was open ended search. So a distinction from what they've already discussed to open ended is now." }, { "end": 729, "start": 722, "text": " They give the example again life on earth. If you consider it, it's a single run of an algorithm." 
}, { "end": 740, "start": 729, "text": " It's not that for every life form, a different optimization was started and kind of started and finished, optimized for a certain thing." }, { "end": 747, "start": 740, "text": " It's all one single run of the same algorithm. And it doesn't really have a goal in mind." }, { "end": 752, "start": 747, "text": " So open ended algorithms are like that. They kind of define interesting notion." }, { "end": 758, "start": 752, "text": " Is it still interesting if we were to just let it run for a billion years? Like, would it still be interesting?" }, { "end": 765, "start": 758, "text": " If yes, consider it an open ended algorithm, which I find a really good kind of definition." }, { "end": 781, "start": 765, "text": " So the fundamental property that open ended algorithms have and research in this has defined is that constantly not only is the population shifting, but also the environment is shifting." }, { "end": 800, "start": 781, "text": " So there's kind of a never static situation. The environment's always shifting. That also means there's always new opportunities opening up for kind of new life on earth, for new creatures to evolve, to kind of fill the niches that open up." }, { "end": 814, "start": 800, "text": " And the research community around this, the open ended search, open ended learning community is considering exactly those types of environments." }, { "end": 821, "start": 814, "text": " Like how can they even describe those, manufacture those and then learn in those. So pretty cool." }, { "end": 832, "start": 821, "text": " The cool experiment they've shown was the pick breeder experiment, where basically it's a human in the loop. So they gave humans could cooperate." }, { "end": 840, "start": 832, "text": " So as a human, you go to a website, you pick one picture and these pictures are procedurally generated." }, { "end": 850, "start": 840, "text": " So they start out with a very simple pattern and you just have the opportunity to kind of you pick one and it gives you a bunch of random perturbations of the procedurally generated image." }, { "end": 855, "start": 850, "text": " And you pick the ones that you like and then you continue exploring from there." }, { "end": 864, "start": 855, "text": " And if you're happy, you can just save that to the database and someone else can look through the database and then pick yours, for example, to continue." }, { "end": 872, "start": 864, "text": " And the things that the humans came up with or the result of that was extremely interesting." }, { "end": 881, "start": 872, "text": " So not only could you perturb, but you could also kind of mix pictures as far as I remember. Not sure anymore." }, { "end": 891, "start": 881, "text": " But the things they end up with is you could breed pictures, right? You could you could kind of also put pictures together." }, { "end": 900, "start": 891, "text": " So the procedural generation of them and what you end up with is remarkable, remarkably interesting things." }, { "end": 905, "start": 900, "text": " And the point they made is it's really only from very few iterations." }, { "end": 911, "start": 905, "text": " These are like tens or hundreds of iterations of development, not like a million like we're used to." }, { "end": 915, "start": 911, "text": " And there's a real tree of phylogenies that emerge." }, { "end": 922, "start": 915, "text": " And the crucial lesson, they say, is people only find when they are not looking." 
}, { "end": 931, "start": 922, "text": " So if you had a certain goal in mind, you would never be able to, you know, change the pictures in the way that this goal would appear." }, { "end": 937, "start": 931, "text": " But if you have no goal in mind, you might discover all kinds of interesting things." }, { "end": 944, "start": 937, "text": " So that that is kind of all I'm going to say of this." }, { "end": 948, "start": 944, "text": " They discussed many more things, but I think these are the main takeaways." }, { "end": 958, "start": 948, "text": " So population population based search is interesting because it can kind of overcome the problems that if you only had one optimizer," }, { "end": 965, "start": 958, "text": " one optimization run of one algorithm, if you employ quality diversity in the algorithm map elites," }, { "end": 977, "start": 965, "text": " this this enables this kind of goal switching gives you back an entire landscape of of the of learned actors or systems" }, { "end": 988, "start": 977, "text": " that for each one, you know, it's kind of the best performing one in that particular constraint of of the of the dimensions you care about." }, { "end": 997, "start": 988, "text": " And yeah, open ended algorithms, open ended search is definitely a cool research direction." }, { "end": 1002, "start": 997, "text": " And I encourage you to check it out. All right. That was it so far." }, { "end": 1007, "start": 1002, "text": " Thanks for listening. Bye." } ]
EA96xh9qog0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
I'm at ICML19 :)
[ "Science & Technology" ]
[ "machine learning", "conference", "long beach", "california", "icml19", "icml", "artificial intelligence", "ai", "deep learning" ]
Short intro to the International Conference on Machine Learning in Long Beach, CA. I'll be making some updates from the conference.
Hi there, it's day one of ICML and we'll be attending the conference here, and this is just a quick pre-video to let everyone know I'll be trying to report from here kind of what papers are cool, what I liked, what the trends are, and so hopefully get this conference out to a broader community. So everyone's conglomerating here, the line's probably going to be huge, I'm already registered so that's pretty good. It's beautiful weather and I'm looking forward to five days of conference. So today is tutorial day and I think I'll be attending some cool tutorials. Yeah, just look how pretty it is here, nice. All right, bye everyone, see you later.
[ { "end": 12.4, "start": 0, "text": " Hi there, it's day one of ICML and we'll be attending the conference here and just" }, { "end": 19.28, "start": 12.4, "text": " quickly pre-video to let everyone know I'll be trying to report from here kind of what" }, { "end": 27.12, "start": 19.28, "text": " papers are cool, what I liked, what are kind of the trends and so hopefully get this conference" }, { "end": 31.520000000000003, "start": 27.12, "text": " out to a broader community. So everyone's conglomerating here, the line's probably" }, { "end": 35.760000000000005, "start": 31.520000000000003, "text": " going to be huge, I'm already registered so that's pretty good. It's beautiful weather" }, { "end": 45.2, "start": 36.64, "text": " and looking forward to five days of conference. So today is tutorial day and I'll think I'll be" }, { "end": 54.480000000000004, "start": 45.92, "text": " attending some cool tutorials. Yeah, just look how pretty it is here, nice." }, { "end": 59.44, "start": 54.48, "text": " All right, bye everyone, see you later." } ]
hMO6rbMAPew
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Adversarial Examples Are Not Bugs, They Are Features
[ "Science & Technology" ]
[ "machine learning", "deep learning", "adversarial examples", "adversarial samples", "pgd", "projected gradient descent", "vulnerabiliby", "security", "artificial intelligence", "MIT", "geometry", "classifier", "deep neural network", "attack", "convolutional neural networks", "research", "robust features", "robust classifier", "robust network", "neural network" ]
Abstract: Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans. After capturing these features within a theoretical framework, we establish their widespread existence in standard datasets. Finally, we present a simple setting where we can rigorously tie the phenomena we observe in practice to a misalignment between the (human-specified) notion of robustness and the inherent geometry of the data. Authors: Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry https://arxiv.org/abs/1905.02175
Hi there! Today we're looking at Adversarial Examples Are Not Bugs, They Are Features by Andrew Ilyas et al. So this paper is pretty interesting, has a catchy title, and we'll try to kind of dissect what it says. So first of all, in the abstract they say adversarial examples have attracted significant attention, but the reasons for their existence and pervasiveness remain unclear. So if you don't know what an adversarial example is, an adversarial example is basically the following. Say you have an image classifier, right? Classifier, boom, neural network, image here, and the image is of a, let's say, a cat. Right, this is my best attempt at a cat, bang, cat. And you feed it through the classifier and the classifier says cat. Now if you perturb this image, if you derive an image from it and you perturb it just very slightly, very subtly, so you introduce some pixels here, there, here, there, right, you change some pixels in a very targeted way, and you feed that new image through here, then the classifier will say dog, or really you can make it say anything, like airplane or, I don't know, sky or whatever you want. So these are called adversarial examples. And it's true, the reasons for their existence and pervasiveness remain unclear. They say we demonstrate that adversarial examples can be directly attributed to the presence of non-robust features. So basically their paper is about these non-robust features, and they define later what they mean exactly. But here they say features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans. And this is pretty neat. So the fundamental idea, as I understand it, and I'm going to take this away right here, is that if you have images, let's say here of cats, and I'm going to draw another one over here, if you have an image, say, of cats, there are multiple features in this image, and a feature is something that the classifier can pick up on and, this is a horrible cat, kind of learn to classify images from. So features that we humans generally use are: a cat has ears, eyes, whiskers, right, and the general relationship of these things to each other. This is what constitutes a cat. And that's how we classify it. But they say there are also other features that are very indicative, right? If you think about what differentiates a cat from a dog, and a dog here, let's give it fluffy ears, also eyes, yeah, not going to go further with the dog too much. What differentiates a cat from a dog? And we, of course, would say, well, the head shape is different and the ears are different and their relationship to each other is different, but it could also be, and this is simplistic right now, right, that cats, for example, have different fur than dogs. And yeah, I'm being overly simplistic here, but bear with me. So let's say in our hypothetical world, cats have fur that goes like this, left to right, right? Every hair is basically vertical, sorry, horizontal, if you look at it like that. And dog fur, on the other hand, is always like this, right? This is vertical, right? Top to bottom. And so the classifier might just as well pick up on the fur direction in order to classify images, right? Since all cats have that type of fur and all dogs have that other type of fur, the classifier might just as well pick up on that, right? And to us humans, we don't really pay attention to these things because they're minute, right?
Coming back to the fur: you don't look at the direction of individual hairs to classify an animal as cat or dog. You would much rather go for the large features, where are the ears, how do they look, and so on. But you can actually make an argument that a classifier would be more likely to pick up on the fur direction, since we're using convolutional neural networks, which are generally local pixel-neighborhood operators. They can pick up on these fine patterns much more easily than on the general relationship between large features. So if a classifier has learned that cats always have fur going one way and dogs always have fur going the other way, what we could do is go to the dog image and change its fur direction. To us humans, the result would still very much look like a dog, because the fur direction is almost imperceptible. But to a classifier that has only learned "a cat always has this type of fur and a dog always has that type of fur", the new image would totally look like a cat. So this paper argues exactly that: in the data set there are real features like this. It actually could be the case that cat fur is always like that and dog fur is always like this, and the classifier could pick up on it. And then the reason adversarial examples exist is that the classifier has picked up on these imperceptible features, so by changing them we can change the classifier's decision without changing the image at a large scale. They make this hypothesis and then say, okay, we establish the widespread existence of such features in standard data sets, so they give supporting evidence for the hypothesis. And finally they present a simple, theoretical setting where they can rigorously tie the phenomena they observe to a misalignment between the human-specified notion of robustness and the inherent geometry of the data. So these are the different pieces of the paper, and we're going to look at them in succession. The introduction we largely skip, except for their main claim: specifically, we claim that adversarial vulnerability is a direct result of our models' sensitivity to well-generalizing features in the data. That's the core point, I think: well-generalizing features, which is what we mentioned. These are features that actually describe the data well, but that are imperceptibly small to humans or that don't fit our notion of robustness. All right, so they go on and define more clearly what they mean. Whenever they talk of a feature: remember, we have our classifier, we input an image, and the image is called X. That classifier, if we look at it closer, usually consists of multiple layers of interconnected neurons, and the last layer is an output layer over the different classes. When they say feature, what they mean specifically is the last representation before it goes into that final classification layer. The way you would classify, and here they just establish a two-class setting, is: you have feature one, feature two, feature three, and you have a weight vector with an entry W1, W2, W3 for each feature.
You take the inner product, and that gives you a Y hat. If that is high, you say it's class one; if it is low, you say it's class minus one. So the classes here are plus one and minus one, just to keep things simple. You see, the features are basically what comes out after these layers, and they are then used to make a linear classification; this last part is basically just a logistic regression. So you can think of the features as the output of the neural network before it goes into the classifier. Since a feature is linearly classified, if the feature is high it gives a signal for one class, and if it is low it gives a signal for the other class, depending of course on whether the corresponding W is positive or negative. All right, so they say: we call a feature rho-useful if this expression holds. What does it mean? The expectation over the data of Y times the feature must be higher than some number rho. Y is the class, and remember it is plus or minus one; the feature, as we've seen, is some number. So what does it mean when that product is high in expectation? It means the two are correlated: either both tend to be high or both tend to be low. Basically, a feature F is useful if, whenever an example X has label Y equal to plus one, F tends to be high, and whenever Y is minus one, F tends to be low, which means high in the negative direction. This is intuitive: if a feature is useful, it should say one thing on samples of class one and another thing on samples of class two. Then I can actually use the feature to make a decision, because it is correlated with the class. So that's when a feature is useful: when it correlates with the class label. And we can assume that basically any feature the classifier extracts will be useful, because otherwise the classifier wouldn't extract it. For non-useful features there would simply be no reason to extract them; they don't contribute to solving the task, because they're not correlated with an output class. Next, they define robustly useful features. In addition to being useful, these are now also robust. What does that mean? Again, we want the correlation of Y and the feature to be higher than some constant, but not only for the feature of the image X itself: for the feature of the image X after it has been perturbed by a small perturbation, and we take the infimum over a class of perturbations. This class of perturbations is of course exactly the adversarial perturbations. Basically, this says that however we try to perturb X, and the infimum here means the worst case, however much we try to make the feature uncorrelated with Y, we can't get the correlation below some number gamma. Whatever we try to make the feature bad for the classifier, we can't. If this holds for a feature, we call it a robust feature: it is robustly useful if it correlates with the label no matter how hard we try to make it not correlate.
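To make that concrete, here is roughly how I read the two definitions; my transcription, so the notation is approximate and may differ slightly from the paper:

$$\text{($\rho$-useful feature):}\qquad \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\,y\cdot f(x)\,\big]\;\ge\;\rho$$

$$\text{($\gamma$-robustly useful feature):}\qquad \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\inf_{\delta\in\Delta(x)}\, y\cdot f(x+\delta)\Big]\;\ge\;\gamma$$

Here $y\in\{-1,+1\}$, $f$ is the feature, and $\Delta(x)$ is the set of allowed (adversarial) perturbations around $x$.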
And of course there are useful non-robust features: a feature which is useful, but which is not a gamma-robust feature for any gamma. So it is a feature that is useful, like the cat fur. An example of a robust feature would be the position of the cat's eyes and ears: we can't just make a small perturbation to the image and move the ears somewhere completely different; that would require a large perturbation. So the positions of the ears and eyes are pretty robust features. But the cat's fur: no matter how small we make the allowed perturbation, we can always change the fur enough to make the feature not useful. If we can change cat fur into dog fur and dog fur into cat fur with tiny perturbations, then the feature becomes useless, because we can change it arbitrarily for any image and the classifier has no clue; that fur could then belong to any class. So that is a non-robust feature. Technically, any feature that is useful but not robust is a non-robust feature. All right, so that is the definition of robust and non-robust features. Remember: robust features are things like the position and shape of the ears; non-robust features are things like which direction the individual hairs in the fur are going, in our hypothetical world where cat fur goes a different way than dog fur. They now go into experimental evidence for their hypothesis, and they run a couple of experiments that give a pretty good indication that the hypothesis is actually correct. Before that, you have to understand two things. First, you just have to assume for now that they have some procedure by which they can take an image from the training data set and decompose it into its robust and its non-robust features. Don't ask yet how they do this; assume they can. Second, the general theme of these experiments is: they take the training data set, they create a derived version of it, they train a regular neural network on that derived version, and then they evaluate the resulting classifier. As a reminder, what you usually do when training a neural network is: you feed in images X, the network gives you some output Y hat, and you say, well, I know the true label Y. So if I feed in an image of a cat and the network says airplane, I say: this should be a cat, please make this Y hat more like Y. You have a loss function that says this is wrong, you backpropagate, and all the weights in the network update to make the correct answer a bit more likely. That's how you usually train a network. Now, if you want to become robust to adversarial examples, you can do what is called adversarial training, which means you keep the same network,
but for each of the training data points you also create a derived version, an adversarial example. You feed the adversarial examples through the network together with the original examples, that gives you some Y hat too, and you say: this should also be equal to Y. Basically, you train the classifier on adversarial examples as well. The idea is that if training on an image data set teaches the classifier about that data set, then training on adversarial examples should teach it to classify those correctly too. This usually works; it's called adversarial training, and it has been kind of the standard method to make your classifier robust. They don't do that here. They simply keep the regular training procedure and change only the training data set. In one case, for example, they change it to contain only robust features: all the X are replaced by their robust versions, then they run the regular training procedure and evaluate the resulting classifier to see how it behaves. It's kind of a new approach, where you modify the original data set instead of the training procedure. So what did they do? First, they decompose the training data set into a version with only robust features, assuming we have such a procedure, and they train a regular neural network on that. What they get is two things. First, good standard accuracy. What does good standard accuracy mean? It means we can test the network on the unmodified test set, the original test set belonging to this training data set, and it works just fine. That basically means the robust features are predictive, they generalize well: if I train a classifier only on robust features, it can still classify the real test set well. Standard accuracy is simply how well you classify the unmodified test set. Second, they also obtain good robust accuracy. What is robust accuracy? It is your accuracy on adversarial examples of the test set. Usually classifiers are vulnerable here: they obtain good standard accuracy but bad robust accuracy. But if I train my classifier only on what they call robust features, then all of a sudden I retain good standard accuracy and I also get good robust accuracy. That gives pretty good support to their hypothesis that adversarial examples abuse the fact that classifiers learn the non-robust features: if there are no non-robust features in the data, my classifier can't learn any, which in turn means it isn't vulnerable to adversarial attacks, because those would abuse exactly the non-robust features it would otherwise have learned. So that's pretty good evidence for their hypothesis. The second thing they do is create a modified data set containing only non-robust features. Again they train a standard neural network on that, and they also get good standard accuracy.
So this means that the non-robust features, like the cat fur direction in our example, also let you generalize well to the test set, since the cats in the test set have that property as well. But you get bad robust accuracy, and this gives further support to their hypothesis: if you train a classifier on only non-robust features, those features are real features, because they generalize well, but the resulting classifier is very vulnerable, because they are non-robust. So a classifier that has learned non-robust features is vulnerable. Then they do a third experiment, which I find pretty cool. They take a training image, an unmodified training image, so its robust features basically say "this is a dog" and its non-robust features also say "this is a dog", because it is a training image of a dog. They then derive from this dog an adversarial example towards the cat class. What does that mean if their hypothesis is correct? The robust features still say it's a dog; we can also see this, the overall shape of the image still looks like a dog to us humans. But the non-robust features will say it's a cat. This hinges on their hypothesis that adversarial examples actually abuse the non-robust features: they create an adversarial example, so if the hypothesis is correct, the non-robust features now say cat. They derive an entire data set this way, where they change every image into an adversarial image of another class and change the labels accordingly, and then they again train a regular neural network on it and look at what happens on the unmodified test set. So imagine you are this classifier. You get an image X whose robust features say dog but whose non-robust features say cat, and its label is cat; you are asked to predict cat. Then you see the next image, maybe derived from some other class, so its robust features say plane, but its non-robust features again say cat, and again you are asked to predict cat. So they've constructed a data set where the non-robust features always agree with the label, but the robust features don't. Naturally, you can expect the classifier to learn to disregard the robust features, because they are no longer useful. It's different from before: before, only one kind of feature was left in the data; now both kinds are still in there, but the robust ones are not informative. So the classifier will naturally learn to pick up on the non-robust features and classify according to them, so much so that if we now test on the real test set and feed in an actual cat, its robust features say cat, its non-robust features say cat, and the classifier is able to predict accurately: this is a cat, even though all the images of cats it saw during training were actually of non-cats, here for example of a dog. So this is pretty cool and shows that these non-robust features that adversarial examples abuse, since the data set was created with adversarial examples, are actually predictive and generalize to the test set. That's pretty good evidence for their hypothesis so far.
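Since adversarial training keeps coming up, both as the standard defense mentioned above and, as we'll see next, as the way the robust classifier behind the decomposition is obtained, here is a rough sketch of what such a training loop typically looks like. This is my own illustration of the general recipe with a PGD-style inner attack, not the paper's exact setup; the model, data loader and hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, step=2 / 255, iters=7):
    """Find a perturbation inside an eps-ball that increases the loss."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()   # ascend the loss
            delta.clamp_(-eps, eps)             # stay inside the eps-ball
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_epoch(model, loader, optimizer):
    """One epoch of training on adversarially perturbed inputs."""
    for x, y in loader:
        delta = pgd_attack(model, x, y)
        optimizer.zero_grad()                   # clear grads left by the attack
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        optimizer.step()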
Now the final remaining question is: what is the procedure by which they create a robust and a non-robust version of the data set? And here is where we get into the part that I find a bit more questionable. In the paper you see examples: this is an original image of a ship from the CIFAR-10 data set, I believe; this is a robust sample, so an image containing only the robust features of the ship; and this is a ship made of only non-robust features, which is visually actually a moose whose non-robust features have been changed to say ship. So how do they construct the robust version of the data set? They have a formal definition, but the way they do it is as follows. Imagine we have a classifier. The classifier outputs features, which here they call G, the representation; it can be more general than the features, but in essence G is the features that then go into the final classification layer and the labels. So the network takes some input X and outputs the representation G of X. Now, what if I have another input, let's say X prime, which I initialize with random noise? I feed it through, get G prime, and then try to make G prime as close as possible to G by changing X prime. I match the two representations and backpropagate to X prime; I can do that with gradient descent. What happens is that my image X prime will come to match X in all the ways that are relevant for the features: I will transfer all of the features from X to X prime, but nothing else, since I started from random noise. Now, and this is what they do, what if the classifier is a robust classifier? Remember, we can robustify a classifier by doing adversarial training. If I have such a robust classifier, then when I input an X, the feature representation it outputs will only contain robust features. So if I take a second image XR, start it from random noise, and match the representation of X by changing XR, I will transfer all of the robust features of X, but nothing else, given that I start from random noise and given the assumption that random noise has no features of its own. So what I end up with is an image that I know has no non-robust features and only the robust features of X. That's how they derive a robustified version of X. Second, how do they derive a non-robust version? That's even easier. If I have a regular classifier and I want a non-robust version of X, I take X and simply derive an adversarial example of X, like we did before. That gives me an image classified as some Y2 which is different from Y. If I have an adversarial example, then basically I have transferred the non-robust features that lead to class Y2 into the image, while still maintaining the robust features of the original.
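Here is a rough sketch of both constructions as I understand them. The robust feature extractor, the standard model, and all step sizes are placeholders, and the actual paper uses more careful optimization details, so treat this as an illustration rather than the exact procedure.

```python
import torch
import torch.nn.functional as F

def robustified_image(robust_features, x, steps=200, lr=0.1):
    """Start from noise and match the ROBUST model's representation of x,
    so only robust features are transferred (sketch of the robust data set)."""
    target = robust_features(x).detach()
    x_r = torch.rand_like(x, requires_grad=True)   # noise: assumed feature-free
    opt = torch.optim.Adam([x_r], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(robust_features(x_r), target)
        loss.backward()
        opt.step()
        with torch.no_grad():
            x_r.clamp_(0, 1)
    return x_r.detach()

def nonrobust_image(standard_model, x, target_class, eps=0.5, step=0.02, iters=100):
    """Targeted adversarial example toward target_class: robust features stay
    those of x, non-robust features get pushed toward the target class."""
    delta = torch.zeros_like(x, requires_grad=True)
    y_t = torch.tensor([target_class])             # assumes a batch of one
    for _ in range(iters):
        loss = F.cross_entropy(standard_model(x + delta), y_t)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()      # descend: make target likely
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()
```

The first function needs an already robust (adversarially trained) network to give the representation, which is exactly the point the criticism below turns on.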
To put it in picture terms: imagine X is an image of a dog, and I derive from it an adversarial image that the network now calls airplane. The robust features will still be those of a dog, of the original image, but the non-robust features will be those of the airplane class. That's how you get an image with the robust features of one class and the non-robust features of another, and that's what you see up above with the moose: it clearly started from the image of a moose and then received non-robust features from the ship class. It's just the classic adversarial example procedure. So that's the construction. And here is my main criticism: if you look at the first part, in order to determine what the robust features are, you actually need a classifier that is already robust. Let's go back up: they take a data set, they robustify it into a robust data set, they train a standard neural network on it, and that gives good robust accuracy, which is really cool because you don't do anything special during training and you still get good robust accuracy. But in order to run this robustification procedure, you have to have a robust classifier, which you obtained by adversarially training it. So basically you take the adversarial training procedure, whose whole selling point here is that you don't do anything different during training, and you smuggle it back in: by training the robust classifier and using it to change the data set, you get good robust accuracy, which to me is just a reflection of the fact that you obtained the data set using this robust classifier in the first place. Now, of course, their method does give a strong hint that this is actually due to properties of the data set itself, and that's really important, because it means it's not primarily a property of the classifier but of the data. It also explains why adversarial examples transfer between classifiers: if two different classifiers trained on the same task are vulnerable to the same adversarial example, it must be some property of the data set that both of them learn. But to then say "we have a procedure to extract the robust features, and if we only train on the robust features we become robust", when you obtain the robust features by using a robustified classifier that you adversarially trained, to me that is kind of a backdoor way of getting adversarial training into the whole procedure. That's my first criticism. My second criticism is that, while it's an interesting take, this whole framing of robust versus non-robust features is basically just reframing the problem of adversarial examples in terms of features. It says nothing about why these features are there; it just postulates that they are there.
It also says nothing about why classifiers pick up on them, how they do it, or how this could be mitigated without first having a robustly trained network to extract the robust features. Much about these examples is still simply unknown; it's a reframing of the problem, I feel. The experiments are cool, and they do show a lot about adversarial examples, but it is certainly not an explanation. At least, that's my opinion. All right, at the end they give a simplified theoretical setting where they can analyze this. They basically say: at the fundamental level, this is what happens. You have classes, and let's say the examples of each class are distributed as elongated blobs, each with a mean and some covariance. If I have two classes distributed like that and I fit a linear classifier, the separating hyperplane will lie between them; this is the best linear classifier, and we can compute it exactly. But what happens when I ask for an adversarial example? An adversarial example means I can shift a sample by a little bit and achieve a big change in output. If I have a sample here, along the long direction of the class distribution I would need to go a long way to reach the decision boundary, but in another direction, straight down toward the boundary, I only need to go a very short way. And since adversarial examples are specified as "go a short way in any direction", a small ball around the sample, this is my terrible attempt at drawing a circle, there are directions in which the classification boundary is very, very close. That's what they call a fundamental misalignment between the geometry of the data, which is elongated, and the geometry by which we specify adversarial examples, which is equal in every direction. And they say: what if I now adversarially train my network to be robust? That basically means I expand my data, because I add adversarial examples from that ball around each sample, so the class distributions effectively get inflated, the separating hyperplane changes, and the geometry of the adversarial examples becomes much more aligned with the separating hyperplane. So this is a toy example where they say: fundamentally, what's going on is a misalignment between the geometry of the adversarial perturbation set and the inherent geometry of the data. That's the theoretical analysis they do. And with that, I'll finish here. I hope this was clear enough, and goodbye.
[ { "end": 8, "start": 0, "text": " Hi there! Today we're looking at Adversarial Examples Are Not Bugs, They Are Features by Andrew Elias et al." }, { "end": 18, "start": 8, "text": " So this paper is pretty interesting as a catchy title and we'll try to kind of dissect what it says." }, { "end": 25, "start": 18, "text": " So first of all, in the abstract they say adversarial examples have attracted significant attention," }, { "end": 30, "start": 25, "text": " but the reasons for their existence and pervasiveness remain unclear." }, { "end": 35, "start": 30, "text": " So if you don't know what an adversarial example is, an adversarial example is basically the following." }, { "end": 45, "start": 35, "text": " Say you have an image classifier, right? Classifier, boom, neural network, image here, and the image is of a, let's say, a cat." }, { "end": 57, "start": 45, "text": " Right, this is my best attempt at a cat, bang, cat. And you feed it through the classifier and the classifier says cat." }, { "end": 67, "start": 57, "text": " Now if you perturb this image, if you derive an image from it and you perturb it just very slightly, very subtly," }, { "end": 75, "start": 67, "text": " so you introduce some pixels here, there, here, there, right, you change some pixels in a very targeted way," }, { "end": 85, "start": 75, "text": " and you feed that new image through here, then the classifier will say dog or something really, you can make it say anything like airplane or," }, { "end": 92, "start": 85, "text": " I don't know, sky or whatever you want. So these are called adversarial examples." }, { "end": 100, "start": 92, "text": " And it's true, their existence and pervade, the reasons for their existence and pervasiveness remain unclear." }, { "end": 107, "start": 100, "text": " They say we demonstrate that adversarial examples can be directly attributed to the presence of non-robust features." }, { "end": 114, "start": 107, "text": " So they're basically, their paper is about these non-robust features and they define later what they mean exactly." }, { "end": 126, "start": 114, "text": " But here they say features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans." }, { "end": 135, "start": 126, "text": " And this is pretty neat. So the fundamental idea, as I understand it, and I'm going to take this away right here," }, { "end": 147, "start": 135, "text": " that if you have images, let's say here of cats, and I'm going to draw another one over here, if you have an image, say, of cats," }, { "end": 158, "start": 147, "text": " there is multiple features in this image and the feature is something that the classifier can pick up on and kind of learn to," }, { "end": 170, "start": 158, "text": " this is a horrible cat, learn to classify images from. So features that we humans generally use are a cat has ear," }, { "end": 178, "start": 170, "text": " ear, eyes, whiskers, right, and the general relationship to each other of these things." }, { "end": 187, "start": 178, "text": " This is what constitutes a cat. And that's how we classify it. But also they say there are other features that are also very indicative, right?" }, { "end": 203, "start": 187, "text": " If you think what differentiates a cat from a dog and a dog here, let's pick fluffy ears, also eyes, yeah, not going to go further with the dog too much." }, { "end": 214, "start": 203, "text": " What differentiates a cat from a dog? 
And we, of course, we would say, well, the head shape is different and the ears are different and the relationship to them," }, { "end": 223, "start": 214, "text": " to each other are different, but it could also be, and this is a simplistic right now, right? But it's also that cats, for example, have different fur than dogs." }, { "end": 234, "start": 223, "text": " And yeah, being overly simplistic here, but bear with me. So let's say in our hypothetical world, cats have fur that goes like this, left to right, right?" }, { "end": 248, "start": 234, "text": " Every hair is basically vertical, sorry, horizontal. If you look at it like that and dog fur, on the other hand, is always like this, right?" }, { "end": 260, "start": 248, "text": " This is vertical, right? Top to bottom. And so the classifier might just as well pick up on the fur direction in order to classify images, right?" }, { "end": 268, "start": 260, "text": " Since all cats have that type of fur and all dogs have that other type of fur, the classifier might just as well pick up on that, right?" }, { "end": 272, "start": 268, "text": " And to us humans, we don't really pay attention to these things because they're minute, right?" }, { "end": 280, "start": 272, "text": " You don't look at the directions of individual hairs to classify in an animal to cat or dog." }, { "end": 287, "start": 280, "text": " You would much rather go for these kind of large features like where are the ears, how do they look and so on." }, { "end": 295, "start": 287, "text": " But a classifier, there's actually you can make an argument that the classifier would more likely pick up on the fur direction, right?" }, { "end": 304, "start": 295, "text": " In order to in order to classify since we're using convolutional neural networks and they're generally neighborhood pixel neighborhood operators." }, { "end": 313, "start": 304, "text": " It can much easier pick up on these patterns than it can on the general relationship of the of the large features." }, { "end": 325, "start": 313, "text": " So if a classifier now learns that cats always have fur like this and dogs always have fur like that, what we could do is we can go over here to the dog and change its fur, right?" }, { "end": 329, "start": 325, "text": " Change in the image, change its fur to this direction." }, { "end": 335, "start": 329, "text": " Now, to us humans, that would still very much look like a dog because the fur direction is almost imperceptible." }, { "end": 342, "start": 335, "text": " But to the classifier that has only learned, hey, a cat always has this type of fur and the dog always has that type of fur." }, { "end": 346, "start": 342, "text": " That new image would totally look like a cat." }, { "end": 355, "start": 346, "text": " Right. So this paper argues exactly that this paper argues that in the data set, there are features and these are real features like this." }, { "end": 361, "start": 355, "text": " This actually could be the case that cats fur is always like that and dogs fur is always like this." }, { "end": 365, "start": 361, "text": " It could be the case and the classifier could pick up on this." }, { "end": 376, "start": 365, "text": " Right. And then the adversarial examples, the reason why they exist is because the classifier has picked up on these imperceptible features." }, { "end": 383, "start": 376, "text": " And so by changing the features, we can change the classifiers decision." 
}, { "end": 386, "start": 383, "text": " And without changing the image in a large scale." }, { "end": 397, "start": 386, "text": " So they they say that they make this hypothesis and they kind of they say, OK, we established a widespread existence in standard data sets." }, { "end": 401, "start": 397, "text": " So they kind of give supporting evidence for their hypothesis." }, { "end": 410, "start": 401, "text": " And then they say, finally, we present a simple setting, which is a theoretical setting where we can rigorously tie the phenomena" }, { "end": 418, "start": 410, "text": " we observe to a misalignment between the human specified notion of robustness and the inherent geometry of the data." }, { "end": 421, "start": 418, "text": " All right. So it's kind of different pieces of the of this paper." }, { "end": 424, "start": 421, "text": " And we're going to look at them in succession." }, { "end": 438, "start": 424, "text": " So the introduction, we largely skip, except that their main claim here is specifically we claim that adversarial vulnerability is a direct result of our models, sensitivity to well generalizing features in the data." }, { "end": 445, "start": 438, "text": " So that's the core point, I think, is well generalizing features, which is what we mentioned." }, { "end": 458, "start": 445, "text": " These are features that actually describe the data well, but but features that are kind of imperceptibly small to humans or that don't fit our notion of robustness." }, { "end": 465, "start": 458, "text": " All right. So they go on and they define more clearly what they mean here." }, { "end": 468, "start": 465, "text": " Here, whenever we talk of a feature, right?" }, { "end": 475, "start": 468, "text": " Remember, we had the our classifier here, then we input an image and the image is called X." }, { "end": 484, "start": 475, "text": " Right. And that classifier, usually, if we look at it closer, consists of multiple layers of interconnected neurons, whatever." }, { "end": 490, "start": 484, "text": " And the last layer will be an output layer into different classes." }, { "end": 491, "start": 490, "text": " Right." }, { "end": 503, "start": 491, "text": " And so the features, when they say a feature, what they mean specifically is the last here, the last representation before it goes into the classifier." }, { "end": 510, "start": 503, "text": " So the way you would classify them and here they just establish a two class setting." }, { "end": 520, "start": 510, "text": " The way you would establish that is you have feature one, feature two, feature three, and you have a weight vector W1 for each feature W2, W3." }, { "end": 526, "start": 520, "text": " You make the inner product and that will give you a Y hat." }, { "end": 530, "start": 526, "text": " Basically, if that is high, you say it's class one." }, { "end": 533, "start": 530, "text": " If that is low, you say it's class minus one." }, { "end": 538, "start": 533, "text": " So the classes here are plus one and minus one, just to make things simple." }, { "end": 547, "start": 538, "text": " So but you see the features are basically what comes out after these layers, what is then used to make a linear classification." }, { "end": 552, "start": 547, "text": " This last thing is basically just a logistic regression." }, { "end": 558, "start": 552, "text": " So you can think of the features as the output of the neural network, but before it goes into the classifier." 
}, { "end": 564, "start": 558, "text": " So a feature basically since then, it's linearly classified." }, { "end": 569, "start": 564, "text": " If the feature is high, it will give a signal for one class." }, { "end": 576, "start": 569, "text": " And if a feature is low, it will give a signal for the other class, depending on, of course, if this W is negative or positive." }, { "end": 583, "start": 576, "text": " All right, so they say we call a feature row useful." }, { "end": 588, "start": 583, "text": " And if this thing holds here, what is this thing?" }, { "end": 591, "start": 588, "text": " This thing means so the expectation over the dates." }, { "end": 598, "start": 591, "text": " So generally in the data set, this must hold Y times the feature." }, { "end": 599, "start": 598, "text": " So why is the class?" }, { "end": 601, "start": 599, "text": " And remember, it's plus or minus one." }, { "end": 613, "start": 601, "text": " And the feature, as we've seen, is some some number Y times a feature must be higher than some some number." }, { "end": 615, "start": 613, "text": " So what does it mean when a product is high?" }, { "end": 619, "start": 615, "text": " It means either both are high or both are low." }, { "end": 622, "start": 619, "text": " So they're correlated. That's what that means." }, { "end": 643, "start": 622, "text": " So basically, this is says a feature F is useful if whenever it an example, X is of class one, if it's class class one or let's if it if Y is one plus one, then F is high." }, { "end": 651, "start": 643, "text": " And whenever Y is minus one, then F is low, which means it's high in the negative direction." }, { "end": 655, "start": 651, "text": " Right. So this is this is our this is intuitive." }, { "end": 665, "start": 655, "text": " Right. If a feature is useful, it means it should say one thing in samples of class one, then it should say another thing in samples of class two." }, { "end": 671, "start": 665, "text": " Then I can actually use the feature to make a decision when it's, you know, very correlated with the class." }, { "end": 676, "start": 671, "text": " So that, you know, that makes perfect sense." }, { "end": 681, "start": 676, "text": " So that's kind of when is a feature useful if it correlates with the class label?" }, { "end": 689, "start": 681, "text": " Yes. Cool. But the usefulness simply any feature basically that classifier will extract will be useful." }, { "end": 693, "start": 689, "text": " That's an assumption we can make. Otherwise, the classifier wouldn't extract it." }, { "end": 701, "start": 693, "text": " So the neural network here, that's an assumption, will only extract useful features." }, { "end": 714, "start": 701, "text": " Right. Because the non-useful features, there would simply be no reason for it to extract them because they don't contribute to solving the task, because they're not correlated with an output class." }, { "end": 721, "start": 714, "text": " Right. So next, they define robust, robustly useful features." }, { "end": 725, "start": 721, "text": " So in addition to being useful, they're now also robust." }, { "end": 735, "start": 725, "text": " What does it mean? Again, we want a correlation of why and the feature to be higher than some constant." }, { "end": 745, "start": 735, "text": " But not only the feature of the image X, but the feature of the image X that has been perturbed by a small perturbation." 
}, { "end": 750, "start": 745, "text": " So and we take the infinum here over a class of perturbations." }, { "end": 755, "start": 750, "text": " Of course, this class of perturbations is exactly the adversarial perturbations." }, { "end": 764, "start": 755, "text": " Basically, what this means is it says that however we try to perturb X, right, and the infinum here means that the minimum correlation," }, { "end": 777, "start": 764, "text": " however we try to make the feature not correlated with Y, however much we try, we can't get it lower than some some gamma, some number, right?" }, { "end": 787, "start": 777, "text": " We can't we can't get it down. So whatever we try to make the feature bad for the classifier, basically, we can't." }, { "end": 794, "start": 787, "text": " If this holds for a feature, if this is the case, then we call that feature a robust feature." }, { "end": 804, "start": 794, "text": " Right. That feature is robustly useful if it correlates, no matter how hard we try to make it not correlate." }, { "end": 813, "start": 804, "text": " And of course, a non robust features, so a useful non robust feature is a feature which is useful." }, { "end": 820, "start": 813, "text": " You see here is useful. But is not gamma robust feature for any gamma." }, { "end": 825, "start": 820, "text": " So it is a feature that is useful like the cat fur." }, { "end": 830, "start": 825, "text": " Right. So this here, an example of this would be that the cat's eyes and ear position." }, { "end": 839, "start": 830, "text": " Right. We can't just make a small perturbation for the image and make the ears be somewhere completely else." }, { "end": 842, "start": 839, "text": " That's just that would require a large perturbation of the image." }, { "end": 847, "start": 842, "text": " So the position of the ears and eyes are pretty robust features." }, { "end": 864, "start": 847, "text": " But here the cat's fur, no matter how no matter how small we we make this this gamma, we can always kind of change the fur to make the feature not to make the feature not useful." }, { "end": 873, "start": 864, "text": " Right. If we can change the cat fur into a dog fur and the dog fur into a cat fur, then the feature will become not useful anymore." }, { "end": 879, "start": 873, "text": " Because we can, you know, we can we can change that arbitrarily for any image and then the classifier will have no clue." }, { "end": 884, "start": 879, "text": " It can't be like, well, this fur could be of any of any class." }, { "end": 886, "start": 884, "text": " Right. So the feature is not useful anymore." }, { "end": 895, "start": 886, "text": " So this is a non robust feature. The technique you can say any feature that is useful but not robust is a non robust feature." }, { "end": 901, "start": 895, "text": " All right. So this is kind of the definition of what robust and non robust features are." }, { "end": 914, "start": 901, "text": " Yeah. Remember, maybe remember robust features like position of the ears and their shape and non robust features would be which direction are the individual hairs in the fur going." }, { "end": 921, "start": 914, "text": " Right. And in our world where cat fur is going different ways than dog fur." }, { "end": 930, "start": 921, "text": " So they now go into experimental evidence for their for their hypothesis." 
}, { "end": 939, "start": 930, "text": " And here you have to understand they do two experiments which give pretty good indication that their hypothesis is actually correct." }, { "end": 944, "start": 939, "text": " And what you have to understand before this is is two things." }, { "end": 957, "start": 944, "text": " First of all, here you basically you just have to assume that they already they have some procedure where they can do the following where they can take an image of the training data set" }, { "end": 962, "start": 957, "text": " and they can decompose it into its robust and non robust features." }, { "end": 966, "start": 962, "text": " Right. Don't I mean don't ask yet how they do this." }, { "end": 971, "start": 966, "text": " But they can decompose it into these two parts." }, { "end": 975, "start": 971, "text": " Right. So that's assumption one. They have a procedure that can actually do that." }, { "end": 985, "start": 975, "text": " And then number two is what they what they do here is basically the general theme of these experiments is they they have a training data set." }, { "end": 991, "start": 985, "text": " Right. This is the original training. They create a derived version of it." }, { "end": 996, "start": 991, "text": " So let's put a tick here. This is a derived version of the data set." }, { "end": 1004, "start": 996, "text": " Then they train a regular neural network with that." }, { "end": 1008, "start": 1004, "text": " So what you can do with a neural network if you train one." }, { "end": 1021, "start": 1008, "text": " All right. What you usually do is you feed images X you feed images in it gives you some output Y hat and you say well but I know why is the true label." }, { "end": 1024, "start": 1021, "text": " So I feed an image of a cat that the network says airplane." }, { "end": 1034, "start": 1024, "text": " You say well but this should be a cat. So please make this why more to be more to be." }, { "end": 1040, "start": 1034, "text": " Please make this why had more be like why. And then you have a loss function here." }, { "end": 1042, "start": 1040, "text": " You say this is wrong. Please correct this." }, { "end": 1047, "start": 1042, "text": " You back propagate and all the network in here will update to make that a bit more likely." }, { "end": 1049, "start": 1047, "text": " That's how you train usually in our network." }, { "end": 1063, "start": 1049, "text": " Now what you can do is if you want to become robust adversarial examples you can do what is called adversarial training which means that you have the same network here." }, { "end": 1080, "start": 1063, "text": " But of each of the training data points you create a derived version an adversarial example to that to this X you feed the adversarial examples through the network together with the original examples." }, { "end": 1090, "start": 1080, "text": " Then this will give you some why hat to and then you say but this should also be equal to why." }, { "end": 1096, "start": 1090, "text": " Basically you train the classifier also on adversarial examples right." }, { "end": 1106, "start": 1096, "text": " Since the hypothesis is if you train on an image data set then you can teach the classifier about that data set right." }, { "end": 1118, "start": 1106, "text": " Like you do with the regular data set say well OK I can now just train on adversarial examples and my classifier will be able to better classify these correctly right." 
}, { "end": 1124, "start": 1118, "text": " This usually works it's called adversarial training and it's been a kind of standard method to make your classifier robust." }, { "end": 1127, "start": 1124, "text": " They don't do that here. They don't do this." }, { "end": 1139, "start": 1127, "text": " They simply want to say OK we now have we have a regular training procedure right like this except for what we change is here the training data set." }, { "end": 1152, "start": 1139, "text": " We change this to in one case for example only robust images. So we've changed all the X to be only robust and we do the regular training procedure." }, { "end": 1159, "start": 1152, "text": " And then we evaluate that resulting classifier here this thing we evaluate that." }, { "end": 1165, "start": 1159, "text": " How does that behave. It's kind of a new approach where you modify the date the original data set." }, { "end": 1177, "start": 1165, "text": " So what did they do. First of all they decompose this training data set into a version that is only robust features right." }, { "end": 1186, "start": 1177, "text": " We assume we have such a procedure. We then train a regular neural network on that right." }, { "end": 1195, "start": 1186, "text": " We train a regular neural network on this on this data set and what we get is two things." }, { "end": 1199, "start": 1195, "text": " First of all good standard accuracy. What does good standard accuracy mean." }, { "end": 1208, "start": 1199, "text": " It means that we we can test it on what's called the unmodified test set." }, { "end": 1215, "start": 1208, "text": " So the the test set the original test set of the data set the test set belonging to this training data set." }, { "end": 1219, "start": 1215, "text": " We can test it on that and it works just fine. Right." }, { "end": 1228, "start": 1219, "text": " So that basically means that the robust features are predictive of the of the kind of they generalize well." }, { "end": 1239, "start": 1228, "text": " It means that if I train a classifier only on robust features that can actually classify well to to the to the test set." }, { "end": 1248, "start": 1239, "text": " Right. So that means that's standard accuracy standard accuracy is how well do I classify the test set just an unmodified test set." }, { "end": 1254, "start": 1248, "text": " So they also obtain good robust accuracy which means that what is robust accuracy." }, { "end": 1261, "start": 1254, "text": " Robust accuracy means your accuracy on adversarial examples of the test set." }, { "end": 1270, "start": 1261, "text": " And usually classifiers are vulnerable to this classifier is usually obtained good standard accuracy but bad robust accuracy." }, { "end": 1279, "start": 1270, "text": " But if I only train my classifier on what they call robust features then I all of a sudden retain good standard accuracy." }, { "end": 1287, "start": 1279, "text": " But I also get good robust accuracy which means that." }, { "end": 1296, "start": 1287, "text": " It gives pretty good support to their hypothesis that the adversarial examples are abusing the fact that the classifiers learn the non robust features." }, { "end": 1313, "start": 1296, "text": " Since if I don't have any non robust features it means my classifier can't learn any non robust features which in turn means my classifier isn't vulnerable to adversarial attacks because they would abuse the fact that the classifier has learned about the non robust features." 
}, { "end": 1318, "start": 1313, "text": " So that's pretty good evidence for their hypothesis." }, { "end": 1329, "start": 1318, "text": " Second thing they do is they now create this on this modified data set where they only have non robust features." }, { "end": 1332, "start": 1329, "text": " Right. So the only thing they have is non robust features." }, { "end": 1335, "start": 1332, "text": " Again they train a standard neural network." }, { "end": 1341, "start": 1335, "text": " They train just a regular neural network on that and they also get good standard accuracy." }, { "end": 1357, "start": 1341, "text": " So this means that also the non robust features as we seen like the cats fur direction can lead to you generalize well to the test set since in the test set also the cats will have that property." }, { "end": 1368, "start": 1357, "text": " But you get bad robust accuracy and this gives further support to their hypothesis if you train a classifier on only non robust features." }, { "end": 1376, "start": 1368, "text": " They are features because they generalize well but they are very vulnerable because they're non robust." }, { "end": 1383, "start": 1376, "text": " Right. So the classifier that has learned about non robust features is vulnerable." }, { "end": 1394, "start": 1383, "text": " They didn't do a third experiment which I find pretty cool where they take they take the training image and of course it's an unmodified training image." }, { "end": 1399, "start": 1394, "text": " So it's robust features will basically say this is a dog." }, { "end": 1406, "start": 1399, "text": " It's non robust features will also say this is a dog because it's a training image of a dog." }, { "end": 1415, "start": 1406, "text": " And what they then do is they derive from this dog an adversarial example towards the cat class." }, { "end": 1422, "start": 1415, "text": " Right. So what does it mean in their hypothesis if their hypothesis is correct." }, { "end": 1427, "start": 1422, "text": " It now means that the robust features still say it's a dog." }, { "end": 1429, "start": 1427, "text": " We can also see this here right." }, { "end": 1437, "start": 1429, "text": " The kind of big shape of the image still is a dog to us humans." }, { "end": 1441, "start": 1437, "text": " But the non robust features will say it's a cat." }, { "end": 1447, "start": 1441, "text": " Right. This hinges on their hypothesis that adversarial examples actually abuse the non robust features." }, { "end": 1456, "start": 1447, "text": " Right. They create an adversarial example. So if their hypothesis is correct the non robust features now say that's a cat." }, { "end": 1465, "start": 1456, "text": " So they derive an entire data set where they change every image to another image and they also change the labels accordingly." }, { "end": 1475, "start": 1465, "text": " And then they train again a regular neural network on this and they look what happens on the unmodified test set." }, { "end": 1487, "start": 1475, "text": " So the unmodified test set will. So imagine if you're the you're this classifier and what you get is an image X and it has robust features." }, { "end": 1493, "start": 1487, "text": " That's a dog and has non robust features say cat and its label." }, { "end": 1501, "start": 1493, "text": " You're asked to predict cat. Right. And then you see the next image and the next image X to the non robust features." 
}, { "end": 1509, "start": 1501, "text": " Maybe it's derived from some other class it will say plain. But the robust the non robust features again say cat." }, { "end": 1522, "start": 1509, "text": " Right. And you're asked to predict cat. So basically the constructed data set where the non robust features always agree with with the label but the robust features they don't." }, { "end": 1532, "start": 1522, "text": " So naturally what you can expect is the classifier will learn to disregard the robust features because they're no longer useful." }, { "end": 1538, "start": 1532, "text": " Right. But it will actually only will learn to view these features." }, { "end": 1544, "start": 1538, "text": " It's different from before before we only had these features. Now we these features are still in there. Right." }, { "end": 1559, "start": 1544, "text": " But they're not informative. So the classifier will naturally learn to pick up on the non robust features and classify and classify according to them so much that if we now test on the test set and we feed in an actual cat." }, { "end": 1568, "start": 1559, "text": " Right. It's of course it's robust features will say cat and its non robust features will say cat and the classifier is able to accurately predict." }, { "end": 1579, "start": 1568, "text": " This is a cat even though the all the images of cats it has seen during training were actually of basically of non cats of here a dog." }, { "end": 1592, "start": 1579, "text": " So this is pretty cool and shows that kind of these these features that these non robust features that adversarial examples abuse since they're created by adversarial examples." }, { "end": 1599, "start": 1592, "text": " They they are actually predictive and generalize to the test set." }, { "end": 1603, "start": 1599, "text": " So that's pretty pretty good evidence for their hypothesis so far." }, { "end": 1617, "start": 1603, "text": " Now the kind of final remaining question is how do they create what is the procedure where they can create a robust and then basically non robust version of the data set." }, { "end": 1623, "start": 1617, "text": " And here is kind of where we get into the into the sort of what I find." }, { "end": 1632, "start": 1623, "text": " Yeah. So here you see basically examples of so this is an original image of a ship in the CIFAR 10 data set I believe." }, { "end": 1637, "start": 1632, "text": " And this is a robust sample." }, { "end": 1639, "start": 1637, "text": " So these are only robust features of the ship." }, { "end": 1644, "start": 1639, "text": " And this is a ship made with only non robust features you see is actually a moose." }, { "end": 1649, "start": 1644, "text": " But the non robust features have been changed to ship." }, { "end": 1655, "start": 1649, "text": " So the way they construct a robust version of the data set." }, { "end": 1661, "start": 1655, "text": " They have a formal definition but the way they do it is as follows." }, { "end": 1666, "start": 1661, "text": " So and then they say OK here is where we where we get into the details." }, { "end": 1670, "start": 1666, "text": " They say imagine we have a classifier." }, { "end": 1678, "start": 1670, "text": " Right. The classifier outputs features and here we call them here they call them G which is the representation." }, { "end": 1680, "start": 1678, "text": " It can be larger than features." }, { "end": 1682, "start": 1680, "text": " It can be a bigger class." 
}, { "end": 1690, "start": 1682, "text": " But in essence G is the features which then goes into the into the classifier and into the labels and so on." }, { "end": 1695, "start": 1690, "text": " So the neural network outputs the features inputs some X." }, { "end": 1705, "start": 1695, "text": " Now what if what if I have another X let's say X prime and I just initialize this with random noise." }, { "end": 1715, "start": 1705, "text": " And if I feed this and I get G prime here and I try to make the two as close as possible by changing X." }, { "end": 1717, "start": 1715, "text": " So I'm going to change my X here." }, { "end": 1725, "start": 1717, "text": " Basically I'm going to change my image such that the outputs the features here match each other as close as possible." }, { "end": 1728, "start": 1725, "text": " What does it mean? And I do this via back propagation right." }, { "end": 1731, "start": 1728, "text": " I match these and I back propagate to X." }, { "end": 1734, "start": 1731, "text": " I can do that with gradient descent." }, { "end": 1744, "start": 1734, "text": " What happens is that my image X will basically pick up will match the image." }, { "end": 1751, "start": 1744, "text": " My X prime will match the X in all the ways that are relevant for the features." }, { "end": 1758, "start": 1751, "text": " Basically I will transfer all of the features from X to X prime." }, { "end": 1761, "start": 1758, "text": " But nothing else right since I start with random." }, { "end": 1766, "start": 1761, "text": " Now what if my classifier and that's what they do." }, { "end": 1770, "start": 1766, "text": " What if the classifier is a robust classifier." }, { "end": 1776, "start": 1770, "text": " So remember we talked about we can actually robustify a classifier by doing adversarial training." }, { "end": 1780, "start": 1776, "text": " What if I have a classifier like such that is robust." }, { "end": 1786, "start": 1780, "text": " If I input an X and it outputs me a feature representation of X." }, { "end": 1792, "start": 1786, "text": " If the classifier is robust that representation will only contain robust features." }, { "end": 1802, "start": 1792, "text": " And then if I have a second image X or and I started from random noise and I match the representation of X." }, { "end": 1811, "start": 1802, "text": " And by changing XR basically I will transfer all of the robust features from X." }, { "end": 1813, "start": 1811, "text": " But nothing else right." }, { "end": 1818, "start": 1813, "text": " Given that I start from random noise here this means random noise has no features." }, { "end": 1822, "start": 1818, "text": " That's the assumption. Random noise has no features since it's random noise." }, { "end": 1834, "start": 1822, "text": " And if I transfer only the robust features basically what I've done is I've have now an image that I know has no non robust features." }, { "end": 1838, "start": 1834, "text": " And only robust features of X." }, { "end": 1845, "start": 1838, "text": " So that's how they derive a robustified version of X." }, { "end": 1851, "start": 1845, "text": " Second how do they derive a non robust version." }, { "end": 1858, "start": 1851, "text": " And that's even even easier if I have a classifier." }, { "end": 1865, "start": 1858, "text": " A regular classifier and I want a non robust version of X." }, { "end": 1871, "start": 1865, "text": " I have X input output G output some label." 
}, { "end": 1882, "start": 1871, "text": " What I do is I simply derive an adversarial example of X like we did before adversarial example in here out here." }, { "end": 1887, "start": 1882, "text": " And that gives me some X Y2 which is different from Y right." }, { "end": 1895, "start": 1887, "text": " If I have a adversarial example then basically I've transferred." }, { "end": 1901, "start": 1895, "text": " I've transferred the non robust features that lead to class Y2." }, { "end": 1909, "start": 1901, "text": " I've transferred the non robust features here while still maintaining the robust features from here." }, { "end": 1916, "start": 1909, "text": " So if this is too abstract imagine here X is an image of a dog right dog." }, { "end": 1925, "start": 1916, "text": " And I derive from it an adversarial image that now says airplane right." }, { "end": 1932, "start": 1925, "text": " So the robust features will still be of a dog will still be of the original image." }, { "end": 1938, "start": 1932, "text": " But the non robust features will be of the airplane class." }, { "end": 1948, "start": 1938, "text": " So that's how I derive a non robust non robust version that has features of kind of one." }, { "end": 1952, "start": 1948, "text": " Robust features of one class but non robust features of the other class." }, { "end": 1955, "start": 1952, "text": " That's what you see up here with the moose right." }, { "end": 1963, "start": 1955, "text": " The moose clearly has been started from the image of a moose and then has been has received non robust features from the ship class." }, { "end": 1968, "start": 1963, "text": " And that's just your classic adversarial example procedure." }, { "end": 1971, "start": 1968, "text": " So that's the that's the kind of procedure." }, { "end": 1986, "start": 1971, "text": " And so what's kind of my criticism here if you look at the first part the first part where they say well in order to determine what the robust features are we actually need a classifier that's already robust." }, { "end": 1994, "start": 1986, "text": " So we've seen before we have a we have a data set sorry let's go up here." }, { "end": 2005, "start": 1994, "text": " They say aha here we have a data set right and we can disentangle this and then it will which color have we not used we have a data set." }, { "end": 2009, "start": 2005, "text": " We only we robustify the data set to a robust data set." }, { "end": 2019, "start": 2009, "text": " We train a standard neural network and that gives us good robust accuracy which is really cool because we don't do anything special during training and we still get good robust accuracy." }, { "end": 2030, "start": 2019, "text": " But in order to do this procedure here this one you actually have to have a robust classifier right." }, { "end": 2043, "start": 2030, "text": " You have to have this already robustified classifier which you have obtained by adversarially training the robust classifier." }, { "end": 2052, "start": 2043, "text": " Basically what you're doing now is you take this adversarial training procedure which the point here is that you don't do anything different during training right." }, { "end": 2069, "start": 2052, "text": " But here you take the adversarial training procedure and via training the robust classifier via changing this data set here you basically get good robust accuracy which to me is just a reflection that you've obtained the data set using this robust classifier in the first place." 
}, { "end": 2082, "start": 2069, "text": " I mean yeah of course their their method gives a hint that I can actually this is actually due to things in the data set themselves right." }, { "end": 2097, "start": 2082, "text": " But there and I mean that's really important because it surely means that it's not a point of let's say the the classifier itself but it's a point of the data set which also say OK." }, { "end": 2114, "start": 2097, "text": " It also explains why these adversarial examples transfer between classifiers if you have two classifiers that are different but classify the same thing they're vulnerable to the same adversarial example which basically means it must be some property of the data set that these things learn." }, { "end": 2137, "start": 2114, "text": " But to do then say we have a procedure to extract the robust features and if we only train on the robust features we become robust right as here but you obtain the robust features by using a robustified classifier which you have adversarially trained to me that's kind of kind of back door in adversarial training into this whole procedure." }, { "end": 2161, "start": 2137, "text": " And yeah so that's that's kind of my first criticism my second criticism is the fact that you know I mean it's it's an interesting take on this but this whole notion this whole seeing of these features are robust these features are non robust is basically just reframing the problem of adversarial examples in terms of in terms of features." }, { "end": 2167, "start": 2161, "text": " It says nothing why these features are there." }, { "end": 2195, "start": 2167, "text": " It's just postulating that they're there. It says nothing why they're there. It says nothing about why the classifiers pick up on them or how they do it or how you know how this is to be mitigated without first having a robustly trained network to extract the robust features." }, { "end": 2198, "start": 2195, "text": " It's very much widely or not." }, { "end": 2205, "start": 2198, "text": " Things are very much widely not known about these samples it's just a reframing of the problem, I feel." }, { "end": 2215, "start": 2205, "text": " And it's cool experiments I mean they, it does show some a lot of things about these adversarial examples but certainly not an explanation." }, { "end": 2219, "start": 2215, "text": " I find, at least that's my opinion." }, { "end": 2234, "start": 2219, "text": " Alright, so down here then they show that they make an kind of simplified version of this a theoretical setting where they can analyze this." }, { "end": 2254, "start": 2234, "text": " And they basically say, okay, this is generally what happens at the fundamental level at the fundamental level, you have classes, and let's say the classes are distributed like, like this right this these are the examples in the data set and they're distributed like that right." }, { "end": 2258, "start": 2254, "text": " Mean, and you have some covariance." }, { "end": 2277, "start": 2258, "text": " So they're distributed like that. If I have two classes like this, such as here, right, and they're distributed like that, and I create like the separator, the linear classifier, the linear classifier will classify like this it will be like super this is the best linear classifier." }, { "end": 2279, "start": 2277, "text": " Right, we can calculate this accurately." }, { "end": 2283, "start": 2279, "text": " But what do I say when I say okay." 
}, { "end": 2294, "start": 2283, "text": " I want an adversarial example adversarial examples means that I can shift my examples by a little bit but achieve a big change in output." }, { "end": 2298, "start": 2294, "text": " And since, since this distance here." }, { "end": 2307, "start": 2298, "text": " Right, so if I have a sample here, I need to go a long way to the boundary to achieve another output but if I go into another direction." }, { "end": 2329, "start": 2307, "text": " Right, if I go down here, I only need to go a very short way. And since adversarial examples as they're specified, they say, okay, we want to go a short way and the short way is characterized by going a short way in any direction, right, this is a terrible circle in any direction, we want to go a short way." }, { "end": 2339, "start": 2329, "text": " That's another example. You see that if I have this any direction property, there's actually directions where this classification boundary is very, very close." }, { "end": 2355, "start": 2339, "text": " And so that's what they say this is a fundamental misalignment between the geometry of the data, which is like this, and the geometry of how we specify adversarial examples, which is, you know, kind of equal in each direction, which leads to that." }, { "end": 2380, "start": 2355, "text": " And they say, okay, what if I now robust parameters so what if I adversarially train my network to be robust, it basically means that I expand my data, because I add adversarial examples right of the circle here, I actually add adversarial examples, so my, my class, my data distribution will actually more like this." }, { "end": 2393, "start": 2380, "text": " And my separating hyperplane will change here. And the geometry of the adversarial examples will be much more aligned with my separating hyperplane." }, { "end": 2407, "start": 2393, "text": " So this is kind of a toy example of where they say this is fundamentally what's going on. There's a misalignment between the geometry of the adversarial examples and the inherent geometry of the data." }, { "end": 2420, "start": 2407, "text": " So that's kind of the theoretical analysis they do. And with that, I finish here, and I hope this was clear enough and goodbye." } ]
_N_nFzMtWkA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Reinforcement Learning, Fast and Slow
[ "Science & Technology" ]
[ "machine learning", "reinforcement learning", "meta-learning", "deep rl", "deep reinforcement learning", "deep neural network", "atari", "alphago", "deepmind", "google", "td-gammon", "episodic memory", "inductive bias", "bias variance tradeoff" ]
Abstract: Deep reinforcement learning (RL) methods have driven impressive advances in artificial intelligence in recent years, exceeding human performance in domains ranging from Atari to Go to no-limit poker. This progress has drawn the attention of cognitive scientists interested in understanding human learning. However, the concern has been raised that deep RL may be too sample-inefficient – that is, it may simply be too slow – to provide a plausible model of how humans learn. In the present review, we counter this critique by describing recently developed techniques that allow deep RL to operate more nimbly, solving problems much more quickly than previous methods. Although these techniques were developed in an AI context, we propose that they may have rich implications for psychology and neuroscience. A key insight, arising from these AI methods, concerns the fundamental connection between fast RL and slower, more incremental forms of learning. Authors: Matthew Botvinick, Sam Ritter, Jane X. Wang, Zeb Kurth-Nelson, Charles Blundell, Demis Hassabis https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(19)30061-0
Hi there, today we're looking at reinforcement learning, fast and slow, by Matthew Botvinick, Sam Ritter, Jane X. Wang, Zeb Kurth-Nelson, Charles Blundell and Demis Hassabis. These people are from Google DeepMind and this is a review of kind of a development in reinforcement learning, especially as it pertains to kind of how humans learn or what we can understand from the RL world that translates over to human learning. Alright, so basically their argument here is that the first wave of deep RL, as you see here, is powerful but slow. And they give examples of this. So in box one, box one is this. So they believe there's an image missing here. This is Backgammon, TD Gammon. This is the famous DeepMind Atari playing bot and this is kind of the 3D labyrinth playing bot. So there's been a number of advances in RL and especially what they talk about is deep RL. So when we talk about reinforcement learning, the easiest case is where you have an agent and an environment. Alright, so the agent will observe some observation O from the environment and then based on that the agent will perform an action A. And then the environment will give back a reward and also a next observation. So this is O0, O1, and then this is A0 and then here you give A1, A_i. So basically this goes back and forth and back and forth. The agent performs an action, the environment gives a reward and the next observation. So this could be for example here in the Atari world. The observation is the screen itself. And then the agent needs to perform an action which is an input of the joystick or pressing some button. You can see the individual actions actually listed here. And then the reward will be given to the agent via a number which I guess is the same number as up here. So the task is to maximize the reward simply by... So the difference is you're not doing this in a supervised manner. So you're not telling the agent what would be the correct action to do. You simply tell it whether what it did was good or bad by giving it a high or a low reward. Right, so that's reinforcement learning. So what is deep reinforcement learning? Deep reinforcement learning simply means the agent maps the observation to the action via a deep neural network. So deep neural network. That's deep reinforcement learning where the mapping or some part of the agent consists of a deep neural network. You see for example here there is a deep neural network mapping the observation to the action. As well as down here but it's a bit more complicated. So they argue that the first wave of this was powerful but slow meaning kind of you need a lot of samples. And they give two sources of why it's slow, why you need a lot of samples. They say the two factors are incremental parameter adjustment and weak inductive bias. So incremental parameter adjustment means basically that you have to update or train your neural network in a very small incremental way. In order to basically, because you train it one by one, right? You train your neural network step by step. You have to make small steps in order to not forget what came before. You can't fundamentally readjust your neural network to every new batch of observations because then that's going to destroy all the information you've learned of the old one. And then weak inductive bias here is basically an understanding of these neural networks. They are general function approximators and they can approximate any function.
So if you just think in terms of kind of, I don't know, let's say polynomials and what kind of polynomials are there? This polynomial, this polynomial, this polynomial, this weird polynomial. If I have a function that can approximate all of these then I have a weak inductive bias whereas if I kind of know, okay all my polynomials are the polynomial that I'm looking for ultimately, I'm very sure it's a third degree polynomial, right? So something like this or like this or like this. So this is much less of a class of functions that I can fit but if I'm sure that the function that I'm trying to fit falls in this category then I'm much faster. So this is then called a strong inductive bias is where I build into the model basically I tell it beforehand. Here is a very restricted class of functions that you can fit. Whereas in a weak inductive bias I won't tell it that. I'll simply say, well model you could fit any function you want and I'm just giving you training samples. So this is a classic example of a bias variance trade-off where there is a lot of variance in these models meaning you can fit also a lot of functions but here because you bias the model towards a certain set of functions it can lower this variance and in this case here it speeds up learning because you don't have as much variance that means you can basically go faster while learning. Alright, so they propose two solutions to this problem of this kind of to mitigate these problems that make reinforcement learning faster or have made reinforcement learning faster. This is a review remember. So the first one is episodic deep reinforcement learning and this episodic deep reinforcement learning is specified here, fast learning through episodic memory. So the suggestion in this field of research is to augment the neural network or the agent by a memory and the memory could look something like this. So in a lot of these RL frameworks what a principal component of the agent is, so the agent will get an observation O and one of the things it has to do is estimate the value of this observation of this state. So basically the agent is in some state let's say you play pong right and you are here down and the ball comes your way up there right there's a little arrow sorry so the ball flies away from you and you're all the way down which basically means draw this bigger. So here you are down here and the ball is here flying up there. So one task in these in these agents that occurs often is to estimate the value of this observation basically means how much reward am I expecting from this state going into the future. In this case I probably will not expect a lot of reward since I can't move up fast enough right to catch the ball. So this I would assign this state a pretty low value whereas if I were up here I would assign this state quite a high value. 
So as we've already seen this is a deep neural network mapping we learn to assign value to different states and this is one of the parts that takes a long time and these methods they are the one that's depicted here replaces this value estimation by saying okay we have an observation we somehow need to estimate its value why don't we look for similar observation so we have some kind of memory right and we go with our observation and we retrieve O'1 O'2 O'3 that are somehow similar right so in our in our pong example I'm down I'm up here ball moves here I could be looking now at at states where I was here or where I was here like really close or where the ball flew a bit differently but still in the same direction or down here right so all these states are kind of close to my state and I can I already have I already have played these since they're in my memory right so with every one of them I can also retrieve the reward that I got so I because I already know the problem in reinforcement learning is before you do an action you don't know what the reward will be but here I already know because I've played it I've already experienced it it's in the past so I know what reward I got right so and this is exactly what they say over here they basically say here we have time time runs this way we're in state one then in state two and so on and we perform actions and and get rewards and what we can do is we can save these states into this memory as along with their sum of discounted rewards that we collect from that state on and then later this is like a spongebob reference if we want to estimate the value of some new state right what we do is we retrieve all of these states from memory calculate a similarity score over them and with with we wait basically we add their rewards weighted by how similar they are to the state that we want to compute so this basically amounts to averaging over states respective by how close they are to the current state right this is kind of a soft a soft way of saying I only select the states which are close and that gives you a value estimate for the new states so basically this means you just got rid of having to train a value function and this will speed up your reinforcement learning quite a bit if you don't have to train that if you already have good value estimations from your previous experience that's great of course there are a number of problems associated with that namely if this memory here for example becomes stale it doesn't represent the future rewards quite as well there is also a question of which states do you keep in memory just the good ones or do they have to have a certain property do you have to have some diversity in there and of course the biggest problem here the biggest problem is how do you know when two states are similar or when they aren't it might be easy in a situation like pong where I only have like three variables like position y position of my of my paddle and position of the ball and velocity of the ball those are like I can specify those in five numbers but if it gets harder than that if it's like this labyrinth setting full 3d environment then we have no clue which states are similar to each other and what these what most end up doing is they will train you guessed it they will train a deep neural network to give you this similarity score between states right how they do it is is a different question but presumably you can train this network offline basically meaning you can pre train it you could pre train it and then the so we 
have two stages stage one pre train train similarity dnn right and then once we've done that second stage do reinforcement learning using this and the claim here is that by having this done this this second stage will become faster so it it doesn't really solve the problem of the sample efficiency but what it says is okay the actual reinforcement learning part will become faster because we've already done the work previously but basically by by including this similarity score sorry whatever dnn by including this in the language of the review here we have successfully introduced an inductive bias into the rl procedure because the rl procedure now can't just fit any function we say we tell it your value function is one that conforms to our notion of similarity that we've pre trained this restricts the rl algorithm and we give it an inductive bias and as long as our similarity score is useful for the rl algorithm it can speed up its learning because it doesn't have to learn the value function itself all right cool so the second part here is a bit more abstract it's called meta reinforcement learning speeding up deep rl by learning to learn these kind of learning to learn approaches are quite abundant in the literature people try this usually there's a i mean it's it's very large scale experiments basically you have i think i believe they show it somewhere here yeah you have like some um some outer loop where you would say that's this thing here what the outer loop does is in each loop it samples one environment so it samples one environment from a distribution of environments so now you not only have one environment but you say okay if i'm going to navigate this maze one trying to learn to navigate this maze i'm going actually to learn to learn to navigate many mazes right so it's not like you train one agent to learn you train one agent to navigate many mazes that would just be classic reinforcement learning but you want to train an algorithm that helps an agent learn as a particular maze and you do that by training your helper algorithm on a variety of agent maze combinations so in each step you sample one environment like this this here and you then have an inner loop here you fully reinforcement learn train an agent in the classic sense on this environment right you see here action action observation reward right but the agent receives some kind of signal from outside so the outside algorithm will kind of tell the agent how to approach the problem right this could be that it initializes the the weights here you see that the outer loop trains the parameter weights which determine the inner learner that interacts with an environment during the duration of the episode for every cycle of the outer loop a new environment is sampled from a distribution of environments which share some common structure so basically the one would expect when you train this that these parameters here this could be for example it could be the initial weights of the network that the agent uses that this one possibility right this is very abstract here this meta reinforcement learning it could be literally anything that the outer model teaches the inner model or gives to the inner model right and you you train both of these with reinforcement learning so the inner you train with reinforcement learning on the individual rewards and then you can train the outer loop on the reward that the entire app agent environment episode achieved so the that's kind of a two loop situation and yeah so that's meta reinforcement 
learning again it's very unspecified what it does but as you can already see if you now have such an algorithm that kind of tells the the inner agent just as an example how to initialize its weights right how to initialize the weights of its deep neural network if you have that here then the agent you will technically bias it this is again an inductive bias so you will give it inductive bias towards what you think are good weights to generally learn these maze structured environments right since the outer loop you can update it way slower because it needs to learn over a longer time horizon and it needs to learn things for a different variety of environments but once you have good kind of initial weights for a particular environment then this agent in here can learn much faster given an individual environment so the agent you instantiated and then you give it good starting weights or some other kind of signal about the environment and then it can go much much faster at learning the environment thereby you have just sped up this inner agent by providing it an inductive bias and that's basically what the claim of the review is that by providing these models with a larger inductive bias you may then speed up their learning because you've kind of told them what good functions are from the outset of course you see the problem again here well the problem the problem is of course you actually need to train this outer loop and the outer loop may actually take much much longer to train than a single and unbiased reinforcement learning thing but again what you could do is you could pre-train on a distribution of environments and then once a new environment shows up that is similar to this distribution you can then have the agent instantiated and learn much faster so again kind of this two-step process you could pre-train this outer loop and then the inner loop will be much faster than if you didn't have the outer loop all right so those are basically the kind of the kind of outlines they do here they then kind of do a connection to like the brain and so on and they relate this to biology and biological learning but ultimately their conclusion is here that whenever you want to do whenever you have slow rl or this is at least my conclusion from their article whenever you have slower you can transform it to fast rl rl but you have to outsource the slow rl slow something else slow x you have to outsource the slowness to some other part so if you want to do fast rl you have to outsource the slowness and what the slowness provides is an inductive bias which means yeah if you want to do like fast rl with episodic memory you have to learn the similarity function which again which might be slow in itself but then the rl will be fast and if you want to do this via kind of a an outer meta learner again this learning of the outer meta learner might be slow but then the inner learner will be fast in a connection to the kind of biological aspect of this they do make a connection which which i find is appropriate in that for example the human brain the reason we can learn things fast let's say in the physical world picking things up dropping things down or navigating our paths we're incredibly good at this navigating through like a weird terrain with rocks in the way is because of course our brains have been adapted to these kinds of environment over generations so there is an outer process like evolution which is this kind of outer loop and it instantiates the inner loop which are the humans that kind of live or 
die by their ability to to navigate better so the if if the outer loop does a good job of only keeping the humans alive that can navigate well then the individual human in here that that does this the individual human given a landscape with rocks will then be much faster at learning to navigate it all right so that was it for that i it's an interesting article to read especially the connections to the kind of biological aspects and with that have a nice day
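One way to make the episodic part above concrete: store an embedding of each visited state together with the discounted return collected from it, and value a new state as a similarity-weighted average of the stored returns. A minimal NumPy sketch, where `embed` stands in for the pre-trained similarity network the transcript mentions (an assumption of this sketch, not a specific published model):

```python
import numpy as np

class EpisodicValueMemory:
    """Similarity-weighted value estimate from stored (embedding, return) pairs."""

    def __init__(self, embed, temperature=1.0):
        self.embed = embed                 # maps an observation to a feature vector
        self.keys, self.returns = [], []
        self.temperature = temperature

    def store(self, observation, discounted_return):
        self.keys.append(self.embed(observation))
        self.returns.append(discounted_return)

    def value(self, observation):
        if not self.keys:
            return 0.0                     # no experience yet
        q = self.embed(observation)
        keys = np.stack(self.keys)
        sims = -np.sum((keys - q) ** 2, axis=1) / self.temperature   # closer = more similar
        w = np.exp(sims - sims.max())
        w /= w.sum()                       # soft selection of nearby stored states
        return float(np.dot(w, np.asarray(self.returns)))

# Sketch of how it would sit in the usual observe-act-reward loop (names are placeholders):
# memory = EpisodicValueMemory(embed=pretrained_similarity_net)   # stage 1: pre-trained
# v = memory.value(obs)                       # stage 2: replaces a learned value head
# ...once the discounted return from `obs` is known at the end of the episode:
# memory.store(obs, discounted_return)
```

The softmax over negative squared distances is just one choice of similarity score; the important part is that the value estimate is restricted to functions consistent with the pre-trained notion of similarity, which is exactly the inductive-bias argument made above.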
[ { "end": 7, "start": 0, "text": " Hi there, today we're looking at reinforcement learning, fast and slow, by Matthew Botvinick," }, { "end": 17, "start": 7, "text": " Sam Ritter, Jane X. Wang, Zeb Kurt-Nielsen, Charles Spondel and Demis Hassabis." }, { "end": 24.52, "start": 17, "text": " These people are from Google DeepMind and this is a review of kind of a development" }, { "end": 32.16, "start": 24.52, "text": " in reinforcement learning, especially as it pertains to kind of how humans learn or what" }, { "end": 38.96, "start": 32.16, "text": " we can understand from the RL world that translates over to human learning." }, { "end": 48.44, "start": 38.96, "text": " Alright, so basically their argument here is that the first wave of deep RL, as you" }, { "end": 54.8, "start": 48.44, "text": " see here, is powerful but slow." }, { "end": 57.14, "start": 54.8, "text": " And they give examples of this." }, { "end": 60.66, "start": 57.14, "text": " So in box one, box one is this." }, { "end": 65.16, "start": 60.66, "text": " So they believe there's an image missing here." }, { "end": 68.28, "start": 65.16, "text": " This is Backgammon, TD Gammon." }, { "end": 78.03999999999999, "start": 68.28, "text": " This is the famous DeepMind Atari playing bot and this is kind of the 3D labyrinth playing" }, { "end": 79.04, "start": 78.04, "text": " bot." }, { "end": 83.88000000000001, "start": 79.04, "text": " So there's been a number of advances in RL and especially what they talk about is deep" }, { "end": 84.88000000000001, "start": 83.88000000000001, "text": " RL." }, { "end": 92.84, "start": 84.88000000000001, "text": " So when we talk about reinforcement learning, the easiest case is where you have an agent" }, { "end": 95.84, "start": 92.84, "text": " and an environment." }, { "end": 105.48, "start": 95.84, "text": " Alright, so the agent will observe some observation O from the environment and then based on that" }, { "end": 112.44, "start": 105.48, "text": " the agent will perform an action A. And then the environment will give back a reward and" }, { "end": 115.72, "start": 112.44, "text": " also a next observation." }, { "end": 124.52000000000001, "start": 115.72, "text": " So this is O0, O1, and then this is A0 and then here you give A1, AI." }, { "end": 128.32, "start": 124.52000000000001, "text": " So basically this goes back and forth and back and forth." }, { "end": 133.34, "start": 128.32, "text": " The agent performs an action, the environment gives a reward and the next observation." }, { "end": 137.94, "start": 133.34, "text": " So this could be for example here in the Atari world." }, { "end": 142.16, "start": 137.94, "text": " The observation is the screen itself." }, { "end": 148.72, "start": 142.16, "text": " And then the agent needs to perform an action which is an input of the joystick or pressing" }, { "end": 150.16, "start": 148.72, "text": " some button." }, { "end": 154.86, "start": 150.16, "text": " You can see the individual actions actually listed here." }, { "end": 161.2, "start": 154.86, "text": " And then the reward will be given to the agent via a number which I guess is the same number" }, { "end": 163.3, "start": 161.2, "text": " as up here." }, { "end": 168.04000000000002, "start": 163.3, "text": " So the task is to maximize the reward simply by..." }, { "end": 171.56, "start": 168.04000000000002, "text": " So the difference is you're not doing this in a supervised manner." 
}, { "end": 175.84, "start": 171.56, "text": " So you're not telling the agent what would be the correct action to do." }, { "end": 183.72000000000003, "start": 175.84, "text": " You simply tell it that whether what it did was good or bad by giving it a high or a low" }, { "end": 184.72000000000003, "start": 183.72000000000003, "text": " reward." }, { "end": 186.92000000000002, "start": 184.72000000000003, "text": " Right, so that's reinforcement learning." }, { "end": 189.44, "start": 186.92000000000002, "text": " So what is deep reinforcement learning?" }, { "end": 197.76, "start": 189.44, "text": " Deep reinforcement learning simply means the agent maps the observation to the action via" }, { "end": 199.8, "start": 197.76, "text": " a deep neural network." }, { "end": 203.52, "start": 199.8, "text": " So deep neural network." }, { "end": 209.28, "start": 203.52, "text": " That's deep reinforcement learning where the mapping or some part of the agent consists" }, { "end": 211.88, "start": 209.28, "text": " of a deep neural network." }, { "end": 219.36, "start": 211.88, "text": " You see for example here there is a deep neural network mapping the observation to the action." }, { "end": 225.20000000000002, "start": 219.36, "text": " As well as down here but it's a bit more complicated." }, { "end": 232.92000000000002, "start": 225.20000000000002, "text": " So they argue that the first wave of this was powerful but slow meaning kind of you" }, { "end": 235.48000000000002, "start": 232.92000000000002, "text": " need a lot of samples." }, { "end": 241.96, "start": 235.48000000000002, "text": " And they give two sources of why it's slow, why you need a lot of samples." }, { "end": 249.64000000000001, "start": 241.96, "text": " They say the two factors are incremental parameter adjustment and weak inductive bias." }, { "end": 256.28000000000003, "start": 249.64000000000001, "text": " So incremental parameter adjustment means basically that you have to update or train" }, { "end": 260.92, "start": 256.28000000000003, "text": " your neural network in a very small incremental way." }, { "end": 266.92, "start": 260.92, "text": " In order to basically, because you train it one by one, right?" }, { "end": 270.28000000000003, "start": 266.92, "text": " You train your neural network step by step." }, { "end": 275.84, "start": 270.28, "text": " You have to make small steps in order to not forget what came before." }, { "end": 281.44, "start": 275.84, "text": " You can't fundamentally readjust your neural network to every new batch of observations" }, { "end": 286.44, "start": 281.44, "text": " because then that's going to destroy all the information you've learned of the old one." }, { "end": 295, "start": 286.44, "text": " And then weak inductive bias here is basically an understanding of these neural networks." }, { "end": 300.23999999999995, "start": 295, "text": " They are general function approximators and they can approximate any function." }, { "end": 305.56, "start": 300.24, "text": " So if you just think in terms of kind of, I don't know, let's say polynomials and what" }, { "end": 306.92, "start": 305.56, "text": " kind of polynomials are there?" }, { "end": 314.04, "start": 306.92, "text": " This polynomial, this polynomial, this polynomial, this weird polynomial." 
}, { "end": 320, "start": 314.04, "text": " If I have a function that can approximate all of these then I have a weak inductive" }, { "end": 326.8, "start": 320, "text": " bias whereas if I kind of know, okay all my polynomials are the polynomial that I'm looking" }, { "end": 333.44, "start": 326.8, "text": " for ultimately, I'm very sure it's a third degree polynomial, right?" }, { "end": 337.2, "start": 333.44, "text": " So something like this or like this or like this." }, { "end": 346.2, "start": 337.2, "text": " So this is much less of a class of functions that I can fit but if I'm sure that the function" }, { "end": 351.76, "start": 346.2, "text": " that I'm trying to fit falls in this category then I'm much faster." }, { "end": 357.15999999999997, "start": 351.76, "text": " So this is then called a strong inductive bias is where I build into the model basically" }, { "end": 359.24, "start": 357.15999999999997, "text": " I tell it beforehand." }, { "end": 364.84, "start": 359.24, "text": " Here is a very restricted class of functions that you can fit." }, { "end": 367.92, "start": 364.84, "text": " Whereas in a weak inductive bias I won't tell it that." }, { "end": 372.8, "start": 367.92, "text": " I'll simply say, well model you could fit any function you want and I'm just giving" }, { "end": 374.7, "start": 372.8, "text": " you training samples." }, { "end": 381.68, "start": 374.7, "text": " So this is a classic example of a bias variance trade-off where there is a lot of" }, { "end": 388.16, "start": 381.68, "text": " variance in these models meaning you can fit also a lot of functions but here because you" }, { "end": 395.2, "start": 388.16, "text": " bias the model towards a certain set of functions it can lower this variance and in this case" }, { "end": 403.16, "start": 395.2, "text": " here it speeds up learning because you don't have as much variance that means you can basically" }, { "end": 405.92, "start": 403.16, "text": " go faster while learning." }, { "end": 417.28000000000003, "start": 405.92, "text": " Alright, so they propose two solutions to this problem of this kind of to mitigate these" }, { "end": 422.44, "start": 417.28000000000003, "text": " problems that make reinforcement learning faster or have made reinforcement learning" }, { "end": 423.70000000000005, "start": 422.44, "text": " faster." }, { "end": 426.98, "start": 423.70000000000005, "text": " This is a review remember." }, { "end": 433.8, "start": 426.98, "text": " So the first one is episodic deep reinforcement learning and this episodic deep reinforcement" }, { "end": 438.8, "start": 433.8, "text": " learning is specified here, fast learning through episodic memory." }, { "end": 446.58000000000004, "start": 438.8, "text": " So the suggestion in this field of research is to augment the neural network or the agent" }, { "end": 453.48, "start": 446.58000000000004, "text": " by a memory and the memory could look something like this." }, { "end": 462.44, "start": 453.48, "text": " So in a lot of these RL frameworks what a principal component of the agent is, so the" }, { "end": 470.16, "start": 462.44, "text": " agent will get an observation O and one of the things it has to do is estimate the value" }, { "end": 472.64, "start": 470.16, "text": " of this observation of this state." 
}, { "end": 480.88, "start": 472.64, "text": " So basically the agent is in some state let's say you play pong right and you are here down" }, { "end": 487.4, "start": 480.88, "text": " and the ball comes your way up there right there's a little arrow sorry so the ball" }, { "end": 495.15999999999997, "start": 487.4, "text": " flies away from you and you're all the way down which basically means draw this bigger." }, { "end": 502.96, "start": 495.15999999999997, "text": " So here you are down here and the ball is here flying up there." }, { "end": 510.28, "start": 502.96, "text": " So one task in these in these agents that occurs often is to estimate the value of this" }, { "end": 516.36, "start": 510.28, "text": " observation basically means how much reward am I expecting from this state going into" }, { "end": 517.6, "start": 516.36, "text": " the future." }, { "end": 524.04, "start": 517.6, "text": " In this case I probably will not expect a lot of reward since I can't move up fast enough" }, { "end": 526.76, "start": 524.04, "text": " right to catch the ball." }, { "end": 534.04, "start": 526.76, "text": " So this I would assign this state a pretty low value whereas if I were up here I would" }, { "end": 537.48, "start": 534.04, "text": " assign this state quite a high value." }, { "end": 545.16, "start": 537.48, "text": " So as we've already seen this is a deep neural network mapping we learn to assign value to" }, { "end": 553.52, "start": 545.16, "text": " different states and this is one of the parts that takes a long time and these methods they" }, { "end": 560.16, "start": 553.52, "text": " are the one that's depicted here replaces this value estimation by saying okay we have" }, { "end": 567.8399999999999, "start": 560.16, "text": " an observation we somehow need to estimate its value why don't we look for similar observation" }, { "end": 577.36, "start": 567.84, "text": " so we have some kind of memory right and we go with our observation and we retrieve O'1" }, { "end": 587, "start": 577.36, "text": " O'2 O'3 that are somehow similar right so in our in our pong example I'm down I'm up" }, { "end": 595.8000000000001, "start": 587, "text": " here ball moves here I could be looking now at at states where I was here or where I was" }, { "end": 601.4399999999999, "start": 595.8, "text": " here like really close or where the ball flew a bit differently but still in the same direction" }, { "end": 608.8399999999999, "start": 601.4399999999999, "text": " or down here right so all these states are kind of close to my state and I can I already" }, { "end": 614.26, "start": 608.8399999999999, "text": " have I already have played these since they're in my memory right so with every one of them" }, { "end": 621.0799999999999, "start": 614.26, "text": " I can also retrieve the reward that I got so I because I already know the problem in" }, { "end": 625.04, "start": 621.0799999999999, "text": " reinforcement learning is before you do an action you don't know what the reward will" }, { "end": 631.76, "start": 625.04, "text": " be but here I already know because I've played it I've already experienced it it's in the" }, { "end": 638.64, "start": 631.76, "text": " past so I know what reward I got right so and this is exactly what they say over here" }, { "end": 646.36, "start": 638.64, "text": " they basically say here we have time time runs this way we're in state one then in state" }, { "end": 654.48, "start": 646.36, "text": " two and so on and we perform actions and and get rewards and 
what we can do is we can save" }, { "end": 662.6, "start": 654.48, "text": " these states into this memory as along with their sum of discounted rewards that we collect" }, { "end": 672.48, "start": 662.6, "text": " from that state on and then later this is like a spongebob reference if we want to estimate" }, { "end": 680.76, "start": 672.48, "text": " the value of some new state right what we do is we retrieve all of these states from" }, { "end": 688.92, "start": 680.76, "text": " memory calculate a similarity score over them and with with we wait basically we add their" }, { "end": 694.64, "start": 688.92, "text": " rewards weighted by how similar they are to the state that we want to compute so this" }, { "end": 703.84, "start": 694.64, "text": " basically amounts to averaging over states respective by how close they are to the current" }, { "end": 708.88, "start": 703.84, "text": " state right this is kind of a soft a soft way of saying I only select the states which" }, { "end": 715.28, "start": 708.88, "text": " are close and that gives you a value estimate for the new states so basically this means" }, { "end": 721.2, "start": 715.28, "text": " you just got rid of having to train a value function and this will speed up your reinforcement" }, { "end": 726.4399999999999, "start": 721.2, "text": " learning quite a bit if you don't have to train that if you already have good value" }, { "end": 731.66, "start": 726.4399999999999, "text": " estimations from your previous experience that's great of course there are a number" }, { "end": 737.4, "start": 731.66, "text": " of problems associated with that namely if this memory here for example becomes stale" }, { "end": 745.4399999999999, "start": 737.4, "text": " it doesn't represent the future rewards quite as well there is also a question of which" }, { "end": 749.9599999999999, "start": 745.4399999999999, "text": " states do you keep in memory just the good ones or do they have to have a certain property" }, { "end": 756.6, "start": 749.9599999999999, "text": " do you have to have some diversity in there and of course the biggest problem here the" }, { "end": 764.1999999999999, "start": 756.6, "text": " biggest problem is how do you know when two states are similar or when they aren't it" }, { "end": 772.12, "start": 764.2, "text": " might be easy in a situation like pong where I only have like three variables like position" }, { "end": 777.72, "start": 772.12, "text": " y position of my of my paddle and position of the ball and velocity of the ball those" }, { "end": 785.5600000000001, "start": 777.72, "text": " are like I can specify those in five numbers but if it gets harder than that if it's like" }, { "end": 792.82, "start": 785.5600000000001, "text": " this labyrinth setting full 3d environment then we have no clue which states are similar" }, { "end": 799.72, "start": 792.82, "text": " to each other and what these what most end up doing is they will train you guessed it" }, { "end": 807.32, "start": 799.72, "text": " they will train a deep neural network to give you this similarity score between states right" }, { "end": 813.1400000000001, "start": 807.32, "text": " how they do it is is a different question but presumably you can train this network" }, { "end": 820.48, "start": 813.1400000000001, "text": " offline basically meaning you can pre train it you could pre train it and then the so" }, { "end": 833.4, "start": 820.48, "text": " we have two stages stage one pre train train similarity dnn right and then once we've done" 
}, { "end": 842.5600000000001, "start": 833.4, "text": " that second stage do reinforcement learning using this and the claim here is that by having" }, { "end": 849.8000000000001, "start": 842.5600000000001, "text": " this done this this second stage will become faster so it it doesn't really solve the problem" }, { "end": 854.3199999999999, "start": 849.8, "text": " of the sample efficiency but what it says is okay the actual reinforcement learning" }, { "end": 860.0799999999999, "start": 854.3199999999999, "text": " part will become faster because we've already done the work previously but basically by" }, { "end": 868.56, "start": 860.0799999999999, "text": " by including this similarity score sorry whatever dnn by including this in the language of the" }, { "end": 878.0799999999999, "start": 868.56, "text": " review here we have successfully introduced an inductive bias into the rl procedure because" }, { "end": 885.32, "start": 878.08, "text": " the rl procedure now can't just fit any function we say we tell it your value function is one" }, { "end": 890.8000000000001, "start": 885.32, "text": " that conforms to our notion of similarity that we've pre trained this restricts the" }, { "end": 898.72, "start": 890.8000000000001, "text": " rl algorithm and we give it an inductive bias and as long as our similarity score is useful" }, { "end": 904.4000000000001, "start": 898.72, "text": " for the rl algorithm it can speed up its learning because it doesn't have to learn the value" }, { "end": 912.4, "start": 904.4, "text": " function itself all right cool so the second part here is a bit more abstract it's called" }, { "end": 918.88, "start": 912.4, "text": " meta reinforcement learning speeding up deep rl by learning to learn these kind of learning" }, { "end": 925, "start": 918.88, "text": " to learn approaches are quite abundant in the literature people try this usually there's" }, { "end": 933.96, "start": 925, "text": " a i mean it's it's very large scale experiments basically you have i think i believe they" }, { "end": 941.2, "start": 933.96, "text": " show it somewhere here yeah you have like some um some outer loop where you would say" }, { "end": 947.6800000000001, "start": 941.2, "text": " that's this thing here what the outer loop does is in each loop it samples one environment" }, { "end": 953.0400000000001, "start": 947.6800000000001, "text": " so it samples one environment from a distribution of environments so now you not only have one" }, { "end": 959.96, "start": 953.0400000000001, "text": " environment but you say okay if i'm going to navigate this maze one trying to learn" }, { "end": 970.88, "start": 959.96, "text": " to navigate this maze i'm going actually to learn to learn to navigate many mazes right" }, { "end": 977.72, "start": 970.88, "text": " so it's not like you train one agent to learn you train one agent to navigate many mazes" }, { "end": 985.5600000000001, "start": 977.72, "text": " that would just be classic reinforcement learning but you want to train an algorithm that helps" }, { "end": 993.2399999999999, "start": 985.56, "text": " an agent learn as a particular maze and you do that by training your helper algorithm" }, { "end": 1000.1999999999999, "start": 993.2399999999999, "text": " on a variety of agent maze combinations so in each step you sample one environment like" }, { "end": 1009, "start": 1000.1999999999999, "text": " this this here and you then have an inner loop here you fully reinforcement learn train" }, { "end": 1016.6, "start": 1009, 
"text": " an agent in the classic sense on this environment right you see here action action observation" }, { "end": 1025.84, "start": 1016.6, "text": " reward right but the agent receives some kind of signal from outside so the outside algorithm" }, { "end": 1034.04, "start": 1025.84, "text": " will kind of tell the agent how to approach the problem right this could be that it initializes" }, { "end": 1042.56, "start": 1034.04, "text": " the the weights here you see that the outer loop trains the parameter weights which determine" }, { "end": 1049.84, "start": 1042.56, "text": " the inner learner that interacts with an environment during the duration of the episode for every" }, { "end": 1054.96, "start": 1049.84, "text": " cycle of the outer loop a new environment is sampled from a distribution of environments" }, { "end": 1061, "start": 1054.96, "text": " which share some common structure so basically the one would expect when you train this that" }, { "end": 1068.64, "start": 1061, "text": " these parameters here this could be for example it could be the initial weights of the network" }, { "end": 1074.68, "start": 1068.64, "text": " that the agent uses that this one possibility right this is very abstract here this meta" }, { "end": 1081, "start": 1074.68, "text": " reinforcement learning it could be literally anything that the outer model teaches the" }, { "end": 1088.06, "start": 1081, "text": " inner model or gives to the inner model right and you you train both of these with reinforcement" }, { "end": 1092.44, "start": 1088.06, "text": " learning so the inner you train with reinforcement learning on the individual rewards and then" }, { "end": 1099.8, "start": 1092.44, "text": " you can train the outer loop on the reward that the entire app agent environment episode" }, { "end": 1108.96, "start": 1099.8, "text": " achieved so the that's kind of a two loop situation and yeah so that's meta reinforcement" }, { "end": 1116.72, "start": 1108.96, "text": " learning again it's very unspecified what it does but as you can already see if you" }, { "end": 1124.64, "start": 1116.72, "text": " now have such an algorithm that kind of tells the the inner agent just as an example how" }, { "end": 1131, "start": 1124.64, "text": " to initialize its weights right how to initialize the weights of its deep neural network if" }, { "end": 1138.24, "start": 1131, "text": " you have that here then the agent you will technically bias it this is again an inductive" }, { "end": 1149.44, "start": 1138.24, "text": " bias so you will give it inductive bias towards what you think are good weights to generally" }, { "end": 1158.1200000000001, "start": 1149.44, "text": " learn these maze structured environments right since the outer loop you can update it way" }, { "end": 1164.32, "start": 1158.1200000000001, "text": " slower because it needs to learn over a longer time horizon and it needs to learn things" }, { "end": 1170.24, "start": 1164.32, "text": " for a different variety of environments but once you have good kind of initial weights" }, { "end": 1177.08, "start": 1170.24, "text": " for a particular environment then this agent in here can learn much faster given an individual" }, { "end": 1182.12, "start": 1177.08, "text": " environment so the agent you instantiated and then you give it good starting weights" }, { "end": 1188.78, "start": 1182.12, "text": " or some other kind of signal about the environment and then it can go much much faster at learning" }, { "end": 1195.52, "start": 1188.78, "text": " 
the environment thereby you have just sped up this inner agent by providing it an inductive" }, { "end": 1207.56, "start": 1195.52, "text": " bias and that's basically what the claim of the review is that by providing these models" }, { "end": 1213.6399999999999, "start": 1207.56, "text": " with a larger inductive bias you may then speed up their learning because you've kind" }, { "end": 1220.6000000000001, "start": 1213.64, "text": " of told them what good functions are from the outset of course you see the problem again" }, { "end": 1228.2, "start": 1220.6000000000001, "text": " here well the problem the problem is of course you actually need to train this outer loop" }, { "end": 1236.0400000000002, "start": 1228.2, "text": " and the outer loop may actually take much much longer to train than a single and unbiased" }, { "end": 1242.1000000000001, "start": 1236.0400000000002, "text": " reinforcement learning thing but again what you could do is you could pre-train on a distribution" }, { "end": 1248.04, "start": 1242.1, "text": " of environments and then once a new environment shows up that is similar to this distribution" }, { "end": 1256.04, "start": 1248.04, "text": " you can then have the agent instantiated and learn much faster so again kind of this two-step" }, { "end": 1262.28, "start": 1256.04, "text": " process you could pre-train this outer loop and then the inner loop will be much faster" }, { "end": 1271.48, "start": 1262.28, "text": " than if you didn't have the outer loop all right so those are basically the kind of the" }, { "end": 1278.96, "start": 1271.48, "text": " kind of outlines they do here they then kind of do a connection to like the brain and so" }, { "end": 1290.3, "start": 1278.96, "text": " on and they relate this to biology and biological learning but ultimately their conclusion is" }, { "end": 1298.98, "start": 1290.3, "text": " here that whenever you want to do whenever you have slow rl or this is at least my conclusion" }, { "end": 1308.28, "start": 1298.98, "text": " from their article whenever you have slower you can transform it to fast rl rl but you" }, { "end": 1318.72, "start": 1308.28, "text": " have to outsource the slow rl slow something else slow x you have to outsource the slowness" }, { "end": 1324.52, "start": 1318.72, "text": " to some other part so if you want to do fast rl you have to outsource the slowness and" }, { "end": 1333.72, "start": 1324.52, "text": " what the slowness provides is an inductive bias which means yeah if you want to do like" }, { "end": 1339.26, "start": 1333.72, "text": " fast rl with episodic memory you have to learn the similarity function which again which" }, { "end": 1346.72, "start": 1339.26, "text": " might be slow in itself but then the rl will be fast and if you want to do this via kind" }, { "end": 1352.28, "start": 1346.72, "text": " of a an outer meta learner again this learning of the outer meta learner might be slow but" }, { "end": 1361.36, "start": 1352.28, "text": " then the inner learner will be fast in a connection to the kind of biological aspect of this they" }, { "end": 1368.48, "start": 1361.36, "text": " do make a connection which which i find is appropriate in that for example the human" }, { "end": 1374.32, "start": 1368.48, "text": " brain the reason we can learn things fast let's say in the physical world picking things" }, { "end": 1381.26, "start": 1374.32, "text": " up dropping things down or navigating our paths we're incredibly good at this navigating" }, { "end": 1389.84, "start": 
1381.26, "text": " through like a weird terrain with rocks in the way is because of course our brains have" }, { "end": 1396.48, "start": 1389.84, "text": " been adapted to these kinds of environment over generations so there is an outer process" }, { "end": 1403, "start": 1396.48, "text": " like evolution which is this kind of outer loop and it instantiates the inner loop which" }, { "end": 1413.8, "start": 1403, "text": " are the humans that kind of live or die by their ability to to navigate better so the" }, { "end": 1419.64, "start": 1413.8, "text": " if if the outer loop does a good job of only keeping the humans alive that can navigate" }, { "end": 1427.08, "start": 1419.64, "text": " well then the individual human in here that that does this the individual human given" }, { "end": 1434.48, "start": 1427.08, "text": " a landscape with rocks will then be much faster at learning to navigate it all right so that" }, { "end": 1440.32, "start": 1434.48, "text": " was it for that i it's an interesting article to read especially the connections to the" }, { "end": 1457.6, "start": 1440.32, "text": " kind of biological aspects and with that have a nice day" } ]
F5mxzvgl_oU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
S.H.E. - Search. Human. Equalizer.
[ "Science & Technology" ]
[ "pantene", "search", "google", "bias", "machine learning", "artificial intelligence", "search engine", "ranking", "equality", "diversity" ]
Short opinion on Pantene's tool to de-bias Google search results. https://www.apnews.com/Business%20Wire/c53a0e8f5fe04bf68e8311f214c806cf https://shetransforms.us/
Hi everyone, just a quick news update from the AI world, which is the following: Pantene launches S.H.E., the Search Human Equalizer, to shine a light on bias in search. So Pantene, the cosmetics corporation, launches this thing which is supposed to correct your search. It's introduced in this YouTube video which, as you can see down here, has 400 likes and 3.5K dislikes, and of course comments are disabled. So that's already kind of weird. Let's say weird. If you go to the website they made, let me refresh this and you can see the intro. They say: let's take the bias out of search. So if you search for greatest engineers you'll get all men. If you search for schoolgirl you'll get these kinds of sexualized images. If you search for Asian women in Spanish, same. So basically they have a browser extension that modifies your search results so that, for example, schoolgirl looks like this. Of course, if I were to do this I would actually let people explore the search box right here, but of course they want you to download this extension. So to me the interesting part is: how does this work? You're asked to install a Chrome extension, which I won't do. But basically down here they say: view the terms that SHE is equalizing. If you click on that you get to a list. So it very much seems like this is absolutely manual, handcrafted work, because there's a lot of work on correcting bias, for example in search, in machine learning and so on, and those approaches usually have some data-driven component that actually changes the models, or re-ranks based on some kind of learned input. But this here is simply a list of terms, for example famous actor, famous athletes and so on, that it will then re-rank. And I'm pretty sure this is just human manual labor: someone comes up with a new term, and you can actually flag terms yourself in the Chrome extension. They say here: flag this search. There's a button so you can suggest one, and they will say oh yeah, okay, that is really not biased, or that is really biased, and it will then re-rank the search results for you. I mean, academically this is a terrible idea, absolutely terrible, because how are you going to manually replace every single one? It reminds me a bit of newspeak. This approach is doomed to fail. But of course it's just a company trying to sell you stuff; this is a PR gag, not really trying to do anything state of the art or meaningful or even effective, right. If you search a slightly different thing than this, it will still show you the old kind of result. So yeah, from the terms you can also pretty clearly see where they come from. They have their own name, they have Pantene; I didn't see this yet, they have Pantene in here. So if you want less biased search results for these exact terms, then install the extension. I do not recommend you do so. But I would like them to take on one more query that I came up with that is pretty biased, I found. And that's: the most dangerous criminals. All men. Goodbye.
[ { "end": 7.12, "start": 0, "text": " Hi everyone, just a quick more of a news update in the AI world." }, { "end": 8.96, "start": 7.12, "text": " Which is the following." }, { "end": 11.120000000000001, "start": 8.96, "text": " Pantene launches S.H.E." }, { "end": 16.740000000000002, "start": 11.120000000000001, "text": " The Search Human Equalizer to shine a light on bias in search." }, { "end": 24.8, "start": 16.740000000000002, "text": " So Pantene, the kind of cosmetic corporation, launches this thing which is supposed to correct" }, { "end": 26.84, "start": 24.8, "text": " your search." }, { "end": 36.04, "start": 26.84, "text": " And it's introduced here in this YouTube video which as you can see down here has 400 likes," }, { "end": 41.84, "start": 36.04, "text": " has 3.5K dislikes and of course comments are disabled." }, { "end": 47.8, "start": 41.84, "text": " So that's kind of already weird." }, { "end": 50, "start": 47.8, "text": " Let's say weird." }, { "end": 57.4, "start": 50, "text": " If you go to the website here that they made, basically let me refresh this and you can" }, { "end": 59.2, "start": 57.4, "text": " see the intro." }, { "end": 62.64, "start": 59.2, "text": " They say let's take the bias out of search." }, { "end": 68.06, "start": 62.64, "text": " So if you search for greatest engineers you'll get all men." }, { "end": 76, "start": 68.06, "text": " If you search for schoolgirl you'll get like this kind of sexualized images." }, { "end": 85.14, "start": 76, "text": " If you search for Asian women in Spanish, same." }, { "end": 91.64, "start": 85.14, "text": " So basically they have a browser extension that modifies your search results so that" }, { "end": 96, "start": 91.64, "text": " for example schoolgirl looks like this." }, { "end": 102.84, "start": 96, "text": " Of course, I don't know, if I were to do this I would actually let people explore the search" }, { "end": 104.56, "start": 102.84, "text": " box right here." }, { "end": 110.08, "start": 104.56, "text": " But of course I want you to download this extension." }, { "end": 116.24000000000001, "start": 110.08, "text": " So to me the interesting part is how does this work?" }, { "end": 123.5, "start": 116.24000000000001, "text": " So you're asked to install a Chrome extension which I won't do." }, { "end": 131.36, "start": 123.5, "text": " But basically down here they say view the terms that SHE is equalizing." }, { "end": 133.8, "start": 131.36, "text": " If you click on that you get to a list." }, { "end": 140.20000000000002, "start": 133.8, "text": " So it very much seems like this is absolutely manual handcrafted work because there's a" }, { "end": 144.84, "start": 140.20000000000002, "text": " lot of work in kind of correcting bias in for example in search, in machine learning" }, { "end": 145.92000000000002, "start": 144.84, "text": " and so on." }, { "end": 152.60000000000002, "start": 145.92000000000002, "text": " These approaches usually have some data driven approach that actually will change the models" }, { "end": 158.36, "start": 152.60000000000002, "text": " and so on or will re-rank based on some kind of learned input." }, { "end": 166.68, "start": 158.36, "text": " But this here is simply a list of terms, for example famous actor, famous athletes and" }, { "end": 169.28, "start": 166.68, "text": " so on that it will then kind of re-rank." }, { "end": 172.56, "start": 169.28, "text": " And I'm pretty sure this is just human manual labor." 
}, { "end": 178.76000000000002, "start": 172.56, "text": " Someone comes up with a new term like oh this term we should you can actually flag yourself" }, { "end": 180.02, "start": 178.76000000000002, "text": " in the Chrome extension." }, { "end": 183.48000000000002, "start": 180.02, "text": " So they say here flag this search." }, { "end": 187.64000000000001, "start": 183.48000000000002, "text": " You can there's a button so you can suggest one and they will say oh yeah okay that is" }, { "end": 191.2, "start": 187.64, "text": " really not biased or that is really biased." }, { "end": 196.51999999999998, "start": 191.2, "text": " Will now re-rank the search results for you." }, { "end": 200.95999999999998, "start": 196.51999999999998, "text": " I mean academically this is a terrible idea, absolutely terrible." }, { "end": 207.76, "start": 200.95999999999998, "text": " Because how are you going to do this like manually replace every single there is like" }, { "end": 213.23999999999998, "start": 207.76, "text": " I don't know it reminds a bit of new speak." }, { "end": 215.92, "start": 213.23999999999998, "text": " But yeah this approach is doomed to fail." }, { "end": 219.32, "start": 215.92, "text": " But of course it's just a company trying to sell you stuff." }, { "end": 228.83999999999997, "start": 219.32, "text": " It's not, I mean this is not a, this is a PR gag not really trying to do anything, anything" }, { "end": 232.6, "start": 228.83999999999997, "text": " state of the art or meaningful or even effective right." }, { "end": 239.04, "start": 232.6, "text": " If you search a little different thing than this it will still show you the old kind of" }, { "end": 241.23999999999998, "start": 239.04, "text": " result." }, { "end": 248.08, "start": 241.24, "text": " So yeah from the terms you can also pretty clearly see where they come from." }, { "end": 249.08, "start": 248.08, "text": " They have their own name." }, { "end": 250.08, "start": 249.08, "text": " They have Pantene." }, { "end": 251.08, "start": 250.08, "text": " I didn't see this yet." }, { "end": 256.04, "start": 251.08, "text": " They have Pantene in here." }, { "end": 265.2, "start": 256.04, "text": " So yeah if you want less biased search results for these exact terms then install the extension." }, { "end": 268.6, "start": 265.2, "text": " I do not recommend you do so." }, { "end": 275.96000000000004, "start": 268.6, "text": " But I would like them to take on one more query that I came up with that is pretty pretty" }, { "end": 277.38, "start": 275.96000000000004, "text": " biased I found." }, { "end": 281.16, "start": 277.38, "text": " And that's the most dangerous criminals." }, { "end": 282.16, "start": 281.16, "text": " All men." }, { "end": 302.8, "start": 282.16, "text": " Goodbye." } ]
3Tqp_B2G6u0
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Blockwise Parallel Decoding for Deep Autoregressive Models
[ "Science & Technology" ]
[ "machine learning", "deep learning", "transformers", "nlp", "natural language processing", "ai", "artificial intelligence", "google brain", "autoregressive", "greedy decoding", "inference", "language model", "speedup" ]
https://arxiv.org/abs/1811.03115 Abstract: Deep autoregressive sequence-to-sequence models have demonstrated impressive performance across a wide variety of tasks in recent years. While common architecture classes such as recurrent, convolutional, and self-attention networks make different trade-offs between the amount of computation needed per layer and the length of the critical path at training time, generation still remains an inherently sequential process. To overcome this limitation, we propose a novel blockwise parallel decoding scheme in which we make predictions for multiple time steps in parallel then back off to the longest prefix validated by a scoring model. This allows for substantial theoretical improvements in generation speed when applied to architectures that can process output sequences in parallel. We verify our approach empirically through a series of experiments using state-of-the-art self-attention models for machine translation and image super-resolution, achieving iteration reductions of up to 2x over a baseline greedy decoder with no loss in quality, or up to 7x in exchange for a slight decrease in performance. In terms of wall-clock time, our fastest models exhibit real-time speedups of up to 4x over standard greedy decoding. Authors: Mitchell Stern, Noam Shazeer, Jakob Uszkoreit
Hi there, today we'll look at blockwise parallel decoding for deep autoregressive models by Mitchell Stern, Noam Shazeer and Jakob Uszkoreit of UC Berkeley and Google Brain. So this is a bit more of an engineering paper than usual, which I find cool. It's basically an engineering trick to get these autoregressive models to decode faster, where you can either fully preserve their performance or suffer a bit of a drop in performance while speeding them up even more. Alright, so let's dive in. The paper starts out with a description of what autoregressive models are and what decoding is in them. Let me try to quickly explain this. What is an autoregressive model? Basically we're talking about, let's say, language models. Language models are the classic examples of these models: a language model is a model that simply predicts the next word in a sequence. So you could have something like "a cat sits on the", and then here is a blank. The language model is asked to predict which word follows. It basically does this by predicting the probability distribution over the next word, so w t plus one, if this is t here, this is t minus one and so on, given all the w's smaller or equal to t. So all the words that come before should lead to the next word being predicted. The language model is tasked to say what the next word in the sequence is, or what the probability distribution over the next word is, and then you can simply pick the maximum probability word or something like this. So that's pretty standard so far. So what is the autoregressive part here? Basically the autoregressive part means that in order for me to find this next word, I will look at all of these previous words. And what does it mean when I want to use this language model for generating a sentence? Say I've now trained the language model and it's really good at predicting the next word, and I want to actually use it to do something more interesting: I want it to generate a full sentence. What I do is, let's say, pick the first word, "the", and I simply ask the language model: what's the next word? The language model can do this, it can assess the probability distribution here over words, and it will, for example, give me some distribution over words, and I pick the maximum one. Say the maximum one here is "house". Okay, "the house". Then I go back and ask the language model: well, what's the next word then? Clearly, you're a language model, so based on the two previous words you can give me the next word. And the language model will maybe say "the house is", and so on. So you can see how you can generate a sentence by basically feeding the answer that the language model gives back into the next step of predicting. So all of these now go into the next step, and once you've predicted the next step, "the house is on". Once you've predicted that, you can take it, in conjunction with everything you've predicted so far, to predict the next step.
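As a rough illustration of the loop just described, here is a minimal greedy decoding sketch in Python. The model here is a toy stand-in (a hard-coded lookup over a tiny vocabulary), not the transformer from the paper; names like next_word_distribution are hypothetical and only illustrate the feed-the-prediction-back-in structure.

```python
# Minimal sketch of greedy autoregressive decoding as described above.
import numpy as np

VOCAB = ["the", "house", "is", "on", "fire", "<eos>"]

def next_word_distribution(prefix):
    """Toy p(w_{t+1} | w_{<=t}); this stand-in only looks at the last word."""
    table = {
        "the":   [0.0, 0.7, 0.1, 0.1, 0.1, 0.0],
        "house": [0.1, 0.0, 0.7, 0.1, 0.0, 0.1],
        "is":    [0.1, 0.0, 0.0, 0.6, 0.2, 0.1],
        "on":    [0.2, 0.1, 0.0, 0.0, 0.6, 0.1],
        "fire":  [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
    }
    return np.array(table.get(prefix[-1], [1 / 6.0] * 6))

def greedy_decode(first_word, max_len=10):
    sentence = [first_word]
    for _ in range(max_len):
        probs = next_word_distribution(sentence)   # one model call per step
        next_word = VOCAB[int(np.argmax(probs))]   # pick the most likely word
        if next_word == "<eos>":
            break
        sentence.append(next_word)                 # feed the prediction back in
    return sentence

print(greedy_decode("the"))  # -> ['the', 'house', 'is', 'on', 'fire']
```

Each iteration depends on the word produced in the previous iteration, which is exactly the sequential bottleneck discussed next.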
So you can use the language model that is trained to predict the next word to predict an entire sentence and the autoregressive part basically means that its own predictions will serve as the basis for the next predictions, and this introduces a fundamental problem, namely that I have to basically wait for one prediction, so I have to wait here for is before I can predict on, and this means if I have a, I basically can't help but, so if this is my language model, it's a box, I can't help but go to the language model, wait for a response, okay, then go to the language model again, wait for a response again. This is inherently sequential nature here where I have to do like M steps if M is the length of the sentence that I want, and we can't make use of batching normally, so usually what you do during training, during training you have a whole bunch of data, right, you have the cat sits on the mat, you have the house, the house is blue, so I can generate, just from these two sentences I can generate a bunch of training examples, I can ask, this is a training example where the input is the cat and it's meaning to predict sits, then this is a training example where the input is the cat sits and the language model has to predict on, this here is a training example, this, this is a training example, so I can chunk this up into a whole bunch of training examples and all of those I can write, I can feed in parallel into a big matrix, I can all put them here and then run this thing through my language model in training mode because each of them is already like is in the corpus, I can batch the training but I can't batch the prediction because of what we've seen before because inherently the next predicting the next word depends on the last word that the model itself has output, so there is no training corpus around since we're not training, yeah, so this is the fundamental problem and these authors tackle this problem, they say how can we make this faster, this decoding, so they introduce greedy decoding here where they say okay, this is what we've just seen, the probability of the next word is like the maximum, the maximum log probability here in that case if the model predicts a log probability over the words that we've input so far, right, and this X here is, so this is for example a translation task, a machine translation task, so the X would be the source language sentence, so maybe like a French sentence and the Y smaller equal to J would be the so far decoded English sentence if we're trying to translate to English and the Y J plus one would be the next word that we're trying to predict in the English sentence given the English sentence so far and the French sentence, the total French sentence, so greedy decoding just does this one step after another and we try to go to what they call blockwise parallel decoding. 
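Written out, the greedy rule that is being described verbally here is, in roughly the paper's notation (with x the source sentence and y up to position j the target prefix decoded so far):

```latex
% Greedy decoding rule, as described above (approximate paper notation).
% x: source sentence, y_{<= j}: target prefix decoded so far.
\[
  y_{j+1} = \operatorname*{arg\,max}_{y} \; \log p\!\left(y \mid y_{\le j},\, x\right)
\]
```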
So we can just jump to the graphics straight away because what they do is pretty straightforward and is best illustrated in this graphic actually, so they go from this situation where they already have this here, they have a saw a dog ride, this is the sentence that has been decoded so far and we have to try to complete it, naturally we'll ask what's the next word, but they say okay what if we could predict not only the next word from this but the word two positions away or three positions away, we could do this all at the same time, right, I mean I can certainly build a model, a language model that doesn't only predict the next word but predicts the word after that as well, though of course if then this word, the predictor for this word still only gets this as an input so this is the important thing here, so the part of the model that predicts the is two words away isn't being informed that this word is being produced here, so naturally you would expect the quality to be worse because the word one position away, two positions away and three positions away are each predicted basically independently of each other just from the source context, so there is no, you can't expect like a coherency between the words or not a lot, so this is the fundamental trade-off with such a model, you can predict farther into the future at the same time but then these predictions can't basically depend on each other and this degrades your performance quite a bit, so what these authors do is to remedy that, they say well these things here we can, I mean we can produce a bunch of them, right, since all that's required as an input is this, we can actually produce like, we can produce a batch of them at the same time, so we can produce one, two and three words into the future and we can do this like a hundred times in parallel, no problem, alright, and we can sample this, we don't have to always take the most likely word, we can actually sample a bunch into the future and this now gets smarter because now I have a list of one hundred basically suggestions of what the continuation here could be, right, I have, I take this not as a given but I take these outputs as suggestions, alright, and then I can have another model that, this is called verify here, I can have another model that scores all of these different, all of these different decodings in parallel, both of these can be done by the same model, we saw the language model can be either used to predict or to score something, since it inherently predicts the probability of sequences or of following words, we can, we can let it output this probability all in parallel, so this also can count as a score, what I'm trying to say is you can, since the language model is constructed as a, as outputting probabilities anyway, like such, we can use it both to predict the next word and also if we have a suggestion we can use it to score that and to say okay how likely is that, right, and then what we can make sure is that the suggestion, we are looking for the suggestion basically that has the highest score and if you want to be really true to your original model you say I want to look for the suggestion that has the maximum, that would have had the maximum score had I decoded one by one, so then basically you retain the original performance and you gain a speed up as long as the, what the greedy decoding would have produced is in your suggestion, in your box of suggestions that you produce, as long as that's in there you gain a speed up, if that's not in there then you can 
always, you always have the one word ahead model because that's, you have that anyway, you predict the next word anyway, so in case none of these suggestions work out you still have this one word prediction basically which is the model you started with, so at worst case you're as fast as the greedy model and in best case you always, your suggestions are so good that they are always the one that would have been decoded anyway, so you can basically in this case do three steps at once. Alright, so this verify step here is shown here and you see it will decode, now this is just one suggestion keep in mind, they can produce many suggestions at the same time if there's memory or and they can actually, they can score each of this, so they can score this, they can score this and they can score this also independently as a batch, so they can do this in parallel and here you see, yeah here is executed in parallel, so the model will go and will score this word in and say ah this would have been, this is the argmax of the greedy decoding anyway and it can also score this step and say aha given that there is an in that this the is the argmax anyway, right and you can score this step and say ah given that there's in the, the argmax would have been car, so that's not bus, so we reject this suggestion but we keep that part of the suggestion and say okay the in the is basically what would have been decoded anyway according to the greedy decoding, so we can basically accept this here and continue from there, this is the accept step here, so this basically, so you can see in this one step which yeah we'll call one decoding step, we have basically done two of the greedy decoding steps in one go, so by predicting into the future and then selecting the one that agrees with the original model because we can, the fundamental thing is we can score in parallel but we can greedily produce not in parallel, alright so they actually push this further by also eliminating one of the, one of the evaluations here by combining basically the next predict step with the previous verify step and it's pretty cool to look at that, so we're in the same situation, you have this and you suggest this continuation and then the score model again will go here but while you verify you also do the next predict at the same time, since you've built your model, since it's the same model and this model every time you execute it, it outputs a distribution over the next set of positions, you might as well take the outputs of it, right, so when you then decide to accept this here, you will already have the outputs computed for the next three positions, so this you can feed directly into this next predict step, you basically don't have to execute it, you simply go to the one you've accepted and then you look at the outputs that you get anyway from this model and use them, so you might ask, okay which, how does a model look that like scores and predicts into the future and this, the answer is here, it's a bit out of order, I would have maybe liked this more previously but in any case this is what they do, so they use a transformer architecture and you have to imagine it starts down here and actually there is a huge network down here, right, this is just the output layer, so there's a giant transformer network down below and it produces this output representation, now normally from this representation you would go to this what's called p layer here, this is a output vocabulary projection, so this has one entry for each of the words in your 
vocabulary, so "the", "a", "cat" and so on, and you would then for each one predict a probability. So with this representation you basically project it onto this vocabulary and predict the probability distribution over the next word. But what they do is say: no, we not only need the next word, we need the next three words, so let's actually split this output signal into three output signals. They do this by introducing a hidden feed forward layer here, a hidden layer; they insert a single feed forward layer with some hidden size. So they insert a hidden layer and then they also add these skip connections here, which basically just means they feed this output directly through to here and add it to that. So basically the feed forward layer needs to transform this output into the vocabulary projection input one step ahead, two steps ahead and three steps ahead, and you can see here that those are independent, right, they don't depend on each other; there's nothing feeding back p1 here into the decision of p2, so they can be executed in parallel, but they lose the dependence on each other. Alright, so that's the architecture, and you can clearly see it's able to predict three steps into the future at the same time. Alright, so they also do different adjustments, where they say we can also sacrifice a bit of fidelity to the original model by not only accepting when the suggestion is exactly the best suggestion that would have been decoded by the greedy model. What we could do instead is accept it if it's in the top k, if it's good enough, basically if one of the suggestions that we have is good enough then we'll accept it; or, when you have some sort of distance metric, they say the distance between our suggestion and the maximum, so what would have been best by the greedy decoding, should be smaller than some constant epsilon. That way you sacrifice a bit of performance, but your suggestions will be accepted much more often and thereby your speedup will be much higher. They also experiment with whether or not they should fine tune the original model along with their model, and they also experiment with knowledge distillation, where they basically have some teacher model and you train your model on the output of the teacher model. I don't want to go too far into this, since these are mostly things to make it work even better. And you can see here that this is for example a machine translation task, the WMT 2014 English-German translation, and with the regular setup they get a BLEU score of 26, and here higher is better. You can see they get fairly sizable speedups while keeping the BLEU scores fairly constant, so they almost speed up by 2x, but if they allow the BLEU scores to go down a bit they get a much higher speedup of like 3x, and then if they do distillation and fine tuning they actually manage to keep up the performance even though they get very high speedups, so they get speedups of up to like 5x without dropping the BLEU scores very much, which is pretty impressive. Another experiment they do is image super resolution, where you can see that with the regular setting they try to keep exactly the original model output and it doesn't speed it up too much, but when they allow for a bit of a mistake to be made, so here this is image super resolution, so values are between
zero and 255, and they allow an epsilon of two of that, so that's less than 1% error on the individual pixel, then they get speedups of 7x or something like this. And you can see in this region here that when K is four, K being the number of steps that you decode ahead, the mean block size is 3.75, which means on average 3.75 steps ahead are accepted, which means their suggestions are almost always good enough to be accepted, so they get this massive speedup by basically being able to jump these decoding steps. They have a bunch of other results here; they show their wall clock time speedup as well as the iteration speedup, since an iteration speedup alone is not so good if you have to pay for it with a huge computational cost. They also show that they have a big wall clock speedup, up to 4x here in super resolution and over 3x in translation. So it's a pretty cool paper, they give some examples here, a bunch more tables, some examples of their super resolution, and if this might be something for you then use it. I think it's a pretty neat trick, especially for production systems. Alright, that was it, bye bye.
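To make the predict / verify / accept loop described above concrete, here is a minimal sketch. The functions propose_block and greedy_next are hypothetical toy stand-ins for the paper's k prediction heads and for the base scoring model respectively, and the verification is written as a plain loop even though in the real method those k scores come out of a single parallel model invocation.

```python
# Rough sketch of blockwise parallel decoding: propose k tokens ahead, verify them
# against the base (greedy) model, and accept the longest agreeing prefix plus the
# base model's token at the first mismatch.

TARGET = ["i", "saw", "a", "dog", "ride", "in", "the", "car", "<eos>"]

def greedy_next(prefix):
    """Base model: the token plain greedy decoding would produce after `prefix`."""
    return TARGET[len(prefix)] if len(prefix) < len(TARGET) else "<eos>"

def propose_block(prefix, k):
    """k-step proposal heads: predict positions 1..k ahead from `prefix` only.
    Far-ahead positions are less reliable, so we corrupt the 3rd one now and then."""
    block = [TARGET[len(prefix) + i] if len(prefix) + i < len(TARGET) else "<eos>"
             for i in range(k)]
    if len(prefix) % 2 == 1 and k >= 3:
        block[2] = "bus"          # a deliberately wrong far-ahead guess
    return block

def blockwise_decode(k=3, max_len=20):
    out, model_calls = [], 0
    while len(out) < max_len and (not out or out[-1] != "<eos>"):
        proposals = propose_block(out, k)          # predict: k steps at once
        model_calls += 1
        accepted = []
        for tok in proposals:                      # verify (parallel in the paper)
            best = greedy_next(out + accepted)     # what greedy would have chosen
            if tok == best:
                accepted.append(tok)               # proposal agrees, keep it
            else:
                accepted.append(best)              # first mismatch: keep the base
                break                              # model's token, drop the rest
        out += accepted                            # accept
    return out, model_calls

decoded, calls = blockwise_decode()
print(decoded)   # identical to plain greedy decoding of TARGET
print(calls)     # 3 iterations instead of the 9 steps plain greedy would need
```

Note that because the first mismatching position falls back to the base model's own token, the output is identical to plain greedy decoding here; the proposals only determine how many positions you get to skip per iteration, which is why the worst case is as slow as greedy and the best case jumps k steps at a time.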
[ { "end": 6.640000000000001, "start": 0, "text": " Hi there, today we'll look at blockwise parallel decoding for deep autoregressive models by" }, { "end": 15.200000000000001, "start": 6.640000000000001, "text": " Mitchell Stern, Noam Shazir and Jakob Uschkordei of UC Berkeley and Google Brain." }, { "end": 21.44, "start": 15.200000000000001, "text": " So this is a bit more of an engineering paper than usual, which I find cool." }, { "end": 28.400000000000002, "start": 21.44, "text": " It's basically an engineering trick to get these autoregressive models to decode faster," }, { "end": 36.48, "start": 28.4, "text": " while you can either preserve fully their performance or suffer a bit of a drop in performance," }, { "end": 39.12, "start": 36.48, "text": " while even speeding them up more." }, { "end": 46.72, "start": 39.12, "text": " Alright, so let's dive in actually." }, { "end": 52.32, "start": 46.72, "text": " The paper starts out with a description of what autoregressive models are and what decoding" }, { "end": 54.239999999999995, "start": 52.32, "text": " is in them." }, { "end": 59.36, "start": 54.24, "text": " So let me try to quickly explain this." }, { "end": 62.800000000000004, "start": 59.36, "text": " So what is an autoregressive model?" }, { "end": 68, "start": 62.800000000000004, "text": " So basically we're talking about, let's say, language models." }, { "end": 73.2, "start": 68, "text": " So language models are the classic examples of these models, where you have a language" }, { "end": 77.08, "start": 73.2, "text": " model is a model that simply predicts the next word in a sequence." }, { "end": 88.4, "start": 77.08, "text": " So you could have something like a cat sits on the, and then here is blank." }, { "end": 95.44, "start": 88.4, "text": " So the language model is asked to predict which word is the word that follows." }, { "end": 101.16, "start": 95.44, "text": " The language model basically does this by predicting the probability distribution over" }, { "end": 102.28, "start": 101.16, "text": " the next word." }, { "end": 112.84, "start": 102.28, "text": " So w t plus one, if this is t here, this is t minus one and so on, given all the w's smaller" }, { "end": 114.76, "start": 112.84, "text": " or equal than t." }, { "end": 122.36, "start": 114.76, "text": " So all the words that come before should lead to the next word being predicted." }, { "end": 128.64, "start": 122.36, "text": " So the language model is tasked to ask what is the next word in the sequence, or what's" }, { "end": 131.16, "start": 128.64, "text": " the probability distribution over the next word." }, { "end": 136.04, "start": 131.16, "text": " And then you can simply, you know, pick the maximum probability word or something like" }, { "end": 137.04, "start": 136.04, "text": " this." }, { "end": 143.04, "start": 137.04, "text": " So that's that's pretty standard so far." }, { "end": 146.28, "start": 143.04, "text": " So what is the autoregressive part in here?" }, { "end": 153.16, "start": 146.28, "text": " So basically the autoregressive part means that in order for me to find this word here," }, { "end": 158.04, "start": 153.16, "text": " this next word, I will look at all of these words here." 
}, { "end": 164.23999999999998, "start": 158.04, "text": " And what does it mean then when I want to use this language model for generating generating" }, { "end": 169.68, "start": 164.23999999999998, "text": " a sentence, let's say, so I'm now I've trained the language model, it's really good at predicting" }, { "end": 174.89999999999998, "start": 169.68, "text": " the next word, I want to actually use it to do something more interesting." }, { "end": 182.04, "start": 174.89999999999998, "text": " So I, I want it to generate a full sentence, what I do, let's say I pick the first word," }, { "end": 187.92, "start": 182.04, "text": " the right, I pick the first word, and I simply ask the language model, why what's the next" }, { "end": 188.92, "start": 187.92, "text": " word?" }, { "end": 189.92, "start": 188.92, "text": " Right?" }, { "end": 195.95999999999998, "start": 189.92, "text": " And the language model can do this, it can assess what's the probability distribution" }, { "end": 201.72, "start": 195.95999999999998, "text": " here over words, and it will, for example, give me some some distribution over words," }, { "end": 206.44, "start": 201.72, "text": " and I pick the maximum one, I say, okay, the maximum one here is house." }, { "end": 210.2, "start": 206.44, "text": " Okay, the house." }, { "end": 211.92, "start": 210.2, "text": " The house." }, { "end": 216.79999999999998, "start": 211.92, "text": " And then I go back and I ask the language model, well, what's the next word then?" }, { "end": 218.8, "start": 216.8, "text": " See, clearly, you're a language model." }, { "end": 223.52, "start": 218.8, "text": " So you can give me based on the two previous words, you can give me the next word, what's" }, { "end": 230.8, "start": 223.52, "text": " the next word, and the language model will maybe say the house is, and so on." }, { "end": 237.8, "start": 230.8, "text": " So you can see how you can generate a sentence by simply basically feeding the answer that" }, { "end": 242.72000000000003, "start": 237.8, "text": " the language model gives feeding it into the next step of predicting." }, { "end": 247.35999999999999, "start": 242.72, "text": " So all of these now go into the next step, and once you've predicted the next step, the" }, { "end": 251.07999999999998, "start": 247.35999999999999, "text": " house is on." }, { "end": 255.6, "start": 251.07999999999998, "text": " Once you've predicted that, then you can take that and in conjunction with everything you've" }, { "end": 258.66, "start": 255.6, "text": " predicted so far, to predict the next step." 
}, { "end": 263.76, "start": 258.66, "text": " So you can use the language model that is trained to predict the next word to predict" }, { "end": 268.36, "start": 263.76, "text": " an entire sentence and the autoregressive part basically means that its own predictions" }, { "end": 275.32, "start": 268.36, "text": " will serve as the basis for the next predictions, and this introduces a fundamental problem," }, { "end": 283.68, "start": 275.32, "text": " namely that I have to basically wait for one prediction, so I have to wait here for is" }, { "end": 292.92, "start": 283.68, "text": " before I can predict on, and this means if I have a, I basically can't help but, so if" }, { "end": 298.28000000000003, "start": 292.92, "text": " this is my language model, it's a box, I can't help but go to the language model, wait for" }, { "end": 303.23999999999995, "start": 298.28, "text": " a response, okay, then go to the language model again, wait for a response again." }, { "end": 309.44, "start": 303.23999999999995, "text": " This is inherently sequential nature here where I have to do like M steps if M is the" }, { "end": 318.35999999999996, "start": 309.44, "text": " length of the sentence that I want, and we can't make use of batching normally, so usually" }, { "end": 324.23999999999995, "start": 318.35999999999996, "text": " what you do during training, during training you have a whole bunch of data, right, you" }, { "end": 343.96000000000004, "start": 324.24, "text": " have the cat sits on the mat, you have the house, the house is blue, so I can generate," }, { "end": 349.28000000000003, "start": 343.96000000000004, "text": " just from these two sentences I can generate a bunch of training examples, I can ask, this" }, { "end": 356.67999999999995, "start": 349.28, "text": " is a training example where the input is the cat and it's meaning to predict sits, then" }, { "end": 361.79999999999995, "start": 356.67999999999995, "text": " this is a training example where the input is the cat sits and the language model has" }, { "end": 368.52, "start": 361.79999999999995, "text": " to predict on, this here is a training example, this, this is a training example, so I can" }, { "end": 373.79999999999995, "start": 368.52, "text": " chunk this up into a whole bunch of training examples and all of those I can write, I can" }, { "end": 381.84000000000003, "start": 373.8, "text": " feed in parallel into a big matrix, I can all put them here and then run this thing" }, { "end": 387.12, "start": 381.84000000000003, "text": " through my language model in training mode because each of them is already like is in" }, { "end": 394.2, "start": 387.12, "text": " the corpus, I can batch the training but I can't batch the prediction because of what" }, { "end": 400.12, "start": 394.2, "text": " we've seen before because inherently the next predicting the next word depends on the last" }, { "end": 405.72, "start": 400.12, "text": " word that the model itself has output, so there is no training corpus around since we're" }, { "end": 412.04, "start": 405.72, "text": " not training, yeah, so this is the fundamental problem and these authors tackle this problem," }, { "end": 419.64, "start": 412.04, "text": " they say how can we make this faster, this decoding, so they introduce greedy decoding" }, { "end": 428.14, "start": 419.64, "text": " here where they say okay, this is what we've just seen, the probability of the next word" }, { "end": 435.88, "start": 428.14, "text": " is like the maximum, the maximum log 
probability here in that case if the model predicts a" }, { "end": 444.76, "start": 435.88, "text": " log probability over the words that we've input so far, right, and this X here is, so" }, { "end": 449, "start": 444.76, "text": " this is for example a translation task, a machine translation task, so the X would be" }, { "end": 456.56, "start": 449, "text": " the source language sentence, so maybe like a French sentence and the Y smaller equal" }, { "end": 463.36, "start": 456.56, "text": " to J would be the so far decoded English sentence if we're trying to translate to English and" }, { "end": 468.72, "start": 463.36, "text": " the Y J plus one would be the next word that we're trying to predict in the English sentence" }, { "end": 475.48, "start": 468.72, "text": " given the English sentence so far and the French sentence, the total French sentence," }, { "end": 482.98, "start": 475.48, "text": " so greedy decoding just does this one step after another and we try to go to what they" }, { "end": 487.64000000000004, "start": 482.98, "text": " call blockwise parallel decoding." }, { "end": 494.28000000000003, "start": 487.64000000000004, "text": " So we can just jump to the graphics straight away because what they do is pretty straightforward" }, { "end": 500.92, "start": 494.28000000000003, "text": " and is best illustrated in this graphic actually, so they go from this situation where they" }, { "end": 510.6, "start": 500.92, "text": " already have this here, they have a saw a dog ride, this is the sentence that has been" }, { "end": 518.52, "start": 510.6, "text": " decoded so far and we have to try to complete it, naturally we'll ask what's the next word," }, { "end": 524.76, "start": 518.52, "text": " but they say okay what if we could predict not only the next word from this but the word" }, { "end": 531.12, "start": 524.76, "text": " two positions away or three positions away, we could do this all at the same time, right," }, { "end": 535.9200000000001, "start": 531.12, "text": " I mean I can certainly build a model, a language model that doesn't only predict the next word" }, { "end": 544.9599999999999, "start": 535.92, "text": " but predicts the word after that as well, though of course if then this word, the predictor" }, { "end": 550.68, "start": 544.9599999999999, "text": " for this word still only gets this as an input so this is the important thing here, so the" }, { "end": 559.2199999999999, "start": 550.68, "text": " part of the model that predicts the is two words away isn't being informed that this" }, { "end": 565, "start": 559.2199999999999, "text": " word is being produced here, so naturally you would expect the quality to be worse because" }, { "end": 571.4, "start": 565, "text": " the word one position away, two positions away and three positions away are each predicted" }, { "end": 579.08, "start": 571.4, "text": " basically independently of each other just from the source context, so there is no, you" }, { "end": 588.84, "start": 579.08, "text": " can't expect like a coherency between the words or not a lot, so this is the fundamental" }, { "end": 593.44, "start": 588.84, "text": " trade-off with such a model, you can predict farther into the future at the same time but" }, { "end": 599.72, "start": 593.44, "text": " then these predictions can't basically depend on each other and this degrades your performance" }, { "end": 606.8000000000001, "start": 599.72, "text": " quite a bit, so what these authors do is to remedy that, they say well these things here" }, { 
"end": 613.12, "start": 606.8000000000001, "text": " we can, I mean we can produce a bunch of them, right, since all that's required as an input" }, { "end": 618.7600000000001, "start": 613.12, "text": " is this, we can actually produce like, we can produce a batch of them at the same time," }, { "end": 624.2, "start": 618.76, "text": " so we can produce one, two and three words into the future and we can do this like a" }, { "end": 631.2, "start": 624.2, "text": " hundred times in parallel, no problem, alright, and we can sample this, we don't have to always" }, { "end": 639.48, "start": 631.2, "text": " take the most likely word, we can actually sample a bunch into the future and this now" }, { "end": 646.42, "start": 639.48, "text": " gets smarter because now I have a list of one hundred basically suggestions of what" }, { "end": 652.3199999999999, "start": 646.42, "text": " the continuation here could be, right, I have, I take this not as a given but I take these" }, { "end": 660.12, "start": 652.3199999999999, "text": " outputs as suggestions, alright, and then I can have another model that, this is called" }, { "end": 668.24, "start": 660.12, "text": " verify here, I can have another model that scores all of these different, all of these" }, { "end": 672.92, "start": 668.24, "text": " different decodings in parallel, both of these can be done by the same model, we saw the" }, { "end": 679.9599999999999, "start": 672.92, "text": " language model can be either used to predict or to score something, since it inherently" }, { "end": 689.28, "start": 679.9599999999999, "text": " predicts the probability of sequences or of following words, we can, we can let it output" }, { "end": 694.92, "start": 689.28, "text": " this probability all in parallel, so this also can count as a score, what I'm trying" }, { "end": 701.04, "start": 694.92, "text": " to say is you can, since the language model is constructed as a, as outputting probabilities" }, { "end": 710.28, "start": 701.04, "text": " anyway, like such, we can use it both to predict the next word and also if we have a suggestion" }, { "end": 719.16, "start": 710.28, "text": " we can use it to score that and to say okay how likely is that, right, and then what we" }, { "end": 726.5999999999999, "start": 719.16, "text": " can make sure is that the suggestion, we are looking for the suggestion basically that" }, { "end": 733.72, "start": 726.6, "text": " has the highest score and if you want to be really true to your original model you say" }, { "end": 741.58, "start": 733.72, "text": " I want to look for the suggestion that has the maximum, that would have had the maximum" }, { "end": 750.88, "start": 741.58, "text": " score had I decoded one by one, so then basically you retain the original performance and you" }, { "end": 759.92, "start": 750.88, "text": " gain a speed up as long as the, what the greedy decoding would have produced is in your suggestion," }, { "end": 763.6, "start": 759.92, "text": " in your box of suggestions that you produce, as long as that's in there you gain a speed" }, { "end": 769.66, "start": 763.6, "text": " up, if that's not in there then you can always, you always have the one word ahead model because" }, { "end": 775.72, "start": 769.66, "text": " that's, you have that anyway, you predict the next word anyway, so in case none of these" }, { "end": 782.88, "start": 775.72, "text": " suggestions work out you still have this one word prediction basically which is the model" }, { "end": 792.08, "start": 782.88, 
"text": " you started with, so at worst case you're as fast as the greedy model and in best case" }, { "end": 798.72, "start": 792.08, "text": " you always, your suggestions are so good that they are always the one that would have been" }, { "end": 807.36, "start": 798.72, "text": " decoded anyway, so you can basically in this case do three steps at once. Alright, so this" }, { "end": 814.9, "start": 807.36, "text": " verify step here is shown here and you see it will decode, now this is just one suggestion" }, { "end": 822.44, "start": 814.9, "text": " keep in mind, they can produce many suggestions at the same time if there's memory or and" }, { "end": 827.6, "start": 822.44, "text": " they can actually, they can score each of this, so they can score this, they can score" }, { "end": 837.72, "start": 827.6, "text": " this and they can score this also independently as a batch, so they can do this in parallel" }, { "end": 843.84, "start": 837.72, "text": " and here you see, yeah here is executed in parallel, so the model will go and will score" }, { "end": 848.52, "start": 843.84, "text": " this word in and say ah this would have been, this is the argmax of the greedy decoding" }, { "end": 854.88, "start": 848.52, "text": " anyway and it can also score this step and say aha given that there is an in that this" }, { "end": 861.72, "start": 854.88, "text": " the is the argmax anyway, right and you can score this step and say ah given that there's" }, { "end": 869.08, "start": 861.72, "text": " in the, the argmax would have been car, so that's not bus, so we reject this suggestion" }, { "end": 876.24, "start": 869.08, "text": " but we keep that part of the suggestion and say okay the in the is basically what would" }, { "end": 886.44, "start": 876.24, "text": " have been decoded anyway according to the greedy decoding, so we can basically accept" }, { "end": 896.48, "start": 886.44, "text": " this here and continue from there, this is the accept step here, so this basically, so" }, { "end": 902.52, "start": 896.48, "text": " you can see in this one step which yeah we'll call one decoding step, we have basically" }, { "end": 912.42, "start": 902.52, "text": " done two of the greedy decoding steps in one go, so by predicting into the future and then" }, { "end": 919.04, "start": 912.42, "text": " selecting the one that agrees with the original model because we can, the fundamental thing" }, { "end": 928.4, "start": 919.04, "text": " is we can score in parallel but we can greedily produce not in parallel, alright so they actually" }, { "end": 939.04, "start": 928.4, "text": " push this further by also eliminating one of the, one of the evaluations here by combining" }, { "end": 948.4, "start": 939.04, "text": " basically the next predict step with the previous verify step and it's pretty cool to look at" }, { "end": 957.04, "start": 948.4, "text": " that, so we're in the same situation, you have this and you suggest this continuation" }, { "end": 968.04, "start": 957.04, "text": " and then the score model again will go here but while you verify you also do the next" }, { "end": 973.56, "start": 968.04, "text": " predict at the same time, since you've built your model, since it's the same model and" }, { "end": 982.52, "start": 973.56, "text": " this model every time you execute it, it outputs a distribution over the next set of positions," }, { "end": 988.4, "start": 982.52, "text": " you might as well take the outputs of it, right, so when you then decide to accept this" }, { "end": 996.36, 
"start": 988.4, "text": " here, you will already have the outputs computed for the next three positions, so this you" }, { "end": 1001.48, "start": 996.36, "text": " can feed directly into this next predict step, you basically don't have to execute it, you" }, { "end": 1009.76, "start": 1001.48, "text": " simply go to the one you've accepted and then you look at the outputs that you get anyway" }, { "end": 1018.88, "start": 1009.76, "text": " from this model and use them, so you might ask, okay which, how does a model look that" }, { "end": 1024.12, "start": 1018.88, "text": " like scores and predicts into the future and this, the answer is here, it's a bit out of" }, { "end": 1029.8799999999999, "start": 1024.12, "text": " order, I would have maybe liked this more previously but in any case this is what they" }, { "end": 1034.52, "start": 1029.8799999999999, "text": " do, so they use a transformer architecture and you have to imagine it starts down here" }, { "end": 1040.48, "start": 1034.52, "text": " and actually there is a huge network down here, right, this is just the output layer," }, { "end": 1047.6, "start": 1040.48, "text": " so there's a giant transformer network down below and it produces this output representation," }, { "end": 1054.84, "start": 1047.6, "text": " now normally from this representation you would go to this what's called p layer here," }, { "end": 1060.52, "start": 1054.84, "text": " this is a output vocabulary projection, so this has one entry for each of the words in" }, { "end": 1068.76, "start": 1060.52, "text": " your vocabulary, so the, a, cat and so on and you would then for each one predict a" }, { "end": 1076.24, "start": 1068.76, "text": " probability, so with this representation you basically project it onto this vocabulary" }, { "end": 1082.6399999999999, "start": 1076.24, "text": " and predict the probability distribution over the next word, but what they do is they say" }, { "end": 1087.68, "start": 1082.6399999999999, "text": " no no no we not only need the next word, we need the next three words, so let's actually" }, { "end": 1095.5600000000002, "start": 1087.68, "text": " split this output signal into three output signals and they do this by introducing this" }, { "end": 1103.3200000000002, "start": 1095.5600000000002, "text": " hidden feed forward layer here or a hidden transformer layer, it's a hidden layer, yeah" }, { "end": 1110.28, "start": 1103.3200000000002, "text": " we insert a single feed forward layer with hidden size, okay, so they insert a hidden" }, { "end": 1119.16, "start": 1110.28, "text": " layer and then they also add these skip connections here, right, they add the skip connections" }, { "end": 1127.52, "start": 1119.16, "text": " which basically just means they feed through this output directly to here and add it to" }, { "end": 1135.08, "start": 1127.52, "text": " that, so basically the feed forward layer needs to transform this output here into the" }, { "end": 1141.84, "start": 1135.08, "text": " vocabulary input, one step ahead, two steps ahead and three steps ahead and you can see" }, { "end": 1146.6, "start": 1141.84, "text": " here that those are independent, right, they don't depend on each other, there's nothing" }, { "end": 1151.84, "start": 1146.6, "text": " feeding back p1 here into the decision of p2 so they can be executed in parallel, but" }, { "end": 1160.12, "start": 1151.84, "text": " they lose the dependence on each other, alright, so that's the architecture and you can clearly" }, { "end": 
1171.1599999999999, "start": 1160.12, "text": " see here it's able to predict three steps into the future at the same time, so yeah," }, { "end": 1177.2399999999998, "start": 1171.1599999999999, "text": " alright so they also do different adjustments where they say now yeah we can also kind of" }, { "end": 1187.6799999999998, "start": 1177.2399999999998, "text": " sacrifice a bit of the fidelity to the original model by not requiring that the basically" }, { "end": 1192.96, "start": 1187.68, "text": " we don't only accept when the suggestion is the perfect best suggestion that would have" }, { "end": 1199.04, "start": 1192.96, "text": " been decoded by the greedy model, but what we could do is we could just if it's in the" }, { "end": 1205.48, "start": 1199.04, "text": " top k we could accept it, if it's in the if it's good enough basically one of the suggestions" }, { "end": 1210.4, "start": 1205.48, "text": " that we have is good enough then we'll accept it or when you have like some sort of distance" }, { "end": 1216, "start": 1210.4, "text": " metric they say here so the distance between our suggestion and the maximum so the what" }, { "end": 1222, "start": 1216, "text": " would have been best by the greedy should be smaller than some constant epsilon and" }, { "end": 1226.8, "start": 1222, "text": " that way you can sacrifice a bit of performance but your suggestions will be accepted much" }, { "end": 1232.6, "start": 1226.8, "text": " more often and thereby your speedup will be much higher and they also experiment with" }, { "end": 1239.4, "start": 1232.6, "text": " whether or not they should fine tune the original model along with their model and also the" }, { "end": 1246.5600000000002, "start": 1239.4, "text": " experiment with knowledge distillation where they basically have like some some teacher" }, { "end": 1251.92, "start": 1246.5600000000002, "text": " model and you train the your model on the output of the teacher model don't want to" }, { "end": 1258.92, "start": 1251.92, "text": " go too far into this since these are mostly kind of things to make it work even better" }, { "end": 1266.64, "start": 1258.92, "text": " and you can see here that this is for example a machine translation task so this is the" }, { "end": 1274.44, "start": 1266.64, "text": " WMT 2014 English German translation and there's a regular they get a blow score of 26 and" }, { "end": 1283.3600000000001, "start": 1274.44, "text": " here higher is better and if you can see they get a fairly sizable speedups by keeping the" }, { "end": 1289.8000000000002, "start": 1283.3600000000001, "text": " blow scores fairly constant so they they almost speed up by 2x but if they allow the blow" }, { "end": 1297.12, "start": 1289.8, "text": " scores to go down a bit they get a much higher speedup of like 3 and then if they do like" }, { "end": 1303.12, "start": 1297.12, "text": " distillation and fine tuning they actually manage to keep up the performance even though" }, { "end": 1310.56, "start": 1303.12, "text": " they get very very high speedups so they get speedups until like 5x by not dropping the" }, { "end": 1319.1399999999999, "start": 1310.56, "text": " blow scores very much so that's that's pretty impressive another experiment they do is image" }, { "end": 1326.2800000000002, "start": 1319.14, "text": " super resolution where you can see here with regular they try to really keep exactly the" }, { "end": 1332.5600000000002, "start": 1326.2800000000002, "text": " original model output and it doesn't it doesn't 
speed it up too much but when they allow for" }, { "end": 1339.8400000000001, "start": 1332.5600000000002, "text": " a bit of a mistake to be made so here this is image super resolution so values are between" }, { "end": 1347.64, "start": 1339.8400000000001, "text": " zero and 255 and they allow epsilon equals to two of that so that's that's kind of less" }, { "end": 1355.44, "start": 1347.64, "text": " than 1% error on the individual pixel then they get a speed ups of 7x or something like" }, { "end": 1361.72, "start": 1355.44, "text": " this and you can see in this region here that when the K is for in case the number of steps" }, { "end": 1371.64, "start": 1361.72, "text": " that you decode ahead so and the mini mean block size is 3.75 that means on average 3.75" }, { "end": 1376.3200000000002, "start": 1371.64, "text": " steps ahead or accepted which means basically there their suggestions are almost always" }, { "end": 1381.84, "start": 1376.32, "text": " good enough to be accepted so they get this massive speed up by basically being able to" }, { "end": 1390.3999999999999, "start": 1381.84, "text": " jump these decoding steps yeah so they have a bunch of other results here there show their" }, { "end": 1395.96, "start": 1390.3999999999999, "text": " wall clock time speed up since iteration speed up as well but if you have to pay in huge" }, { "end": 1401.28, "start": 1395.96, "text": " computational cost it's not so good but they also show that they have a big kind of wall" }, { "end": 1410.08, "start": 1401.28, "text": " clock speed up up to up to 4x here in super resolution and over 3x in translation so it's" }, { "end": 1415.12, "start": 1410.08, "text": " a pretty cool paper they give some examples here a bunch of more tables some examples" }, { "end": 1424, "start": 1415.12, "text": " of their super resolution and yeah if this might be something for you then use it it's" }, { "end": 1429.92, "start": 1424, "text": " I think it's a pretty neat trick and yeah especially for production systems all right" }, { "end": 1431.44, "start": 1429.92, "text": " that was it bye bye." } ]
pPBqM4CKjUU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Discriminating Systems - Gender, Race, and Power in AI
[ "Science & Technology" ]
[ "ai", "machine learning", "bias", "fairness", "ml fairness", "algorithmic bias", "algorithmic discrimination", "ai and society", "ainow", "google", "microsoft", "race", "gender", "stem", "pipeline", "gender gap", "diversity", "inclusion", "equity", "power" ]
TL;DR: - There exists both an unequal representation of people in the AI workforce as well as examples of societal bias in AI systems. - The authors claim that the former causally leads to the latter and vice versa. - To me, the report does not manage to make a strong enough argument for that claim. - I find the statements made quite dishonest at times. https://ainowinstitute.org/discriminatingsystems.pdf Authors: Sarah Myers West, Meredith Whittaker, Kate Crawford
Hi there, today we're looking at discriminating systems, gender, race and power in AI by Sarah Myers-West, Meredith Whitaker and Kate Crawford of the AI Now Institute, which is a part of New York University or associated with it. This is not as much a paper as it is a report, kind of summarizing current literature and also kind of an opinion piece slash recommendation giving document. Yes, so we'll dive into it. As you can see from the index, it's quite a long report and we don't have time to go into all of it. Actually, we don't have time to go into most of it. I just hope to kind of point out what the main arguments and themes are in the report, kind of what it's trying to say, pick out some interesting things and summarize it to the best of my ability. Also give a little critique. So let me actually go ahead and try to state the kind of core argument that the report is trying to make, because it's not really clear from reading it and you have to kind of read the whole thing and then kind of becomes clear what the argument is, I feel, though they somehow stated in the introduction numerous times in various ways. So I might just be not as attentive reader at first time. But all right, so here's the argument and I really hope I'm representing this correctly. We have a problem currently that sometimes AI systems can exhibit what we usually call bias. And we don't mean mathematical bias, like bias variance tradeoff. We mean bias in a societal sense, let's say bias against certain types of people where they shouldn't exist. So for example, let me draw an AI system and I'll just draw a little computer screen with a little light bulb. All right. So this is because it's smart, this is an AI system and the AI system and they give numerous examples. One example they give us for is like face recognition algorithm that is much more accurate on faces of white males, as opposed to darker skinned females. So let me draw like two curves to represent these distributions are unequal. And so the AI system exhibits some bias with respect to some kinds of people with an especially protected attributes. And in this report, they focus mainly on gender and race. So that's what we're going to talk about. The second thing they observe, so this observation one, the second thing they observe is, I'm going to draw some generic people here that represent the workforce of AI. So the AI workforce is classified as all the people that work on AI, be that university researchers or within companies building AI products or deploying them. So this is the workforce and they observe that there is an unequal distribution among the AI workforce. So this distribution, I'm also going to do this for unequal distribution. There's an unequal distribution in the AI workforce, most notably, it's predominantly males who work on AI. And also white people are overrepresented compared to the world population at large. So that's kind of the two observations they make. And now what they claim is that the unequal representation in the workforce is causing the bias in the AI systems. So they're basically saying these AI systems are biased because that the workforce is unequally distributed. And also they claim in a less powerful sense, I feel, but they claim there is a loop that this then leads back that because there is bias in the AI system, that again leads to an unequal, more unequal distribution of the workforce. 
So the core argument really is, as they set out to do, like in the introduction, and also claim that they have done in the conclusion, is to demonstrate these two directions here in a causal way. So the systems are biased because there is an unequal representation in the workforce and that feeds back. So the argument is that if you want to fix the bias here, if you want to fix that, then you will have to fix it via making the workforce more what they call diverse, so less unilaterally distributed towards white males. That's kind of the final conclusion. If you read their report and the recommendations, that's mainly what they're going for. Yeah, so my opinion, or in my opinion, having read the report a couple of times, is that as I see it, they really don't demonstrate these links. So they give examples of this and they give examples of this. They show that the workforce is unequally distributed. They show that AI systems can exhibit such bias, but they never actually show these links in my opinion. They don't show this. So if you make the claim that in order to fix the bias in AI systems, you must fix the unequal representation in the workforce, I would need an argument that says because there is unequal representation, therefore A, therefore B, therefore C, therefore bias, like an actual argument to follow that says because of this, that, because of that, that, and so on. It's just not there. They simply show parallels. They simply show that these two things exist and they just list example after example of that. I don't think they make this argument. But I think, also the other direction, they don't really make this argument. Except in one case, where if you give them benefit of the doubt. What I also think is that it appears like the article, if you read it, and I encourage you to read it if you have some time, it makes a lot of sense if you have already accepted this conclusion. Like if you've already accepted this, then it's like, oh yeah, because I feel this is just a text where the confirmation bias is so high, just the way it's written, that it must make a lot of sense to someone who's already kind of in on this conclusion. But to someone who isn't sold yet, like myself, I am just not finding this convincing at all. The second thing is that it very much feels like this isn't like a discovery or something. But someone actually set out with the goal to address this here with the goal of I want companies to hire more of these people or certain kinds of people or to become more diverse or to promote more of a certain type of people. And now I'm going to find reasons for this. And the reason is like, oh, look at look at this bias here. This is caused. This is caused by this other thing. And therefore we must fix this other thing. It very much feels like someone setting out with already the conclusion in mind rather than this being an honest investigation. But yeah, I mean, read it for yourself. I can't prove the absence of an argument by not reading every single line. And I can't read every single line because it'll just get very long and boring. But read it yourself. And I think I'm pretty I'm pretty I've read it numerous times with really an open mind to be convinced that there is an argument in there. But I don't think there is or I don't think there is a very strong argument for this. All right. Let this first part here is more or less a summary. So research findings is more or less a summary. And we'll get to these things as they are important. 
Then they state recommendations right at the beginning. So actually, you'd have to read the article first. This is kind of more of an abstract section. But since it's right here, we'll kind of jump right into it. So these are recommendations and I've claimed they don't really show a connection. But they actually just show examples, examples of this and examples of this and parallel them. And this is reflected in like every single section, including here in the recommendations. They have recommendations for improving workplace diversity. And they have recommendations for addressing bias and discrimination in AI systems. Right. So all right, in my case, if you make this argument, I would I would feel you also make recommendations for breaking these links. But or argue why they can't be broken. But all right, let's jump into some of them. And it is really a mixed bag here, really. So some recommendations I'm really in favor of just from from the go not even you don't even need the article for those here. Discrimination, harassment and discrimination, transparency reports, including number of claims over time, the types of claims submitted and actions taken. So it's known that especially in these larger companies, sexual harassment claims often go down in either bureaucracy or are kind of hushed under the table or something like this. What you have to recognize is that a human resource department of a large company isn't there to serve the human resources. It's there to serve the company providing human resources. That's why a sexual harassment claim to an HR department is just a potential lawsuit. And that's why they don't want to take it seriously except for it must go away really quickly. So I think to kind of force companies or to ask companies to be more transparent, to take more seriously these the accusations of sexual harassment and assault and also discrimination is a very valuable goal. And I fully, fully support this. Also the here commit to transparency around hiring practices, especially hiring regarding how candidates are leveled, compensated and promoted. But also the larger the company gets, the less transparent this process usually becomes or the more bureaucratic, the more people are able to game it and so on and distort it. So I feel it's always good to be transparent around, okay, this person provides this much value to the company, therefore they should be compensated according to that or at least be transparent about it. So these are kind of recommendations I like. Then recommendations that really go into a different direction is something like this here, change hiring practices to maximize diversity. And this is kind of reflect, I'm not going to go on this reflected in other points, increase the number of people of color, women and other underrepresented groups at senior leadership levels of AI companies across all departments. So these things, they are usually within like company diversity goals and so on, doesn't really say how to do it. But then the I mean, as such, they're not really recommendations yet. They're more like goals. But here recommendation seven, I think is the the crucial one, ensure executive incentive structures are tied to increases in hiring and retention of underrepresented groups. So this is it's a bit of coded language. But here they talk about executive incentive structure tied to hiring and retention of underrepresented groups. 
This basically means that if you are a manager or someone in charge of hiring or promoting, and you hire or promote an underrepresented person, and since they're talking about gender and race here, that means if you hire or promote a person of color or a woman, you will be compensated more. So at the end of the year you'll somehow have more money, like more bonuses or more base comp or more equity, you'll get more money. So this recommendation is a direct call to hire based on race and gender. This is a direct call to racist and sexist hiring, basically to discriminate against people according to their skin color and according to their gender, which, I mean, how is this okay with anyone? How are people even able to state this in a high profile report like this and get away with it and not have people criticize them? This directly calls for people to be treated according to their gender and race, and probably as directly as you can go without getting into actual legal trouble. But yeah, I'm really, really against such practices. I just don't know how this can ever be thought of as a good thing by anyone. All right, so, well, yeah, in my mind, this recommendation and this recommendation kind of run counter to each other. Because if I commit to transparency about how people are hired, okay, now I can transparently commit to being racist, I guess. But if I say, okay, I'm going to hire and promote people based on how much value they provide to the company, then yeah, I'd much rather have that than saying I'm going to hire and promote people based on their skin color. Alright, so let's actually jump into the report. I'm not going to go through these recommendations for addressing bias and discrimination in systems, these are fairly general and common. So as I said, we'll skip most of the things in the report. So, introduction. They start out with: there is a diversity crisis in the AI industry. And they give some numbers, like 15% of AI research staff at Facebook and 10% at Google are women. So these are some fairly well known statistics about how the AI field is currently gender and race skewed. Then they claim in bold: the diversity problem is not just about women. It's about gender, race, and most fundamentally about power. It affects how companies work, what products get built, who they're designed to serve, and who benefits from their development. So I find this word power and this notion of power a lot in this report, it appears again and again and again, in like power dynamics and power dynamics among groups. It paints a worldview where these different gender and race groups kind of struggle against each other to gain power over one another. And whoever's in power will try to remain in power in alliance with their gender and race group and try to keep the other groups down. I'm not sure that's the correct view of the world. In my mind, the world is comprised of individual people that want to achieve something for themselves and would like to prop themselves up. Whereas in this worldview, it's like, I'm going to use the power of my group to keep other groups down. I don't know which worldview you subscribe to, but I find the world is comprised of individuals. Yeah, and this is not discrediting that some people have it harder because of their gender or race.
But to see the entire world as a power struggle between these groups, to me, it's, it's, yeah, and I'm not going to point out everywhere it appears, this power wording, but it appears a lot and it's really shapes how the report reads. You have to, you have to kind of remember, if you're a white male, and currently, the field is comprised of 90% white males, you, if you have like 10, like 10 hours, let's say you have to have 10 hours to do something, right, you can either choose to put down some other groups, like put down groups that you're not part of, or you can choose to invest these 10 hours in putting up yourself, you, right. So if, if I, like I profit, if I'm a white male, I profit minimally from keeping the other groups down because guess what, I still have to compete with the like 1 billion other white males there are. It's not going to help me to keep down anyone else, and especially, like it's, it's moronic, like who does that, who like has alliance, except most fringe people, like to their race or gender, rather than to the people they admire and respect and like to work with. So I'm going to, if I have like 10 hours today, I'm going to rather spend this in propping up myself compared to everyone else, and I don't care what gender or race they are. And so that to me, that's a much more accurate or, I don't know, plausible worldview. But just be aware that this report really takes on the language of kind of groups and power between groups and groups trying to, you know, kind of gain power and keep in, keep power and keep others from having power. All right, so say, to date, the diversity problems of the industry and the issues of bias in the systems it builds have tended to be considered separately. We suggest that these are two versions of the same problem. Issues of discrimination in the workforce and in system buildings are deeply intertwined. Challenge, and moreover, tackling the challenges of bias within technical systems requires addressing workforce diversity and vice versa. So the, I think this, this here actually is like how I described the argument and they kind of restated multiple times in a bit different way. But I think this is the core. And I really think I'm not misrepresenting the article here in that this is what they are setting out to do. They're setting out to say, okay, the diversity, the kind of unequal representation in the workforce and the bias in some AI systems are causally linked to each other and tackling one requires tackling the other. So yeah, if I'm misrepresenting them, let me know, but I really think I'm accurately representing their argument. So what they, what they do, as I said, is they give examples of one and of the other and also they really, they're really on kind of discrediting the kind of issues to solve problems of bias in a different way. So they point a little bit to this here in the introduction. They say in the face of growing evidence, the AI research community and the industry producing our products have begun addressing the problem of bias by building on a body of work of fairness, accountability and transparency. So fairness, accountability and transparency research concerns these issues. For one is research showing that some products are unfair or untransparent and so on. 
On the other hand, it's trying to devise algorithms that are more fair according to some notions or more accountable and transparent, which means that the algorithm can kind of say why it made a certain decision rather than it being a deep learning system that you don't really have an insight. These fields are active fields of research, definitely very interesting to look into. So but they, they kind of, it is not already here, but they say, yeah, we have adjusting AI systems that produce a result deemed fair by one of various mathematical definitions. You can already see in the language here, they don't really like this research and they are trying in this report to kind of discredit it or at least claim that it doesn't solve the whole problem because their point is, of course, you have to address this diversity issue in the workforce in order to fix the problems. So to this, I just want to say no, like if you can, I mean, you can criticize the fairness and accountability and transparency research field in that they haven't solved the problem fully yet. But in principle, if I have an algorithm, if I'm being delivered an algorithm, right, and the fairness literature has been applied to that algorithm and someone tells me, I guarantee you here is a proof, the algorithm is fair, right, then I really don't care who made that algorithm. As long as it's fair, the problem is fixed. If the bias is gone, the problem is fixed. And I don't care who fix it. I don't care if the person who fixed it is black or white or purple. Then the problem is fixed. And they, they really have to, they really try to just make the counter argument here is that no, that's it's not enough. But I claim yes, it, if you can actually solve the fairness problem, technically, then you have solved the fairness problem. Yeah, the only thing you can do is claim that it is not good enough yet, but not that it's fun to they kind of have to make the argument that it's fundamentally flawed approach. And I don't think they succeed in doing that here. Um, yeah, so they go on to say, we should expand to consider not only how I tools can be biased technically, but how they're shaped by the environments in which you're built in and the people that built them. Again, this this focus like who builds the AI system, I don't care, I care what it does, right? As much as if, if I hear an argument for or against something, I don't care who makes the argument, right? I care what the argument says. This is, it's like an ad hominem attack for an entire community. That's kind of how this this article, this report shows, or is appears to me. So they say, currently, large scale AI systems are developed almost exclusively in a handful of technology companies and a small set of elite university laboratories spaces that in the West tend to be extremely white, affluent, technically oriented and male. So yeah, their their problem, that's their fundamental problem here that these these spaces are skewed in one direction. Interestingly enough, their problem is not so much that it's that they're all in the same place, right? That they all live like 20 miles from each other in around San Francisco. That's that seems to be not a problem at all, as long as we get to like enough people of color and women into these 20 miles. But yeah, so that that's pointing out the the problem here or the yeah, kind of issue they have. All right, so they go on. 
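As a quick aside on what fair by one of various mathematical definitions can mean in practice, here is a minimal sketch of a per-group audit, assuming binary predictions and a single protected attribute. This is my own illustration, not something from the report or from any particular fairness library; the group names, the toy data and the metrics chosen (per-group accuracy, positive rate and false negative rate) are all assumptions made purely for demonstration.

```python
import numpy as np

def group_fairness_report(y_true, y_pred, group):
    """Compare simple per-group metrics for a binary classifier."""
    report = {}
    for g in np.unique(group):
        mask = group == g
        report[g] = {
            "accuracy": float(np.mean(y_pred[mask] == y_true[mask])),
            "positive_rate": float(np.mean(y_pred[mask])),  # P(prediction = 1 | group)
            "false_negative_rate": float(np.mean(y_pred[mask][y_true[mask] == 1] == 0)),
        }
    rates = [stats["positive_rate"] for stats in report.values()]
    # Demographic parity gap: difference in positive prediction rates across groups.
    report["demographic_parity_gap"] = max(rates) - min(rates)
    return report

# Toy data: 1000 samples, two hypothetical groups, and a deliberately skewed "model"
# that misses positives more often for the smaller group.
rng = np.random.default_rng(0)
group = rng.choice(np.array(["a", "b"]), size=1000, p=[0.8, 0.2])
y_true = rng.integers(0, 2, size=1000)
y_pred = np.where(group == "a", y_true, y_true * rng.integers(0, 2, size=1000))

print(group_fairness_report(y_true, y_pred, group))
```

If an audit like this, or a stronger formal guarantee, comes back clean for a deployed model, then the point made above holds: it does not matter who produced the model, the measured bias is gone.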
Just kind of want to highlight again, they say both within the spaces where AI is being created and the logic of how AI systems are being designed. So paralleling the two things, the cost of bias, harassment and discrimination are born by the same people, gender minorities, people of color, other underrepresented groups. And they also say similarly, the benefits of such systems from profit to efficiency, accrue primarily to those are already in positions of power tend to be white, educated and male. So they again, they say the this points to a systematic relationship between patterns of exclusion within the field of AI and the industry driving its production on the one hand and the biases that manifest in the logics and applications of the technologies on the other. And they try to make this connection because they say the cost and the benefit of these two things are overlap in the people that where it costs and it benefits. And I really, again, it's just a parallel, but I really even don't think that's true because they kind of, they kind of argue against themselves later. So they always say, we have to look at again, they shoot against the take much more than the technically driven problem solving. They point to this. So our research requires looking at gender and racist categories within which humans think in short, sorry, studies of discriminatory systems, we need to ask who is harmed, who benefits, who gets to decide. So it's kind of who bears the cost, who bears the benefits and who has the power. So that's the, and again, it's we seek to understand how AI disadvantages some, we also consider how it works to the advantage of others. So keep that in mind. That's kind of the lens through how they analyze the this thing again, one that acknowledges power relationships and centers equity and justice. That's the, they want to see this bigger picture. So that's yeah, keep, again, keep that in mind. So they go into a section called which humans are in the loop, how workforces and AI systems interact. So this kind of from the title of this section, you think, okay, here's where we get in. Here's where we make the argument. And they start by listing examples of how AI systems can be discriminatory. And first, they go into an example of Amazon had developed an experimental hiring tool to help rank job candidates. By learning from its past reference preferences, Amazon hoped that the resume scanning tool will be able to efficiently identify qualified applicants, comparing their applications to previous hires. The system quickly began to downgrade resumes from candidates who attended all women's colleges along with any resumes that included the word women's. After uncovering this bias, Amazon engineers tried to fix the problem by directing the system to treat these terms in a neutral manner. The company eventually abandoned the tool when they were unable to ensure that the algorithm would not be biased against women. Gender based discrimination was built too deeply within the system and in Amazon's past hiring practices to be uprooted using a purely technical approach. So this just the way is written, I find to be quite dishonest. But let's analyze what happened here. So their final claim is that gender based discrimination was built too deeply within the system to be uprooted using a purely technical approach. So this is one of their arguments. They say technical approaches, they don't help because the Amazon engineers tried to fix the problem. 
But when they were unable to ensure that the algorithm would not be biased against women. So if you read this, you really I mean, I really get the impression that's not what happened here. What happened here most probably is Amazon built this tool, okay, and it fed in its past hires and we know of issues of like data set bias bias inherent in data set. So if your data set is skewed, the AI tends to pick up on the skewed data set and become skewed itself. Okay, so I actually would argue that most or all of the examples they stayed in here are examples of such biased data sets and not. So the the cause of the bias is the data set that they are strained on and not the person that ran the code or built the algorithm to train it on or built the deployment. And so but it doesn't matter you're a you're Amazon, you built this tool and you realize, oh, it discriminates against people having women's on their CV. So this is a pretty bad PR wise. So you tell your engineers engineers fix the problem. So the engineers go fix the problem, they come back and say, okay, we fixed the problem. And then what you do is you say, okay, engineers, can you ensure me that the algorithm would not be biased against women? Because if only the slightest bias exists, if only it doesn't even have to be if one journalist finds one example, where there is a down rank, because I add the word women's, then we are screwed, right? And the engineers will say, No, we can't guarantee that it's a deep learning system or something, right? We, we can't like give you a proof that it's not biased. If you're a smart executive, at that point, you'll scrap the tool, because the potential PR downside are just huge. And probably they've also realized it's not that handy to have this, this tool compared to their recruiters doing their job, because their recruiters might actually be good and have been doing this for a while. So to the to the fact that this tool was scrapped is probably much more a result of a PR disaster. But also independent of that to say gender based discrimination, sorry, gender based discrimination was built too deeply within the system to be uprooted using a purely technical approach. It's just I mean, what is what is this? This is just trying to discredit this kind of technical, technical going about solving this problem. I'm pretty sure if someone comes to me and says here, I have this tool, and I can mathematically prove to you that it's not biased, then it's not then the problem is solved. And also, I really don't see how the person training the algorithm, or the person researching such an algorithm has any influence over how the algorithm works, because they're not the ones making the data set, or if they are, yeah, then they can make a better data set. Also, if a person comes and makes a better data set, that will fix the problem. And it doesn't matter what skin color the person has that makes the better data set. So all of this, this link is just not demonstrated here, or anywhere here at all. But this this here is the closest Amazon that this report actually comes to making this point. And I said before, I drew that drew this thing workforce AI bias, right? So this this link since it here the AI system is used for hiring the workforce. So at least one could make a claim that this link is somewhat demonstrated. But I this it's a weak case, I would agree, but this is the closest they come. 
So that and but then to go this direction, you have to somehow argue, well, the workforce somehow makes the AI system bias, no, the workforce influences the data set. If the AI is trained, so if a hiring AI, how do you train a hiring AI, you optimally train it on the performance. So this this employee here is going to have a performance over time, right? And the AI system will look at that performance over time. So if the AI system even if it's initially biased, because it learns from the risk recruiters, it will learn that, okay, actually, if I always forgo these women, then I don't get as much performance of a workforce, so I should correct for that. So if you train the AI system on a good metric, then then then this problem will leave even out itself. But again, this Yeah, this this is this could be considered like one point in the argument, but I think it's a very weak point. And only because the AI system is actually used for hiring, where I think the point they're making is a much larger one is the general bias in the AI systems contributes to the workforce imbalances. And there you somehow have to say that, okay, the AI system somehow influences society at large and society at large then go leads to the workforce being skewed. I don't Yeah, that it's just not strong enough, in my opinion. And the other direction also isn't isn't strong here. But again, the examples only get weaker from here on. They go on to say, this is just one of many examples that show how the functional logics of a given technology echo the gender and racial dynamics of the industry that produced it here. Yeah, this, that's the claim they're making to echo the gender and racial dynamics. And they're actually making a stronger claim, namely a causal claim. They give the other example of the Amazon's recognition facial analysis service previously demonstrated gender and racial biases worse than those of comparable tools. So it failed to see dark skinned women while being most proficient at detecting likes light skinned men. And they later go into this example again, where they basically also state yes, this is an issue of the data set, the data set being much more comprised of white men. And they say, but then they have to kind of make the turnaround argument and say, well, the data set is a reflection of society and society, you know, part of society is the workforce. And it's just not, I mean, it's again, this argument only works if you already believe the conclusion. Otherwise, there's actually no argument there or no solid one. But what they do here is they say Amazon's initial response to such criticism has been to try and discredit the research behind it. This reaction, or let's let's first discuss this. So the Amazon, yeah, Amazon, of course, being the accused here and a multi billion dollar company and the criticism is something that is PR wise very bad for them. They discredit the research tried to discredit the research behind it. It's understandable that this could be dishonest from Amazon side, right? I mean, they're getting attacked. It's like, you know, the tobacco companies trying to discredit the smoking research, but still, I mean, that doesn't mean it's wrong. It could actually be bad research, right? You have to actually go and look at what's Amazon saying, what is the research really doing? Is Amazon right or wrong? Completely open that Amazon is wrong here, but you still have to go look. And this citation here, I've tried this citation here. This one isn't to a to Amazon's response. 
It's to like a medium article and the medium article doesn't even include Amazon's response. I've looked, maybe I haven't seen it. It doesn't also doesn't link Amazon's response. Maybe it links something that links something or that includes it in some way. But basically this medium article only states, yeah, Amazon has been denying this or Amazon has been critical of this. And if you state such a sentence, Amazon's initial response to such criticism has been to try and discredit the research behind it. I at least expect the citation to lead me to Amazon's response so that I can verify what they're saying. Right. So this, I mean, I don't know, willing to chalk it up to incompetence rather than malice. Right, but then they go on and they say this reaction is evidence of the wider problem. The research was conducted by two well-regarded AI researchers who are women of color. By attempting to publicly discredit their expertise and research methods, Amazon is reinforcing the same kinds of prejudice and derasers that the research critiques. Yeah, here you go straight to the identity of the researchers. Like play the race card straight out. I mean, this is maximum dishonesty, right? Except if Amazon said something like, well, these women of color, clearly because they're women of color, they have no idea what they're doing or something like this. This is basically it's coded language for saying either saying you're not allowed to criticize people of color because they're a minority or you're basically saying Amazon is racist and that's why they criticize them. They just don't take them seriously because they're women of color. I mean, both are both are abhorrent. This is just dishonesty really stated here too. I mean, again, I'm perfectly willing to accept that Amazon's critique of this research is wrong and is not well intended because they're the ones attacked, but you still have to examine it rather than say, well, they shoot against women of color and therefore somehow that makes their counter argument irrelevant or even racist or something. That's I don't know. I find this dishonest. Yeah, I don't know about you. Moving on. So they go on and state a number of examples of bias and discrimination in the workforce and they a lot of times they make a mixture of the gender and race imbalance in workforce and things like sexual harassment not being taken seriously by the companies and also the things like gender or race pay gaps, which I'm open to accept that these things exist and are even intertwined. But just to tell you what's happening because we're kind of skipping but it's kind of a mixture of these things. So they say these issues are systemic. There's a close relationship between these workplaces with discriminatory practices and discriminatory tools, a feedback loop that is shaping the industry and its tools. So again here to state, I think I've stated it enough now that or demonstrated enough that I'm really representing their arguments as they intended it to namely that there is this kind of causal links and loop between these two things. And they shoot against the fairness literature by saying from this perspective, locating individual biases within given technical systems and attempting to fix them by tweaking the system becomes an exercise in futility. Only by examining discrimination through the lens of social logics, who it benefits, who it harms and how can we see the workings of these systems in the context of existing power relationships. 
So they're saying these issues aren't fixable technically, that technically fixing these systems won't help, if that's the problem. And I agree, if that causal link actually exists, then technically fixing the system might not solve the problem. Though I'm not even sure; if you technically fix a system like this, then you technically break the causal link and thereby fix the problem. I'm not sure, but again, this is based on the hypothesis that they've already demonstrated their conclusion, which they haven't, and which they do not in the entire article. Yeah, so the next section goes into who makes AI. So I don't know about you, but this section was titled how workforces and AI systems interact, and apart from the one instance of the AI system being used for hiring the workforce, which as I said is the one instance where there could actually be a causal direction from bias to misrepresentation in the workforce, other than that there isn't really anything in there that shows how these two interact, especially in a causal way. Alright, the next section is called who makes AI, and it is broadly about the gender and race imbalances, or unequal representation, in the workforce. And we're going to skip the part that discusses that diversity statistics of companies aren't really accurate, or can be, you know, massaged by the companies, which is true. Definitely, companies will always try to maximize their profits, and even if they give out such a report, critical thinking is definitely in order. Alright, so the next section is called the discrimination feedback loop. Right, so if in the earlier section you felt like here we go into the meat, then you must feel with this title like, okay, we're actually going to see how this loop works and how the two things are really linked, like how one causes the other and vice versa. So let's jump in. They say AI systems increasingly play a role in our social and political institutions, including education, healthcare, hiring, criminal justice. Yes. Therefore, we need to consider the relationship between the workplace diversity crisis and the problems with bias and discrimination in AI systems. No, I don't see how therefore, but yeah, so I don't see how therefore we need to consider the relationship. Okay, if there is a relationship, we need to consider whether there's a relationship, okay, granted. So they say fairness, accountability and transparency research is playing an emerging role. Now what they mean here is the aspect of fairness, accountability and transparency research that shows that there is a problem. So I told you there are two sides, one side is showing there is a problem in current systems and the other side is trying to fix them. So they're very much fans of the side that shows that there is a problem, and they show some of these problems here, we've already seen some, but they show some more, like Facebook's ad delivery system led to users being shown ads for housing and employment in a discriminatory manner. And a 2019 study found significant racial bias in a widely used commercial algorithm used to determine whether patients will be enrolled in care management programs. So these are just examples of these AI systems being biased. So they go into this and say taking a contextualized view may enable a more extensive account, and by the contextualized view, when they say this, they mean anything more than just a technical approach to solving these problems.
More extensive account of bias to emerge future work could examine the politics of system design study how AI systems in situated reality and study AI systems in situated realities ask why a system was designed in a particular way, how it was constructed, whose interest it shaped shaped by the metrics in which its success or failure is assessed, rather than solely focusing on improving existing data sets or individual algorithms. Yeah, I agree. I mean, we always have to we always have to pay attention to these things, especially like looking at the metrics by which its success or failure is assessed. But a lot of times this is this is rather straightforward in kind of if you look at the metric, the metric most often, especially in commercial applications is money, right? So the metric of like an ad showing system, like if I have a system to recommend ads to people, show people ads and personalize them and so on, I simply want to maximize my revenue. So I want to sell someone something. And everything I want to know is how likely is it that person is going to buy that thing? Right? I that's basically Yeah. So in essence, sometimes it's really valuable to consider what capitalism is. So in capitalism in so capitalism, these kind of this system we're working on is kind of a form of limited capitalism, but mostly mostly capitalism. And capitalism is very greedy. So capitalism, all corporations want to do basically is make money. And that is and on the other side, you have discrimination. So discrimination meaning these unequal represent like unequal distribution actively. So and often sometimes these go hand in hand, sometimes you can make more money by discriminating against a certain type of people. And that's, that's a really bad scenario. Like that's a very, like, this is really something where we need to take action. But a lot of times, a lot of times, these two things stand in opposition to each other. So little arrow here, non compatible. That means if I want to sell someone something, then I maximize my profit by not caring by accurately assessing how likely is it that person buys that thing. If I want to discriminate here, if I want to discriminate, start discriminating, according to skin color saying like, No, I don't like that this person with the skin color is able to buy this product, I want to kind of keep them down, and so on, then I forgo profit, right, then I actually, even though this person could buy this thing, I forego that. So often these things are in direct opposition to each other. Also, if I am in charge of hiring, and I don't like people of a certain gender, but they would actually be really, really good, whatever, good employees. So I forgo that, that means I'm getting a pay more for less qualified people just because I'm biased and I'm down ranking unjustifiably, these people of the gender I don't like. So oftentimes, you have to ask yourself, are people fundamentally greedy, or discriminatory? Which are they more? If push comes to shove, would they rather have more money? Or would they rather keep their own race and gender group in power? And with just, yeah, so the and you have to ask this of corporations, you have to ask this of people. And in my experience and view, like people are much, much more greedy than they are willing to discriminate and give up money for discrimination. And so if we look at metrics by which success or failure of AI systems are designed, then I would argue a lot of the times metrics are actually profit incentives. 
And especially if we look at data set construction, if there is a skewed data set that makes my AI system be biased, that actually loses me money and the company would profit a lot from building a better data set. So looking at kind of metrics actually makes a lot of sense to me and very much in favor of that. And I think by designing accurate metrics and then getting the best possible information, the best possible data sets to maximize these metrics will oftentimes actually eliminate such forms of discrimination. Again, there are situations where they don't, we have to be very cognizant of these. They go into this and they say, also examine more thoroughly how societal discrimination surfaces in data provenance, examining the history and process of data set construction and considering how cultural norms and stereotypes were enumerated and represented at the time of data creation. This is a big issue. Yes. The data set construction kind of at the time of data creation and so on, this is a big issue in these systems and a lot of bias. And I would argue most of the bias we've seen here arises from corrupt data sets and from data sets that were constructed in an already biased way. And the AI system trained on these data sets simply replicates this bias. So I think that's very correct here. They go into this example, they say the labeled faces in the wild data set contains over 15,000 images. Only 7% of images are of black people. This is because these, the media landscape of the early 2000s, these images were gathered from the news media at the time, predominantly featured white men in positions of celebrity and power. This exactly. So if you train a system on this data set, the system will inherit this bias. Yeah, so this is a classic example of a corrupt data set. Also this isn't only with race and gender. This is also if you like take pictures from IMDB, yes, a lot of this currently Celeb A data set that is used in all the GAN research is collected from IMDB. You probably have overly beautiful, like pretty face people on there. So that your AI system, your generative model is only going to produce mostly pretty face people, since movie stars tend to be a lot prettier than the average humans. So that the kind of data set construction process, I think is currently the biggest source of bias in AI. But that also, it's interesting that they go into this here and they kind of want to make the point that this is because society and power in society, the data set reflects that. But I would argue if someone makes a data set that doesn't have this bias, then the problem is solved. And I don't care who makes the data set. So the link between the workforce and the bias is really broken by an argument like this, because as soon as we have a correct data set, an unbiased data set, we can mitigate the bias. And they even go, they go into this here. They say, sorry. Yeah, they say down here. They say these people, these researchers have looked at these facial recognition systems and they assessed this what we saw earlier, higher error rates for darker skinned women than for any other group, lowest error rates for light skinned men. To measure this disparity, these researchers developed a new data set that is more balanced, both in terms of gender and skin color. Good. Problem, like make a larger data set to actually train on and then problem solved. And I don't care at all what race and what gender these people are. Well done. Good people make a good data set like this. 
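To make the just build a better data set point slightly more concrete, here is a small sketch of the kind of audit-and-rebalance step one could run before training. The attribute name, the file names and the skew are invented for illustration; this is not the actual Labeled Faces in the Wild data nor the balanced benchmark the researchers built, and naive oversampling is shown only as the simplest possible stand-in for actually collecting more representative data.

```python
import random
from collections import Counter

def composition(dataset, attribute):
    """Fraction of examples per value of a protected attribute (e.g. a skin tone label)."""
    counts = Counter(example[attribute] for example in dataset)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

def rebalance_by_oversampling(dataset, attribute, seed=0):
    """Naive fix: repeat examples from under-represented groups until all groups are equal in size."""
    rng = random.Random(seed)
    by_group = {}
    for example in dataset:
        by_group.setdefault(example[attribute], []).append(example)
    target = max(len(group) for group in by_group.values())
    balanced = []
    for group in by_group.values():
        balanced.extend(group)
        balanced.extend(rng.choices(group, k=target - len(group)))  # duplicates, not new data
    rng.shuffle(balanced)
    return balanced

# Hypothetical face dataset skewed roughly like the one described above (all values invented).
dataset = [{"image": f"img_{i}.jpg", "skin_tone": "darker" if i % 14 == 0 else "lighter"}
           for i in range(14_000)]
print(composition(dataset, "skin_tone"))                                          # ~7% / ~93%
print(composition(rebalance_by_oversampling(dataset, "skin_tone"), "skin_tone"))  # ~50% / ~50%
```

Collecting genuinely new, diverse examples, as the researchers mentioned above did, is of course the better fix; the sketch only shows how straightforward it is to at least measure and reduce the skew, whoever happens to run it.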
And then we've solved the problem. What's the problem here? Why would you ever care what these people look like if they do good work? That's to me, this actually breaks their own argument. I don't know why they included here. To me that to then suggest that there is a link to the workforces, if here is obvious that if you fix the data set, you can fix the recognition system. All right, so we'll go on here, jump a couple more paragraphs. Except when they say they shoot again against this kind of say to this point, a focus on fixing technical systems in isolation without examining their broader context of use and power and dynamics that attends issues is not limited in its intervention, it can actively cause harm. So if you fix the problem in a technical manner, they argue here it can actively cause harm. And the example they give is that facial and image recognition systems, they are often applied in service of police surveillance, which disproportionately harms poor people and communities of color. So there's a quote from this person that says, is this not social progress to make black people equally visible to software that will inevitably be further weaponized against us? We are considered criminal and more surveillable by orders of magnitude. Whatever claim to a right of privacy that we may have is diminished by a state that believes we must always be watched and seen. So this is an example where by improving the facial recognition for black people, it makes the police better at surveilling them, which is true. And then it is an ethical problem that the police is able to use these facial recognition systems to surveil people. That's a massive privacy problem. That's a massive problem in how much the state is allowed to overreach and so on. So I think it's a discussion in itself, but here they argue because at the very beginning I asked you to remember this whole notion of we always have to look at who benefits from the way the AI system is constructed, who is harmed from that, who benefits from how the metrics are shaped and so on. In this case, we actually have a perfect example where if the face recognition system is very inaccurate for black people's faces, that actually helps them in the societal context. So by logic of this report here, that must mean that somehow the bias works for them and thereby the system is good or something like this. And by fixing it, you actually make it worse. Yeah, they say it can actively cause harm. So I think this is pretty much arguing against themselves earlier where they say, oh, we always have to look at who benefits from the system. Yeah, here, if the face recognition system can't recognize you, you actually benefit. So I don't think that argument works in any case except if you only look at it when you want to look at it. All right, so we're going to jump a couple of sections here. But the core thing here was the feedback loop. And again, the feedback loop isn't demonstrated at all here. Just examples of systems that are biased and of data sets that are biased, because of data sets that are biased. But there's no demonstration of how the workforce, I mean, yeah, just take this previous argument. So the workforce is supposedly supremely white. And it makes a face recognition system that makes that is performing poorly for darker skinned people. And that actually in this context of police surveillance helps the darker skinned people compared to the lighter skinned people. 
So that kind of is an exact counterexample to the argument that this misrepresentation in the workforce leads to the biases in the system, if we interpret it through the lens of who it costs and who it benefits. All right. So the next section is corporate diversity beyond the pipeline problem. And this seemed kind of an odd inclusion when I first read it, to go against the pipeline problem here, but it kind of makes sense if you know what these people set out to do. What they set out to do is to argue we must fix the workforce, right? We must hire more people of color, more women and so on, and promote them more. And they very much have a problem with this pipeline argument. What the pipeline argument is, is the following. At the beginning, if you consider the educational or career paths of people, you have like 100% of people represented at the start, and then most of these people go through school. So most of these go on, and the area in here is the population. And then some of them pursue higher education and some drop out, so this gets a smaller amount. So this axis is time and this is kind of the volume of people. And then very few go into computer science, right? And then even fewer go into AI. So what you end up with is just a tiny sliver of people that actually go into AI. So this is called a pipeline, and we have various junctions here, like where you would go into higher education, where you would choose your major in university, where you would go into a subfield of computer science, where the volume of people drops significantly from one point to the other. And now if you compare this, say we don't consider all of society, but over here we consider just men and over here we consider all women, again they all go to high school and then university and then maybe very few go to CS, even fewer go to AI. What you'll find, and I've maybe drawn it wrong here, is that this is smaller than this. So if you comparatively look at it, a larger fraction of men than of women ends up in the AI field. And this is over time: at the beginning, you have a roughly 50/50 men-women distribution in society, almost, I guess, I think slightly more boys are born, but I could be wrong about this. And then as you go through time here, you go through high school, and let's just assume high school is still kind of equal, it depends on the country. Then you go to university, where there are actually slightly more women. And then you go into computer science, and this is just relative here, that's why I kind of norm it at 100%, otherwise these things would all go down at the same time. But comparatively, you then have much more men than women in computer science. And then if you look at who chooses AI, I don't know if there are any statistics on specifically choosing AI from within computer science, I'm just going to assume that the ratio remains the same. So if you look into the AI field, this will stay the same, so in the AI field you have many more men than women, and presumably that is because you already have many more men than women choosing computer science as their major, or choosing any technical field as their major. This is kind of the so-called pipeline argument. So where does the hiring of AI companies come in?
AI companies come in here, they hire at this point, after your university degree, presumably. There's exceptions, but just say they hire after your university degree. And therefore, they basically have to choose from this distribution. And if they just say, okay, we'll just take the top, I don't know, 10% people will hire the good people of this, we don't care what gender they are. Right, so the top 10% here, the top 10% here, then this will end up being the same distribution as you have graduates. Right, so this is kind of the company, company hiring from an let's say an 80 20 distribution without looking at gender will end up with an 80 20 distribution. That's the pipeline argument of companies. And they don't like the pipeline argument, because the pipeline argument basically says that the problem is somewhere here, right? The problem isn't the company's hiring wrongly. The problem isn't that the company's here, deselected, the problem is somewhere here. And because they want to make the argument that the company should hire in a different way, they can't have that. So they argue against it. Now to argue against this would actually be very easy. If this argument were wrong, like they claim the argument is is is not good, the pipeline argument isn't good. If the pipeline argument were wrong, what you'd have to do is you would have to say, you would have to say, hey, companies, look at that. In your company, you have an 80 20 distribution men to women, right? That's pretty unequal. And you know, in university graduates, the pool you choose from is actually 5050. So obviously, you're engaged in discriminatory hiring, because you know, the pool is 5050. There's no reason why it why your hiring practices should cause this inequality. And therefore, we can clearly show you do discriminatory hiring, you should stop it, you should definitely hire more women and people of color, more of these more of the minorities, because your hiring practices are the problem. But that's not the case. How do I know? Because if it were the case, they would simply state this. Definitely in this report, if that were the case, that you could actually show with numbers that the pipeline argument is wrong, then they would absolutely do this. That they have to like, go back and they have to like, ramble around it for several pages, which will mostly skip but mainly because this is the case, it is the case that these companies hire from a pool of of unequally represented people. And the only argument that you can make is that, well, if if you were to equalize this here, then maybe here where the problem is that would fix like, so the argument is often made if young girls choosing their majors have no one to look up to, like no strong women in in corporation CEO roles, they will think that it's not a climate for women and they will elect not to go into these fields, which is a valid argument, like I'm completely open to that to that argument. But it's the only argument you can make. And still then, even if you determine this as the cause, I would still not support racist and sexist hiring practices like do something else like make them clear that the environment can be changed or change the environment, like change the if if it really is the case that it's kind of a non anti woman environment, change that. If it's just the case that they perceive it as such change the perception, but do not engage in discriminatory hiring practices, because there's always someone losing out unfairly on these practices. 
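Since the top-10%-of-an-80/20-pool claim is purely arithmetic, it can be sanity-checked with a tiny simulation. This is my own sketch with made-up numbers: it assumes scores are distributed identically in both groups and says nothing about any real company's hiring data.

```python
import random

random.seed(0)

def simulate_hiring(n_applicants=100_000, share_group_a=0.8, hire_fraction=0.10):
    """Hire the top `hire_fraction` of applicants by a score that ignores group membership."""
    applicants = [("A" if random.random() < share_group_a else "B", random.gauss(0, 1))
                  for _ in range(n_applicants)]
    # Group-blind selection: sort by score only and take the top slice.
    applicants.sort(key=lambda applicant: applicant[1], reverse=True)
    hired = applicants[: int(n_applicants * hire_fraction)]
    return sum(1 for group, _ in hired if group == "A") / len(hired)

# With an 80/20 applicant pool, the hires come out at roughly 80/20 as well.
print(f"Share of group A among hires: {simulate_hiring():.3f}")
```

Group-blind selection simply mirrors whatever imbalance already exists among graduates, which is why the speaker locates the problem earlier in the pipeline and, even granting the role-model argument, still rejects discriminatory hiring as the fix.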
And that's something I'm not willing to go into; that's something I'm not willing to engage in, and I don't think people should be engaging in that. Actually, that's why it's illegal. So let's actually look at a few points. They go over these pipeline studies, and they say it's a term used in industry to reference the absence of diverse candidates in the hiring pool and to justify the inability of large firms to achieve diversity due to scarcity. Right? So they basically agree on the definition that I stated here. Companies that are challenged on their lack of diversity frequently cite pipeline studies as proof of the persistent challenge of finding enough women and people of color to hire. Yes, but they say the evidence suggests otherwise. For example, in 2016, Facebook's chief diversity officer wrote that it has become clear that at the most fundamental level, appropriate representation in technology or any other industry will depend upon more people having the opportunity to gain necessary skills through the public education system. Well, yes, that's something I would agree with, and that's something that clearly addresses this region here, where the actual problem is happening. So I would say that's a very, very good statement from Facebook's chief diversity officer. They say: but as the Center for Investigative Reporting's study of tech company diversity data found, 91 large tech companies headquartered in Silicon Valley managed to hire a higher percentage of black, Latino and multiracial employees than Facebook that year. Well, just because other companies employ racist and sexist hiring to improve their diversity numbers doesn't mean that Facebook has to do this, right? Just because other companies do this doesn't mean that it's a good thing to do or that's how you should go about it. Facebook simply says: if we want to hire without being racist or sexist, if we want to just hire the best people, then more of the best people have to be in the pipeline; more people have to gain access to educational opportunities so we can then hire them. Whereas these other companies probably make a big effort to say, well, even if you are not as educated, even if you're not as qualified as this other person, we'll hire you because of your skin color. I don't think that's an argument in favor of what the report is claiming. I don't think that is evidence that the pipeline argument is invalid. All right, so they go into core themes in pipeline research, and they give an overview of this kind of pipeline research. Sometimes the pipeline research examines why, for example, women don't choose to go into computer science as much, and sometimes it focuses on their perception of the field: what are their perceptions of the stereotypes of the field, of the culture in the field, is it suited to them, what is their perception of how qualified they are for the field, and is that true or false, and so on. So this research examines a whole variety of things, and it's actually very interesting to read through. I want to point out this here: other studies suggest that gender is correlated with a person's motivations for pursuing a career in the field.
Women, and particularly women from low socioeconomic status or minority backgrounds, are more likely to see computing as a versatile profession that provides an opportunity for secure employment, higher pay, and better social standing. Moreover, their interests go beyond technical aspects of computing, focusing instead on the purpose and application of software. However, such interests are often de-emphasized in computer science curricula, which prize technical skill and its applicability to industrial settings above all else. So I find this really interesting because it's basically saying that women have different interests than men on average. That's almost heresy to say in this context; people will come after you if you suggest something like this, and yet they're just stating it here. Remember this for later. It's really funny that they're saying, yeah, the interests could be different for women than for men, and we might have to adjust our curriculum to be more suited to these different interests. As I said, usually this is forbidden to say. All right, so they go on. They say limitations of pipeline research. These are fairly common limitations, let's say, of social science studies in general, which I won't go into much. Again, they basically say the problem is actually the culture and the problem is actually the perpetrators; I don't remember exactly where this is stated, but they again say we have to examine who benefits from its present construction, who is underserved within the current tech ecology, how these dynamics might be untangled, and so on. So again, they're stating these kinds of power relationships between the different groups, which I don't agree is in large part what's happening. They say it's worth considering the scope of these studies: by and large, the recommendations they issue are limited, targeted at the administrators of university computer science programs seeking to broaden the diversity of their student body. Yes, that's exactly where we saw the problem appears to be, right? So the reason they have a problem with these studies is that the studies focus on the point where this discrepancy actually appears to happen, because the authors want to claim that no, no, no, you should focus on a different point, namely hiring and promotion in these companies. They say: though important, so at least they acknowledge that it's an important problem, this is a narrow frame through which to view potential solutions to barriers to inclusion. It does not address the companies that hire computer science students, the peers responsible for promulgating stereotyped views or engaging in hostile behavior, or the broader social conditions that may influence students' success in computer science programs. Actually, the research, and even some of the examples of it they've included, addresses all of this. The research often addresses the stereotypes, how the peers act, how the companies act, how the companies hire, whether people have something to look forward to or not, and how that influences their decisions.
Yeah, again, they say the studies are frequently cited by those within corporate environments to justify their own lack of diversity, as they situate the locus of change outside of the corporation itself. As such, pipeline studies are disproportionately emphasized as a part of the broader research agenda on diversity and technology. Again, they state companies use this to get out of responsibility, and of course companies are going to use this to get out. I agree at least with that; I agree that companies are going to try to use this to get out of responsibility. Certainly. All right. So the last section here is pipeline dreams: after years of research. Again, this is on these pipeline studies. Basically they say the pipeline research hasn't borne fruit; it hasn't led to meaningful change in the field even though we've researched this. Among the reasons they give: the studies tend to place the onus to solve issues of discrimination in Silicon Valley on those who are discriminated against rather than on the perpetrators. I find this word choice really interesting. Perpetrators, right? Like again, the group of white men is trying to put down everyone else. That's the perspective that the article takes. And it's not even true. This research, a lot of the time, actually says that the reason why, for example, women don't choose to go into computer science is the male-dominated culture within these corporations, is the perception of this not being a woman-friendly environment, is what people hear about sexual harassment and so on. So it's not even true. But moreover, I just wanted to point out the choice of the word perpetrators. I don't know how you get to this word. It really shows the worldview of the authors, in my opinion. All right. So they go on and say, okay, these pipeline studies haven't been beneficial and companies haven't done much, or it hasn't been successful. They then go to worker-led initiatives, which I'm going to skip here; it's just a kind of reporting of what happened at companies where the workers themselves organized. And then the last section is the pushback against diversity. In this section, they're documenting and arguing against people who have basically stated counterarguments to their recommendations, mainly. Their recommendations being: let's change hiring, let's change promotion, and so on, to be based on race and gender. And the pushback here is characterized in different ways, so we'll go through this. This is the last section. I know it's a long video already. If you're still here, like the one person who's still here, hi, I hope you're doing well. Keep hydrated. Yeah. So they say it's a critical time: we now see diversity itself being weaponized. They say this growing awareness, accompanied by demands for inclusion and equity, has led to some change, but there has also been resistance, especially among those implicitly privileged by the status quo. So again, jumping straight to an attack on the person. I don't care who makes an argument against me; I want to go on the argument, on the content of the argument. But the first thing these people state is that the resistance is just by the people who are benefiting, just by the white men, basically. Straight to the identity of the person. That's dishonesty right there.
So they say those questioning and even rejecting the idea that racism, misogyny, and harassment are problems within the AI field and the tech industry have appropriated the language of diversity to argue that efforts to improve inclusion are in fact exclusionary, and that addressing the deeper structural challenges posed by racism, sexism and inequity is misguided. And yes, definitely, efforts to improve inclusion can be exclusionary. So this is a thing: just because you're fixing a problem doesn't mean the method you're using to fix it is justified and is itself good. Methods to improve inclusion can be exclusionary, and some that have been proposed are exclusionary. It definitely depends on the method. It doesn't mean these people are against these efforts. It means that the measures, for example implementing a racist hiring policy: I can definitely see that this is going to lead to more equal representation within the workforce, but the tool itself is really bad and exclusionary and discriminating. So yeah, I would say it's accurate that it can be exclusionary. They say, for example, some AI researchers greeted the announcement of the Black in AI workshop at NeurIPS, a leading machine learning conference, by questioning whether the event was necessary, arguing that it would be discriminatory. But can't they? Can't they question whether the event was necessary? Here I would need a discussion: what is it for? Why is this event happening? What is it doing? And is it discriminatory? It could be. Any event can be discriminatory. Does it discriminate based on race or gender or anything? Does it do so unjustly? So I just don't see why not. You could still be wrong; you could question it and then be wrong. But you should be taken on your argument. Whereas here, just questioning this already puts you on the wrong side of the argument for them. And I don't agree with this. I don't agree with these people that questioned this workshop; I don't have a particular opinion on these things. But I have the opinion that you have to take arguments at their argument value and not just at who makes them or whether or not they're against a particular viewpoint. All right. They say such pushback often centers calls for cognitive diversity or viewpoint diversity: the idea that individual differences in the ways people think and understand the world are distinctions that should be counted alongside, or instead of, other identity categories such as race and gender. Well, yes, isn't that a very reasonable thing to say? Isn't it very reasonable to say that differences in the ways people think and understand the world are distinctions that should be counted alongside other identity categories such as race and gender? They say a dozen white men, so long as they were not raised in the same household and don't think identical thoughts, could be considered diverse. I don't know if this is a sarcastic statement or not, but it's clearly the counterpoint they're trying to make here. Yet I would totally agree with this statement in a way: a white man growing up in San Francisco, a white man growing up in rural Idaho, one growing up in Florida, one in Western Europe, one in Russia, and one growing up on the road with his circus parents in Mongolia would definitely be plenty diverse, right?
I mean, they criticize this here, but how can you not see that these are valid differences? People are going to think differently independent of how they look; people are going to have different thoughts. And it's important to recognize that other people think differently, and therefore you should include them if it's relevant. The counterargument to this, what the authors here are basically saying, is that a dozen people, as long as they don't look the same, could be considered diverse, even if they were all raised in the same place, basically all live in San Francisco, and think the exact same thing. To me that sounds as absurd as the other way around. So here are my thoughts on this. I am not going to pretend that I know what life is like as a woman. I'm absolutely sure that for areas of life, it is definitely valuable to listen to the experience of a woman or multiple women, an aggregate of women, because life is just different as a woman. Life is also different as a black person; I absolutely concede that there are things that I might not be able to draw from my life experience, because I am not of that skin color, different problems that those people face. And that's why it's important to have such an opinion at the table. But I'm also absolutely certain that I have no relation to someone who grew up as a child pop star from the age of 12 and then had that life. I have no relation to someone growing up under a communist regime. I have no relation to someone growing up in a Buddhist religious tradition. I just don't, and I don't care how they look. They have different experiences, they have different bodies of knowledge to draw on, and I don't see why we should make the difference along the exact lines of race and gender. But that's of course what they argue here: those arguments work by centering identity while flattening or ignoring power relationships. Here the Facebook VP of engineering said that the ultimate goal is cognitive diversity, and cognitive diversity is correlated with identity diversity; that means it's not just about getting women in tech, it's about broad voices, broad representation. Right? This is exactly what I would say: the reason why we want a woman or a black person at the table is because they have different knowledge, because they have different thoughts due to their different life experience. They have different thoughts that they can bring in. So actually, by including these, what they call bodies, it is about cognitive diversity in itself. But the authors here really see this from a different angle. They really see it in terms of power relationships between race and gender groups. And the arguments of the authors don't make sense if you don't view them through that lens. That lens to me is just such a, I don't know, it's just a sad look on the world. And also, I think it's a very, very inaccurate look on the world, and I think a very dangerous look on the world. Again, they say: instead of looking at historical patterns of marginalization, calls for cognitive diversity argue that all differences are equal. No. Calls for cognitive diversity don't argue that all differences are equal.
People are well aware that some have it harder, well aware that some differences are bigger, worse or better. All they're saying is that race and gender shouldn't be the only things to consider and shouldn't in themselves be considered diversity. Just because someone is of a certain skin color, it doesn't mean anything, right? It doesn't actually tell you anything about that person. So why not consider people as individuals and look at what their life was like until this point and what they could contribute to the discussion we're having, rather than looking at the color of their skin? If the color of their skin played a role in their life, then obviously that would manifest in my suggestion as well. But to just look at people through this kind of group lens is so foreign to me, and I feel it's quite dangerous. So again, this claim that they argue all differences are equal: the point where you have to start misrepresenting what the counterargument is saying, that's really how you know you're dealing with someone who is not well-intentioned on the other side of the discussion. This is really politics now. This isn't well-intended argumentation; it's really someone trying to achieve some goal, because they have to misrepresent the other side. And this only gets worse from here. They say this was recently exemplified in the controversy over Google's appointment of Heritage Foundation CEO Kay Coles James to its Advanced Technology External Advisory Council. Google's reasoning for the appointment of James was ostensibly to ensure diversity of thought by including a conservative viewpoint on the council. Alright, so Google has a technology advisory council of external people, and they've included a conservative. And she is, by all metrics, let's say, a standard conservative. This is not a far-right neo-Nazi type, I don't know, but this is someone who has opinions similar to half the US population, and generally, at least in the Western world, half of a country's population tends to be conservative, more or less; I mean, there are differences. So this is an opinion that a large portion of the population shares, and it would be suitable to include at least someone of that opinion in an external advisory council, to have that on board. You don't have to listen to her; it's not like she's made king. It's simply that she will have the opportunity to input her voice, representative of that very large percentage of people. They go on to say James is also a black woman, thus adding racial and gender diversity to the panel. So even further, right, this is a conservative black woman. But the pushback following James's inclusion focused on her policy positions, citing specifically her vocal anti-LGBTQ and anti-immigrant views, and highlighted why cognitive diversity is a particularly limited lens. And the pushback here was very much spearheaded by one of the authors of this article. So this isn't just reporting. I will also criticize this pushback here, since it's kind of argued for in this article, not just reported, and also because the authors are the same. So here they say she has vocal anti-LGBTQ and anti-immigrant views.
And I haven't actually gone and looked specifically at what this person has said, but given that she's a standard conservative and has been in public office, I believe under George W. Bush, I have trouble believing that she holds extremely hateful opinions, like these people shouldn't exist or something of that nature. Often conservative people have issues with forcing people to adopt certain pronouns, or issues with which bathrooms people go into, and generally are tougher on immigration, especially illegal immigration, and so on. These are views that a large part of people hold, and these are discussions to be had. So including this person would be a very sensible move. But they say: in a letter opposing the appointment, a group of Google workers calling themselves Googlers Against Transphobia and Hate responded to the idea that diversity of thought justified James's addition to the council. This is a weaponization of the language of diversity; by appointing James to the council, Google elevates and endorses her views, implying that hers is a valid perspective worthy of inclusion in its decision making. This is unacceptable. Here again, the author was one of the organizers of that, and that's what they're saying here: if you don't have our views, your views are unacceptable. A valid perspective worthy of inclusion: what they're basically saying is you don't even talk to this person. Talking to this person, considering their opinion, even considering their opinion, is already wrong; you can still evaluate the opinion afterwards, but even considering it is wrong. And that given that the person is a black woman. So basically, the authors' idea of diversity is people that look different, that are from race and gender groups that don't have much of what they call power right now, as long as they all think exactly as we think. As long as they share our thoughts, as long as they don't have dissenting opinions, we want the different-looking people. But don't dare talk to anyone of a different opinion. I don't see how, I mean, these authors, in my opinion, really live in a bubble; they really live in a tiny Silicon Valley or Silicon Valley-influenced space, because they're basically saying half the people in their greater community, in their country, aren't even worth listening to, their opinions aren't even worthy of consideration. So yeah, well done, might as well discredit them at once. I'm sure that's going to fly well with these people. Might as well start calling them deplorables and see what they do. Maybe they'll return the favor and elect a moron just to stick it in your face. I mean, that's what happened. They continue: the idea of cognitive diversity is mobilized by some in support of the claim that the AI field and the tech industry are already diverse, going as far as to support claims that not including identities like white and male constitutes discrimination. Yes, it can. If you include every single identity except white and male, that constitutes discrimination. Yes, even if they're in the majority, it still constitutes discrimination; no one can help being born white and male, no one white and male chose to be born like that.
You mostly don't choose the melanin content of your skin; you can modulate it a bit by going into the sun, which computer science people statistically don't do very often, so there's not much leeway there. So yeah, to not include identities like that, if you include every other one, can constitute discrimination. True. They write: a July 2017 memo written by James Damore, a software engineer at Google, is illustrative of such pushback. Titled Google's Ideological Echo Chamber and published on an internal mailing list, the memo critiqued the company's diversity policies, arguing that biological differences between men and women, rather than bias and discrimination, help explain gender disparities at the company. I feel you can leave out the "rather than" here; I think the memo simply stated that biological differences can help explain the gender disparities. Damore's objective in writing the memo was to make the case that policies designed to achieve equal representation are unfair, divisive and bad for business. Well, some are. Yes, especially the recommendation they've given at the beginning, number seven, is unfair, divisive, and I would also argue bad for business. They say supporters of Damore's point of view at times even drew on the rhetoric of the pipeline to make the case that diversity initiatives are in fact discriminatory. They argue, incorrectly, that if there aren't qualified candidates in the pipeline, then hiring those who are unqualified on the basis of identity discriminates against those who are qualified. No, I would say hiring anyone on the basis of identity discriminates, inherently. So again, I think that's the larger argument that these people are making, which is not incorrect; it is very correct. In an update to the memo, Damore himself asserted that he values diversity and inclusion, but his primary concern was cognitive diversity. He says diversity and inclusion is not denying that sexism exists, and doesn't endorse using stereotypes. And specifically, I've read the memo and it directly says these are population-level statistics, there is more overlap than difference, and you absolutely can't say anything about an individual by looking at these statistics. That's almost a quote from the memo. So he was very much concerned with considering people as individuals, but he was also basically making the same argument as earlier, the one I told you to remember: hey, look, this one study found that women's interests might be different and we might shape the curriculum accordingly. That's basically what Damore said. He said women's interests might be different and we might have to shape the way we do work, like change the way we do software engineering, to attract more of them. That was one of his points. So he said exactly the same thing, but of course he's a misogynist, because he suggested that this could be partly due to biological differences. And the way he was dragged through the mud is just crazy. And they argue here very much against this kind of, what they call, biological determinism; we'll see this very briefly. They say diversity becomes an empty signifier, stripped of the histories and experiences of systemic discrimination, repurposed around ideology rather than bodies. I'd say diversity has nothing inherently to do with bodies as such; I think that's only the case if you are already convinced of this.
They write: within hours of the memo's publication, harassment targeting minority advocates who pushed back against the claims in the memo began, with a particular focus on queer and trans workers. That's bad, but I also think the pushback against people who voiced support was pretty bad, because one of them was fired, as already stated. Google's vice president of diversity even locked down her Twitter account shortly after Damore's firing, responding to the barrage of threats describing her as a police Nazi. Well yeah, if you fire someone: I mean, undoubtedly Google fired this guy because they thought it was less of a PR disaster if they fired him. This probably wasn't an ideological decision, much more a PR decision. But if you fire someone after they state something like this, it very much looks like you're firing them because you don't like their ideas and you don't like what they're saying, and people generally are not in favor of censoring freedom of speech. That being said, harassment is bad, don't harass people. Also, that being said, criticism isn't always harassment; don't conflate the two. They continue: Damore's memo also stated that the distribution of preferences and abilities of men and women differ in part due to biological causes, and that these differences may explain why we don't see equal representation of women in tech and leadership. This assertion hinges on a flawed assumption that identities like gender and race are essential and fixed biological attributes, and that inequalities are at least in part the product of such irreducible differences. Well, even if they're not fixed biological attributes, certainly gender and race have something like a 0.99 correlation with biology. Since your biology comes first and is determined when you're conceived, that demonstrates a causal direction. Even if they're not exactly fixed, they are overwhelmingly fixed. And to suggest that it is a flawed assumption that these inequalities are at least in part the product of such differences: they simply state it's a flawed assumption. What you'd have to do in order to show this is a flawed assumption is show that gender and race, as far as they're biologically determined, have no influence whatsoever on these differences. That's what you have to show, right? That's the counterclaim, because the claim is that they have at least in part something to do with it. And that's also, I believe, what Damore stated, and what the predominant opinion is, what all the research points to: for example, there is a large difference in interests between genders as far as, for example, career selection goes and so on. Now, we can talk about why that is, but there's also a large consensus, I believe, that this is, to whatever degree, at least partly determined by biology. In order to show that this is flawed, you need to show that it can't have any influence, right? You have to basically prove the impossibility of this having an influence, which no one has done so far, much to the contrary. So simply stating this is a flawed assumption kind of shows to me that they're in a bubble and they're expecting to speak to people in the same bubble. So they go on and kind of discredit this as so-called biological determinism, which I don't think is a correct use of the term biological determinism, but you can judge for yourself.
All I think these people are saying is that biology might have some influence and we could adjust for that. Yeah, this comes up here. So, conclusion, finally; I think it's been two hours, sorry. Conclusion: throughout this report, we've outlined the scope and scale of the problem, tracing how the diversity crisis in the industry and the problems of bias in AI systems are interrelated aspects of the same issue. No. In the past, these topics were commonly examined in isolation, but increasing evidence shows that they are closely intertwined. No, you've shown that they're parallel. You have absolutely not shown that they're interrelated aspects of the same issue, and you have not shown that any one of these causally influences the other, that there is any feedback loop. You have not shown that fixing one leads to fixing the other. I mean, you could also take a company that, for some reason, has a very different workforce, and then show how their products, trained on the same datasets as the previous companies, don't end up being biased. Probably not so easy. But again, none of that is in the report. There are many things you could actually do to show what you wanted to show, but it's just not the case in this article. They continue: our analysis surfaced two prominent responses to the diversity crisis. On one hand, a worker-driven movement, which we've skipped. On the other hand, we observe a small but vocal counter-movement that actively resists diversity in the industry. What dishonesty: actively resists diversity? I mean, the thought that these people walk around saying, no, I don't like the other-looking people, is just so absurd. All they're saying is that either we don't understand the problem in the correct way or our tools aren't appropriate to solve the problem. I think everyone has the same goal of the workplace and the AI systems being as fair and as non-discriminatory as possible. Misrepresentation of the other side is something that really bugs me, and it's something that these authors do a lot. So yeah, maybe I lose my polite side. And, they write, it uses arguments from biological determinism to assert that women are inherently less suited to computer science and AI. What a load of crap, sorry, but: uses arguments to assert that women are inherently less suited to computer science? No one, okay, not no one, but no one that I know asserts that. You can always find a sexist douchebag that makes that argument, but this is not a serious argument, and it is not the argument that most people in this counter-movement make. Not at all. And to represent them as such is just so dishonest that, yeah, it's nice that it's in the conclusion, because at the end it completely destroys the credibility of taking these authors seriously. The parts we skipped over I would say I'm mostly okay with: they mostly show parallels between AI systems being biased and there being unequal representation, and they also show examples of discrimination, harassment and so on, problems in AI companies and universities. You can read the report for that; it's pretty interesting to read. But the points I've addressed, I'm not happy with. Yeah, so that was it for now. Sorry this took so long, but I felt that a thorough take was necessary. Have a nice rest of the day.
[ { "end": 7.5200000000000005, "start": 0, "text": " Hi there, today we're looking at discriminating systems, gender, race and power in AI by Sarah" }, { "end": 14.72, "start": 7.5200000000000005, "text": " Myers-West, Meredith Whitaker and Kate Crawford of the AI Now Institute, which is a part of" }, { "end": 18.8, "start": 14.72, "text": " New York University or associated with it." }, { "end": 24.8, "start": 18.8, "text": " This is not as much a paper as it is a report, kind of summarizing current literature and" }, { "end": 31.76, "start": 24.8, "text": " also kind of an opinion piece slash recommendation giving document." }, { "end": 35.86, "start": 31.76, "text": " Yes, so we'll dive into it." }, { "end": 40.68, "start": 35.86, "text": " As you can see from the index, it's quite a long report and we don't have time to go" }, { "end": 41.68, "start": 40.68, "text": " into all of it." }, { "end": 43.92, "start": 41.68, "text": " Actually, we don't have time to go into most of it." }, { "end": 50.400000000000006, "start": 43.92, "text": " I just hope to kind of point out what the main arguments and themes are in the report," }, { "end": 58.4, "start": 50.4, "text": " kind of what it's trying to say, pick out some interesting things and summarize it to" }, { "end": 60.64, "start": 58.4, "text": " the best of my ability." }, { "end": 62.72, "start": 60.64, "text": " Also give a little critique." }, { "end": 73.48, "start": 62.72, "text": " So let me actually go ahead and try to state the kind of core argument that the report" }, { "end": 78.44, "start": 73.48, "text": " is trying to make, because it's not really clear from reading it and you have to kind" }, { "end": 84.8, "start": 78.44, "text": " of read the whole thing and then kind of becomes clear what the argument is, I feel, though" }, { "end": 89.96, "start": 84.8, "text": " they somehow stated in the introduction numerous times in various ways." }, { "end": 94.24, "start": 89.96, "text": " So I might just be not as attentive reader at first time." }, { "end": 100.47999999999999, "start": 94.24, "text": " But all right, so here's the argument and I really hope I'm representing this correctly." }, { "end": 107.68, "start": 100.47999999999999, "text": " We have a problem currently that sometimes AI systems can exhibit what we usually call" }, { "end": 109.08000000000001, "start": 107.68, "text": " bias." }, { "end": 113.52000000000001, "start": 109.08000000000001, "text": " And we don't mean mathematical bias, like bias variance tradeoff." }, { "end": 120.60000000000001, "start": 113.52000000000001, "text": " We mean bias in a societal sense, let's say bias against certain types of people where" }, { "end": 122, "start": 120.60000000000001, "text": " they shouldn't exist." }, { "end": 129.28, "start": 122, "text": " So for example, let me draw an AI system and I'll just draw a little computer screen with" }, { "end": 131.60000000000002, "start": 129.28, "text": " a little light bulb." }, { "end": 132.60000000000002, "start": 131.60000000000002, "text": " All right." }, { "end": 137.92, "start": 132.6, "text": " So this is because it's smart, this is an AI system and the AI system and they give" }, { "end": 139.04, "start": 137.92, "text": " numerous examples." }, { "end": 145.2, "start": 139.04, "text": " One example they give us for is like face recognition algorithm that is much more accurate" }, { "end": 151.92, "start": 145.2, "text": " on faces of white males, as opposed to darker skinned females." 
}, { "end": 159.04, "start": 151.92, "text": " So let me draw like two curves to represent these distributions are unequal." }, { "end": 165.48, "start": 159.04, "text": " And so the AI system exhibits some bias with respect to some kinds of people with an especially" }, { "end": 167.2, "start": 165.48, "text": " protected attributes." }, { "end": 171.39999999999998, "start": 167.2, "text": " And in this report, they focus mainly on gender and race." }, { "end": 174.51999999999998, "start": 171.39999999999998, "text": " So that's what we're going to talk about." }, { "end": 179.68, "start": 174.51999999999998, "text": " The second thing they observe, so this observation one, the second thing they observe is, I'm" }, { "end": 185.32, "start": 179.68, "text": " going to draw some generic people here that represent the workforce of AI." }, { "end": 191.76, "start": 185.32, "text": " So the AI workforce is classified as all the people that work on AI, be that university" }, { "end": 197, "start": 191.76, "text": " researchers or within companies building AI products or deploying them." }, { "end": 202.51999999999998, "start": 197, "text": " So this is the workforce and they observe that there is an unequal distribution among" }, { "end": 205.64, "start": 202.51999999999998, "text": " the AI workforce." }, { "end": 211.84, "start": 205.64, "text": " So this distribution, I'm also going to do this for unequal distribution." }, { "end": 217.48, "start": 211.84, "text": " There's an unequal distribution in the AI workforce, most notably, it's predominantly" }, { "end": 221.76, "start": 217.48, "text": " males who work on AI." }, { "end": 228.08, "start": 221.76, "text": " And also white people are overrepresented compared to the world population at large." }, { "end": 231.72, "start": 228.08, "text": " So that's kind of the two observations they make." }, { "end": 240.36, "start": 231.72, "text": " And now what they claim is that the unequal representation in the workforce is causing" }, { "end": 243.14000000000001, "start": 240.36, "text": " the bias in the AI systems." }, { "end": 250.52, "start": 243.14000000000001, "text": " So they're basically saying these AI systems are biased because that the workforce is unequally" }, { "end": 251.96, "start": 250.52, "text": " distributed." }, { "end": 258.48, "start": 251.96, "text": " And also they claim in a less powerful sense, I feel, but they claim there is a loop that" }, { "end": 265.24, "start": 258.48, "text": " this then leads back that because there is bias in the AI system, that again leads to" }, { "end": 270.08000000000004, "start": 265.24, "text": " an unequal, more unequal distribution of the workforce." }, { "end": 276.56, "start": 270.08, "text": " So the core argument really is, as they set out to do, like in the introduction, and also" }, { "end": 282.28, "start": 276.56, "text": " claim that they have done in the conclusion, is to demonstrate these two directions here" }, { "end": 283.84, "start": 282.28, "text": " in a causal way." }, { "end": 289.21999999999997, "start": 283.84, "text": " So the systems are biased because there is an unequal representation in the workforce" }, { "end": 293, "start": 289.21999999999997, "text": " and that feeds back." 
}, { "end": 300.03999999999996, "start": 293, "text": " So the argument is that if you want to fix the bias here, if you want to fix that, then" }, { "end": 309.88, "start": 300.04, "text": " you will have to fix it via making the workforce more what they call diverse, so less unilaterally" }, { "end": 313.40000000000003, "start": 309.88, "text": " distributed towards white males." }, { "end": 315.48, "start": 313.40000000000003, "text": " That's kind of the final conclusion." }, { "end": 321.12, "start": 315.48, "text": " If you read their report and the recommendations, that's mainly what they're going for." }, { "end": 331.8, "start": 321.12, "text": " Yeah, so my opinion, or in my opinion, having read the report a couple of times, is that" }, { "end": 335.98, "start": 331.8, "text": " as I see it, they really don't demonstrate these links." }, { "end": 341.04, "start": 335.98, "text": " So they give examples of this and they give examples of this." }, { "end": 344.08, "start": 341.04, "text": " They show that the workforce is unequally distributed." }, { "end": 350.2, "start": 344.08, "text": " They show that AI systems can exhibit such bias, but they never actually show these links" }, { "end": 351.4, "start": 350.2, "text": " in my opinion." }, { "end": 352.8, "start": 351.4, "text": " They don't show this." }, { "end": 358.94, "start": 352.8, "text": " So if you make the claim that in order to fix the bias in AI systems, you must fix the" }, { "end": 364.42, "start": 358.94, "text": " unequal representation in the workforce, I would need an argument that says because there" }, { "end": 372.12, "start": 364.42, "text": " is unequal representation, therefore A, therefore B, therefore C, therefore bias, like an actual" }, { "end": 382.32, "start": 372.12, "text": " argument to follow that says because of this, that, because of that, that, and so on." }, { "end": 384.8, "start": 382.32, "text": " It's just not there." }, { "end": 386.56, "start": 384.8, "text": " They simply show parallels." }, { "end": 392, "start": 386.56, "text": " They simply show that these two things exist and they just list example after example of" }, { "end": 396.52, "start": 392, "text": " that." }, { "end": 398.84000000000003, "start": 396.52, "text": " I don't think they make this argument." }, { "end": 406.2, "start": 398.84, "text": " But I think, also the other direction, they don't really make this argument." }, { "end": 415.47999999999996, "start": 406.2, "text": " Except in one case, where if you give them benefit of the doubt." }, { "end": 423.91999999999996, "start": 415.47999999999996, "text": " What I also think is that it appears like the article, if you read it, and I encourage" }, { "end": 429.72, "start": 423.92, "text": " you to read it if you have some time, it makes a lot of sense if you have already accepted" }, { "end": 430.72, "start": 429.72, "text": " this conclusion." }, { "end": 437.20000000000005, "start": 430.72, "text": " Like if you've already accepted this, then it's like, oh yeah, because I feel this is" }, { "end": 443.40000000000003, "start": 437.20000000000005, "text": " just a text where the confirmation bias is so high, just the way it's written, that it" }, { "end": 448.84000000000003, "start": 443.40000000000003, "text": " must make a lot of sense to someone who's already kind of in on this conclusion." }, { "end": 456.52, "start": 448.84, "text": " But to someone who isn't sold yet, like myself, I am just not finding this convincing at all." 
}, { "end": 465.64, "start": 456.52, "text": " The second thing is that it very much feels like this isn't like a discovery or something." }, { "end": 472.96, "start": 465.64, "text": " But someone actually set out with the goal to address this here with the goal of I want" }, { "end": 479.64, "start": 472.96, "text": " companies to hire more of these people or certain kinds of people or to become more" }, { "end": 484.2, "start": 479.64, "text": " diverse or to promote more of a certain type of people." }, { "end": 487.35999999999996, "start": 484.2, "text": " And now I'm going to find reasons for this." }, { "end": 492.2, "start": 487.35999999999996, "text": " And the reason is like, oh, look at look at this bias here." }, { "end": 493.79999999999995, "start": 492.2, "text": " This is caused." }, { "end": 495.79999999999995, "start": 493.79999999999995, "text": " This is caused by this other thing." }, { "end": 498.84, "start": 495.79999999999995, "text": " And therefore we must fix this other thing." }, { "end": 505.08, "start": 498.84, "text": " It very much feels like someone setting out with already the conclusion in mind rather" }, { "end": 508.67999999999995, "start": 505.08, "text": " than this being an honest investigation." }, { "end": 510.64, "start": 508.67999999999995, "text": " But yeah, I mean, read it for yourself." }, { "end": 514.36, "start": 510.64, "text": " I can't prove the absence of an argument by not reading every single line." }, { "end": 519.12, "start": 514.36, "text": " And I can't read every single line because it'll just get very long and boring." }, { "end": 520.88, "start": 519.12, "text": " But read it yourself." }, { "end": 528.68, "start": 520.88, "text": " And I think I'm pretty I'm pretty I've read it numerous times with really an open mind" }, { "end": 531.1999999999999, "start": 528.68, "text": " to be convinced that there is an argument in there." }, { "end": 536.4399999999999, "start": 531.1999999999999, "text": " But I don't think there is or I don't think there is a very strong argument for this." }, { "end": 537.4399999999999, "start": 536.4399999999999, "text": " All right." }, { "end": 540.76, "start": 537.4399999999999, "text": " Let this first part here is more or less a summary." }, { "end": 543.3199999999999, "start": 540.76, "text": " So research findings is more or less a summary." }, { "end": 547.28, "start": 543.3199999999999, "text": " And we'll get to these things as they are important." }, { "end": 550.0999999999999, "start": 547.28, "text": " Then they state recommendations right at the beginning." }, { "end": 552.92, "start": 550.0999999999999, "text": " So actually, you'd have to read the article first." }, { "end": 554.76, "start": 552.92, "text": " This is kind of more of an abstract section." }, { "end": 558.54, "start": 554.76, "text": " But since it's right here, we'll kind of jump right into it." }, { "end": 563.68, "start": 558.54, "text": " So these are recommendations and I've claimed they don't really show a connection." }, { "end": 569.52, "start": 563.68, "text": " But they actually just show examples, examples of this and examples of this and parallel" }, { "end": 570.52, "start": 569.52, "text": " them." }, { "end": 575.38, "start": 570.52, "text": " And this is reflected in like every single section, including here in the recommendations." }, { "end": 579.12, "start": 575.38, "text": " They have recommendations for improving workplace diversity." 
}, { "end": 583.5999999999999, "start": 579.12, "text": " And they have recommendations for addressing bias and discrimination in AI systems." }, { "end": 584.5999999999999, "start": 583.5999999999999, "text": " Right." }, { "end": 591.84, "start": 584.6, "text": " So all right, in my case, if you make this argument, I would I would feel you also make" }, { "end": 594.96, "start": 591.84, "text": " recommendations for breaking these links." }, { "end": 598.9200000000001, "start": 594.96, "text": " But or argue why they can't be broken." }, { "end": 600.94, "start": 598.9200000000001, "text": " But all right, let's jump into some of them." }, { "end": 604.34, "start": 600.94, "text": " And it is really a mixed bag here, really." }, { "end": 610.48, "start": 604.34, "text": " So some recommendations I'm really in favor of just from from the go not even you don't" }, { "end": 613.9200000000001, "start": 610.48, "text": " even need the article for those here." }, { "end": 617.5999999999999, "start": 613.92, "text": " Discrimination, harassment and discrimination, transparency reports, including number of" }, { "end": 621.4399999999999, "start": 617.5999999999999, "text": " claims over time, the types of claims submitted and actions taken." }, { "end": 627.8, "start": 621.4399999999999, "text": " So it's known that especially in these larger companies, sexual harassment claims often" }, { "end": 633.8399999999999, "start": 627.8, "text": " go down in either bureaucracy or are kind of hushed under the table or something like" }, { "end": 634.8399999999999, "start": 633.8399999999999, "text": " this." }, { "end": 638.24, "start": 634.8399999999999, "text": " What you have to recognize is that a human resource department of a large company isn't" }, { "end": 640.52, "start": 638.24, "text": " there to serve the human resources." }, { "end": 645.52, "start": 640.52, "text": " It's there to serve the company providing human resources." }, { "end": 651.96, "start": 645.52, "text": " That's why a sexual harassment claim to an HR department is just a potential lawsuit." }, { "end": 657.1999999999999, "start": 651.96, "text": " And that's why they don't want to take it seriously except for it must go away really" }, { "end": 658.1999999999999, "start": 657.1999999999999, "text": " quickly." }, { "end": 664.48, "start": 658.1999999999999, "text": " So I think to kind of force companies or to ask companies to be more transparent, to take" }, { "end": 673.64, "start": 664.48, "text": " more seriously these the accusations of sexual harassment and assault and also discrimination" }, { "end": 675.88, "start": 673.64, "text": " is a very valuable goal." }, { "end": 680.9200000000001, "start": 675.88, "text": " And I fully, fully support this." }, { "end": 687.84, "start": 680.9200000000001, "text": " Also the here commit to transparency around hiring practices, especially hiring regarding" }, { "end": 691.8000000000001, "start": 687.84, "text": " how candidates are leveled, compensated and promoted." }, { "end": 698.3599999999999, "start": 691.8, "text": " But also the larger the company gets, the less transparent this process usually becomes" }, { "end": 703.8, "start": 698.3599999999999, "text": " or the more bureaucratic, the more people are able to game it and so on and distort" }, { "end": 704.8, "start": 703.8, "text": " it." 
}, { "end": 711.1999999999999, "start": 704.8, "text": " So I feel it's always good to be transparent around, okay, this person provides this much" }, { "end": 718.7199999999999, "start": 711.1999999999999, "text": " value to the company, therefore they should be compensated according to that or at least" }, { "end": 721.18, "start": 718.7199999999999, "text": " be transparent about it." }, { "end": 723.68, "start": 721.18, "text": " So these are kind of recommendations I like." }, { "end": 730.12, "start": 723.68, "text": " Then recommendations that really go into a different direction is something like this" }, { "end": 734.2399999999999, "start": 730.12, "text": " here, change hiring practices to maximize diversity." }, { "end": 739.68, "start": 734.2399999999999, "text": " And this is kind of reflect, I'm not going to go on this reflected in other points, increase" }, { "end": 744.12, "start": 739.68, "text": " the number of people of color, women and other underrepresented groups at senior leadership" }, { "end": 746.9599999999999, "start": 744.12, "text": " levels of AI companies across all departments." }, { "end": 752.6, "start": 746.96, "text": " So these things, they are usually within like company diversity goals and so on, doesn't" }, { "end": 754.12, "start": 752.6, "text": " really say how to do it." }, { "end": 759.2800000000001, "start": 754.12, "text": " But then the I mean, as such, they're not really recommendations yet." }, { "end": 760.2800000000001, "start": 759.2800000000001, "text": " They're more like goals." }, { "end": 766.4000000000001, "start": 760.2800000000001, "text": " But here recommendation seven, I think is the the crucial one, ensure executive incentive" }, { "end": 774.0400000000001, "start": 766.4000000000001, "text": " structures are tied to increases in hiring and retention of underrepresented groups." }, { "end": 777.56, "start": 774.04, "text": " So this is it's a bit of coded language." }, { "end": 783.56, "start": 777.56, "text": " But here they talk about executive incentive structure tied to hiring and retention of" }, { "end": 785.12, "start": 783.56, "text": " underrepresented groups." }, { "end": 790.12, "start": 785.12, "text": " This basically means if you are a manager or someone in charge of hiring or promoting," }, { "end": 795.52, "start": 790.12, "text": " and you hire or promote a underrepresented person, and since they're talking about gender" }, { "end": 802.68, "start": 795.52, "text": " and race here, if you that means if you hire or promote a person of color or a woman, in" }, { "end": 805.64, "start": 802.68, "text": " this case, you will be compensated more." }, { "end": 809.5999999999999, "start": 805.64, "text": " So at the end of the year, you'll somehow have more money, like more bonuses or more" }, { "end": 814.12, "start": 809.5999999999999, "text": " base comp or more equity or something like you'll get more money." }, { "end": 822.9599999999999, "start": 814.12, "text": " So this, this recommendation is a direct call to hire based on race and gender." }, { "end": 829.4399999999999, "start": 822.9599999999999, "text": " So this, this is a direct call to racist and sexist hiring basically to discriminate people" }, { "end": 838.5200000000001, "start": 829.44, "text": " according to their skin color and according to their gender, which I mean, how, how is" }, { "end": 840, "start": 838.5200000000001, "text": " this okay with anyone?" 
}, { "end": 846.8000000000001, "start": 840, "text": " Like how can anyone how are people even able to state this and in like a high profile report" }, { "end": 852.1400000000001, "start": 846.8000000000001, "text": " like this and get away with it and not have people criticize them, this directly calls" }, { "end": 856.8800000000001, "start": 852.1400000000001, "text": " for people to be treated according to their gender and race." }, { "end": 863.64, "start": 856.88, "text": " And probably as directly as you can go without getting into actual legal trouble." }, { "end": 868.28, "start": 863.64, "text": " But yeah, I'm really, really against such such practices." }, { "end": 875.12, "start": 868.28, "text": " I mean, yeah, that's I just I just don't know how this how this can ever how this can ever" }, { "end": 879.12, "start": 875.12, "text": " be thought of as a good thing by anyone." }, { "end": 887.52, "start": 879.12, "text": " All right, so, well, yeah, in my mind, this recommendation, and this recommendation kind" }, { "end": 889.52, "start": 887.52, "text": " of are counter to each other." }, { "end": 895.6, "start": 889.52, "text": " Because if if I commit to transparency, how people are okay now I can, I can transparently" }, { "end": 898.32, "start": 895.6, "text": " commit to to be racist, I guess." }, { "end": 903.5600000000001, "start": 898.32, "text": " But if I say, okay, I'm going to come and promote people based on how much value they" }, { "end": 910.04, "start": 903.56, "text": " provide to the company, then yeah, I'd much rather have that than saying I'm going to" }, { "end": 913, "start": 910.04, "text": " come and promote people based on their skin color." }, { "end": 916.2399999999999, "start": 913, "text": " Alright, so let's actually jump into the report." }, { "end": 920.9399999999999, "start": 916.2399999999999, "text": " I'm not gonna these recommendations for addressing bias and discrimination in systems this these" }, { "end": 923.3199999999999, "start": 920.9399999999999, "text": " are fairly general and common." }, { "end": 928.04, "start": 923.3199999999999, "text": " So as well, as I said, we'll jump most of the things in the report." }, { "end": 930.3199999999999, "start": 928.04, "text": " So introduction." }, { "end": 935.8000000000001, "start": 930.32, "text": " So they start out with there is a diversity crisis in the AI industry." }, { "end": 942.72, "start": 935.8000000000001, "text": " This they give like some numbers like 15% of AI research staff and 10% at Google, so" }, { "end": 946.48, "start": 942.72, "text": " 15% of Facebook are women." }, { "end": 953.96, "start": 946.48, "text": " So these are some kind of fairly known statistics about how the AI field is kind of gender and" }, { "end": 956.1600000000001, "start": 953.96, "text": " race skewed." }, { "end": 963.3199999999999, "start": 956.16, "text": " Currently, so they say they claim in bold the diversity problem is not just about women." }, { "end": 969.5799999999999, "start": 963.3199999999999, "text": " It's about gender, race, and most fundamentally about power." }, { "end": 974.18, "start": 969.5799999999999, "text": " It affects how companies work, what products get built, who they're designed to serve," }, { "end": 976.6, "start": 974.18, "text": " and who benefits from their development." 
}, { "end": 985.72, "start": 976.6, "text": " So this, I find this, this, this word power and this notion of power, a lot in this report," }, { "end": 992.52, "start": 985.72, "text": " it appears again and again and again in in like power dynamics and power dynamics among" }, { "end": 993.52, "start": 992.52, "text": " groups." }, { "end": 1001.6, "start": 993.52, "text": " It's like a worldview, it paints like a worldview, where these different gender and race groups" }, { "end": 1007.52, "start": 1001.6, "text": " kind of struggle against each other to gain power over another." }, { "end": 1014.24, "start": 1007.52, "text": " And whoever's in power will try to remain in power in alliance with their gender and" }, { "end": 1018.5600000000001, "start": 1014.24, "text": " race group and try to keep the other groups down." }, { "end": 1021.88, "start": 1018.5600000000001, "text": " I'm not sure that's the correct view of the world." }, { "end": 1029.48, "start": 1021.88, "text": " In my mind, the world is comprised of individual people that want to achieve something for" }, { "end": 1033.6, "start": 1029.48, "text": " themselves and they would like to prop themselves up." }, { "end": 1039.24, "start": 1033.6, "text": " Whereas in this worldview, it's like, I'm going to use the power of my group to keep" }, { "end": 1041.84, "start": 1039.24, "text": " other groups down." }, { "end": 1048.8, "start": 1041.84, "text": " I don't know which worldview you subscribe to, but I find the world is comprised of individuals." }, { "end": 1054, "start": 1048.8, "text": " Yeah, and this is not discrediting that some people have it harder because of their gender" }, { "end": 1055.52, "start": 1054, "text": " or race." }, { "end": 1060.52, "start": 1055.52, "text": " But to see the entire world as a power struggle between these groups, to me, it's, it's," }, { "end": 1068.3999999999999, "start": 1060.52, "text": " yeah, and I'm not going to point out everywhere it appears, this power wording, but it appears" }, { "end": 1072.24, "start": 1068.4, "text": " a lot and it's really shapes how the report reads." }, { "end": 1079.3600000000001, "start": 1072.24, "text": " You have to, you have to kind of remember, if you're a white male, and currently, the" }, { "end": 1086.76, "start": 1079.3600000000001, "text": " field is comprised of 90% white males, you, if you have like 10, like 10 hours, let's" }, { "end": 1093.96, "start": 1086.76, "text": " say you have to have 10 hours to do something, right, you can either choose to put down some" }, { "end": 1101.92, "start": 1093.96, "text": " other groups, like put down groups that you're not part of, or you can choose to invest these" }, { "end": 1106.8, "start": 1101.92, "text": " 10 hours in putting up yourself, you, right." }, { "end": 1113.2, "start": 1106.8, "text": " So if, if I, like I profit, if I'm a white male, I profit minimally from keeping the" }, { "end": 1120.32, "start": 1113.2, "text": " other groups down because guess what, I still have to compete with the like 1 billion other" }, { "end": 1123.04, "start": 1120.32, "text": " white males there are." 
}, { "end": 1131.68, "start": 1123.04, "text": " It's not going to help me to keep down anyone else, and especially, like it's, it's moronic," }, { "end": 1138.92, "start": 1131.68, "text": " like who does that, who like has alliance, except most fringe people, like to their race" }, { "end": 1144.68, "start": 1138.92, "text": " or gender, rather than to the people they admire and respect and like to work with." }, { "end": 1149.3999999999999, "start": 1144.68, "text": " So I'm going to, if I have like 10 hours today, I'm going to rather spend this in propping" }, { "end": 1155.92, "start": 1149.4, "text": " up myself compared to everyone else, and I don't care what gender or race they are." }, { "end": 1162.1200000000001, "start": 1155.92, "text": " And so that to me, that's a much more accurate or, I don't know, plausible worldview." }, { "end": 1166.64, "start": 1162.1200000000001, "text": " But just be aware that this report really takes on the language of kind of groups and" }, { "end": 1173.2800000000002, "start": 1166.64, "text": " power between groups and groups trying to, you know, kind of gain power and keep in," }, { "end": 1176.52, "start": 1173.2800000000002, "text": " keep power and keep others from having power." }, { "end": 1183.44, "start": 1176.52, "text": " All right, so say, to date, the diversity problems of the industry and the issues of" }, { "end": 1188.44, "start": 1183.44, "text": " bias in the systems it builds have tended to be considered separately." }, { "end": 1193.02, "start": 1188.44, "text": " We suggest that these are two versions of the same problem." }, { "end": 1197.6399999999999, "start": 1193.02, "text": " Issues of discrimination in the workforce and in system buildings are deeply intertwined." }, { "end": 1203.8, "start": 1197.6399999999999, "text": " Challenge, and moreover, tackling the challenges of bias within technical systems requires" }, { "end": 1207.76, "start": 1203.8, "text": " addressing workforce diversity and vice versa." }, { "end": 1214.72, "start": 1207.76, "text": " So the, I think this, this here actually is like how I described the argument and they" }, { "end": 1218.1599999999999, "start": 1214.72, "text": " kind of restated multiple times in a bit different way." }, { "end": 1219.76, "start": 1218.1599999999999, "text": " But I think this is the core." }, { "end": 1224.28, "start": 1219.76, "text": " And I really think I'm not misrepresenting the article here in that this is what they" }, { "end": 1225.3999999999999, "start": 1224.28, "text": " are setting out to do." }, { "end": 1233, "start": 1225.3999999999999, "text": " They're setting out to say, okay, the diversity, the kind of unequal representation in the" }, { "end": 1240.48, "start": 1233, "text": " workforce and the bias in some AI systems are causally linked to each other and tackling" }, { "end": 1243.96, "start": 1240.48, "text": " one requires tackling the other." }, { "end": 1249.16, "start": 1243.96, "text": " So yeah, if I'm misrepresenting them, let me know, but I really think I'm accurately" }, { "end": 1253.98, "start": 1249.16, "text": " representing their argument." }, { "end": 1261, "start": 1253.98, "text": " So what they, what they do, as I said, is they give examples of one and of the other" }, { "end": 1271.24, "start": 1261, "text": " and also they really, they're really on kind of discrediting the kind of issues to solve" }, { "end": 1273.66, "start": 1271.24, "text": " problems of bias in a different way." 
}, { "end": 1276.56, "start": 1273.66, "text": " So they point a little bit to this here in the introduction." }, { "end": 1280.04, "start": 1276.56, "text": " They say in the face of growing evidence, the AI research community and the industry" }, { "end": 1285.26, "start": 1280.04, "text": " producing our products have begun addressing the problem of bias by building on a body" }, { "end": 1288.36, "start": 1285.26, "text": " of work of fairness, accountability and transparency." }, { "end": 1294.8, "start": 1288.36, "text": " So fairness, accountability and transparency research concerns these issues." }, { "end": 1300.4399999999998, "start": 1294.8, "text": " For one is research showing that some products are unfair or untransparent and so on." }, { "end": 1308.6399999999999, "start": 1300.4399999999998, "text": " On the other hand, it's trying to devise algorithms that are more fair according to some notions" }, { "end": 1314.36, "start": 1308.6399999999999, "text": " or more accountable and transparent, which means that the algorithm can kind of say why" }, { "end": 1320, "start": 1314.36, "text": " it made a certain decision rather than it being a deep learning system that you don't" }, { "end": 1321.58, "start": 1320, "text": " really have an insight." }, { "end": 1326.6799999999998, "start": 1321.58, "text": " These fields are active fields of research, definitely very interesting to look into." }, { "end": 1334.6, "start": 1326.6799999999998, "text": " So but they, they kind of, it is not already here, but they say, yeah, we have adjusting" }, { "end": 1342.08, "start": 1334.6, "text": " AI systems that produce a result deemed fair by one of various mathematical definitions." }, { "end": 1345.96, "start": 1342.08, "text": " You can already see in the language here, they don't really like this research and they" }, { "end": 1352.76, "start": 1345.96, "text": " are trying in this report to kind of discredit it or at least claim that it doesn't solve" }, { "end": 1357.76, "start": 1352.76, "text": " the whole problem because their point is, of course, you have to address this diversity" }, { "end": 1364.24, "start": 1357.76, "text": " issue in the workforce in order to fix the problems." }, { "end": 1372.32, "start": 1364.24, "text": " So to this, I just want to say no, like if you can, I mean, you can criticize the fairness" }, { "end": 1376.1200000000001, "start": 1372.32, "text": " and accountability and transparency research field in that they haven't solved the problem" }, { "end": 1377.32, "start": 1376.1200000000001, "text": " fully yet." }, { "end": 1384.8, "start": 1377.32, "text": " But in principle, if I have an algorithm, if I'm being delivered an algorithm, right," }, { "end": 1390.4, "start": 1384.8, "text": " and the fairness literature has been applied to that algorithm and someone tells me, I" }, { "end": 1397, "start": 1390.4, "text": " guarantee you here is a proof, the algorithm is fair, right, then I really don't care who" }, { "end": 1398.3200000000002, "start": 1397, "text": " made that algorithm." }, { "end": 1400.96, "start": 1398.3200000000002, "text": " As long as it's fair, the problem is fixed." }, { "end": 1404.16, "start": 1400.96, "text": " If the bias is gone, the problem is fixed." }, { "end": 1405.3600000000001, "start": 1404.16, "text": " And I don't care who fix it." }, { "end": 1410.64, "start": 1405.3600000000001, "text": " I don't care if the person who fixed it is black or white or purple." 
}, { "end": 1412.52, "start": 1410.64, "text": " Then the problem is fixed." }, { "end": 1418.4, "start": 1412.52, "text": " And they, they really have to, they really try to just make the counter argument here" }, { "end": 1421.2800000000002, "start": 1418.4, "text": " is that no, that's it's not enough." }, { "end": 1428.16, "start": 1421.2800000000002, "text": " But I claim yes, it, if you can actually solve the fairness problem, technically, then you" }, { "end": 1430.3600000000001, "start": 1428.16, "text": " have solved the fairness problem." }, { "end": 1436.76, "start": 1430.3600000000001, "text": " Yeah, the only thing you can do is claim that it is not good enough yet, but not that it's" }, { "end": 1441.6000000000001, "start": 1436.76, "text": " fun to they kind of have to make the argument that it's fundamentally flawed approach." }, { "end": 1445.1200000000001, "start": 1441.6000000000001, "text": " And I don't think they succeed in doing that here." }, { "end": 1452.1999999999998, "start": 1445.12, "text": " Um, yeah, so they go on to say, we should expand to consider not only how I tools can" }, { "end": 1456.04, "start": 1452.1999999999998, "text": " be biased technically, but how they're shaped by the environments in which you're built" }, { "end": 1458.28, "start": 1456.04, "text": " in and the people that built them." }, { "end": 1463.8, "start": 1458.28, "text": " Again, this this focus like who builds the AI system, I don't care, I care what it does," }, { "end": 1464.9199999999998, "start": 1463.8, "text": " right?" }, { "end": 1469.4399999999998, "start": 1464.9199999999998, "text": " As much as if, if I hear an argument for or against something, I don't care who makes" }, { "end": 1470.8, "start": 1469.4399999999998, "text": " the argument, right?" }, { "end": 1473.28, "start": 1470.8, "text": " I care what the argument says." }, { "end": 1477.8, "start": 1473.28, "text": " This is, it's like an ad hominem attack for an entire community." }, { "end": 1487.76, "start": 1477.8, "text": " That's kind of how this this article, this report shows, or is appears to me." }, { "end": 1493.44, "start": 1487.76, "text": " So they say, currently, large scale AI systems are developed almost exclusively in a handful" }, { "end": 1497.76, "start": 1493.44, "text": " of technology companies and a small set of elite university laboratories spaces that" }, { "end": 1502.74, "start": 1497.76, "text": " in the West tend to be extremely white, affluent, technically oriented and male." }, { "end": 1508.1200000000001, "start": 1502.74, "text": " So yeah, their their problem, that's their fundamental problem here that these these" }, { "end": 1511.72, "start": 1508.1200000000001, "text": " spaces are skewed in one direction." }, { "end": 1515.84, "start": 1511.72, "text": " Interestingly enough, their problem is not so much that it's that they're all in the" }, { "end": 1518.04, "start": 1515.84, "text": " same place, right?" }, { "end": 1523.68, "start": 1518.04, "text": " That they all live like 20 miles from each other in around San Francisco." }, { "end": 1528.1200000000001, "start": 1523.68, "text": " That's that seems to be not a problem at all, as long as we get to like enough people of" }, { "end": 1532.32, "start": 1528.1200000000001, "text": " color and women into these 20 miles." }, { "end": 1540.52, "start": 1532.32, "text": " But yeah, so that that's pointing out the the problem here or the yeah, kind of issue" }, { "end": 1541.52, "start": 1540.52, "text": " they have." 
}, { "end": 1546.28, "start": 1541.52, "text": " All right, so they go on." }, { "end": 1554.12, "start": 1546.28, "text": " Just kind of want to highlight again, they say both within the spaces where AI is being" }, { "end": 1557.8, "start": 1554.12, "text": " created and the logic of how AI systems are being designed." }, { "end": 1563, "start": 1557.8, "text": " So paralleling the two things, the cost of bias, harassment and discrimination are born" }, { "end": 1570.28, "start": 1563, "text": " by the same people, gender minorities, people of color, other underrepresented groups." }, { "end": 1576.56, "start": 1570.28, "text": " And they also say similarly, the benefits of such systems from profit to efficiency," }, { "end": 1583.24, "start": 1576.56, "text": " accrue primarily to those are already in positions of power tend to be white, educated and male." }, { "end": 1592.88, "start": 1583.24, "text": " So they again, they say the this points to a systematic relationship between patterns" }, { "end": 1597.6, "start": 1592.88, "text": " of exclusion within the field of AI and the industry driving its production on the one" }, { "end": 1602.04, "start": 1597.6, "text": " hand and the biases that manifest in the logics and applications of the technologies on the" }, { "end": 1603.04, "start": 1602.04, "text": " other." }, { "end": 1609.84, "start": 1603.04, "text": " And they try to make this connection because they say the cost and the benefit of these" }, { "end": 1614.6, "start": 1609.84, "text": " two things are overlap in the people that where it costs and it benefits." }, { "end": 1619.28, "start": 1614.6, "text": " And I really, again, it's just a parallel, but I really even don't think that's true" }, { "end": 1626.04, "start": 1619.28, "text": " because they kind of, they kind of argue against themselves later." }, { "end": 1632.8799999999999, "start": 1626.04, "text": " So they always say, we have to look at again, they shoot against the take much more than" }, { "end": 1638.28, "start": 1632.8799999999999, "text": " the technically driven problem solving." }, { "end": 1640.12, "start": 1638.28, "text": " They point to this." }, { "end": 1645.28, "start": 1640.12, "text": " So our research requires looking at gender and racist categories within which humans" }, { "end": 1652.24, "start": 1645.28, "text": " think in short, sorry, studies of discriminatory systems, we need to ask who is harmed, who" }, { "end": 1654.84, "start": 1652.24, "text": " benefits, who gets to decide." }, { "end": 1664.84, "start": 1654.84, "text": " So it's kind of who bears the cost, who bears the benefits and who has the power." }, { "end": 1671.52, "start": 1664.84, "text": " So that's the, and again, it's we seek to understand how AI disadvantages some, we also" }, { "end": 1676.04, "start": 1671.52, "text": " consider how it works to the advantage of others." }, { "end": 1677.3999999999999, "start": 1676.04, "text": " So keep that in mind." }, { "end": 1682.4399999999998, "start": 1677.3999999999999, "text": " That's kind of the lens through how they analyze the this thing again, one that acknowledges" }, { "end": 1685.72, "start": 1682.4399999999998, "text": " power relationships and centers equity and justice." }, { "end": 1691.6399999999999, "start": 1685.72, "text": " That's the, they want to see this bigger picture." }, { "end": 1696.5600000000002, "start": 1691.64, "text": " So that's yeah, keep, again, keep that in mind." 
}, { "end": 1703.8400000000001, "start": 1696.5600000000002, "text": " So they go into a section called which humans are in the loop, how workforces and AI systems" }, { "end": 1705.0800000000002, "start": 1703.8400000000001, "text": " interact." }, { "end": 1710.6000000000001, "start": 1705.0800000000002, "text": " So this kind of from the title of this section, you think, okay, here's where we get in." }, { "end": 1712.76, "start": 1710.6000000000001, "text": " Here's where we make the argument." }, { "end": 1720.76, "start": 1712.76, "text": " And they start by listing examples of how AI systems can be discriminatory." }, { "end": 1728.4, "start": 1720.76, "text": " And first, they go into an example of Amazon had developed an experimental hiring tool" }, { "end": 1733.16, "start": 1728.4, "text": " to help rank job candidates." }, { "end": 1738.12, "start": 1733.16, "text": " By learning from its past reference preferences, Amazon hoped that the resume scanning tool" }, { "end": 1743.3799999999999, "start": 1738.12, "text": " will be able to efficiently identify qualified applicants, comparing their applications" }, { "end": 1745, "start": 1743.3799999999999, "text": " to previous hires." }, { "end": 1750.64, "start": 1745, "text": " The system quickly began to downgrade resumes from candidates who attended all women's" }, { "end": 1757.38, "start": 1750.64, "text": " colleges along with any resumes that included the word women's." }, { "end": 1762.8400000000001, "start": 1757.38, "text": " After uncovering this bias, Amazon engineers tried to fix the problem by directing the" }, { "end": 1765.92, "start": 1762.8400000000001, "text": " system to treat these terms in a neutral manner." }, { "end": 1772.4, "start": 1765.92, "text": " The company eventually abandoned the tool when they were unable to ensure that the algorithm" }, { "end": 1776.1200000000001, "start": 1772.4, "text": " would not be biased against women." }, { "end": 1781.4399999999998, "start": 1776.12, "text": " Gender based discrimination was built too deeply within the system and in Amazon's past" }, { "end": 1785.4799999999998, "start": 1781.4399999999998, "text": " hiring practices to be uprooted using a purely technical approach." }, { "end": 1790.4799999999998, "start": 1785.4799999999998, "text": " So this just the way is written, I find to be quite dishonest." }, { "end": 1793.84, "start": 1790.4799999999998, "text": " But let's analyze what happened here." }, { "end": 1798.9199999999998, "start": 1793.84, "text": " So their final claim is that gender based discrimination was built too deeply within" }, { "end": 1804.6, "start": 1798.9199999999998, "text": " the system to be uprooted using a purely technical approach." }, { "end": 1806.1999999999998, "start": 1804.6, "text": " So this is one of their arguments." }, { "end": 1812.12, "start": 1806.1999999999998, "text": " They say technical approaches, they don't help because the Amazon engineers tried to" }, { "end": 1814.9599999999998, "start": 1812.12, "text": " fix the problem." }, { "end": 1823, "start": 1814.9599999999998, "text": " But when they were unable to ensure that the algorithm would not be biased against women." }, { "end": 1828.6399999999999, "start": 1823, "text": " So if you read this, you really I mean, I really get the impression that's not what" }, { "end": 1830.12, "start": 1828.6399999999999, "text": " happened here." 
}, { "end": 1837.1599999999999, "start": 1830.12, "text": " What happened here most probably is Amazon built this tool, okay, and it fed in its past" }, { "end": 1843.9599999999998, "start": 1837.1599999999999, "text": " hires and we know of issues of like data set bias bias inherent in data set." }, { "end": 1851.2399999999998, "start": 1843.9599999999998, "text": " So if your data set is skewed, the AI tends to pick up on the skewed data set and become" }, { "end": 1852.2399999999998, "start": 1851.2399999999998, "text": " skewed itself." }, { "end": 1860.08, "start": 1852.2399999999998, "text": " Okay, so I actually would argue that most or all of the examples they stayed in here" }, { "end": 1865.1599999999999, "start": 1860.08, "text": " are examples of such biased data sets and not." }, { "end": 1871, "start": 1865.1599999999999, "text": " So the the cause of the bias is the data set that they are strained on and not the person" }, { "end": 1879.24, "start": 1871, "text": " that ran the code or built the algorithm to train it on or built the deployment." }, { "end": 1885.56, "start": 1879.24, "text": " And so but it doesn't matter you're a you're Amazon, you built this tool and you realize," }, { "end": 1891.3999999999999, "start": 1885.56, "text": " oh, it discriminates against people having women's on their CV." }, { "end": 1895.98, "start": 1891.3999999999999, "text": " So this is a pretty bad PR wise." }, { "end": 1899.62, "start": 1895.98, "text": " So you tell your engineers engineers fix the problem." }, { "end": 1903.78, "start": 1899.62, "text": " So the engineers go fix the problem, they come back and say, okay, we fixed the problem." }, { "end": 1909.44, "start": 1903.78, "text": " And then what you do is you say, okay, engineers, can you ensure me that the algorithm would" }, { "end": 1911.12, "start": 1909.44, "text": " not be biased against women?" }, { "end": 1918, "start": 1911.12, "text": " Because if only the slightest bias exists, if only it doesn't even have to be if one" }, { "end": 1926.52, "start": 1918, "text": " journalist finds one example, where there is a down rank, because I add the word women's," }, { "end": 1928.8, "start": 1926.52, "text": " then we are screwed, right?" }, { "end": 1934.08, "start": 1928.8, "text": " And the engineers will say, No, we can't guarantee that it's a deep learning system or something," }, { "end": 1935.08, "start": 1934.08, "text": " right?" }, { "end": 1938.78, "start": 1935.08, "text": " We, we can't like give you a proof that it's not biased." }, { "end": 1943.56, "start": 1938.78, "text": " If you're a smart executive, at that point, you'll scrap the tool, because the potential" }, { "end": 1946.54, "start": 1943.56, "text": " PR downside are just huge." }, { "end": 1952, "start": 1946.54, "text": " And probably they've also realized it's not that handy to have this, this tool compared" }, { "end": 1956.3999999999999, "start": 1952, "text": " to their recruiters doing their job, because their recruiters might actually be good and" }, { "end": 1958.6399999999999, "start": 1956.3999999999999, "text": " have been doing this for a while." }, { "end": 1967.78, "start": 1958.6399999999999, "text": " So to the to the fact that this tool was scrapped is probably much more a result of a PR disaster." 
}, { "end": 1974.32, "start": 1967.78, "text": " But also independent of that to say gender based discrimination, sorry, gender based" }, { "end": 1980.6, "start": 1974.32, "text": " discrimination was built too deeply within the system to be uprooted using a purely technical" }, { "end": 1982.8799999999999, "start": 1980.6, "text": " approach." }, { "end": 1988.12, "start": 1982.8799999999999, "text": " It's just I mean, what is what is this?" }, { "end": 1993.94, "start": 1988.12, "text": " This is just trying to discredit this kind of technical, technical going about solving" }, { "end": 1994.94, "start": 1993.94, "text": " this problem." }, { "end": 1999.88, "start": 1994.94, "text": " I'm pretty sure if someone comes to me and says here, I have this tool, and I can mathematically" }, { "end": 2006.26, "start": 1999.88, "text": " prove to you that it's not biased, then it's not then the problem is solved." }, { "end": 2014.72, "start": 2006.26, "text": " And also, I really don't see how the person training the algorithm, or the person researching" }, { "end": 2019.8400000000001, "start": 2014.72, "text": " such an algorithm has any influence over how the algorithm works, because they're not the" }, { "end": 2025.6399999999999, "start": 2019.84, "text": " ones making the data set, or if they are, yeah, then they can make a better data set." }, { "end": 2031.3999999999999, "start": 2025.6399999999999, "text": " Also, if a person comes and makes a better data set, that will fix the problem." }, { "end": 2036.1999999999998, "start": 2031.3999999999999, "text": " And it doesn't matter what skin color the person has that makes the better data set." }, { "end": 2042.82, "start": 2036.1999999999998, "text": " So all of this, this link is just not demonstrated here, or anywhere here at all." }, { "end": 2048.56, "start": 2042.82, "text": " But this this here is the closest Amazon that this report actually comes to making this" }, { "end": 2049.56, "start": 2048.56, "text": " point." }, { "end": 2055.64, "start": 2049.56, "text": " And I said before, I drew that drew this thing workforce AI bias, right?" }, { "end": 2061.86, "start": 2055.64, "text": " So this this link since it here the AI system is used for hiring the workforce." }, { "end": 2069.22, "start": 2061.86, "text": " So at least one could make a claim that this link is somewhat demonstrated." }, { "end": 2075.38, "start": 2069.22, "text": " But I this it's a weak case, I would agree, but this is the closest they come." }, { "end": 2082.2000000000003, "start": 2075.38, "text": " So that and but then to go this direction, you have to somehow argue, well, the workforce" }, { "end": 2088.02, "start": 2082.2000000000003, "text": " somehow makes the AI system bias, no, the workforce influences the data set." }, { "end": 2093.9, "start": 2088.02, "text": " If the AI is trained, so if a hiring AI, how do you train a hiring AI, you optimally train" }, { "end": 2095.7200000000003, "start": 2093.9, "text": " it on the performance." }, { "end": 2101.82, "start": 2095.7200000000003, "text": " So this this employee here is going to have a performance over time, right?" }, { "end": 2104.5, "start": 2101.82, "text": " And the AI system will look at that performance over time." 
}, { "end": 2109.7, "start": 2104.5, "text": " So if the AI system even if it's initially biased, because it learns from the risk recruiters," }, { "end": 2118.56, "start": 2109.7, "text": " it will learn that, okay, actually, if I always forgo these women, then I don't get as much" }, { "end": 2121.86, "start": 2118.56, "text": " performance of a workforce, so I should correct for that." }, { "end": 2130.02, "start": 2121.86, "text": " So if you train the AI system on a good metric, then then then this problem will leave even" }, { "end": 2131.02, "start": 2130.02, "text": " out itself." }, { "end": 2138.42, "start": 2131.02, "text": " But again, this Yeah, this this is this could be considered like one point in the argument," }, { "end": 2140.58, "start": 2138.42, "text": " but I think it's a very weak point." }, { "end": 2146.04, "start": 2140.58, "text": " And only because the AI system is actually used for hiring, where I think the point they're" }, { "end": 2152.74, "start": 2146.04, "text": " making is a much larger one is the general bias in the AI systems contributes to the" }, { "end": 2153.74, "start": 2152.74, "text": " workforce imbalances." }, { "end": 2159.44, "start": 2153.74, "text": " And there you somehow have to say that, okay, the AI system somehow influences society at" }, { "end": 2165.98, "start": 2159.44, "text": " large and society at large then go leads to the workforce being skewed." }, { "end": 2171.7400000000002, "start": 2165.98, "text": " I don't Yeah, that it's just not strong enough, in my opinion." }, { "end": 2176.18, "start": 2171.7400000000002, "text": " And the other direction also isn't isn't strong here." }, { "end": 2180.54, "start": 2176.18, "text": " But again, the examples only get weaker from here on." }, { "end": 2185.66, "start": 2180.54, "text": " They go on to say, this is just one of many examples that show how the functional logics" }, { "end": 2189.8599999999997, "start": 2185.66, "text": " of a given technology echo the gender and racial dynamics of the industry that produced" }, { "end": 2190.8599999999997, "start": 2189.8599999999997, "text": " it here." }, { "end": 2194.66, "start": 2190.8599999999997, "text": " Yeah, this, that's the claim they're making to echo the gender and racial dynamics." }, { "end": 2200.18, "start": 2194.66, "text": " And they're actually making a stronger claim, namely a causal claim." }, { "end": 2205.8199999999997, "start": 2200.18, "text": " They give the other example of the Amazon's recognition facial analysis service previously" }, { "end": 2210.54, "start": 2205.8199999999997, "text": " demonstrated gender and racial biases worse than those of comparable tools." }, { "end": 2215.94, "start": 2210.54, "text": " So it failed to see dark skinned women while being most proficient at detecting likes light" }, { "end": 2218.42, "start": 2215.94, "text": " skinned men." }, { "end": 2224.5, "start": 2218.42, "text": " And they later go into this example again, where they basically also state yes, this" }, { "end": 2231.3, "start": 2224.5, "text": " is an issue of the data set, the data set being much more comprised of white men." }, { "end": 2236.02, "start": 2231.3, "text": " And they say, but then they have to kind of make the turnaround argument and say, well," }, { "end": 2242.82, "start": 2236.02, "text": " the data set is a reflection of society and society, you know, part of society is the" }, { "end": 2243.82, "start": 2242.82, "text": " workforce." 
}, { "end": 2248.78, "start": 2243.82, "text": " And it's just not, I mean, it's again, this argument only works if you already believe" }, { "end": 2249.78, "start": 2248.78, "text": " the conclusion." }, { "end": 2257.14, "start": 2249.78, "text": " Otherwise, there's actually no argument there or no solid one." }, { "end": 2262.72, "start": 2257.14, "text": " But what they do here is they say Amazon's initial response to such criticism has been" }, { "end": 2267.7, "start": 2262.72, "text": " to try and discredit the research behind it." }, { "end": 2270.8799999999997, "start": 2267.7, "text": " This reaction, or let's let's first discuss this." }, { "end": 2278.02, "start": 2270.8799999999997, "text": " So the Amazon, yeah, Amazon, of course, being the accused here and a multi billion dollar" }, { "end": 2283.8999999999996, "start": 2278.02, "text": " company and the criticism is something that is PR wise very bad for them." }, { "end": 2289.2999999999997, "start": 2283.8999999999996, "text": " They discredit the research tried to discredit the research behind it." }, { "end": 2292.7400000000002, "start": 2289.3, "text": " It's understandable that this could be dishonest from Amazon side, right?" }, { "end": 2293.7400000000002, "start": 2292.7400000000002, "text": " I mean, they're getting attacked." }, { "end": 2297.82, "start": 2293.7400000000002, "text": " It's like, you know, the tobacco companies trying to discredit the smoking research," }, { "end": 2300.5800000000004, "start": 2297.82, "text": " but still, I mean, that doesn't mean it's wrong." }, { "end": 2303.98, "start": 2300.5800000000004, "text": " It could actually be bad research, right?" }, { "end": 2308.5800000000004, "start": 2303.98, "text": " You have to actually go and look at what's Amazon saying, what is the research really" }, { "end": 2309.5800000000004, "start": 2308.5800000000004, "text": " doing?" }, { "end": 2313.54, "start": 2309.5800000000004, "text": " Is Amazon right or wrong?" }, { "end": 2317.5, "start": 2313.54, "text": " Completely open that Amazon is wrong here, but you still have to go look." }, { "end": 2321.1, "start": 2317.5, "text": " And this citation here, I've tried this citation here." }, { "end": 2324.94, "start": 2321.1, "text": " This one isn't to a to Amazon's response." }, { "end": 2330.94, "start": 2324.94, "text": " It's to like a medium article and the medium article doesn't even include Amazon's response." }, { "end": 2332.86, "start": 2330.94, "text": " I've looked, maybe I haven't seen it." }, { "end": 2335.98, "start": 2332.86, "text": " It doesn't also doesn't link Amazon's response." }, { "end": 2340.46, "start": 2335.98, "text": " Maybe it links something that links something or that includes it in some way." }, { "end": 2346.58, "start": 2340.46, "text": " But basically this medium article only states, yeah, Amazon has been denying this or Amazon" }, { "end": 2348.74, "start": 2346.58, "text": " has been critical of this." }, { "end": 2353.94, "start": 2348.74, "text": " And if you state such a sentence, Amazon's initial response to such criticism has been" }, { "end": 2355.7799999999997, "start": 2353.94, "text": " to try and discredit the research behind it." }, { "end": 2362.7799999999997, "start": 2355.7799999999997, "text": " I at least expect the citation to lead me to Amazon's response so that I can verify what" }, { "end": 2363.7799999999997, "start": 2362.7799999999997, "text": " they're saying." 
}, { "end": 2364.7799999999997, "start": 2363.7799999999997, "text": " Right." }, { "end": 2373.98, "start": 2364.7799999999997, "text": " So this, I mean, I don't know, willing to chalk it up to incompetence rather than malice." }, { "end": 2381.5, "start": 2373.98, "text": " Right, but then they go on and they say this reaction is evidence of the wider problem." }, { "end": 2387.82, "start": 2381.5, "text": " The research was conducted by two well-regarded AI researchers who are women of color." }, { "end": 2393.1, "start": 2387.82, "text": " By attempting to publicly discredit their expertise and research methods, Amazon is" }, { "end": 2398.14, "start": 2393.1, "text": " reinforcing the same kinds of prejudice and derasers that the research critiques." }, { "end": 2403.34, "start": 2398.14, "text": " Yeah, here you go straight to the identity of the researchers." }, { "end": 2405.98, "start": 2403.34, "text": " Like play the race card straight out." }, { "end": 2409.54, "start": 2405.98, "text": " I mean, this is maximum dishonesty, right?" }, { "end": 2415.1800000000003, "start": 2409.54, "text": " Except if Amazon said something like, well, these women of color, clearly because they're" }, { "end": 2419.06, "start": 2415.1800000000003, "text": " women of color, they have no idea what they're doing or something like this." }, { "end": 2425.2200000000003, "start": 2419.06, "text": " This is basically it's coded language for saying either saying you're not allowed to" }, { "end": 2433.74, "start": 2425.22, "text": " criticize people of color because they're a minority or you're basically saying Amazon" }, { "end": 2437.8999999999996, "start": 2433.74, "text": " is racist and that's why they criticize them." }, { "end": 2440.98, "start": 2437.8999999999996, "text": " They just don't take them seriously because they're women of color." }, { "end": 2443.7599999999998, "start": 2440.98, "text": " I mean, both are both are abhorrent." }, { "end": 2448.2999999999997, "start": 2443.7599999999998, "text": " This is just dishonesty really stated here too." }, { "end": 2454.22, "start": 2448.2999999999997, "text": " I mean, again, I'm perfectly willing to accept that Amazon's critique of this research is" }, { "end": 2460.2999999999997, "start": 2454.22, "text": " wrong and is not well intended because they're the ones attacked, but you still have to examine" }, { "end": 2468.4199999999996, "start": 2460.2999999999997, "text": " it rather than say, well, they shoot against women of color and therefore somehow that" }, { "end": 2474.5, "start": 2468.4199999999996, "text": " makes their counter argument irrelevant or even racist or something." }, { "end": 2476.1, "start": 2474.5, "text": " That's I don't know." }, { "end": 2477.8999999999996, "start": 2476.1, "text": " I find this dishonest." }, { "end": 2483.58, "start": 2477.8999999999996, "text": " Yeah, I don't know about you." }, { "end": 2485.5, "start": 2483.58, "text": " Moving on." 
}, { "end": 2496.42, "start": 2485.5, "text": " So they go on and state a number of examples of bias and discrimination in the workforce" }, { "end": 2504.46, "start": 2496.42, "text": " and they a lot of times they make a mixture of the gender and race imbalance in workforce" }, { "end": 2512.02, "start": 2504.46, "text": " and things like sexual harassment not being taken seriously by the companies and also" }, { "end": 2521.94, "start": 2512.02, "text": " the things like gender or race pay gaps, which I'm open to accept that these things exist" }, { "end": 2525.34, "start": 2521.94, "text": " and are even intertwined." }, { "end": 2530.34, "start": 2525.34, "text": " But just to tell you what's happening because we're kind of skipping but it's kind of a" }, { "end": 2532.62, "start": 2530.34, "text": " mixture of these things." }, { "end": 2535.46, "start": 2532.62, "text": " So they say these issues are systemic." }, { "end": 2539.94, "start": 2535.46, "text": " There's a close relationship between these workplaces with discriminatory practices and" }, { "end": 2546.7000000000003, "start": 2539.94, "text": " discriminatory tools, a feedback loop that is shaping the industry and its tools." }, { "end": 2552.06, "start": 2546.7000000000003, "text": " So again here to state, I think I've stated it enough now that or demonstrated enough" }, { "end": 2558.2200000000003, "start": 2552.06, "text": " that I'm really representing their arguments as they intended it to namely that there is" }, { "end": 2564.46, "start": 2558.2200000000003, "text": " this kind of causal links and loop between these two things." }, { "end": 2572.06, "start": 2564.46, "text": " And they shoot against the fairness literature by saying from this perspective, locating" }, { "end": 2577.94, "start": 2572.06, "text": " individual biases within given technical systems and attempting to fix them by tweaking the" }, { "end": 2582.94, "start": 2577.94, "text": " system becomes an exercise in futility." }, { "end": 2587.02, "start": 2582.94, "text": " Only by examining discrimination through the lens of social logics, who it benefits, who" }, { "end": 2592.18, "start": 2587.02, "text": " it harms and how can we see the workings of these systems in the context of existing power" }, { "end": 2593.18, "start": 2592.18, "text": " relationships." }, { "end": 2599.7, "start": 2593.18, "text": " So they say these issues aren't technically fixing these systems won't help." }, { "end": 2600.7, "start": 2599.7, "text": " If that's the problem." }, { "end": 2607.62, "start": 2600.7, "text": " And I agree, if that causal link actually exists, then technically fixing the system" }, { "end": 2608.8999999999996, "start": 2607.62, "text": " might not solve the problem." }, { "end": 2609.8999999999996, "start": 2608.8999999999996, "text": " Not even sure." }, { "end": 2615.58, "start": 2609.8999999999996, "text": " I mean, if you technically fix a system like this, then you technically break the causal" }, { "end": 2617.7, "start": 2615.58, "text": " link and thereby fix the problem." }, { "end": 2624.1, "start": 2617.7, "text": " I would not sure, but again, this is based on the hypothesis that they've already reached," }, { "end": 2630.3399999999997, "start": 2624.1, "text": " like demonstrated their, their conclusion, which they haven't and which they are not" }, { "end": 2632.8599999999997, "start": 2630.3399999999997, "text": " in the entire article." 
}, { "end": 2641.2999999999997, "start": 2632.8599999999997, "text": " Yeah, so the next section goes into who makes AI so I don't know about you, but this section" }, { "end": 2648.1000000000004, "start": 2641.3, "text": " was titled how workforces and AI systems interact." }, { "end": 2655.34, "start": 2648.1000000000004, "text": " And apart from one, the AI system being used for hiring the workforce, which is said this" }, { "end": 2662.9, "start": 2655.34, "text": " one instance where actually there could be one causal direction from bias to different" }, { "end": 2664.78, "start": 2662.9, "text": " misrepresentation the workforce." }, { "end": 2671.38, "start": 2664.78, "text": " Other than that, there isn't really anything in there that really shows how these two interact," }, { "end": 2673.46, "start": 2671.38, "text": " especially in a in a causal way." }, { "end": 2682.82, "start": 2673.46, "text": " Alright, the next section is called who makes AI is broadly about the about the gender and" }, { "end": 2688.6200000000003, "start": 2682.82, "text": " race imbalances or miss not unequal representation in the workforce." }, { "end": 2698.2599999999998, "start": 2688.62, "text": " And we're going to skip this diversity statistics that kind of that discuss that diversity statistics" }, { "end": 2706.54, "start": 2698.2599999999998, "text": " of companies aren't really accurate, or can be, you know, massaged kind of by the companies," }, { "end": 2709.9, "start": 2706.54, "text": " which you know, is true." }, { "end": 2714.46, "start": 2709.9, "text": " Definitely companies will always try to maximize their profits." }, { "end": 2722.62, "start": 2714.46, "text": " And even if they give out such a report, so that definitely critical thinking is in order." }, { "end": 2729.5, "start": 2722.62, "text": " Alright, so the next section is called the discrimination feedback loop." }, { "end": 2734.18, "start": 2729.5, "text": " Right, if so if in the earlier section, you felt like here we go into the meat, then you" }, { "end": 2740.78, "start": 2734.18, "text": " must feel with this title, like, okay, we're actually going to see how this loop works" }, { "end": 2748.7000000000003, "start": 2740.78, "text": " and how the two things are really linked, like how one causes the other and vice versa." }, { "end": 2750.02, "start": 2748.7000000000003, "text": " So let's jump in." }, { "end": 2758.38, "start": 2750.02, "text": " They say AI systems increasingly play a role in our social and political institutions," }, { "end": 2762.2200000000003, "start": 2758.38, "text": " including education, healthcare, hiring, criminal justice." }, { "end": 2769.38, "start": 2762.2200000000003, "text": " Yes, therefore, we need to consider the relationship between the workplace diversity crisis and" }, { "end": 2774.06, "start": 2769.38, "text": " the problems with bias and discrimination in AI systems." }, { "end": 2783.94, "start": 2774.06, "text": " No, why I don't see how therefore, but yeah, so I don't see how therefore we need to consider" }, { "end": 2784.94, "start": 2783.94, "text": " the relationship." }, { "end": 2789.58, "start": 2784.94, "text": " Okay, if there is a relationship, we need to consider whether there's a relationship." }, { "end": 2792.38, "start": 2789.58, "text": " Okay, granted." }, { "end": 2797.1600000000003, "start": 2792.38, "text": " So they say fairness, accountability and transparency research is playing an emerging role." 
}, { "end": 2802.62, "start": 2797.16, "text": " Now what they mean here is the aspect of fairness, accountability and transparency research that" }, { "end": 2804.3799999999997, "start": 2802.62, "text": " shows that there is a problem." }, { "end": 2809.5, "start": 2804.3799999999997, "text": " So I told you there's two sides, one side is showing there is a problem in current systems" }, { "end": 2811.42, "start": 2809.5, "text": " and the other side is trying to fix them." }, { "end": 2818.46, "start": 2811.42, "text": " So they're very much fans of the side that shows that there is a problem and they use" }, { "end": 2823.94, "start": 2818.46, "text": " show some of these problems here, we've already seen some but they show some more like Facebook's" }, { "end": 2828.98, "start": 2823.94, "text": " ad delivery systems let users to be shown as for housing and employment in a discriminatory" }, { "end": 2829.98, "start": 2828.98, "text": " manner." }, { "end": 2836.9, "start": 2829.98, "text": " So giving 2019 study found significant racial bias in a widely used commercial algorithm" }, { "end": 2843.02, "start": 2836.9, "text": " used to determine whether patients will be enrolled in care management programs." }, { "end": 2855.1, "start": 2843.02, "text": " So these are these are just examples of these AI systems being biased." }, { "end": 2861.02, "start": 2855.1, "text": " So they go into this say taking a contextualized view may enable more extensive account and" }, { "end": 2866.86, "start": 2861.02, "text": " the contextualized view they when they say this they mean anything more than just a technical" }, { "end": 2870.02, "start": 2866.86, "text": " approach at solving these problems." }, { "end": 2874.62, "start": 2870.02, "text": " More extensive account of bias to emerge future work could examine the politics of system" }, { "end": 2881.58, "start": 2874.62, "text": " design study how AI systems in situated reality and study AI systems in situated realities" }, { "end": 2888.18, "start": 2881.58, "text": " ask why a system was designed in a particular way, how it was constructed, whose interest" }, { "end": 2894.34, "start": 2888.18, "text": " it shaped shaped by the metrics in which its success or failure is assessed, rather than" }, { "end": 2898.9, "start": 2894.34, "text": " solely focusing on improving existing data sets or individual algorithms." }, { "end": 2901.02, "start": 2898.9, "text": " Yeah, I agree." }, { "end": 2906.46, "start": 2901.02, "text": " I mean, we always have to we always have to pay attention to these things, especially" }, { "end": 2913.46, "start": 2906.46, "text": " like looking at the metrics by which its success or failure is assessed." }, { "end": 2922.1, "start": 2913.46, "text": " But a lot of times this is this is rather straightforward in kind of if you look at" }, { "end": 2929.06, "start": 2922.1, "text": " the metric, the metric most often, especially in commercial applications is money, right?" }, { "end": 2936.62, "start": 2929.06, "text": " So the metric of like an ad showing system, like if I have a system to recommend ads to" }, { "end": 2943.7599999999998, "start": 2936.62, "text": " people, show people ads and personalize them and so on, I simply want to maximize my revenue." }, { "end": 2946.7, "start": 2943.7599999999998, "text": " So I want to sell someone something." }, { "end": 2952.8199999999997, "start": 2946.7, "text": " And everything I want to know is how likely is it that person is going to buy that thing?" 
}, { "end": 2953.8199999999997, "start": 2952.8199999999997, "text": " Right?" }, { "end": 2956.7799999999997, "start": 2953.8199999999997, "text": " I that's basically Yeah." }, { "end": 2965.7599999999998, "start": 2956.7799999999997, "text": " So in essence, sometimes it's really valuable to consider what capitalism is." }, { "end": 2975.2999999999997, "start": 2965.7599999999998, "text": " So in capitalism in so capitalism, these kind of this system we're working on is kind of" }, { "end": 2980.1000000000004, "start": 2975.3, "text": " a form of limited capitalism, but mostly mostly capitalism." }, { "end": 2984.3, "start": 2980.1000000000004, "text": " And capitalism is very greedy." }, { "end": 2990.42, "start": 2984.3, "text": " So capitalism, all corporations want to do basically is make money." }, { "end": 2998.02, "start": 2990.42, "text": " And that is and on the other side, you have discrimination." }, { "end": 3004.76, "start": 2998.02, "text": " So discrimination meaning these unequal represent like unequal distribution actively." }, { "end": 3009.4, "start": 3004.76, "text": " So and often sometimes these go hand in hand, sometimes you can make more money by discriminating" }, { "end": 3010.82, "start": 3009.4, "text": " against a certain type of people." }, { "end": 3013.26, "start": 3010.82, "text": " And that's, that's a really bad scenario." }, { "end": 3018.5200000000004, "start": 3013.26, "text": " Like that's a very, like, this is really something where we need to take action." }, { "end": 3025.9, "start": 3018.5200000000004, "text": " But a lot of times, a lot of times, these two things stand in opposition to each other." }, { "end": 3030.78, "start": 3025.9, "text": " So little arrow here, non compatible." }, { "end": 3041.82, "start": 3030.78, "text": " That means if I want to sell someone something, then I maximize my profit by not caring by" }, { "end": 3047.42, "start": 3041.82, "text": " accurately assessing how likely is it that person buys that thing." }, { "end": 3053.2200000000003, "start": 3047.42, "text": " If I want to discriminate here, if I want to discriminate, start discriminating, according" }, { "end": 3059.76, "start": 3053.2200000000003, "text": " to skin color saying like, No, I don't like that this person with the skin color is able" }, { "end": 3065.2200000000003, "start": 3059.76, "text": " to buy this product, I want to kind of keep them down, and so on, then I forgo profit," }, { "end": 3073.1400000000003, "start": 3065.2200000000003, "text": " right, then I actually, even though this person could buy this thing, I forego that." }, { "end": 3077.6200000000003, "start": 3073.1400000000003, "text": " So often these things are in direct opposition to each other." }, { "end": 3084.1000000000004, "start": 3077.6200000000003, "text": " Also, if I am in charge of hiring, and I don't like people of a certain gender, but they" }, { "end": 3088.94, "start": 3084.1000000000004, "text": " would actually be really, really good, whatever, good employees." }, { "end": 3097.7000000000003, "start": 3088.94, "text": " So I forgo that, that means I'm getting a pay more for less qualified people just because" }, { "end": 3107.32, "start": 3097.7000000000003, "text": " I'm biased and I'm down ranking unjustifiably, these people of the gender I don't like." }, { "end": 3115.92, "start": 3107.32, "text": " So oftentimes, you have to ask yourself, are people fundamentally greedy, or discriminatory?" 
}, { "end": 3116.92, "start": 3115.92, "text": " Which are they more?" }, { "end": 3120.2200000000003, "start": 3116.92, "text": " If push comes to shove, would they rather have more money?" }, { "end": 3127.26, "start": 3120.2200000000003, "text": " Or would they rather keep their own race and gender group in power?" }, { "end": 3133.94, "start": 3127.26, "text": " And with just, yeah, so the and you have to ask this of corporations, you have to ask" }, { "end": 3135.7400000000002, "start": 3133.94, "text": " this of people." }, { "end": 3144.58, "start": 3135.7400000000002, "text": " And in my experience and view, like people are much, much more greedy than they are willing" }, { "end": 3150.7799999999997, "start": 3144.58, "text": " to discriminate and give up money for discrimination." }, { "end": 3158.02, "start": 3150.7799999999997, "text": " And so if we look at metrics by which success or failure of AI systems are designed, then" }, { "end": 3165.66, "start": 3158.02, "text": " I would argue a lot of the times metrics are actually profit incentives." }, { "end": 3172.2599999999998, "start": 3165.66, "text": " And especially if we look at data set construction, if there is a skewed data set that makes my" }, { "end": 3178.38, "start": 3172.26, "text": " AI system be biased, that actually loses me money and the company would profit a lot from" }, { "end": 3180.0600000000004, "start": 3178.38, "text": " building a better data set." }, { "end": 3186.38, "start": 3180.0600000000004, "text": " So looking at kind of metrics actually makes a lot of sense to me and very much in favor" }, { "end": 3187.78, "start": 3186.38, "text": " of that." }, { "end": 3192.84, "start": 3187.78, "text": " And I think by designing accurate metrics and then getting the best possible information," }, { "end": 3198.5800000000004, "start": 3192.84, "text": " the best possible data sets to maximize these metrics will oftentimes actually eliminate" }, { "end": 3199.98, "start": 3198.5800000000004, "text": " such forms of discrimination." }, { "end": 3205.5, "start": 3199.98, "text": " Again, there are situations where they don't, we have to be very cognizant of these." }, { "end": 3211.7, "start": 3205.5, "text": " They go into this and they say, also examine more thoroughly how societal discrimination" }, { "end": 3217.3, "start": 3211.7, "text": " surfaces in data provenance, examining the history and process of data set construction" }, { "end": 3221.3, "start": 3217.3, "text": " and considering how cultural norms and stereotypes were enumerated and represented at the time" }, { "end": 3222.44, "start": 3221.3, "text": " of data creation." }, { "end": 3223.62, "start": 3222.44, "text": " This is a big issue." }, { "end": 3224.62, "start": 3223.62, "text": " Yes." }, { "end": 3230.3399999999997, "start": 3224.62, "text": " The data set construction kind of at the time of data creation and so on, this is a big" }, { "end": 3232.62, "start": 3230.3399999999997, "text": " issue in these systems and a lot of bias." }, { "end": 3238.02, "start": 3232.62, "text": " And I would argue most of the bias we've seen here arises from corrupt data sets and from" }, { "end": 3241.42, "start": 3238.02, "text": " data sets that were constructed in an already biased way." }, { "end": 3247.38, "start": 3241.42, "text": " And the AI system trained on these data sets simply replicates this bias." }, { "end": 3252.74, "start": 3247.38, "text": " So I think that's very correct here." 
}, { "end": 3258.74, "start": 3252.74, "text": " They go into this example, they say the labeled faces in the wild data set contains over 15,000" }, { "end": 3259.8599999999997, "start": 3258.74, "text": " images." }, { "end": 3262.8999999999996, "start": 3259.8599999999997, "text": " Only 7% of images are of black people." }, { "end": 3270.54, "start": 3262.8999999999996, "text": " This is because these, the media landscape of the early 2000s, these images were gathered" }, { "end": 3275.3799999999997, "start": 3270.54, "text": " from the news media at the time, predominantly featured white men in positions of celebrity" }, { "end": 3276.9799999999996, "start": 3275.3799999999997, "text": " and power." }, { "end": 3278.9399999999996, "start": 3276.9799999999996, "text": " This exactly." }, { "end": 3284.86, "start": 3278.94, "text": " So if you train a system on this data set, the system will inherit this bias." }, { "end": 3290.14, "start": 3284.86, "text": " Yeah, so this is a classic example of a corrupt data set." }, { "end": 3293.38, "start": 3290.14, "text": " Also this isn't only with race and gender." }, { "end": 3299.82, "start": 3293.38, "text": " This is also if you like take pictures from IMDB, yes, a lot of this currently Celeb A" }, { "end": 3304.2200000000003, "start": 3299.82, "text": " data set that is used in all the GAN research is collected from IMDB." }, { "end": 3311.4599999999996, "start": 3304.22, "text": " You probably have overly beautiful, like pretty face people on there." }, { "end": 3316.06, "start": 3311.4599999999996, "text": " So that your AI system, your generative model is only going to produce mostly pretty face" }, { "end": 3324.04, "start": 3316.06, "text": " people, since movie stars tend to be a lot prettier than the average humans." }, { "end": 3332.22, "start": 3324.04, "text": " So that the kind of data set construction process, I think is currently the biggest" }, { "end": 3335.1, "start": 3332.22, "text": " source of bias in AI." }, { "end": 3339.18, "start": 3335.1, "text": " But that also, it's interesting that they go into this here and they kind of want to" }, { "end": 3347.3399999999997, "start": 3339.18, "text": " make the point that this is because society and power in society, the data set reflects" }, { "end": 3348.3399999999997, "start": 3347.3399999999997, "text": " that." }, { "end": 3354.4599999999996, "start": 3348.3399999999997, "text": " But I would argue if someone makes a data set that doesn't have this bias, then the" }, { "end": 3355.8199999999997, "start": 3354.4599999999996, "text": " problem is solved." }, { "end": 3357.4599999999996, "start": 3355.8199999999997, "text": " And I don't care who makes the data set." }, { "end": 3363.14, "start": 3357.46, "text": " So the link between the workforce and the bias is really broken by an argument like" }, { "end": 3367.94, "start": 3363.14, "text": " this, because as soon as we have a correct data set, an unbiased data set, we can mitigate" }, { "end": 3368.94, "start": 3367.94, "text": " the bias." }, { "end": 3373.82, "start": 3368.94, "text": " And they even go, they go into this here." }, { "end": 3378.1, "start": 3373.82, "text": " They say, sorry." }, { "end": 3385.76, "start": 3378.1, "text": " Yeah, they say down here." 
}, { "end": 3393.38, "start": 3385.76, "text": " They say these people, these researchers have looked at these facial recognition systems" }, { "end": 3398.1000000000004, "start": 3393.38, "text": " and they assessed this what we saw earlier, higher error rates for darker skinned women" }, { "end": 3402.6200000000003, "start": 3398.1000000000004, "text": " than for any other group, lowest error rates for light skinned men." }, { "end": 3408.78, "start": 3402.6200000000003, "text": " To measure this disparity, these researchers developed a new data set that is more balanced," }, { "end": 3411.5800000000004, "start": 3408.78, "text": " both in terms of gender and skin color." }, { "end": 3412.5800000000004, "start": 3411.5800000000004, "text": " Good." }, { "end": 3419.22, "start": 3412.58, "text": " Problem, like make a larger data set to actually train on and then problem solved." }, { "end": 3424.94, "start": 3419.22, "text": " And I don't care at all what race and what gender these people are." }, { "end": 3427.54, "start": 3424.94, "text": " Well done." }, { "end": 3432.38, "start": 3427.54, "text": " Good people make a good data set like this." }, { "end": 3434.14, "start": 3432.38, "text": " And then we've solved the problem." }, { "end": 3436.1, "start": 3434.14, "text": " What's the problem here?" }, { "end": 3443.46, "start": 3436.1, "text": " Why would you ever care what these people look like if they do good work?" }, { "end": 3447.9, "start": 3443.46, "text": " That's to me, this actually breaks their own argument." }, { "end": 3454.5, "start": 3447.9, "text": " I don't know why they included here." }, { "end": 3462.22, "start": 3454.5, "text": " To me that to then suggest that there is a link to the workforces, if here is obvious" }, { "end": 3470.22, "start": 3462.22, "text": " that if you fix the data set, you can fix the recognition system." }, { "end": 3483.2599999999998, "start": 3470.22, "text": " All right, so we'll go on here, jump a couple more paragraphs." }, { "end": 3489.66, "start": 3483.2599999999998, "text": " Except when they say they shoot again against this kind of say to this point, a focus on" }, { "end": 3494.18, "start": 3489.66, "text": " fixing technical systems in isolation without examining their broader context of use and" }, { "end": 3499.58, "start": 3494.18, "text": " power and dynamics that attends issues is not limited in its intervention, it can actively" }, { "end": 3501.02, "start": 3499.58, "text": " cause harm." }, { "end": 3506.58, "start": 3501.02, "text": " So if you fix the problem in a technical manner, they argue here it can actively cause harm." }, { "end": 3514.46, "start": 3506.58, "text": " And the example they give is that facial and image recognition systems, they are often" }, { "end": 3519.7400000000002, "start": 3514.46, "text": " applied in service of police surveillance, which disproportionately harms poor people" }, { "end": 3523.46, "start": 3519.7400000000002, "text": " and communities of color." }, { "end": 3530.78, "start": 3523.46, "text": " So there's a quote from this person that says, is this not social progress to make black" }, { "end": 3537.38, "start": 3530.78, "text": " people equally visible to software that will inevitably be further weaponized against us?" }, { "end": 3543.82, "start": 3537.38, "text": " We are considered criminal and more surveillable by orders of magnitude." 
}, { "end": 3548.98, "start": 3543.82, "text": " Whatever claim to a right of privacy that we may have is diminished by a state that" }, { "end": 3551.7000000000003, "start": 3548.98, "text": " believes we must always be watched and seen." }, { "end": 3557.02, "start": 3551.7000000000003, "text": " So this is an example where by improving the facial recognition for black people, it makes" }, { "end": 3559.94, "start": 3557.02, "text": " the police better at surveilling them, which is true." }, { "end": 3565.1400000000003, "start": 3559.94, "text": " And then it is an ethical problem that the police is able to use these facial recognition" }, { "end": 3566.7400000000002, "start": 3565.1400000000003, "text": " systems to surveil people." }, { "end": 3568.98, "start": 3566.7400000000002, "text": " That's a massive privacy problem." }, { "end": 3574.1, "start": 3568.98, "text": " That's a massive problem in how much the state is allowed to overreach and so on." }, { "end": 3581.38, "start": 3574.1, "text": " So I think it's a discussion in itself, but here they argue because at the very beginning" }, { "end": 3588.58, "start": 3581.38, "text": " I asked you to remember this whole notion of we always have to look at who benefits" }, { "end": 3595.82, "start": 3588.58, "text": " from the way the AI system is constructed, who is harmed from that, who benefits from" }, { "end": 3599.1400000000003, "start": 3595.82, "text": " how the metrics are shaped and so on." }, { "end": 3607.54, "start": 3599.1400000000003, "text": " In this case, we actually have a perfect example where if the face recognition system is very" }, { "end": 3615.26, "start": 3607.54, "text": " inaccurate for black people's faces, that actually helps them in the societal context." }, { "end": 3626.94, "start": 3615.26, "text": " So by logic of this report here, that must mean that somehow the bias works for them" }, { "end": 3630.78, "start": 3626.94, "text": " and thereby the system is good or something like this." }, { "end": 3632.86, "start": 3630.78, "text": " And by fixing it, you actually make it worse." }, { "end": 3635.6000000000004, "start": 3632.86, "text": " Yeah, they say it can actively cause harm." }, { "end": 3641.78, "start": 3635.6000000000004, "text": " So I think this is pretty much arguing against themselves earlier where they say, oh, we" }, { "end": 3645.42, "start": 3641.78, "text": " always have to look at who benefits from the system." }, { "end": 3652.7000000000003, "start": 3645.42, "text": " Yeah, here, if the face recognition system can't recognize you, you actually benefit." }, { "end": 3659.0600000000004, "start": 3652.7000000000003, "text": " So I don't think that argument works in any case except if you only look at it when you" }, { "end": 3662.42, "start": 3659.0600000000004, "text": " want to look at it." }, { "end": 3672.1, "start": 3662.42, "text": " All right, so we're going to jump a couple of sections here." }, { "end": 3677.06, "start": 3672.1, "text": " But the core thing here was the feedback loop." }, { "end": 3680.78, "start": 3677.06, "text": " And again, the feedback loop isn't demonstrated at all here." }, { "end": 3687.06, "start": 3680.78, "text": " Just examples of systems that are biased and of data sets that are biased, because of data" }, { "end": 3689.58, "start": 3687.06, "text": " sets that are biased." }, { "end": 3697.2999999999997, "start": 3689.58, "text": " But there's no demonstration of how the workforce, I mean, yeah, just take this previous argument." 
}, { "end": 3701.74, "start": 3697.2999999999997, "text": " So the workforce is supposedly supremely white." }, { "end": 3711.4, "start": 3701.74, "text": " And it makes a face recognition system that makes that is performing poorly for darker" }, { "end": 3713.86, "start": 3711.4, "text": " skinned people." }, { "end": 3718.44, "start": 3713.86, "text": " And that actually in this context of police surveillance helps the darker skinned people" }, { "end": 3721.18, "start": 3718.44, "text": " compared to the lighter skinned people." }, { "end": 3727.44, "start": 3721.18, "text": " So that kind of is an exact counterexample to the argument that this misrepresentation" }, { "end": 3732.56, "start": 3727.44, "text": " in the workforce leads to the biases in the system." }, { "end": 3738.62, "start": 3732.56, "text": " If we interpret it through the lens, who it costs and who it benefits." }, { "end": 3740.26, "start": 3738.62, "text": " All right." }, { "end": 3745.66, "start": 3740.26, "text": " So the next section is corporate diversity beyond the pipeline problem." }, { "end": 3750.7799999999997, "start": 3745.66, "text": " And this is kind of an odd inclusion when I read it first to interpret to go against" }, { "end": 3754.14, "start": 3750.7799999999997, "text": " the pipeline problem here." }, { "end": 3758.5, "start": 3754.14, "text": " But it kind of makes sense if you know what these people set out to do." }, { "end": 3765.2599999999998, "start": 3758.5, "text": " So what these people set out to do is to argue we must fix the workforce, right?" }, { "end": 3772.1, "start": 3765.2599999999998, "text": " We must fix the, we must hire more people of color, more women and so on, promote them" }, { "end": 3773.1, "start": 3772.1, "text": " more." }, { "end": 3778.14, "start": 3773.1, "text": " And they have a very much have a problem with this pipeline argument." }, { "end": 3780.62, "start": 3778.14, "text": " What the pipeline argument is, is the following." }, { "end": 3786.02, "start": 3780.62, "text": " So at the beginning, if you consider like the educational or career paths of people," }, { "end": 3792.22, "start": 3786.02, "text": " then you have like 100% of people that's represented at this at the beginning, and then most of" }, { "end": 3794.02, "start": 3792.22, "text": " these people go through school." }, { "end": 3795.8199999999997, "start": 3794.02, "text": " So most of these go on." }, { "end": 3799.86, "start": 3795.8199999999997, "text": " This is kind of the area in here is the population." }, { "end": 3803.58, "start": 3799.86, "text": " And then some of them pursue higher education like some drop out." }, { "end": 3806.7000000000003, "start": 3803.58, "text": " So this gets a smaller amount." }, { "end": 3811.6200000000003, "start": 3806.7000000000003, "text": " So this is here, this is time and this is kind of volume of people." }, { "end": 3816.2200000000003, "start": 3811.6200000000003, "text": " And then very few go into computer science, right?" }, { "end": 3818.7400000000002, "start": 3816.2200000000003, "text": " And then even fewer go into AI." }, { "end": 3824.86, "start": 3818.7400000000002, "text": " So what you end up is just a tiny sliver of people that actually go into AI." 
}, { "end": 3831.3, "start": 3824.86, "text": " So this is called a pipeline, and we have various junctions here like where you would" }, { "end": 3835.54, "start": 3831.3, "text": " go into higher education, where you would choose your major in university, where you" }, { "end": 3844.34, "start": 3835.54, "text": " would go into a subfield of computer science, where the kind of volume of people drops significantly" }, { "end": 3846.7000000000003, "start": 3844.34, "text": " from one point to the other." }, { "end": 3853.26, "start": 3846.7000000000003, "text": " And now if you compare this, if you compare this and use it say, we're not considered" }, { "end": 3858.7000000000003, "start": 3853.26, "text": " all of society, but here over here we'll call consider all just men and over here we'll" }, { "end": 3864.26, "start": 3858.7000000000003, "text": " consider all women again, they all go to high school and then university and then maybe" }, { "end": 3869.0200000000004, "start": 3864.26, "text": " very few go to CS, even fewer go to AI." }, { "end": 3874.94, "start": 3869.0200000000004, "text": " What you'll find is, and I've drawn it maybe wrong here, is that this is smaller than this." }, { "end": 3883.86, "start": 3874.94, "text": " So if you comparatively look at how many males end up in the AI field, you will find that" }, { "end": 3889.46, "start": 3883.86, "text": " fewer end up in more and will end up in our field than women." }, { "end": 3891.62, "start": 3889.46, "text": " If you comparatively look at it." }, { "end": 3902.9, "start": 3891.62, "text": " So at and this is over time, like at the beginning, you have 5050 main women distribution in society," }, { "end": 3911.58, "start": 3902.9, "text": " almost I guess, I think slightly more boys are born, but I could be wrong about this." }, { "end": 3918.94, "start": 3911.58, "text": " And then as you go through time here, excuse that I believe." }, { "end": 3923.26, "start": 3918.94, "text": " So you go through high school and let's just assume like high school is still kind of equal," }, { "end": 3924.92, "start": 3923.26, "text": " it depends on the country." }, { "end": 3932.2400000000002, "start": 3924.92, "text": " Then you go to university, where there's actually more women at university slightly." }, { "end": 3936.5, "start": 3932.24, "text": " And then you go into computer science and in computer science, and this is just relative" }, { "end": 3939.3799999999997, "start": 3936.5, "text": " here, that's why I kind of norm it at 100%." }, { "end": 3943.02, "start": 3939.3799999999997, "text": " Otherwise these things would go down all of them at the same time." }, { "end": 3950.1, "start": 3943.02, "text": " But comparatively, you have then much more men than women in computer science." }, { "end": 3956.4599999999996, "start": 3950.1, "text": " And then if you see who chooses AI, I don't know if there's any statistics of specifically" }, { "end": 3958.3399999999997, "start": 3956.4599999999996, "text": " choosing AI from computer science." }, { "end": 3961.3399999999997, "start": 3958.3399999999997, "text": " I'm just going to assume that remains the same." }, { "end": 3967.46, "start": 3961.34, "text": " So if you look into the AI field, kind of this, this will stay the same." }, { "end": 3971.82, "start": 3967.46, "text": " So in the AI field, you have much more men than women." 
}, { "end": 3978.38, "start": 3971.82, "text": " And presumably, because you already have much more men than women choosing computer science" }, { "end": 3985.1400000000003, "start": 3978.38, "text": " as their major or choosing any technical field as their major." }, { "end": 3987.82, "start": 3985.1400000000003, "text": " This is kind of the so called pipeline argument." }, { "end": 3990.58, "start": 3987.82, "text": " So where do AI companies hiring come in?" }, { "end": 3999.66, "start": 3990.58, "text": " AI companies come in here, they hire at this point, after your university degree, presumably." }, { "end": 4003.86, "start": 3999.66, "text": " There's exceptions, but just say they hire after your university degree." }, { "end": 4010.2599999999998, "start": 4003.86, "text": " And therefore, they basically have to choose from this distribution." }, { "end": 4015.1, "start": 4010.2599999999998, "text": " And if they just say, okay, we'll just take the top, I don't know, 10% people will hire" }, { "end": 4018.22, "start": 4015.1, "text": " the good people of this, we don't care what gender they are." }, { "end": 4026.7, "start": 4018.22, "text": " Right, so the top 10% here, the top 10% here, then this will end up being the same distribution" }, { "end": 4028.74, "start": 4026.7, "text": " as you have graduates." }, { "end": 4036.3799999999997, "start": 4028.74, "text": " Right, so this is kind of the company, company hiring from an let's say an 80 20 distribution" }, { "end": 4041.2599999999998, "start": 4036.3799999999997, "text": " without looking at gender will end up with an 80 20 distribution." }, { "end": 4045.02, "start": 4041.2599999999998, "text": " That's the pipeline argument of companies." }, { "end": 4049.7, "start": 4045.02, "text": " And they don't like the pipeline argument, because the pipeline argument basically says" }, { "end": 4052.58, "start": 4049.7, "text": " that the problem is somewhere here, right?" }, { "end": 4060.58, "start": 4052.58, "text": " The problem isn't the company's hiring wrongly." }, { "end": 4067.22, "start": 4060.58, "text": " The problem isn't that the company's here, deselected, the problem is somewhere here." }, { "end": 4070.7, "start": 4067.22, "text": " And because they want to make the argument that the company should hire in a different" }, { "end": 4073.36, "start": 4070.7, "text": " way, they can't have that." }, { "end": 4076.1, "start": 4073.36, "text": " So they argue against it." }, { "end": 4079.76, "start": 4076.1, "text": " Now to argue against this would actually be very easy." }, { "end": 4085.44, "start": 4079.76, "text": " If this argument were wrong, like they claim the argument is is is not good, the pipeline" }, { "end": 4087.58, "start": 4085.44, "text": " argument isn't good." }, { "end": 4092.52, "start": 4087.58, "text": " If the pipeline argument were wrong, what you'd have to do is you would have to say," }, { "end": 4098.1, "start": 4092.52, "text": " you would have to say, hey, companies, look at that." }, { "end": 4105.22, "start": 4098.1, "text": " In your company, you have an 80 20 distribution men to women, right?" }, { "end": 4106.780000000001, "start": 4105.22, "text": " That's pretty unequal." }, { "end": 4112.14, "start": 4106.780000000001, "text": " And you know, in university graduates, the pool you choose from is actually 5050." }, { "end": 4118.740000000001, "start": 4112.14, "text": " So obviously, you're engaged in discriminatory hiring, because you know, the pool is 5050." 
}, { "end": 4127.42, "start": 4118.740000000001, "text": " There's no reason why it why your hiring practices should cause this inequality." }, { "end": 4132.12, "start": 4127.42, "text": " And therefore, we can clearly show you do discriminatory hiring, you should stop it," }, { "end": 4136.42, "start": 4132.12, "text": " you should definitely hire more women and people of color, more of these more of the" }, { "end": 4141.82, "start": 4136.42, "text": " minorities, because your hiring practices are the problem." }, { "end": 4143, "start": 4141.82, "text": " But that's not the case." }, { "end": 4144.06, "start": 4143, "text": " How do I know?" }, { "end": 4146.9400000000005, "start": 4144.06, "text": " Because if it were the case, they would simply state this." }, { "end": 4151.7, "start": 4146.9400000000005, "text": " Definitely in this report, if that were the case, that you could actually show with numbers" }, { "end": 4156.14, "start": 4151.7, "text": " that the pipeline argument is wrong, then they would absolutely do this." }, { "end": 4163.1, "start": 4156.14, "text": " That they have to like, go back and they have to like, ramble around it for several pages," }, { "end": 4170.58, "start": 4163.1, "text": " which will mostly skip but mainly because this is the case, it is the case that these" }, { "end": 4178.660000000001, "start": 4170.58, "text": " companies hire from a pool of of unequally represented people." }, { "end": 4187.0599999999995, "start": 4178.66, "text": " And the only argument that you can make is that, well, if if you were to equalize this" }, { "end": 4193.98, "start": 4187.0599999999995, "text": " here, then maybe here where the problem is that would fix like, so the argument is often" }, { "end": 4201.66, "start": 4193.98, "text": " made if young girls choosing their majors have no one to look up to, like no strong" }, { "end": 4208.94, "start": 4201.66, "text": " women in in corporation CEO roles, they will think that it's not a climate for women and" }, { "end": 4213.7, "start": 4208.94, "text": " they will elect not to go into these fields, which is a valid argument, like I'm completely" }, { "end": 4216.66, "start": 4213.7, "text": " open to that to that argument." }, { "end": 4218.58, "start": 4216.66, "text": " But it's the only argument you can make." }, { "end": 4225.58, "start": 4218.58, "text": " And still then, even if you determine this as the cause, I would still not support racist" }, { "end": 4231.58, "start": 4225.58, "text": " and sexist hiring practices like do something else like make them clear that the environment" }, { "end": 4238.1, "start": 4231.58, "text": " can be changed or change the environment, like change the if if it really is the case" }, { "end": 4245.3, "start": 4238.1, "text": " that it's kind of a non anti woman environment, change that." }, { "end": 4250.82, "start": 4245.3, "text": " If it's just the case that they perceive it as such change the perception, but do not" }, { "end": 4256.42, "start": 4250.82, "text": " engage in discriminatory hiring practices, because there's always someone losing out" }, { "end": 4258.22, "start": 4256.42, "text": " unfairly on these practices." }, { "end": 4266.58, "start": 4258.22, "text": " And that's, that's something I'm not willing to, to go into, like that's something I'm" }, { "end": 4267.66, "start": 4266.58, "text": " not willing to engage in." }, { "end": 4271.46, "start": 4267.66, "text": " And I don't think people should engage be engaging in that." 
}, { "end": 4273.9400000000005, "start": 4271.46, "text": " Actually, that's why it's illegal." }, { "end": 4278.72, "start": 4273.9400000000005, "text": " So let's, let's actually look at very few points." }, { "end": 4285.780000000001, "start": 4278.72, "text": " This is just why the so they claim they go kind of go over these pipeline studies." }, { "end": 4291.179999999999, "start": 4285.78, "text": " And they yeah, they say term used in industry to reference the absence of diverse candidates" }, { "end": 4296.139999999999, "start": 4291.179999999999, "text": " in the hiring pool of to justify the inability of large firms to achieve diversity due to" }, { "end": 4297.139999999999, "start": 4296.139999999999, "text": " scarcity." }, { "end": 4298.139999999999, "start": 4297.139999999999, "text": " Right?" }, { "end": 4306.42, "start": 4298.139999999999, "text": " So that's, they basically agree the of that on the definition that I stated here." }, { "end": 4311.259999999999, "start": 4306.42, "text": " So the companies that are challenged on their lack of diversity frequently site pipeline" }, { "end": 4315.5, "start": 4311.259999999999, "text": " studies as proof of the persistent challenge of finding enough women and people of color" }, { "end": 4316.82, "start": 4315.5, "text": " to hire." }, { "end": 4323.3, "start": 4316.82, "text": " Yes, and, and the yeah, but they say but the evidence suggests otherwise." }, { "end": 4328.5, "start": 4323.3, "text": " For example, in 2016, Facebook chief diversity officer wrote that it has become clear that" }, { "end": 4332.52, "start": 4328.5, "text": " at the most fundamental level, appropriate representation, technology or any other industry" }, { "end": 4337.1, "start": 4332.52, "text": " will depend upon more people having the opportunity to gain necessary skills through the public" }, { "end": 4338.42, "start": 4337.1, "text": " education system." }, { "end": 4341.7, "start": 4338.42, "text": " Well, yes, that's something I would agree." }, { "end": 4348.82, "start": 4341.7, "text": " And that's something clearly that addresses this region here." }, { "end": 4353.5199999999995, "start": 4348.82, "text": " Then and where the actual problem is happening." }, { "end": 4359.54, "start": 4353.5199999999995, "text": " So I would say that's a very, very good statement from the Facebook's chief diversity officer." }, { "end": 4364.82, "start": 4359.54, "text": " They say but as the Center for Investigative Reporting study of tech company diversity" }, { "end": 4371.66, "start": 4364.82, "text": " data found 91 large tech companies headquartered in Silicon Valley managed to hire higher percent" }, { "end": 4376.42, "start": 4371.66, "text": " of black, Latino and multiracial employees than Facebook that year." }, { "end": 4386.9, "start": 4376.42, "text": " Well, just if other just just because other companies employ racist and sexist hiring" }, { "end": 4392.98, "start": 4386.9, "text": " to improve their diversity numbers doesn't mean that Facebook has to do this." }, { "end": 4393.98, "start": 4392.98, "text": " Right?" }, { "end": 4401.54, "start": 4393.98, "text": " It it like just because other companies do this doesn't mean that it's a it's a it's" }, { "end": 4405.459999999999, "start": 4401.54, "text": " a good thing to do or that's how you should go about it." 
}, { "end": 4413.66, "start": 4405.459999999999, "text": " Facebook simply says like, if we want to hire without being racist or sexist, if we want" }, { "end": 4420.98, "start": 4413.66, "text": " to just hire the best people, then more of the best people have to be in the pipeline," }, { "end": 4427.7, "start": 4420.98, "text": " like more people have to gain access to educational opportunities so we can then hire them." }, { "end": 4434.86, "start": 4427.7, "text": " Whereas these other companies probably make a big effort to say, well, even if you are" }, { "end": 4439.74, "start": 4434.86, "text": " not as educated, even if you're not as qualified as this other person will hire you because" }, { "end": 4441.98, "start": 4439.74, "text": " of your skin color." }, { "end": 4450.74, "start": 4441.98, "text": " I don't think that's that's an argument in that in the favor of what the report is claiming." }, { "end": 4455.58, "start": 4450.74, "text": " Like I don't think that that is evidence that the pipeline argument is invalid." }, { "end": 4462.66, "start": 4455.58, "text": " All right, so they go into core themes in pipeline research, and they do some they do" }, { "end": 4470.58, "start": 4462.66, "text": " some overview of the kind of pipeline research that often so sometimes the pipeline research" }, { "end": 4476.36, "start": 4470.58, "text": " examines why, why, for example, why women don't choose to go into computer science as" }, { "end": 4481.82, "start": 4476.36, "text": " much and sometimes they focus on what is their perception of the field, what was it, what" }, { "end": 4487.86, "start": 4481.82, "text": " is their perceptions of the stereotypes of the field, what is their perceptions of the" }, { "end": 4494.54, "start": 4487.86, "text": " kind of culture in the field, is it suited to them, what is their perception of how qualified" }, { "end": 4498.0199999999995, "start": 4494.54, "text": " they are for the field, and is that true, is that false, and so on." }, { "end": 4500.78, "start": 4498.0199999999995, "text": " So this research examines a whole variety of things." }, { "end": 4503.7, "start": 4500.78, "text": " And it's very interesting, actually, to read through this research." }, { "end": 4507.74, "start": 4503.7, "text": " I want to point out this here." }, { "end": 4512.62, "start": 4507.74, "text": " Other studies suggest that gender is correlated with a person's motivations for pursuing a" }, { "end": 4514.34, "start": 4512.62, "text": " career in the field." }, { "end": 4520.62, "start": 4514.34, "text": " Women and particularly women from low socioeconomic status or minority backgrounds are more likely" }, { "end": 4526.5, "start": 4520.62, "text": " to see computing as a versatile profession that provides an opportunity for secure employment," }, { "end": 4529.74, "start": 4526.5, "text": " higher pay, and better social standing." }, { "end": 4535.3, "start": 4529.74, "text": " Moreover, their interests go beyond technical aspects of computing, focusing instead on" }, { "end": 4537.98, "start": 4535.3, "text": " the purpose and application of software." }, { "end": 4543.62, "start": 4537.98, "text": " However, such interests are often de-emphasized in computer science curricula, a price technical" }, { "end": 4550.98, "start": 4543.62, "text": " skill and its applicability to industrial settings above all else." 
}, { "end": 4556.76, "start": 4550.98, "text": " So I find this really interesting because it's basically saying that women have different" }, { "end": 4560.46, "start": 4556.76, "text": " interests than men on average." }, { "end": 4564.92, "start": 4560.46, "text": " That's basically saying that, which is almost heresy." }, { "end": 4570.9800000000005, "start": 4564.92, "text": " To say this in this context, people will come after you if you suggest something like this," }, { "end": 4573.3, "start": 4570.9800000000005, "text": " and yet they're just stating it here." }, { "end": 4575.2, "start": 4573.3, "text": " Remember this for later." }, { "end": 4581.02, "start": 4575.2, "text": " This is really funny that they're like, yeah, the interests could be different for women" }, { "end": 4582.02, "start": 4581.02, "text": " than for men." }, { "end": 4589.46, "start": 4582.02, "text": " And we might have to adjust our curriculum to be more suited to these different interests." }, { "end": 4591.540000000001, "start": 4589.46, "text": " I mean, yeah." }, { "end": 4593.540000000001, "start": 4591.540000000001, "text": " I'm sure that's..." }, { "end": 4600.42, "start": 4593.540000000001, "text": " Yeah, as I said, you're like, usually this is forbidden to say." }, { "end": 4602.900000000001, "start": 4600.42, "text": " All right." }, { "end": 4605.620000000001, "start": 4602.900000000001, "text": " So they go on." }, { "end": 4618.46, "start": 4605.62, "text": " They say limitations of pipeline research, right?" }, { "end": 4627.099999999999, "start": 4618.46, "text": " These are fairly like common limitations, let's say, of studies in general, social science" }, { "end": 4633.0199999999995, "start": 4627.099999999999, "text": " studies, which I won't go into much." }, { "end": 4643.26, "start": 4633.02, "text": " Again, they state we have to examine..." }, { "end": 4646.38, "start": 4643.26, "text": " We don't only have to examine this, but the problem..." }, { "end": 4653.38, "start": 4646.38, "text": " They basically say the problem is actually the culture and the problem is actually the" }, { "end": 4659.620000000001, "start": 4653.38, "text": " perpetrators, where do I say?" }, { "end": 4664.78, "start": 4659.62, "text": " I don't remember where this is stated, but they again say we have to examine who benefits" }, { "end": 4671.7, "start": 4664.78, "text": " from its present construction, who is underserved within the current tech ecology, who benefits" }, { "end": 4676.62, "start": 4671.7, "text": " from its present construction, how these dynamics might be untangled, and so on." }, { "end": 4686.22, "start": 4676.62, "text": " So again, stating these kind of power relationships for the different groups, which I don't agree" }, { "end": 4689.22, "start": 4686.22, "text": " is in large part what's happening." }, { "end": 4696.22, "start": 4689.22, "text": " They say it's worth considering the scope of these studies and by and large, the recommendations" }, { "end": 4701.900000000001, "start": 4696.22, "text": " they issue are limited, targeted at the administrators of university computer science programs seeking" }, { "end": 4704.02, "start": 4701.900000000001, "text": " to broaden the diversity of their student body." }, { "end": 4708.96, "start": 4704.02, "text": " Yes, that's exactly where we saw the problem appears to be, right?" 
}, { "end": 4713.58, "start": 4708.96, "text": " So the reason they have a problem with these studies is that they actually focus on the" }, { "end": 4721.62, "start": 4713.58, "text": " point where this discrepancy appears to happen, because they want to claim that no, no, no," }, { "end": 4732.18, "start": 4721.62, "text": " you should focus on a different point, namely hiring in these companies, hiring and promotion." }, { "end": 4737.74, "start": 4732.18, "text": " They say though important, so at least they acknowledge that that's an important problem." }, { "end": 4743.9, "start": 4737.74, "text": " This is a narrow frame through which potential solutions to barriers to inclusion." }, { "end": 4748.94, "start": 4743.9, "text": " It does not address the companies that hire computer science students, the peers responsible" }, { "end": 4753.82, "start": 4748.94, "text": " for promulgating stereotype views or engaging in hostile behavior or the broader social" }, { "end": 4758.58, "start": 4753.82, "text": " conditions that may influence students' success in computer science programs." }, { "end": 4762.179999999999, "start": 4758.58, "text": " Actually the research and even some of the examples they've included of this research" }, { "end": 4764.0599999999995, "start": 4762.179999999999, "text": " addresses all of this." }, { "end": 4773.580000000001, "start": 4764.06, "text": " But the research often addresses the kind of stereotypes and how the peers act and how" }, { "end": 4781.740000000001, "start": 4773.580000000001, "text": " the companies act and also how the companies hire and how people have something to look" }, { "end": 4787.02, "start": 4781.740000000001, "text": " forward to or nothing to look forward to and how that influences their decisions." }, { "end": 4792.1, "start": 4787.02, "text": " Yeah, again, they say the studies are frequently cited by those within corporate environments" }, { "end": 4796.5, "start": 4792.1, "text": " to justify their own lack of diversity as they situate the locus of change outside of" }, { "end": 4799.26, "start": 4796.5, "text": " the corporation itself." }, { "end": 4803.14, "start": 4799.26, "text": " As such pipeline studies are disproportionately emphasized as a part of the broader research" }, { "end": 4805.22, "start": 4803.14, "text": " agenda on diversity and technology." }, { "end": 4810.9800000000005, "start": 4805.22, "text": " Again, they state companies use this to get out and of course, like companies, of course" }, { "end": 4812.58, "start": 4810.9800000000005, "text": " they're going to use this to get out." }, { "end": 4814.58, "start": 4812.58, "text": " I mean, I agree at least with that." }, { "end": 4821.26, "start": 4814.58, "text": " I agree that companies are going to try to use this to get out of responsibility." }, { "end": 4822.26, "start": 4821.26, "text": " Certainly." }, { "end": 4823.26, "start": 4822.26, "text": " All right." }, { "end": 4831.62, "start": 4823.26, "text": " So the last section here is the pipeline dreams after years of research." }, { "end": 4833.820000000001, "start": 4831.62, "text": " Again this is on this pipeline studies." }, { "end": 4843.74, "start": 4833.820000000001, "text": " Basically they say the pipeline research hasn't shown, like hasn't borne fruit." }, { "end": 4850.780000000001, "start": 4843.74, "text": " It hasn't led to meaningful change in the field even though we've researched this." 
}, { "end": 4855.139999999999, "start": 4850.78, "text": " The reason they say the number of reasons they tend to place the owners to solve issues" }, { "end": 4859.86, "start": 4855.139999999999, "text": " of discrimination, Silicon Valley on those who are discriminated against rather than" }, { "end": 4860.86, "start": 4859.86, "text": " the perpetrators." }, { "end": 4863.86, "start": 4860.86, "text": " I find this word choice really interesting." }, { "end": 4865.5, "start": 4863.86, "text": " Perpetrators, right?" }, { "end": 4871.94, "start": 4865.5, "text": " Like again, the group of white men is trying to put down everyone else." }, { "end": 4874.9, "start": 4871.94, "text": " That's the perspective that the article takes." }, { "end": 4879.139999999999, "start": 4874.9, "text": " And it's not even true." }, { "end": 4886.22, "start": 4879.14, "text": " This research, a lot of times it actually says the reason why, for example, women don't" }, { "end": 4892.54, "start": 4886.22, "text": " choose to go into computer science is the male dominated culture within these corporations," }, { "end": 4901.860000000001, "start": 4892.54, "text": " is the perception of this not being a woman friendly environment, is the people here of" }, { "end": 4903.54, "start": 4901.860000000001, "text": " sexual harassment and so on." }, { "end": 4905.46, "start": 4903.54, "text": " So it's not even true." }, { "end": 4910.34, "start": 4905.46, "text": " But moreover, I just wanted to point out the choice of word here, perpetrators." }, { "end": 4917.9800000000005, "start": 4910.34, "text": " I don't know how you get to this word." }, { "end": 4924.86, "start": 4917.9800000000005, "text": " It really shows kind of a worldview of the authors in my opinion." }, { "end": 4927.22, "start": 4924.86, "text": " All right." }, { "end": 4933.22, "start": 4927.22, "text": " So they go on and say, okay, this pipeline studies haven't been beneficial and companies" }, { "end": 4937.26, "start": 4933.22, "text": " haven't done much or hasn't been successful." }, { "end": 4943.14, "start": 4937.26, "text": " They're going to worker led initiatives, which I'm going to skip here." }, { "end": 4950.26, "start": 4943.14, "text": " It's just a kind of a reporting of what happened at companies where the workers themselves" }, { "end": 4951.46, "start": 4950.26, "text": " organized." }, { "end": 4955.9400000000005, "start": 4951.46, "text": " And then the last section here is the pushback against diversity." }, { "end": 4963.379999999999, "start": 4955.94, "text": " So in this section, they're kind of documenting and arguing against people who have basically" }, { "end": 4967.78, "start": 4963.379999999999, "text": " stated counter arguments to their recommendations mainly." }, { "end": 4973.62, "start": 4967.78, "text": " So their recommendations being, let's change the hiring, let's change the promotion, and" }, { "end": 4979.78, "start": 4973.62, "text": " so on to be based on race and gender." }, { "end": 4984.54, "start": 4979.78, "text": " And the pushback here characterized in different ways." }, { "end": 4986.98, "start": 4984.54, "text": " So we'll go through this." }, { "end": 4987.98, "start": 4986.98, "text": " This is the last section." }, { "end": 4990.6, "start": 4987.98, "text": " I know it's a long video already." }, { "end": 4995.1, "start": 4990.6, "text": " If you're still here, like the one person who's still here, hi, I hope you're doing" }, { "end": 4996.1, "start": 4995.1, "text": " well." 
}, { "end": 4997.1, "start": 4996.1, "text": " Good." }, { "end": 4998.1, "start": 4997.1, "text": " Keep hydrated." }, { "end": 4999.1, "start": 4998.1, "text": " Yeah." }, { "end": 5002.22, "start": 4999.1, "text": " So they say, it's a critical time." }, { "end": 5010.62, "start": 5002.22, "text": " We now see diversity itself being weaponized." }, { "end": 5016.9, "start": 5010.62, "text": " So they say this growing awareness accompanied by demands for inclusion and equity has led" }, { "end": 5023.22, "start": 5016.9, "text": " to some change, but there has also been resistance, especially among those implicitly privileged" }, { "end": 5024.54, "start": 5023.22, "text": " by the status quo." }, { "end": 5028.7, "start": 5024.54, "text": " So again, jumping straight to attack on the person." }, { "end": 5033.74, "start": 5028.7, "text": " Like I don't care if who makes an argument against me." }, { "end": 5039.34, "start": 5033.74, "text": " I want to go on the argument and I'm going to go on the content of the argument." }, { "end": 5047.34, "start": 5039.34, "text": " But these people straight, first thing they stayed is that's just by the people who are" }, { "end": 5048.34, "start": 5047.34, "text": " benefiting." }, { "end": 5051.900000000001, "start": 5048.34, "text": " That's just by the white men, basically." }, { "end": 5053.900000000001, "start": 5051.900000000001, "text": " Straight to the identity of the person." }, { "end": 5058.38, "start": 5053.900000000001, "text": " That's dishonesty right there." }, { "end": 5065.66, "start": 5058.38, "text": " So those questioning and even rejecting the idea that racism, misogyny, and harassment" }, { "end": 5070.46, "start": 5065.66, "text": " are problems within the AI field and the tech industry have appropriated the language of" }, { "end": 5077.34, "start": 5070.46, "text": " diversity to argue that efforts to improve inclusion are in fact exclusionary and addressing" }, { "end": 5082.62, "start": 5077.34, "text": " the deeper structural challenges posed by racism, sex and inequity is misguided." }, { "end": 5089.58, "start": 5082.62, "text": " And yes, yes, definitely efforts to improve inclusion can be exclusionary." }, { "end": 5101.1, "start": 5089.58, "text": " Like just because, so this is a thing, just because you're fixing a problem doesn't mean" }, { "end": 5107.98, "start": 5101.1, "text": " the method you're using to fixing it is justified and is itself good." }, { "end": 5115.3, "start": 5107.98, "text": " Methods to improve inclusion can be exclusionary and some that have been proposed are exclusionary." }, { "end": 5117.58, "start": 5115.3, "text": " Definitely it depends on the method." }, { "end": 5121.48, "start": 5117.58, "text": " It doesn't mean these people are against these efforts." }, { "end": 5128.66, "start": 5121.48, "text": " It means that the measures, for example, implementing racist hiring policy, I can definitely see" }, { "end": 5134.0199999999995, "start": 5128.66, "text": " that this is going to lead to more equal representation within the workforce." }, { "end": 5141.86, "start": 5134.0199999999995, "text": " But the tool itself is really bad and exclusionary and discriminating." }, { "end": 5149.5, "start": 5141.86, "text": " So yeah, I would say that it's accurate that it can be exclusionary." 
}, { "end": 5154.98, "start": 5149.5, "text": " I say, for example, some AI researchers greeted the announcement of Black in AI Workshop at" }, { "end": 5159.7, "start": 5154.98, "text": " NRIPS leading machine learning conference by questioning whether the event was necessary," }, { "end": 5162.62, "start": 5159.7, "text": " arguing that it would be discriminatory." }, { "end": 5163.98, "start": 5162.62, "text": " But can't they?" }, { "end": 5166.98, "start": 5163.98, "text": " Can't they question whether the event was necessary?" }, { "end": 5170.42, "start": 5166.98, "text": " Like that would, I would, here I would need a discussion." }, { "end": 5172.06, "start": 5170.42, "text": " What is it for?" }, { "end": 5173.06, "start": 5172.06, "text": " Right?" }, { "end": 5175.64, "start": 5173.06, "text": " Why is this event happening?" }, { "end": 5177.74, "start": 5175.64, "text": " And what is it doing?" }, { "end": 5180.5, "start": 5177.74, "text": " And is it discriminatory?" }, { "end": 5181.62, "start": 5180.5, "text": " It could be." }, { "end": 5183.22, "start": 5181.62, "text": " Any event can be discriminatory." }, { "end": 5190.3, "start": 5183.22, "text": " Does it discriminate based on race or gender or anything?" }, { "end": 5194.74, "start": 5190.3, "text": " Is it, you know, does it do so unjustly and all?" }, { "end": 5198.42, "start": 5194.74, "text": " So I don't, I don't just don't see why." }, { "end": 5199.42, "start": 5198.42, "text": " Could still be wrong." }, { "end": 5203.74, "start": 5199.42, "text": " Like you could question and then you could be wrong." }, { "end": 5206.7, "start": 5203.74, "text": " But you should be taken on your argument." }, { "end": 5216.06, "start": 5206.7, "text": " But the argument here is just already questioning this is already on the wrong side of the argument." }, { "end": 5217.66, "start": 5216.06, "text": " And I don't agree with this." }, { "end": 5221.46, "start": 5217.66, "text": " I don't agree with these people that question this workshop." }, { "end": 5225.74, "start": 5221.46, "text": " Don't have a particular opinion on these things." }, { "end": 5231.82, "start": 5225.74, "text": " But I have the opinion that you have to take arguments at their argument value and not" }, { "end": 5238.54, "start": 5231.82, "text": " just at who makes them or whether or not they're against a particular viewpoint." }, { "end": 5240.66, "start": 5238.54, "text": " All right." }, { "end": 5247.139999999999, "start": 5240.66, "text": " They say such pushback often centers calls for cognitive diversity or viewpoint diversity." }, { "end": 5251.7, "start": 5247.139999999999, "text": " The idea that individual differences in the ways people think and understand the world" }, { "end": 5257.0199999999995, "start": 5251.7, "text": " are distinctions that should be counted alongside or instead of other identity categories such" }, { "end": 5258.5, "start": 5257.0199999999995, "text": " as race and gender." }, { "end": 5266.34, "start": 5258.5, "text": " Well, yes, that's I mean, isn't that isn't that a very reasonable thing to say?" 
}, { "end": 5272.54, "start": 5266.34, "text": " Isn't it very reasonable to say that differences in the ways people think and understand the" }, { "end": 5278.139999999999, "start": 5272.54, "text": " world, their distinctions that should be counted alongside other identity categories such as" }, { "end": 5285.780000000001, "start": 5278.14, "text": " race and gender, they say a dozen white men so long as they were not raised in the same" }, { "end": 5291.02, "start": 5285.780000000001, "text": " household and don't think identical thoughts could be considered diverse." }, { "end": 5295.700000000001, "start": 5291.02, "text": " That's I don't know if this is a sarcastic statement or not, but clearly it's it's kind" }, { "end": 5302.18, "start": 5295.700000000001, "text": " of the counterpoint they're trying to make here that but yes, I would I would totally" }, { "end": 5309.740000000001, "start": 5302.18, "text": " agree with this statement in a way a white man growing up in San Francisco, a white man" }, { "end": 5317.820000000001, "start": 5309.740000000001, "text": " growing up in rural Idaho, a white man growing up in Florida, a white man growing up in Western" }, { "end": 5326.02, "start": 5317.820000000001, "text": " Europe, one in Russia, and one growing up on the road with its circus, his circus parents" }, { "end": 5334.26, "start": 5326.02, "text": " in Mongolia would definitely be that plenty diverse, right?" }, { "end": 5342.02, "start": 5334.26, "text": " I mean, they criticize this here, but this is is actually how can you how can you not" }, { "end": 5343.740000000001, "start": 5342.02, "text": " see this that?" }, { "end": 5348.540000000001, "start": 5343.740000000001, "text": " Yes, these are valid differences, and people are going to think differently, independent" }, { "end": 5351.5, "start": 5348.540000000001, "text": " of how they look, people are going to have different thoughts." }, { "end": 5356.42, "start": 5351.5, "text": " And it's important to recognize other people think differently." }, { "end": 5362.7, "start": 5356.42, "text": " And therefore, you should, you know, include them if it's relevant." }, { "end": 5366.82, "start": 5362.7, "text": " And the counter argument to this is, of course, what the authors here are saying basically" }, { "end": 5379.62, "start": 5366.82, "text": " is that 12, a dozen people, as long as they are don't look the same, could be considered" }, { "end": 5383.98, "start": 5379.62, "text": " diverse, even if they all were raised in the same place, and basically all live in San" }, { "end": 5387.98, "start": 5383.98, "text": " Francisco, and think the exact same thing." }, { "end": 5395.58, "start": 5387.98, "text": " Yeah, that's, I mean, it sounds to me, it sounds as absurd as the other way around." }, { "end": 5396.66, "start": 5395.58, "text": " To me." }, { "end": 5401.46, "start": 5396.66, "text": " So here's, here's my, here's my thoughts on this." }, { "end": 5407.58, "start": 5401.46, "text": " I am not going to pretend that I know what life is like as a woman." }, { "end": 5408.58, "start": 5407.58, "text": " Right?" }, { "end": 5418.0599999999995, "start": 5408.58, "text": " I'm absolutely sure that for areas of life, it is it is definitely valuable to listen" }, { "end": 5427.5, "start": 5418.0599999999995, "text": " to the experience of a woman or multiple women, an aggregate of women, because the life is" }, { "end": 5429.46, "start": 5427.5, "text": " just different as a woman." 
}, { "end": 5431.18, "start": 5429.46, "text": " Life is also different." }, { "end": 5437.5199999999995, "start": 5431.18, "text": " As a black person, I absolutely concede that there are things that I might not be able" }, { "end": 5445.5, "start": 5437.52, "text": " to draw from my life experience, because I am not of that skin color that different problems" }, { "end": 5446.5, "start": 5445.5, "text": " that people face." }, { "end": 5450.5, "start": 5446.5, "text": " And that's why it's important to have an opinion of that at the table." }, { "end": 5461.22, "start": 5450.5, "text": " But I'm also absolutely certain that I have no relation to someone who grew up as a child" }, { "end": 5466.9400000000005, "start": 5461.22, "text": " pop star from the age of 12, and then had that life." }, { "end": 5472.339999999999, "start": 5466.94, "text": " I have no relation to someone growing up under a communist regime." }, { "end": 5480.179999999999, "start": 5472.339999999999, "text": " I have no relation to someone growing up in in kind of a Buddhist religious tradition." }, { "end": 5481.179999999999, "start": 5480.179999999999, "text": " I just don't." }, { "end": 5482.74, "start": 5481.179999999999, "text": " And I don't care how they look." }, { "end": 5485.219999999999, "start": 5482.74, "text": " They have different experiences." }, { "end": 5488.94, "start": 5485.219999999999, "text": " They have different bodies of knowledge to draw on." }, { "end": 5496.219999999999, "start": 5488.94, "text": " And I don't think why we should make the difference along the exact lines of race and gender." }, { "end": 5500.900000000001, "start": 5496.22, "text": " Yeah, but that's what they that's of course what they argue here." }, { "end": 5508.18, "start": 5500.900000000001, "text": " Those arguments work by centering identity while flattening or ignoring power relationships." }, { "end": 5515.34, "start": 5508.18, "text": " Here the VP, the Facebook VP of engineering said that the ultimate goal is cognitive diversity" }, { "end": 5519.62, "start": 5515.34, "text": " and cognitive diversity is correlated with identity diversity." }, { "end": 5525.34, "start": 5519.62, "text": " That means it's not just about getting women in tech, it's about broad voices, broad representation." }, { "end": 5526.34, "start": 5525.34, "text": " Right?" }, { "end": 5537.38, "start": 5526.34, "text": " So the the this is exactly what I would say the reason why we want different the reason" }, { "end": 5542.62, "start": 5537.38, "text": " why we want a woman or a black person at the table is because they have a different knowledge" }, { "end": 5546.38, "start": 5542.62, "text": " is because they have different thoughts because of their different life experience." }, { "end": 5549.34, "start": 5546.38, "text": " They have different thoughts that they can bring in." }, { "end": 5557.860000000001, "start": 5549.34, "text": " So actually, by including these what they call bodies, it is about cognitive diversity," }, { "end": 5559.5, "start": 5557.860000000001, "text": " even in itself." }, { "end": 5562.62, "start": 5559.5, "text": " But the authors here really see this from a different angle." }, { "end": 5568.4400000000005, "start": 5562.62, "text": " They really see this in terms of power relationships between race and gender groups." 
}, { "end": 5573.5, "start": 5568.4400000000005, "text": " And I yeah, the arguments of the authors don't make sense if you don't view it through that" }, { "end": 5574.5, "start": 5573.5, "text": " lens." }, { "end": 5581.54, "start": 5574.5, "text": " That lens to me is just such a it's such a I don't know, it's just sad look on the world." }, { "end": 5585.78, "start": 5581.54, "text": " And also, I think it's a very, very inaccurate look on the world." }, { "end": 5590.22, "start": 5585.78, "text": " And it's, I think, a very dangerous look on the world." }, { "end": 5597.94, "start": 5590.22, "text": " Um, yeah, again, they say instead of looking at historical patterns of marginalization," }, { "end": 5601.34, "start": 5597.94, "text": " calls for cognitive diversity argued that all differences are equal." }, { "end": 5602.42, "start": 5601.34, "text": " No, we're not." }, { "end": 5608.54, "start": 5602.42, "text": " Like, no calls for cognitive diversity or don't argue that all differences are equal." }, { "end": 5614.7, "start": 5608.54, "text": " Well aware that some people have it harder, well aware that some differences are bigger," }, { "end": 5616.9, "start": 5614.7, "text": " worse or better." }, { "end": 5625.26, "start": 5616.9, "text": " That's absolutely well aware all they're saying is that race and gender shouldn't be the like," }, { "end": 5633.74, "start": 5625.26, "text": " only things to consider and shouldn't be in itself be considered diverse." }, { "end": 5639.22, "start": 5633.74, "text": " Just because someone is of a certain skin color, it doesn't mean anything, right?" }, { "end": 5643.3, "start": 5639.22, "text": " It doesn't actually tell you anything about that person." }, { "end": 5650.56, "start": 5643.3, "text": " So why not consider people as individuals and look at what was their life like until" }, { "end": 5655.22, "start": 5650.56, "text": " this point and what could they contribute to the discussion we're having rather than" }, { "end": 5657.860000000001, "start": 5655.22, "text": " looking at the color of their skin." }, { "end": 5663.18, "start": 5657.860000000001, "text": " I mean, if the color of their skin played a role in their life, then obviously that" }, { "end": 5667.22, "start": 5663.18, "text": " would manifest in my suggestion as well." }, { "end": 5673.34, "start": 5667.22, "text": " But to just look at people through this kind of group lens is is so foreign to me." }, { "end": 5681.26, "start": 5673.34, "text": " And yeah, I feel it's it's quite dangerous." }, { "end": 5690.9800000000005, "start": 5681.26, "text": " Yeah, so again, and this this could argue that all differences are equal." }, { "end": 5697.06, "start": 5690.9800000000005, "text": " I mean, the point where you have to start misrepresenting what the counter argument" }, { "end": 5701.62, "start": 5697.06, "text": " is saying, that's really how you know you're dealing with a with not a well intentioned" }, { "end": 5704.46, "start": 5701.62, "text": " person on the other side of the of the discussion." }, { "end": 5706.62, "start": 5704.46, "text": " This is really politics now." }, { "end": 5710.04, "start": 5706.62, "text": " This isn't a well intended argumentation." }, { "end": 5714.7, "start": 5710.04, "text": " It's really someone to trying to achieve some goal, because they have to misrepresent the" }, { "end": 5715.9, "start": 5714.7, "text": " other side." }, { "end": 5719.0599999999995, "start": 5715.9, "text": " And this only gets worse from here." 
}, { "end": 5727.0199999999995, "start": 5719.0599999999995, "text": " They say recently was exemplified in the controversy over Google's appointment of Heritage Foundation" }, { "end": 5733.700000000001, "start": 5727.02, "text": " CEO K calls James to its Advanced Technology External Advisory Council." }, { "end": 5738.540000000001, "start": 5733.700000000001, "text": " Google's reasoning for the appointment of James was ostensibly to ensure diversity of" }, { "end": 5743.3, "start": 5738.540000000001, "text": " thought by including a conservative viewpoint on the council." }, { "end": 5751.18, "start": 5743.3, "text": " Alright, so Google has a technology advisory board, or council, sorry, of external people," }, { "end": 5753.780000000001, "start": 5751.18, "text": " and they've included a conservative." }, { "end": 5760.38, "start": 5753.78, "text": " And she is by all by all metrics, let's say, a standard conservative." }, { "end": 5765.78, "start": 5760.38, "text": " So this is not a far right neo Nazi type." }, { "end": 5766.78, "start": 5765.78, "text": " I don't know." }, { "end": 5774.62, "start": 5766.78, "text": " But this is this is someone who has similar opinions than half the US country and in generally" }, { "end": 5781.38, "start": 5774.62, "text": " in at least in the Western world, generally half of the of the country's population tends" }, { "end": 5784.46, "start": 5781.38, "text": " to be conservative." }, { "end": 5786.3, "start": 5784.46, "text": " More or less, I mean, there's differences." }, { "end": 5792.66, "start": 5786.3, "text": " But yeah, so this this is a this is an opinion that a large portion of the population shares." }, { "end": 5799.46, "start": 5792.66, "text": " So it would be I don't know, it would be suitable to include at least someone of that opinion" }, { "end": 5804.46, "start": 5799.46, "text": " in an external advisory council to to have that on board." }, { "end": 5809.34, "start": 5804.46, "text": " You don't have to listen to her like she's not like she's made king." }, { "end": 5818.22, "start": 5809.34, "text": " It's simply that she will have the opportunity to input her voice representative of kind" }, { "end": 5821.9400000000005, "start": 5818.22, "text": " of that large, very large percentage of people." }, { "end": 5828.9400000000005, "start": 5821.9400000000005, "text": " They go on to say, James is also a black woman, thus adding racial and gender diversity to" }, { "end": 5830.22, "start": 5828.9400000000005, "text": " the panel." }, { "end": 5835.46, "start": 5830.22, "text": " So even further, right, this is it's a conservative black woman." }, { "end": 5841.86, "start": 5835.46, "text": " All right, but the pushback following James's inclusion focused on her policy position," }, { "end": 5849.42, "start": 5841.86, "text": " citing specifically her vocal anti LGBTQ and anti immigrant views and highlighted why cognitive" }, { "end": 5853.1, "start": 5849.42, "text": " diversity is a particularly limited lens." }, { "end": 5861.46, "start": 5853.1, "text": " And the pushback here was very much spearheaded by one of the authors of this article." }, { "end": 5864.46, "start": 5861.46, "text": " So I am this isn't just reporting." }, { "end": 5873.34, "start": 5864.46, "text": " I will also I'll also criticize the the this pushback here since it's, you know, it's kind" }, { "end": 5875.46, "start": 5873.34, "text": " of argued for in this article." 
}, { "end": 5881.86, "start": 5875.46, "text": " It's not just reported and also because the authors are the same." }, { "end": 5887.14, "start": 5881.86, "text": " So here they say they have vocal anti LGBTQ and anti immigrant views." }, { "end": 5891.82, "start": 5887.14, "text": " And I haven't actually gone specifically and looked at what this person particularly has" }, { "end": 5899.179999999999, "start": 5891.82, "text": " said, but given that she's a standard conservative and has been in public office, I believe under" }, { "end": 5909.139999999999, "start": 5899.179999999999, "text": " George W. Bush, she can't like I have trouble believing that she has like extremely hateful" }, { "end": 5915.299999999999, "start": 5909.139999999999, "text": " opinions like these people shouldn't exist or like something like that nature." }, { "end": 5924.22, "start": 5915.3, "text": " Like often people like conservative people have have issues with forcing people to adopt" }, { "end": 5931.38, "start": 5924.22, "text": " certain pronouns for people or issues with which bathrooms do people go in and, you know," }, { "end": 5937.34, "start": 5931.38, "text": " generally are tougher on immigration, especially illegal immigration and so on." }, { "end": 5943.22, "start": 5937.34, "text": " I mean, these are these are views that people hold." }, { "end": 5946.900000000001, "start": 5943.22, "text": " It's a large part of people and these are discussions to be had." }, { "end": 5952.06, "start": 5946.900000000001, "text": " So including this this person would be very sensible move." }, { "end": 5957.26, "start": 5952.06, "text": " But they say in a letter opposing the appointment, a group of Google workers calling themselves" }, { "end": 5964.780000000001, "start": 5957.26, "text": " Googlers against transphobia and hate, transphobia and hate responded to the idea that diversity" }, { "end": 5967.62, "start": 5964.780000000001, "text": " of thought justified James's addition to the council." }, { "end": 5973.66, "start": 5967.62, "text": " This is a weaponization of the language of diversity by appointing James to the ATAC." }, { "end": 5978.86, "start": 5973.66, "text": " Google elevates and endorses her view, implying that hers is a valid perspective worthy of" }, { "end": 5980.86, "start": 5978.86, "text": " inclusions in its decision making." }, { "end": 5981.86, "start": 5980.86, "text": " This is unacceptable." }, { "end": 5989.099999999999, "start": 5981.86, "text": " Here it says again, the author was one of the organizers of that." }, { "end": 5990.86, "start": 5989.099999999999, "text": " And that's what they're saying here." }, { "end": 5996.94, "start": 5990.86, "text": " The views, if you don't have our views, these are unacceptable views, right?" }, { "end": 5999.9, "start": 5996.94, "text": " It's valid perspective worthy of inclusion." }, { "end": 6005.379999999999, "start": 5999.9, "text": " It's what they're saying basically is you don't even talk to these to this person, like" }, { "end": 6009.379999999999, "start": 6005.379999999999, "text": " talking to this person, considering their opinion." }, { "end": 6015.339999999999, "start": 6009.379999999999, "text": " You can still evaluate the opinion, but even considering their opinion is already wrong." }, { "end": 6018.58, "start": 6015.339999999999, "text": " And that given that the person is a black woman." 
}, { "end": 6026.58, "start": 6018.58, "text": " So basically, they are called the author's idea of diversity is people that look different" }, { "end": 6033.42, "start": 6026.58, "text": " that are from race and gender groups that have don't have much power or perceived what" }, { "end": 6035.44, "start": 6033.42, "text": " they call power right now." }, { "end": 6039.94, "start": 6035.44, "text": " As long as they all think exactly as we think, right, then that's fine." }, { "end": 6044.78, "start": 6039.94, "text": " As long as they they share our thoughts, as long as they don't have dissenting opinions," }, { "end": 6049.18, "start": 6044.78, "text": " we want the we want the different looking people." }, { "end": 6053.58, "start": 6049.18, "text": " But don't dare talk to anyone of a different opinion." }, { "end": 6060.3, "start": 6053.58, "text": " Yeah, this, I don't I don't see how I mean, these these authors, in my opinion, they really" }, { "end": 6067.74, "start": 6060.3, "text": " live in in a bubble, they really live in the in a tiny Silicon Valley or Silicon Valley" }, { "end": 6074.34, "start": 6067.74, "text": " influenced spaces, because this is this is half the people they basically saying half" }, { "end": 6083.38, "start": 6074.34, "text": " the people in their greater community in their country aren't even worthy listening to their" }, { "end": 6090.14, "start": 6083.38, "text": " opinions aren't even worthy of inclusion in of consideration." }, { "end": 6102.02, "start": 6090.14, "text": " So yeah, well, well done might as well discredit them at once." }, { "end": 6106.86, "start": 6102.02, "text": " I'm sure I'm sure I'm sure that's gonna fly well with these people." }, { "end": 6109.14, "start": 6106.86, "text": " All right." }, { "end": 6114.700000000001, "start": 6109.14, "text": " Yeah, might might start calling them deplorables and see what they do." }, { "end": 6122.14, "start": 6114.700000000001, "text": " Maybe they'll return the favor and elect a moron just to stick it in your face." }, { "end": 6124.14, "start": 6122.14, "text": " I mean, that's what happened." }, { "end": 6134.780000000001, "start": 6124.14, "text": " So the idea of cognitive diversity is mobilized by some support in support that the AI field" }, { "end": 6139.02, "start": 6134.780000000001, "text": " and the tech industry are already diverse." }, { "end": 6143.1, "start": 6139.02, "text": " Including as far as to support claims that not including identities like white and male" }, { "end": 6145.1, "start": 6143.1, "text": " constitutes discrimination." }, { "end": 6146.9400000000005, "start": 6145.1, "text": " Yes, it can." }, { "end": 6157.3, "start": 6146.9400000000005, "text": " Like if, if you include every single identity except white and male, that constitutes discrimination." }, { "end": 6163.1, "start": 6157.3, "text": " That's I mean, yes, even if they're in the majority is still constitutes discrimination," }, { "end": 6168.9800000000005, "start": 6163.1, "text": " like no one can help being born white and male, no one white and male chose to be born" }, { "end": 6169.98, "start": 6168.98, "text": " like that." }, { "end": 6177.219999999999, "start": 6169.98, "text": " Don't mostly don't choose the melanin content of your skin, you can modulate it a bit by" }, { "end": 6184.62, "start": 6177.219999999999, "text": " going to the sun, which computer science people statistically don't do very often." 
}, { "end": 6187.0599999999995, "start": 6184.62, "text": " So there's not much leeway there." }, { "end": 6196.74, "start": 6187.0599999999995, "text": " So yeah, to not include identities like that, if you include every other one, can constitute" }, { "end": 6197.74, "start": 6196.74, "text": " discrimination." }, { "end": 6199.099999999999, "start": 6197.74, "text": " True." }, { "end": 6205.34, "start": 6199.099999999999, "text": " A July 2017 memo written by James Damore, a software engineer at Google is illustrative" }, { "end": 6210.7, "start": 6205.34, "text": " of such pushback titled Google's ideological echo chamber." }, { "end": 6215.0599999999995, "start": 6210.7, "text": " And published in an internal mailing list, the memo critiqued the company's diversity" }, { "end": 6220.62, "start": 6215.0599999999995, "text": " policies arguing that biological differences between men and women rather than bias and" }, { "end": 6225.26, "start": 6220.62, "text": " discrimination help explain gender disparities at the company." }, { "end": 6230.14, "start": 6225.26, "text": " I feel the you can leave out the rather than here." }, { "end": 6240.06, "start": 6230.14, "text": " I think the memo simply stated that biological differences can help explain the gender disparities." }, { "end": 6244.66, "start": 6240.06, "text": " The most objective writing the memo was to make the case that policies designed to achieve" }, { "end": 6249.14, "start": 6244.66, "text": " equal representation are unfair, divisive and bad for business." }, { "end": 6250.26, "start": 6249.14, "text": " Well some are." }, { "end": 6256.74, "start": 6250.26, "text": " Yes, especially the recommendations that you've given at the beginning, number seven, is unfair," }, { "end": 6264.46, "start": 6256.74, "text": " divisive and I would also argue bad for business." }, { "end": 6272.5, "start": 6264.46, "text": " So supporters for Damore's point of view at times even drew on the rhetoric of the pipeline" }, { "end": 6275.900000000001, "start": 6272.5, "text": " to make the case that diversity initiatives are in fact discriminatory." }, { "end": 6281.299999999999, "start": 6275.9, "text": " They argue incorrectly that if there aren't qualified candidates in the pipeline, then" }, { "end": 6287.0199999999995, "start": 6281.299999999999, "text": " hiring those who are unqualified on the basis of identity discriminates against those who" }, { "end": 6288.7, "start": 6287.0199999999995, "text": " are qualified." }, { "end": 6300.98, "start": 6288.7, "text": " No, I would say hiring anyone on the basis of identity discriminates." }, { "end": 6303.259999999999, "start": 6300.98, "text": " I mean inherently." }, { "end": 6310.18, "start": 6303.26, "text": " So again I think that's the larger argument that these people are making, which is not" }, { "end": 6316.22, "start": 6310.18, "text": " incorrect, is very correct." }, { "end": 6322.5, "start": 6316.22, "text": " So in an update to the memo Damore himself asserted that he values diversity and inclusion," }, { "end": 6326.7, "start": 6322.5, "text": " but his primary concern was cognitive diversity." }, { "end": 6331.54, "start": 6326.7, "text": " He says diversity inclusion is not denying that sexism exists, doesn't endorse using" }, { "end": 6332.900000000001, "start": 6331.54, "text": " stereotypes." 
}, { "end": 6339.74, "start": 6332.9, "text": " And in specific I've read the memo and it directly says these are population level kind" }, { "end": 6344.78, "start": 6339.74, "text": " of statistics and there is more overlap than difference and you absolutely can't say anything" }, { "end": 6348.66, "start": 6344.78, "text": " about an individual by looking at these statistics." }, { "end": 6351.62, "start": 6348.66, "text": " That's almost a quote from this memo." }, { "end": 6359.86, "start": 6351.62, "text": " So he was very much concerned with considering people as individuals, but also if you like" }, { "end": 6362.379999999999, "start": 6359.86, "text": " he was basically making the same argument as earlier." }, { "end": 6370.3, "start": 6362.38, "text": " I told you to remember, hey look this one study that found that women's interests might" }, { "end": 6373.3, "start": 6370.3, "text": " be different and we might shape the curriculum." }, { "end": 6375.22, "start": 6373.3, "text": " That's basically what Damore said." }, { "end": 6380.66, "start": 6375.22, "text": " He said women's interests might be different and we'd have to maybe shape the way we do" }, { "end": 6386.1, "start": 6380.66, "text": " work, like change the way we do software engineering to attract more of them." }, { "end": 6388.9800000000005, "start": 6386.1, "text": " That was one of his points." }, { "end": 6394.86, "start": 6388.98, "text": " So he's exactly the same thing, but of course he's a misogynist because he suggested that" }, { "end": 6400.259999999999, "start": 6394.86, "text": " this could be due partly because of biological differences." }, { "end": 6407.0199999999995, "start": 6400.259999999999, "text": " And the way he was dragged through the mud is just crazy." }, { "end": 6413.82, "start": 6407.0199999999995, "text": " And they shoot here very much against this kind of biological, what they call biological" }, { "end": 6414.82, "start": 6413.82, "text": " determinism." }, { "end": 6417.94, "start": 6414.82, "text": " We'll see this very briefly." }, { "end": 6423.139999999999, "start": 6417.94, "text": " I'd say diversity becomes an empty signifier, stripped of the histories and experiences" }, { "end": 6429.379999999999, "start": 6423.139999999999, "text": " of systemic discrimination, repurposed around ideology rather than bodies." }, { "end": 6436.94, "start": 6429.379999999999, "text": " I'd say diversity has nothing inherently to do with bodies as such." }, { "end": 6449.419999999999, "start": 6436.94, "text": " I think that's only the case if you are already convinced of this." }, { "end": 6453.98, "start": 6449.419999999999, "text": " Within hours of the memo's publication, harassment targeting minority advocates who pushed back" }, { "end": 6460.9, "start": 6453.98, "text": " against the claims in the memo began, with a particular focus on queer and trans workers." }, { "end": 6468.379999999999, "start": 6460.9, "text": " That's bad, but also I think the pushback against people who voiced support was also" }, { "end": 6474.54, "start": 6468.379999999999, "text": " pretty bad because one of them was fired, as you already stated." }, { "end": 6477.62, "start": 6474.54, "text": " Google's vice president of diversity even locked down her Twitter account shortly after" }, { "end": 6483.42, "start": 6477.62, "text": " Demours firing, responding to the barrage of threats describing her as a police Nazi." }, { "end": 6484.74, "start": 6483.42, "text": " Well yeah, if you fire something." 
}, { "end": 6489.759999999999, "start": 6484.74, "text": " I mean undoubtedly Google fired this guy because they thought it was less of a PR disaster" }, { "end": 6492.62, "start": 6489.76, "text": " if they also fired him now." }, { "end": 6501.860000000001, "start": 6492.62, "text": " This probably wasn't an ideological decision, much more a PR decision." }, { "end": 6508.780000000001, "start": 6501.860000000001, "text": " If you fire someone after stating something like this, it very much looks like you're" }, { "end": 6514.3, "start": 6508.780000000001, "text": " firing them because you don't like their ideas and you don't like what they're saying," }, { "end": 6522.860000000001, "start": 6514.3, "text": " which people generally are not in favor of censoring freedom of speech." }, { "end": 6527.5, "start": 6522.860000000001, "text": " But yeah, that being said, harassment is bad, don't harass people." }, { "end": 6540, "start": 6527.5, "text": " Also that being said, criticism isn't always harassment and don't conflate the two." }, { "end": 6544.7, "start": 6540, "text": " Demours' memo also stated that the distribution of preference abilities of men and women differ" }, { "end": 6550.54, "start": 6544.7, "text": " in part due to biological causes and that these differences may explain why we don't" }, { "end": 6556.58, "start": 6550.54, "text": " see equal representation of women in tech and leadership." }, { "end": 6561.42, "start": 6556.58, "text": " This assertion hinges on a flawed assumption that identities like gender and race are essential" }, { "end": 6568.5, "start": 6561.42, "text": " and fixed biological attributes and that inequalities are at least in part the product of such irreducible" }, { "end": 6569.5, "start": 6568.5, "text": " differences." }, { "end": 6576.26, "start": 6569.5, "text": " Well, I mean, if they're not fixed biological attributes, certainly gender and race have" }, { "end": 6582.54, "start": 6576.26, "text": " a 0.99 correlation with biology." }, { "end": 6590.46, "start": 6582.54, "text": " Since your biology is first and it's determined when you're conceived, that demonstrates a" }, { "end": 6594.14, "start": 6590.46, "text": " causal direction." }, { "end": 6600.14, "start": 6594.14, "text": " Even if they're not exactly fixed, they are overwhelmingly fixed." }, { "end": 6607.5, "start": 6600.14, "text": " And to suggest that this is a flawed assumption, that these inequalities are at least part" }, { "end": 6612.860000000001, "start": 6607.5, "text": " the product of such differences, what you'd have to do, they simply state it's a flawed" }, { "end": 6614.18, "start": 6612.860000000001, "text": " assumption." }, { "end": 6621.820000000001, "start": 6614.18, "text": " What you have to do in order to show this is a flawed assumption, you have to show that" }, { "end": 6628.66, "start": 6621.82, "text": " gender and race, as far as they're biologically determined, have no influence whatsoever on" }, { "end": 6629.66, "start": 6628.66, "text": " these differences." }, { "end": 6631.299999999999, "start": 6629.66, "text": " That's what you have to show, right?" }, { "end": 6636.94, "start": 6631.299999999999, "text": " That's the counterclaim because the claim is they have at least in part something to" }, { "end": 6637.94, "start": 6636.94, "text": " do with it." 
}, { "end": 6644.54, "start": 6637.94, "text": " And that's also, I believe, what the more stated and what the predominant opinion like" }, { "end": 6651.179999999999, "start": 6644.54, "text": " is very like all the research points to, for example, there is a large difference in interest" }, { "end": 6657.5, "start": 6651.18, "text": " between genders as far as, for example, career selection goes and so on." }, { "end": 6664.780000000001, "start": 6657.5, "text": " Now, we can talk about why that is, but there's also a large consensus, I believe, that this" }, { "end": 6673.14, "start": 6664.780000000001, "text": " is at least partly determined to however degree, but it is at least partly determined by biology." }, { "end": 6680.12, "start": 6673.14, "text": " In order to show that this is flawed, you need to show that it does not have, it can't" }, { "end": 6682.099999999999, "start": 6680.12, "text": " have any influence, right?" }, { "end": 6688.9, "start": 6682.099999999999, "text": " You have to basically prove them the impossibility of this having an influence, which no one" }, { "end": 6692.94, "start": 6688.9, "text": " has done so far, much to the contrary." }, { "end": 6698.12, "start": 6692.94, "text": " So simply state this is a flawed assumption kind of shows to me that they've already," }, { "end": 6706.22, "start": 6698.12, "text": " they are there, they're in a bubble and they're expecting to speak to people in the same bubble." }, { "end": 6719.66, "start": 6706.22, "text": " Yeah, so they go on and kind of discredit this as called a biological determinism, which" }, { "end": 6728.14, "start": 6719.66, "text": " I don't think that's a correct use of the term biological determinism, but you can judge" }, { "end": 6729.14, "start": 6728.14, "text": " for yourself." }, { "end": 6735.46, "start": 6729.14, "text": " All I think these people are saying that biology might have some influence and we could adjust" }, { "end": 6737.5, "start": 6735.46, "text": " for that." }, { "end": 6739.46, "start": 6737.5, "text": " It's not even right, it's not even." }, { "end": 6741.38, "start": 6739.46, "text": " Yeah, this comes up here." }, { "end": 6745.82, "start": 6741.38, "text": " So conclusion, conclusion, finally, I think it's been two hours." }, { "end": 6746.82, "start": 6745.82, "text": " Sorry." }, { "end": 6747.82, "start": 6746.82, "text": " Conclusion." }, { "end": 6754.38, "start": 6747.82, "text": " Throughout this report, we've outlined the scope and scale of the problem, tracing how" }, { "end": 6759.52, "start": 6754.38, "text": " the diversity crisis in the industry and the problems of bias and AI systems are interrelated" }, { "end": 6762.58, "start": 6759.52, "text": " aspect of the same issue." }, { "end": 6765.24, "start": 6762.58, "text": " No." }, { "end": 6770.36, "start": 6765.24, "text": " In the past, these topics are commonly examined in isolation, but increasing evidence shows" }, { "end": 6772.98, "start": 6770.36, "text": " that they are closely intertwined." }, { "end": 6776.48, "start": 6772.98, "text": " No, you've shown that they're parallel." }, { "end": 6782.84, "start": 6776.48, "text": " You have absolutely not shown that they're interrelated aspects of the same issue and" }, { "end": 6787.86, "start": 6782.84, "text": " you have not shown that one, any one of these causally influences the other, that there" }, { "end": 6789.179999999999, "start": 6787.86, "text": " is any feedback loop." 
}, { "end": 6792.82, "start": 6789.179999999999, "text": " You have not shown that fixing one leads to fixing the other." }, { "end": 6801.86, "start": 6792.82, "text": " I mean, you could also take a company that extremely is focused on, or for some reason" }, { "end": 6808.42, "start": 6801.86, "text": " has a different workforce and then show how their products with the same data sets as" }, { "end": 6814.219999999999, "start": 6808.42, "text": " the previous companies don't end up being biased." }, { "end": 6816.38, "start": 6814.219999999999, "text": " Probably not so easy." }, { "end": 6819.299999999999, "start": 6816.38, "text": " But again, none of that is in the report." }, { "end": 6825.38, "start": 6819.3, "text": " There are many things you could actually do to show what you wanted to show, but it's" }, { "end": 6830.820000000001, "start": 6825.38, "text": " just not the case in this article." }, { "end": 6835.22, "start": 6830.820000000001, "text": " Our analysis surfaced two prominent responses to the diversity crisis." }, { "end": 6840.18, "start": 6835.22, "text": " On one hand, a worker driven movement, which we've skipped." }, { "end": 6846.66, "start": 6840.18, "text": " On the other hand, we observe a small but vocal counter movement that actively resists" }, { "end": 6850.5, "start": 6846.66, "text": " diversity in the industry." }, { "end": 6854.42, "start": 6850.5, "text": " What dishonesty actively resists diversity?" }, { "end": 6861.3, "start": 6854.42, "text": " I mean, the thought that these people stray around like, no, I don't like the other looking" }, { "end": 6862.3, "start": 6861.3, "text": " people." }, { "end": 6864.42, "start": 6862.3, "text": " It's just so absurd." }, { "end": 6871.18, "start": 6864.42, "text": " All they're saying is that either we don't understand the problem in the correct way" }, { "end": 6873.98, "start": 6871.18, "text": " or our tools aren't appropriate to solve the problem." }, { "end": 6881.9, "start": 6873.98, "text": " I think everyone has the same goal of the workplace and the AI systems being as fair" }, { "end": 6887.339999999999, "start": 6881.9, "text": " and as non discriminatory as possible." }, { "end": 6890.9, "start": 6887.339999999999, "text": " Misrepresentation of the other side is something that really bugs me." }, { "end": 6893.419999999999, "start": 6890.9, "text": " And it's something that these authors do a lot." }, { "end": 6900.82, "start": 6893.419999999999, "text": " So yeah, I lose my polite side maybe." }, { "end": 6907.94, "start": 6900.82, "text": " And uses arguments from biological determinism to assert that women are inherently less suited" }, { "end": 6910.5, "start": 6907.94, "text": " to computer science and AI." }, { "end": 6912.179999999999, "start": 6910.5, "text": " What a load of crap." }, { "end": 6919.139999999999, "start": 6912.179999999999, "text": " Sorry, but uses to assert that women are inherently less suited to computer science." }, { "end": 6920.139999999999, "start": 6919.139999999999, "text": " No one." }, { "end": 6925.78, "start": 6920.139999999999, "text": " Okay, not no one, but no one that I know." }, { "end": 6930.179999999999, "start": 6925.78, "text": " Asserts that absolutely no one that makes these arguments." }, { "end": 6931.820000000001, "start": 6930.18, "text": " Sorry, not no one." }, { "end": 6939.700000000001, "start": 6931.820000000001, "text": " You can always find a sexist douchebag that makes that argument." 
}, { "end": 6943.62, "start": 6939.700000000001, "text": " But this is not a serious argument made." }, { "end": 6947.900000000001, "start": 6943.62, "text": " And this is not this counter movement." }, { "end": 6951.46, "start": 6947.900000000001, "text": " Most people in the argument that most people in this counter movement make." }, { "end": 6952.62, "start": 6951.46, "text": " Not at all." }, { "end": 6962.82, "start": 6952.62, "text": " And to represent them as such is just so dishonest that yeah, this this this basically this is" }, { "end": 6968.94, "start": 6962.82, "text": " the it's nice that it's in the conclusion because it finally like at the end it completely" }, { "end": 6975.98, "start": 6968.94, "text": " destroys the credibility of me taking seriously these authors." }, { "end": 6981.74, "start": 6975.98, "text": " I thought they had so that the parts we skipped over I mostly would say I'm mostly okay with" }, { "end": 6989.66, "start": 6981.74, "text": " they mostly show parallels between the that AI systems are biased and they also show that" }, { "end": 6991.3, "start": 6989.66, "text": " there is unequal representation." }, { "end": 6996.0199999999995, "start": 6991.3, "text": " They also show examples of discrimination, harassment and so on." }, { "end": 7001.38, "start": 6996.0199999999995, "text": " Problems in AI companies and universities that all you can read the report for this" }, { "end": 7003.98, "start": 7001.38, "text": " that's it's pretty interesting to read." }, { "end": 7008.94, "start": 7003.98, "text": " But the points I've addressed, I'm not happy with." }, { "end": 7011.78, "start": 7008.94, "text": " Yeah, so that was it for now." }, { "end": 7018.179999999999, "start": 7011.78, "text": " Sorry this was took so long, but I felt that a thorough take was necessary." }, { "end": 7039.22, "start": 7018.18, "text": " Have a nice rest of the day." } ]
sbKaUc0tPaY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
The Odds are Odd: A Statistical Test for Detecting Adversarial Examples
[ "Science & Technology" ]
[]
https://arxiv.org/abs/1902.04818 Abstract: We investigate conditions under which test statistics exist that can reliably detect examples, which have been adversarially manipulated in a white-box attack. These statistics can be easily computed and calibrated by randomly corrupting inputs. They exploit certain anomalies that adversarial attacks introduce, in particular if they follow the paradigm of choosing perturbations optimally under p-norm constraints. Access to the log-odds is the only requirement to defend models. We justify our approach empirically, but also provide conditions under which detectability via the suggested test statistics is guaranteed to be effective. In our experiments, we show that it is even possible to correct test time predictions for adversarial attacks with high accuracy. Authors: Kevin Roth, Yannic Kilcher, Thomas Hofmann
Hello and welcome. Today we're looking at The Odds are Odd: A Statistical Test for Detecting Adversarial Examples. Shameless self-promotion here, since this is my own paper. It's on arXiv, and basically what we do is detect adversarial examples. For those who don't know, an adversarial example is a way of fooling a classifier in order to get it to do something weird. Let's look at it. Maybe you have an image of a cat - I have no clue how a cat looks - so you have an image of a cat and you have a classifier. The classifier takes this image as input, winds it down to some probabilities over classes - cat, dog and so on - and gives you an estimate of how likely each class is. What the adversarial example does is change this image by adding a noise. This is a very specific noise, and it comes with a multiplier gamma that is super small, so the noise is so small you basically can't see it with the human eye. But it is able to perturb the image in such a way that the probabilities change and all of a sudden a different class is the highest class. So we are able to fool these classifiers by adding a tiny amount of very, very specific noise. That's an adversarial example. These have many implications in, let's say, security applications, and also in understanding how these classifiers work. So our task is to explain and detect them: explain why they happen and detect when they happen. Alright, so what do we do? Let's jump right into it. We view a classifier through its logits L. What comes before the logits is your neural network up to the last layer - it can be something like a convolutional neural network - and it gives you a feature representation. So from the image x you extract a feature representation, which is some vector of dimension D, and then you multiply this feature representation by a weight matrix W of size D by K, where K is the number of classes. That outputs a vector of dimension K, which holds the scores for cat, dog and so on. These are the logits, and the logits get transformed into probabilities by running them through a softmax layer. So basically we view a classifier as a feature representation together with a weight matrix, and the logits come out of a matrix-vector product, a dot product with each column of the matrix. This is kind of where the adversarial examples happen. When we look at this weight matrix - again, we have the D-dimensional feature vector, and the weight matrix has one column per class, say four classes, each of them D-dimensional - each column is multiplied by the feature vector and gives a score. So the final score for a class is the inner product of the corresponding column W1, W2, W3 or W4 with the feature vector. Let's call the feature vector little f. So the logit of class i is the inner product of W_i and f. We'll leave away biases for now; we could introduce biases to make it a bit more complicated, but it changes nothing.
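To make this view of the classifier concrete, here is a minimal sketch in PyTorch-style Python. The split into a feature_extractor and a last_layer is an assumed handle into the model, not code from the paper, and PyTorch stores the D-by-K matrix from the talk transposed, as K rows of size D:

    import torch

    def logits_from_parts(feature_extractor, last_layer, x):
        # feature_extractor and last_layer are assumed handles into the network:
        # everything up to the last layer, and the final linear layer itself.
        f = feature_extractor(x)               # f: (batch, D) feature representation
        W = last_layer.weight                  # W: (K, D), one weight row per class
        b = last_layer.bias                    # bias, ignored in the discussion above
        logits = f @ W.t() + b                 # logit_i = <W_i, f> + b_i
        probs = torch.softmax(logits, dim=-1)  # softmax turns logits into probabilities
        return f, logits, probs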
So your logit is going to be that inner product, and whichever logit is the highest wins; that's going to be the prediction of the classifier. Now, in an adversarial example, what can you change? You can change this feature vector f: by changing the input x you change the output of the convolutional neural network, which is the feature vector. And what you have to do in order to make one particular logit as high as possible is to make this inner product as high as possible. What's an inner product? If you look at it in a classic vector-space picture, with W_i here and f there, you want f and W_i to align as much as possible, because the inner product depends on the angle and on the magnitudes. You can stretch f, sure, but stretching scales its product with all the W's at once, positively or negatively, so what you really want is to rotate, to align f with W_i as much as possible. Now, you don't just want to maximize f against one particular W_i, you want to be adversarial. The adversarial task is usually framed either as targeted - i needs to be one particular other class - or as untargeted, which means: just give me a perturbation that fools the classifier, where fooled means that whatever it predicts right now, it should predict something different. So ultimately you want this inner product to be as high as possible for some i that is not the correct class, and at the same time you want the inner product with W_y - where y is the label of x, the column that is currently predicted if we say the classifier is 100% correct - to be as small as possible. Which means you want to maximize the whole quantity: the logit of i minus the logit of y, for some i not equal to y. We have slightly different notation in the paper - I think we call this quantity z - but never mind. So you basically just want to make this as large as possible. It's a pretty simple idea. And our point is: you want to maximize this, but you have a constraint, namely that your perturbation delta x can only be small. It can only be small because the whole point of an adversarial example is that you can't see the perturbation, and that means you basically don't have much wiggle room to do these perturbations. Which in turn means that we should be able to detect a pattern like this in the latent space - this f is the latent-space feature vector - and if we can detect such a pattern in the latent space, then we basically get an adversarial example detector.
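To make the attacker's side concrete, here is a rough sketch of an untargeted attack that climbs exactly this logit gap under an L-infinity constraint. This is a generic PGD-style loop with placeholder hyperparameters, not the specific attacks evaluated in the paper:

    def untargeted_attack(feature_extractor, last_layer, x, y, eps=0.03, steps=40, step_size=0.005):
        # Ascend on the gap  max_{i != y} l_i - l_y  while keeping the perturbation
        # inside an L-infinity ball of radius eps. x is a single image batch of
        # shape (1, C, H, W) in [0, 1], y its integer label.
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            f = feature_extractor(x_adv)
            logits = f @ last_layer.weight.t() + last_layer.bias
            mask = torch.zeros_like(logits, dtype=torch.bool)
            mask[:, y] = True
            masked = logits.masked_fill(mask, float('-inf'))   # hide the true class
            gap = masked.max(dim=1).values - logits[:, y]      # max_{i != y} l_i - l_y
            grad = torch.autograd.grad(gap.sum(), x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv + step_size * grad.sign()                 # climb the gap
                x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # stay in the eps-ball
                x_adv = x_adv.clamp(0.0, 1.0)                           # stay a valid image
        return x_adv.detach()

Using the sign of the gradient just keeps the per-pixel steps uniformly small; any small-norm ascent on the same gap would illustrate the point.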
So how do we do this? We measure exactly this: the alignment between the currently predicted class and all the other classes. In this graphic here you see it for a 10-class classifier on CIFAR-10; we only show six of the other classes, but the full figure is in the paper. This shows an adversarial example. The axis drawn on top of each of the panels is the alignment with the adversarial class - this has been an adversarially perturbed sample, so that axis is the alignment with the class the classifier now predicts. And of course you see the bright red dot, if you just focus on that: that is the adversarial example projected into this space, and of course its alignment along that axis is very high, since the classifier actually predicts that class. The blue is the sample the adversarial example was derived from, which means the original image. And you can already see, without looking at any of the other dots, that the blue sits around zero on the axis to the right in most of the panels, but in one panel it is very high on that axis. The axis to the right is, for each of these panels, one of the other classes, everything except the currently predicted adversarial class; so the top axis is always the same, while the axis to the right differs from panel to panel. The panel we framed in green is the one where the axis to the right corresponds to the original class of the sample. So what you see is: the blue point is really high along its original class, and the adversarial example procedure has basically driven it down along that class and up along the adversarial class - which is exactly saying it has made this inner product small and that inner product large.
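Measuring these alignments is straightforward. Here is a small sketch of the quantity behind these scatter plots, up to the exact normalization used for the figures, which is left out:

    def class_alignments(feature_extractor, last_layer, x):
        # Alignment of the feature vector with every class's weight row, and the
        # gap relative to the currently predicted class.
        with torch.no_grad():
            f = feature_extractor(x)                             # (batch, D)
            W = last_layer.weight                                # (K, D)
            align = f @ W.t()                                    # align[:, i] = <W_i, f>
            y_hat = align.argmax(dim=1)                          # currently predicted class
            gaps = align - align.gather(1, y_hat.unsqueeze(1))   # <W_i - W_yhat, f>
        return align, gaps, y_hat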
So where do we go from here? Let's actually skip this graphic a bit and go to the one that's way down here. What we've done is take an example x straight out of the data set and the adversarial example x hat derived from it. In the plot to the right, x would be sitting about here at the top, and x hat would be sitting down here, about a third of the way down - I'll explain what the plot means. What the vertical axis represents is this: we've gone from x to x hat in very small steps, and at each step we've asked the classifier: hey classifier, what's the probability of class y? Here y is the class of x, and x hat has some other class, since it's an adversarial example. That probability is represented in white: the more white, the more probable the classifier thinks class y is. So the direction down is the direction of x hat minus x, it goes into the adversarial direction. The direction across is some direction orthogonal to that, also taken in tiny steps, and at each point we again ask the classifier how probable it thinks y is. So we've basically done a grid sampling, and at each grid point the classifier outputs some number, which we plot in white. The downward direction is always the same, while the direction across is randomized, and we aggregate over lots and lots of random directions and also over the entire data set, so we get a comprehensive view of what adversarial examples look like from the point of view of the classifier. And what we find is pretty interesting. When you move away from the original example in a random direction, the probability of the original class decreases smoothly in every direction: you see at the edges it gets darker, so the further away you go, the shadier the classifier gets - it's like, yeah, I'm not so sure anymore that this is the class. But if you go into the direction of the adversarial example, the drop-off is, first of all, very steep: all of a sudden you're in very dark territory, which means the classifier doesn't think y is probable at all anymore. And moreover, you get this kind of cone. So what we think is happening is that, given an example, there are these directions in image space - fairly straight, low-dimensional directions - that lead to adversarial examples; we call them cones because the adversarial examples lie along these narrow directions of the space. What's also quite interesting: if you start at the original example and walk along this direction, the probability of the original class starts high, drops rapidly, and then stays down, even if you go super far in that direction. The adversarial class y hat, on the other hand, starts low, goes up, and then kind of fades. The adversarial example sits at about the distance where the probability of the adversarial class has risen and the probability of the original class has dropped. Then, as you go further - and this is what's interesting - the adversarial class probability drops as well, meaning the classifier is kind of like, yeah okay, there's too much noise now, I'm not so sure about this class anymore; but the original class stays low for a very, very long way out in this direction. This gives us a hint that adversarial examples are characterized by specific directions you can move in that suppress the original class and pump the new class up - which is exactly what we claimed with this inner-product alignment.
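For reference, a rough sketch of the probing behind this heat map. The grid ranges, step counts and number of random directions are illustrative choices, not the paper's exact settings, and this brute-force version is slow:

    def probe_original_class(model, x, x_adv, y, n_steps=41, n_random=10):
        # Walk from x towards x_adv (one axis) and along random orthogonal
        # directions (other axis), recording the probability of the original
        # class y at every grid point; average over the random directions.
        with torch.no_grad():
            d_adv = (x_adv - x).flatten()
            span = 3.0 * d_adv.norm().item()                 # go a bit past x_adv
            d_adv = d_adv / d_adv.norm()
            grid = torch.zeros(n_random, n_steps, n_steps)
            for r in range(n_random):
                d_orth = torch.randn_like(d_adv)
                d_orth = d_orth - (d_orth @ d_adv) * d_adv   # project out the adversarial direction
                d_orth = d_orth / d_orth.norm()
                for i, a in enumerate(torch.linspace(0.0, span, n_steps)):
                    for j, b in enumerate(torch.linspace(-span, span, n_steps)):
                        x_probe = x + (a * d_adv + b * d_orth).view_as(x)
                        p = torch.softmax(model(x_probe), dim=-1)
                        grid[r, i, j] = p[0, y]              # probability of the original class
        return grid.mean(dim=0)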
The next experiment we did is: we took this adversarial example and said, well, if it's really just this one direction that's problematic, then if we go into random directions we should get back to the original class, since the adversarial example is basically surrounded by the original class - this is just one direction, and the orthogonal directions represent all the other directions there are, and in pixel space there are a lot of them. So by adding noise we should be able to get back to the original class. But we found that's not the case, and we also found why. So back to this plot here, where the axis is the noise magnitude: the orange curve is the adversarial class, and it goes down, down, down as you increase the noise. The blue curve is the source class, and it goes up - and it goes up faster than the green curve, which is the highest other class, meaning whatever class is neither the source nor the adversarial class but scores highest among the rest. So the source class rises quickly, but before it can overtake the adversarial class, which only happens way back there, the highest other class has already taken over. The source class is basically too weak: if you again look at the heat map with an actual color picker, the amount of white around the original example is only something like 0.3 out of one, or even lower. So the source class is not strong enough that simply adding a bit of noise gets you back to it. But we thought: hey, if this picture is correct, we can actually detect this effect - the source class rising faster than the others. So our plan is: we add a particular, actually quite small, amount of noise, and then we detect which class falls and which class rises. And the way we do this is by measuring exactly the alignment I described before, under noise: we form this alignment quantity for all classes other than y, where y is now the currently predicted class, and we look at what happens to it under noise. That's where this graphic comes in: the top axis is again the adversarial class, i.e. the currently predicted class, and the other axis is, for each panel, one of the other classes. The noise magnitude is encoded in the brightness of the dots: the darker the red dots, the more noise we've added. Here is the original adversarial sample, and as we add noise - more noise, more noise - nothing much happens for a class that has nothing to do with the original class: the alignment simply goes down a bit, the classifier just gets less sure about that class. But for the original class, the class the adversarial example was derived from, the alignment really rises - at the same time as the adversarial alignment drops, it rises in that direction. So we can measure these deltas under noise, and we can devise statistics of what happens to these quantities for a clean sample versus an adversarial sample. Here you see histograms for pairings of source class and adversarial class, each histogram collected from that statistic: in blue the noise-induced alignments of a clean, unperturbed sample, and in orange the noise-induced alignments of an adversarial sample.
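A small sketch of what such a noise-induced statistic could look like. The noise scale and sample count are placeholders, and the actual test in the paper also standardizes these shifts per class pair and per noise source before thresholding, which is omitted here:

    def noise_induced_shift(feature_extractor, last_layer, x, sigma=0.05, n_noise=64):
        # How the gaps g_i = <W_i - W_yhat, f> move when a little Gaussian noise is
        # added to the input: for a clean input these shifts stay small, for an
        # adversarial input the gap towards the source class rises noticeably.
        with torch.no_grad():
            W = last_layer.weight                                     # (K, D)
            f = feature_extractor(x)                                  # (1, D)
            logits = f @ W.t() + last_layer.bias
            y_hat = logits.argmax(dim=1).item()                       # predicted class
            gap_clean = (f @ (W - W[y_hat]).t()).squeeze(0)           # <W_i - W_yhat, f>
            noisy = x + sigma * torch.randn(n_noise, *x.shape[1:])    # n_noise noisy copies
            gap_noisy = feature_extractor(noisy) @ (W - W[y_hat]).t() # (n_noise, K)
            shift = (gap_noisy - gap_clean).mean(dim=0)               # average shift per class
        return y_hat, shift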
What's cool is that these distributions are, in all of these cases, clearly different. So there is a clear signature of an adversarial sample in these noise-induced alignments with the weight-matrix rows, which lets you build a detector: you can say, alright, anything to the left of a threshold is clean, anything to the right is adversarial. We can do this over many different types of noise and then build a voting mechanism on top, and thereby detect adversarial examples. We have a bunch of experiments, mostly on the CIFAR-10 and ImageNet data sets, and over here you see one of the main results, the detection rates of our statistical test. On clean samples you want the detection rate to be low, on adversarial samples you want it to be high, and we achieve very high detection rates while having very low false-positive rates, especially on ImageNet. It seems like the better tuned these models are, the better we are at detecting adversarial examples - it correlates pretty directly with how well the models perform on clean accuracy. And since we can detect these things in a per-class fashion, we can do more: if you have a sample with a particular predicted class, you go and look at the noise-induced statistics with respect to each class it could have been derived from. If all of them say it's a clean sample, then alright, it's a clean sample; but if one of them says it's an adversarial sample, then not only do we know it's adversarial, we can say: aha, this must be the source class - this is exactly the effect we saw before. If we detect this pattern, we can also deduce which original class the adversarial example was derived from. So we're able to build not only a detector, but basically to reconstruct the original class. And here you see that for these models - on CIFAR-10; ImageNet is a bit too large as of yet for our compute - models with pretty high clean accuracy, plus a small toy network, we are able to reconstruct the original class. So this is a defense against adversarial examples that gets back to almost clean accuracy, which is really surprising, actually, and kind of nice. We do a bunch of other experiments, including defending against an attacker that is actually aware of this defense. But the main point here is not to say that this is the end-all method of defending against adversarial examples. We simply want to encourage this way of thinking: what happens under noise-induced perturbations, how does your network react to them, can you detect effects like the ones here, and are they unavoidable - or are there architectures we can build such that adversarial examples have no chance except to do something like this, which we can then easily detect?
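And just to illustrate the shape of the resulting decision rule, here is a loose sketch of a single-statistic version. The (K, K) threshold tensor is assumed to be calibrated on clean held-out data, and the paper itself aggregates votes over several noise sources rather than using one statistic, so treat the names and shapes here as assumptions:

    def detect_and_correct(shift, thresholds, y_hat):
        # shift: (K,) noise-induced shifts for the sample; thresholds: (K, K)
        # calibrated values tau[z, y_hat]; y_hat: the currently predicted class.
        scores = shift - thresholds[:, y_hat]     # margin over the threshold, per candidate class z
        scores[y_hat] = float('-inf')             # the predicted class itself is not a candidate
        z = scores.argmax().item()
        if scores[z] > 0:
            return True, z                        # flagged adversarial; z is the proposed source class
        return False, y_hat                       # looks clean; keep the prediction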
Alright, that was a bit of an introduction. If you liked it, check out the entire paper. Goodbye!
[ { "end": 5.6000000000000005, "start": 0, "text": " Hello and welcome. Today we're looking at the odds are odd, a statistical test for" }, { "end": 11.76, "start": 5.6000000000000005, "text": " detecting adversarial examples. So shameless self-promotion here since this" }, { "end": 21.28, "start": 11.76, "text": " is me. So this is an archive and basically what we do is we're detecting" }, { "end": 25.8, "start": 21.28, "text": " adversarial examples. For those who don't know what an adversarial example is, it's" }, { "end": 36.28, "start": 25.8, "text": " basically a way of fooling a classifier in order to kind of get it to do" }, { "end": 42.2, "start": 36.28, "text": " something weird. Let's look at it. So maybe you have an image of a cat." }, { "end": 50.24, "start": 42.2, "text": " I have no clue how a cat looks. Alright, so you have an image of a cat and you have a" }, { "end": 54.88, "start": 50.24, "text": " classifier. So the classifier takes this image as an input, kind of winds it" }, { "end": 65, "start": 54.88, "text": " down to some probabilities of classes and cat, dog and so on. And it then gives you" }, { "end": 76.4, "start": 65, "text": " an estimate of how likely each class is. So what the adversarial example does" }, { "end": 85.92, "start": 76.4, "text": " is it changes this image and it adds a noise. So this is just a very" }, { "end": 91.32000000000001, "start": 85.92, "text": " specific noise and you have kind of a multiplier here, gamma, which is super" }, { "end": 97.4, "start": 91.32000000000001, "text": " small. So the noise is almost... you can't see it with a human eye basically, it's" }, { "end": 104.56, "start": 97.4, "text": " so small. But it's able to perturb this image in a way that the" }, { "end": 111.2, "start": 104.56, "text": " probabilities will change such that all of a sudden a different class now is the" }, { "end": 116.32000000000001, "start": 111.2, "text": " highest class. So basically we're able to fool these classifiers by adding just" }, { "end": 121.92, "start": 116.32000000000001, "text": " very little bit of very very specific noise. So that's an adversarial example." }, { "end": 127, "start": 121.92, "text": " These have many implications in, let's say, security applications and also in" }, { "end": 132.92000000000002, "start": 127, "text": " understanding how these classifier works. Alright, so our task is to explain and" }, { "end": 138.51999999999998, "start": 132.92, "text": " detect them, explain why they happen and detect when they happen." }, { "end": 150.07999999999998, "start": 138.51999999999998, "text": " Alright, so what do we do? Basically let's just jump right into" }, { "end": 162.35999999999999, "start": 150.07999999999998, "text": " the thing here. We view a classifier as an output, so you" }, { "end": 168.44000000000003, "start": 162.36, "text": " have logits, what's called logits, L is" }, { "end": 180.84, "start": 172.8, "text": " this. This here is your neural network up to the last layer." }, { "end": 185.16000000000003, "start": 180.84, "text": " Basically it can be like something like a convolutional neural network and so on." }, { "end": 190.56, "start": 185.16000000000003, "text": " It gives you a feature representation. So you extract from the image X a feature" }, { "end": 196, "start": 190.56, "text": " representation, which is this entire thing here, and then you multiply this" }, { "end": 202, "start": 196, "text": " feature representation. So this is going to be some vector of dimension D. 
You" }, { "end": 210.72, "start": 202, "text": " multiply this by this weight matrix, which is going to be something like, okay" }, { "end": 216.88, "start": 210.72, "text": " I've drawn it in the wrong direction here. Let's draw W over here." }, { "end": 223.4, "start": 216.88, "text": " It's going to be D by, let's say K, where K is the number of classes." }, { "end": 233.2, "start": 223.4, "text": " Okay, still wrong. D by K, right? And output a vector of dimension K, which" }, { "end": 238.4, "start": 233.2, "text": " then is this cat, dog and so on. So these are the logits and the logits get" }, { "end": 244.2, "start": 238.4, "text": " transformed to the probabilities by running it through a softmax layer. But" }, { "end": 251.83999999999997, "start": 244.2, "text": " basically we view a classifier as having a feature representation and a weight" }, { "end": 256.96, "start": 251.83999999999997, "text": " matrix. And this here is a matrix multiplication adult product by" }, { "end": 266.84, "start": 256.96, "text": " matrix. So what we see basically is, this is kind of where the" }, { "end": 272.08, "start": 266.84, "text": " adversarial examples happen. So when we look at this weight matrix, right, again" }, { "end": 275.64, "start": 272.08, "text": " we look at the D dimensional feature vector here, and we look at the weight" }, { "end": 286.52, "start": 275.64, "text": " matrix, what it does is it has columns, right? Columns. Let's say we have four" }, { "end": 293.28, "start": 286.52, "text": " classes here, right? So it has these four columns and each of them is D" }, { "end": 300.2, "start": 293.28, "text": " dimensional. So each of them is going to be multiplied by this thing and giving a" }, { "end": 305.64, "start": 300.2, "text": " score. So the final score for a class is going to be the multiplication of a row" }, { "end": 317.59999999999997, "start": 305.64, "text": " W1, W2, W3, W4 by this feature vector. Let's call the feature vector little f." }, { "end": 328.03999999999996, "start": 317.59999999999997, "text": " So your logit of class i is going to be the inner product of W i and f." }, { "end": 334.72, "start": 328.04, "text": " Alright, we'll leave away biases for now. There's okay, we can introduce biases to" }, { "end": 339.40000000000003, "start": 334.72, "text": " make it a bit more complicated but it changes nothing. So your logit is going to" }, { "end": 346, "start": 339.40000000000003, "text": " be the inner product and whichever logit is the highest wins. So that's going" }, { "end": 352.08000000000004, "start": 346, "text": " to be the prediction of the classifier. So since you can, in an" }, { "end": 356.24, "start": 352.08000000000004, "text": " adversarial example, what can you change? You can change this feature vector here," }, { "end": 361.84000000000003, "start": 356.24, "text": " this f. By changing the x you can change the output of the" }, { "end": 367.6, "start": 361.84000000000003, "text": " convolutional neural network which is the feature vector. And what you have to" }, { "end": 374.2, "start": 367.6, "text": " do in order to make a logit as high as possible, basically make one class" }, { "end": 378.96000000000004, "start": 374.2, "text": " as high as possible, is you need to make this inner product as high as possible." }, { "end": 384.52, "start": 378.96000000000004, "text": " And what's an inner product? 
If you look in a classic vector representation" }, { "end": 397.12, "start": 384.52, "text": " space, if this is W i and this is f, what you want to do is you want to make f and" }, { "end": 403.12, "start": 397.12, "text": " W align as much as possible. Because the inner product is going to be basically" }, { "end": 407.91999999999996, "start": 403.12, "text": " dependent on the angle and the magnitude. So you can you can stretch f for sure" }, { "end": 413.03999999999996, "start": 407.91999999999996, "text": " but it's going to be kind of aligned with all the W's then more by stretching" }, { "end": 417.76000000000005, "start": 413.04, "text": " or more negatively in whatever way you want it. But basically you want to rotate," }, { "end": 425.76000000000005, "start": 417.76000000000005, "text": " you want to align as much as possible the f with the W i. So you want to kind" }, { "end": 434.32000000000005, "start": 425.76000000000005, "text": " of go into this direction with f. So now not only do you want to kind of maximize" }, { "end": 440.32000000000005, "start": 434.32000000000005, "text": " f with a particular W i, what you want to do is be adversarial. The adversarial" }, { "end": 446.59999999999997, "start": 440.32, "text": " task is often framed as just it's either targeted, so find a... so i needs to be a" }, { "end": 450.96, "start": 446.59999999999997, "text": " particular other class, or it's untargeted which means just just give me" }, { "end": 457.71999999999997, "start": 450.96, "text": " a perturbation that will make the classifier be fooled. And be fooled means" }, { "end": 464.96, "start": 457.71999999999997, "text": " whatever it predicts right now it should predict something different. So what" }, { "end": 473, "start": 464.96, "text": " you ultimately want to do is you want this as high as possible for some i" }, { "end": 481.15999999999997, "start": 473, "text": " that is not the correct i and you want this other" }, { "end": 492.56, "start": 481.15999999999997, "text": " quantity W y. Let's call it W y. Let's say the classifier is 100% correct." }, { "end": 502.08, "start": 492.56, "text": " So W y, y is the label of x. W y is whatever column here is" }, { "end": 511.64, "start": 502.08, "text": " currently predicted. So you want the sum column where i is not equal to y to" }, { "end": 517.52, "start": 511.64, "text": " have maximum inner product and so this is not no longer l i, we'll get to that," }, { "end": 525.24, "start": 517.52, "text": " to have maximum inner product and you want this inner product" }, { "end": 530.56, "start": 525.24, "text": " with the correct class to be as small as possible, which ultimately means you want" }, { "end": 537.4399999999999, "start": 530.56, "text": " this entire quantity maximized. So it's a pretty simple idea. We'll call, let's say" }, { "end": 542.6, "start": 537.4399999999999, "text": " this is the log i minus the log y. We have slightly different notation in the" }, { "end": 551.52, "start": 542.6, "text": " paper. I think we call this z but never mind. So you basically just want to make" }, { "end": 559.36, "start": 551.52, "text": " this as large as possible. So our point is since this is not the only" }, { "end": 567.44, "start": 559.36, "text": " thing, you want to maximize this but you have a constraint. Namely your constraint" }, { "end": 574.48, "start": 567.44, "text": " is that your delta x can only be small. 
Your delta x can only be small" }, { "end": 578.96, "start": 574.48, "text": " because the point of an adversarial example is that the perturbation is so" }, { "end": 585.44, "start": 578.96, "text": " small you can't see it and that means that you basically don't have much" }, { "end": 593.48, "start": 585.44, "text": " wiggle room to do these perturbations, which means that we should be" }, { "end": 600.24, "start": 593.48, "text": " able to detect a pattern like this in the latent space. So this here is" }, { "end": 608.08, "start": 600.24, "text": " the latent space feature vector and if we can kind of detect a pattern in the" }, { "end": 616.64, "start": 608.08, "text": " latent space then we kind of get the adversarial example detector." }, { "end": 623.6, "start": 616.64, "text": " So how do we do this? We measure exactly this. What we do is we measure the" }, { "end": 631.16, "start": 623.6, "text": " alignment between the original, between the currently predicted class and" }, { "end": 638.36, "start": 631.16, "text": " between all other classes. So in this graphic here you see this. It's a 10" }, { "end": 644.92, "start": 638.36, "text": " class classifier. This is CIFAR10. We only show one, two, three, four, we only show six of the other" }, { "end": 653.52, "start": 644.92, "text": " classes but we have the full graphic in this. So this shows an adversarial" }, { "end": 662.16, "start": 653.52, "text": " example. The axis going on top of each of the images is the alignment with the" }, { "end": 667.16, "start": 662.16, "text": " adversarial class. So this has been an adversarially perturbed sample. So this" }, { "end": 672.0799999999999, "start": 667.16, "text": " shows the alignment with the adversarial class and of course you see the bright" }, { "end": 679.2, "start": 672.08, "text": " red dot, if you just focus on that, that is the adversarial example" }, { "end": 684.5600000000001, "start": 679.2, "text": " projected into this. So of course the alignment is going to be very very" }, { "end": 689.48, "start": 684.5600000000001, "text": " high with this class since the classifier actually predicts this class." }, { "end": 695.84, "start": 689.48, "text": " The blue here is the sample that the adversarial sample was derived from" }, { "end": 702.88, "start": 695.84, "text": " which means the original image. And you already see without looking at any of" }, { "end": 709.0400000000001, "start": 702.88, "text": " the other dots that the blue is around zero here, around zero here, around zero" }, { "end": 715.88, "start": 709.0400000000001, "text": " here, here and here. But here it's very high in this axis. So the axis to the" }, { "end": 722.2800000000001, "start": 715.88, "text": " right is for each of these plots here it's one of the other" }, { "end": 727.12, "start": 722.28, "text": " classes except for the currently predicted adversarial class. So that" }, { "end": 732.4, "start": 727.12, "text": " this axis is always the same axis and while the axis to the right for each" }, { "end": 736.64, "start": 732.4, "text": " plot is a different one. And you can already see, and that's why we frame it" }, { "end": 742.9599999999999, "start": 736.64, "text": " in the green, this plot here is where the axis to the right" }, { "end": 749.3199999999999, "start": 742.9599999999999, "text": " corresponds to the original class of the classifier. So don't look yet at" }, { "end": 756.5200000000001, "start": 749.32, "text": " the other plots. 
What you see here is basically the blue is really high" }, { "end": 763.0400000000001, "start": 756.5200000000001, "text": " in this class right and the adversarial example procedure basically has driven" }, { "end": 770.88, "start": 763.0400000000001, "text": " it down this class and up this class which is exactly saying it has made this" }, { "end": 779.68, "start": 770.88, "text": " inner product small and this inner product large. So where do we go from" }, { "end": 788.96, "start": 779.68, "text": " here? Let's actually jump this graphic a bit and go to this one that's way down" }, { "end": 800.48, "start": 788.96, "text": " here. Alright so what we've done is we've taken the an example just out of the" }, { "end": 808.48, "start": 800.48, "text": " data set right and then we've taken an adversarial example. So say X is the" }, { "end": 815.12, "start": 808.48, "text": " example of the data set and then X hat is the adversarial example derived from" }, { "end": 821.08, "start": 815.12, "text": " this. Alright in this plot to the right X would be sitting about here. I'm gonna" }, { "end": 827.76, "start": 821.08, "text": " explain what the what the the kind of meaning is and X hat would be sitting" }, { "end": 833.04, "start": 827.76, "text": " down here right it's about one third from the top one third from the bottom" }, { "end": 844.2, "start": 833.04, "text": " let me draw this more. Alright so what this axis represents here is basically" }, { "end": 851.64, "start": 844.2, "text": " what we've done is we've gone from X to X hat in very small steps and at each" }, { "end": 859.76, "start": 851.64, "text": " step we've asked the classifier hey classifier what's the probability of the" }, { "end": 869.4, "start": 859.76, "text": " class Y so Y is the class of X right and the class of X X hat is some some" }, { "end": 873.64, "start": 869.4, "text": " different what some some other class right since it's an adversarial example" }, { "end": 880.8, "start": 873.64, "text": " we've so we've asked the classifier what's the class of X and basically no" }, { "end": 885.9599999999999, "start": 880.8, "text": " basically we've asked what what's the probability that Y is the class of X and" }, { "end": 892.28, "start": 885.9599999999999, "text": " that's represented in white right so the more white the higher the classifier" }, { "end": 902.12, "start": 892.28, "text": " thinks the class Y is probable. 
So the direction down is going into the" }, { "end": 912.68, "start": 902.12, "text": " direction of X hat minus X so it's going into the into the adversarial direction" }, { "end": 921.04, "start": 912.68, "text": " and then the direction across is we've taken some direction that's orthogonal" }, { "end": 927.04, "start": 921.04, "text": " to this direction and then also went into tiny steps and asked the classifier" }, { "end": 931.72, "start": 927.04, "text": " hey classifier what do you think is the probability of Y here so we've basically" }, { "end": 938.0400000000001, "start": 931.72, "text": " done this kind of grid sampling and at each point we've asked the classifier" }, { "end": 943.8000000000001, "start": 938.0400000000001, "text": " what do you think which how probable is Y and the classifier will always output" }, { "end": 949.72, "start": 943.8000000000001, "text": " some number and we plot it in white and so this direction again is always the" }, { "end": 955.4, "start": 949.72, "text": " same in this direction we basically randomize it and then aggregate it over" }, { "end": 961.44, "start": 955.4, "text": " over lots and lots of different samples and we also aggregate this entire thing" }, { "end": 968.1600000000001, "start": 961.44, "text": " over the entire over the data set so we get a comprehensive view of what" }, { "end": 973.32, "start": 968.1600000000001, "text": " adversarial examples look like in the view of the classifier and what we find" }, { "end": 981.8000000000001, "start": 973.32, "text": " is pretty interesting so when you go from the original class you basically in" }, { "end": 989.32, "start": 981.8000000000001, "text": " every direction here in every direction it kind of the original class kind of" }, { "end": 994.8000000000001, "start": 989.32, "text": " decreases smoothly right you see at the edges here it kind of gets black so the" }, { "end": 1001.2, "start": 994.8000000000001, "text": " further away you go from the original example that the more the kind of" }, { "end": 1005.6800000000001, "start": 1001.2, "text": " shadier the classifier gets it's like yeah I'm not so sure anymore that this" }, { "end": 1011.5200000000001, "start": 1005.6800000000001, "text": " is the class right but if you go into the direction of here if you go into" }, { "end": 1017.9200000000001, "start": 1011.5200000000001, "text": " the direction of the adversarial example the kind of drop-off is first of all" }, { "end": 1024.08, "start": 1017.92, "text": " it's very steep so all of a sudden here you're in very dark territory which" }, { "end": 1031.56, "start": 1024.08, "text": " means the classifier is doesn't think why is probable at all anymore and" }, { "end": 1039.92, "start": 1031.56, "text": " moreover you get this kind of cone here so what we see is what we what we think" }, { "end": 1046.6, "start": 1039.92, "text": " is happening is that given an example there are these directions in late in in" }, { "end": 1053.56, "start": 1046.6, "text": " in image space basically straight directions that go to adversarial" }, { "end": 1059.1999999999998, "start": 1053.56, "text": " examples right and we call these cones because they they're kind of low" }, { "end": 1066.1599999999999, "start": 1059.1999999999998, "text": " dimensional directions in in the space where the adversarial example lies and" }, { "end": 1078.72, "start": 1066.16, "text": " what's really interesting is we have those plots here do we have more so" }, { "end": 1096.6000000000001, "start": 1078.72, 
"text": " what's what's quite interesting is that if you if you go come on well this is" }, { "end": 1104.92, "start": 1096.6000000000001, "text": " kind of okay the quality of the of the plot is not is not very very good so I'm" }, { "end": 1114.52, "start": 1104.92, "text": " gonna I may be able to to draw this here so if your start here and you go here" }, { "end": 1124.1200000000001, "start": 1114.52, "text": " what happens to the original class is you start out high you go down rapidly" }, { "end": 1132.8000000000002, "start": 1124.1200000000001, "text": " and you stay down even if you go super far into this direction the this class" }, { "end": 1141.9199999999998, "start": 1132.8, "text": " will stay down whereas let's say this is y hat y hat will start low go up and" }, { "end": 1150.8799999999999, "start": 1141.9199999999998, "text": " then kind of fade so here is where the adversarial example would sit sorry at" }, { "end": 1160, "start": 1150.8799999999999, "text": " about this distance that's this distance here means as you go towards the" }, { "end": 1166.12, "start": 1160, "text": " adversarial example right here the probability of the adversarial class" }, { "end": 1170.8, "start": 1166.12, "text": " rises and the probability of the original class drops then as you go" }, { "end": 1175.28, "start": 1170.8, "text": " further this is what's what's interesting kind of this probability here" }, { "end": 1179.28, "start": 1175.28, "text": " drops which means the classifier is kind of like yeah okay there's too much noise" }, { "end": 1183.56, "start": 1179.28, "text": " now I'm not so sure about this class anymore but the this this class here" }, { "end": 1189.56, "start": 1183.56, "text": " kind of stays low very very long even if you go into this direction so this this" }, { "end": 1194.1599999999999, "start": 1189.56, "text": " gives us kind of a hint that adversarial examples are characterized by specific" }, { "end": 1202.28, "start": 1194.1599999999999, "text": " directions that you go into that you that you can go into and kind of" }, { "end": 1207.96, "start": 1202.28, "text": " suppress the original class and pump the new class up which is kind of exactly" }, { "end": 1217.6799999999998, "start": 1207.96, "text": " what we've claimed with this inner inner product alignment right that the next" }, { "end": 1223.96, "start": 1217.68, "text": " experiment we've done is we've taken this adversarial example here and said" }, { "end": 1231.6000000000001, "start": 1223.96, "text": " well if we go outside if we go into random directions right it's just really" }, { "end": 1235.96, "start": 1231.6000000000001, "text": " this one direction that's problematic if we go into random directions actually we" }, { "end": 1239.48, "start": 1235.96, "text": " should be you know go back to the original class right since it's" }, { "end": 1243.6000000000001, "start": 1239.48, "text": " basically surrounded by the original class this is just one direction and this" }, { "end": 1247.5600000000002, "start": 1243.6000000000001, "text": " here represents all the other directions there are and how many directions are" }, { "end": 1252.76, "start": 1247.56, "text": " there in in pixel space like a lot so we should be able to get back to the" }, { "end": 1258.32, "start": 1252.76, "text": " original class but that's not the case that's we found that's not the case and" }, { "end": 1267.08, "start": 1258.32, "text": " we also found why so I still want to go back to this plot here if you do this if" }, 
{ "end": 1274.3999999999999, "start": 1267.08, "text": " you add noise and this is the noise magnitude here what you'll see is the" }, { "end": 1281.3600000000001, "start": 1274.4, "text": " orange here is the adversarial class so orange will go down down down down down" }, { "end": 1289.76, "start": 1281.3600000000001, "text": " right as you increase the noise the blue is the source class so the blue goes up" }, { "end": 1295.88, "start": 1289.76, "text": " and it goes up faster you see it goes up faster than the green which is the" }, { "end": 1299.72, "start": 1295.88, "text": " highest other class so green is whatever class is not that was there not the" }, { "end": 1305.92, "start": 1299.72, "text": " source but the highest class other than that so the source class goes up quickly" }, { "end": 1312.3600000000001, "start": 1305.92, "text": " but before the source class can overpass the adversarial class which happens back" }, { "end": 1317.16, "start": 1312.3600000000001, "text": " there the highest other class has already kind of taken over so the source" }, { "end": 1323.3600000000001, "start": 1317.16, "text": " class is basically too weak and if you again look at this this plot here if you" }, { "end": 1329.68, "start": 1323.3600000000001, "text": " go with an actual color picker you see that the amount of white here and here" }, { "end": 1338.64, "start": 1329.68, "text": " is is not high enough it's like 0.3 or something out of one or even lower so" }, { "end": 1343.72, "start": 1338.64, "text": " the the kind of source class is not strong enough that by simply adding a" }, { "end": 1354.16, "start": 1343.72, "text": " bit of noise you can go back but we thought hey if this is correct we can" }, { "end": 1360.76, "start": 1354.16, "text": " actually detect we can detect this effect here this rising of the source" }, { "end": 1368.64, "start": 1360.76, "text": " class faster so our plan is basically we add noise a particular amount of noise" }, { "end": 1375.52, "start": 1368.64, "text": " just a little bit actually and then we detect which basically which class falls" }, { "end": 1381.44, "start": 1375.52, "text": " and which class rises and the way we do this is we we detect the this exact" }, { "end": 1391.6000000000001, "start": 1381.44, "text": " alignment that I've described before under noise so we form this quantity" }, { "end": 1399.24, "start": 1391.6000000000001, "text": " here for all classes other than y so y is the the class that's currently" }, { "end": 1408.72, "start": 1399.24, "text": " predicted and we look at it what happens under it under noise right so and that's" }, { "end": 1418.48, "start": 1408.72, "text": " where we get to this graphic here so again this axis is the adversarial class" }, { "end": 1424.44, "start": 1418.48, "text": " or the class that's currently predicted right this axis here is all the other" }, { "end": 1430.68, "start": 1424.44, "text": " classes for each plot one and when we add noise what do you see is the noise" }, { "end": 1435.28, "start": 1430.68, "text": " magnitude is encoded in the brightness of the dots so the darker the red dots" }, { "end": 1442.36, "start": 1435.28, "text": " the more noise we've added here is the original adversarial sample then as we" }, { "end": 1450.2, "start": 1442.36, "text": " add noise you see here here more noise more noise more noise it nothing's" }, { "end": 1457.2, "start": 1450.2, "text": " really happening for the for the if if if it's like one class that has nothing" }, { "end": 1463.16, 
"start": 1457.2, "text": " to do with the original class it simply kind of goes down simply kind of gets" }, { "end": 1470.4, "start": 1463.16, "text": " less sure about this class right but in case of the original class that the" }, { "end": 1477.2, "start": 1470.4, "text": " adversarial example was derived from it really rises it really kind of at the" }, { "end": 1482.3600000000001, "start": 1477.2, "text": " same time that it drops it rises into that direction so we're able to measure" }, { "end": 1489.4, "start": 1482.3600000000001, "text": " these these deltas here under noise and we're able to to devise basically" }, { "end": 1496.24, "start": 1489.4, "text": " statistics of what happens to these quantities under like if it's not an" }, { "end": 1499.6000000000001, "start": 1496.24, "text": " adversarial sample versus what happens to these quantities if it's an adversarial" }, { "end": 1504.3200000000002, "start": 1499.6000000000001, "text": " sample so here you see pairings of basically source class and adversarial" }, { "end": 1508.8000000000002, "start": 1504.3200000000002, "text": " class samples so each of these histograms is collected from that and" }, { "end": 1517.68, "start": 1508.8000000000002, "text": " what you can see is in blue the kind of alignment under noise of the source" }, { "end": 1524.92, "start": 1517.68, "text": " class sorry the alignments under noise of a non perturbed sample and in orange" }, { "end": 1530.3200000000002, "start": 1524.92, "text": " the alignments under noise of an adversarial sample and what's cool is" }, { "end": 1536.52, "start": 1530.3200000000002, "text": " that these these alignments you can see in all of these cases are very different" }, { "end": 1541.0800000000002, "start": 1536.52, "text": " so there is a clear signature in the adversarial sample in these noise" }, { "end": 1549.56, "start": 1541.08, "text": " induced alignments with the with the weight matrix rows that makes you able" }, { "end": 1555, "start": 1549.56, "text": " to basically build a detector you can say all right anything to the left is" }, { "end": 1559.6, "start": 1555, "text": " clean anything to the right is adversarial and we can do this over many" }, { "end": 1565.6399999999999, "start": 1559.6, "text": " different types of noises and then build basically a voting mechanism on that and" }, { "end": 1571.88, "start": 1565.64, "text": " thereby detect adversarial examples so we have a bunch of experiments we mostly" }, { "end": 1584.5200000000002, "start": 1571.88, "text": " experiment on the c410 and on the image net data set and you can see over here" }, { "end": 1591, "start": 1584.5200000000002, "text": " so this is the main kind of one of the main results the detection rates of our" }, { "end": 1597.76, "start": 1591, "text": " statistical test so as you can see we are detection rate this is on clean" }, { "end": 1601.6, "start": 1597.76, "text": " samples on clean samples you want the detection rate to be low on adversarial" }, { "end": 1607.96, "start": 1601.6, "text": " samples you want the detection rate to be high and this we achieve very large" }, { "end": 1616.44, "start": 1607.96, "text": " detection rates while having very low false positive rates especially on image" }, { "end": 1621.3600000000001, "start": 1616.44, "text": " net so it seems like the more tuned these models are the better these models" }, { "end": 1625.64, "start": 1621.3600000000001, "text": " are the better we are at detecting adversarial examples to it it's kind of" }, { 
"end": 1632.92, "start": 1625.64, "text": " a direct correlation to how well the models perform on accuracy in a clean" }, { "end": 1640.0800000000002, "start": 1632.92, "text": " setting and what we can do is now since we cannot only detect these things but" }, { "end": 1646.9199999999998, "start": 1640.08, "text": " we can detect these things in a fashion so if if you look at these things and" }, { "end": 1652.84, "start": 1646.9199999999998, "text": " you have like a sample of a particular class that's predicted right let's say" }, { "end": 1657.36, "start": 1652.84, "text": " this class and you go and look at it at the position of the noise induced" }, { "end": 1665.6, "start": 1657.36, "text": " features over each of them so let's say here here here here here here here here" }, { "end": 1672.4399999999998, "start": 1665.6, "text": " here right you can then clearly say well not only do I detect an adversarial" }, { "end": 1678.56, "start": 1672.4399999999998, "text": " example here right I look at the I look at each of the class of the classes that" }, { "end": 1685.76, "start": 1678.56, "text": " it could be derived from right if all if all of them say it's a clean sample then" }, { "end": 1689.7199999999998, "start": 1685.76, "text": " all right it's a clean sample but if one of them says it's an adversarial sample" }, { "end": 1694.84, "start": 1689.7199999999998, "text": " then I don't not only do I know it's an adversarial sample but I say aha this" }, { "end": 1701.56, "start": 1694.84, "text": " must be the source class right this is the exact effect we saw here all right" }, { "end": 1711.6399999999999, "start": 1701.56, "text": " we can if we detect this pattern here we can also back deduce basically aha so" }, { "end": 1718.8, "start": 1711.6399999999999, "text": " this must be the original class that the adversarial example was derived from so" }, { "end": 1723.24, "start": 1718.8, "text": " we're basically able to build a not only a detector but we're basically able to" }, { "end": 1728.6, "start": 1723.24, "text": " reconstruct the original class and here you see for these models let's say on" }, { "end": 1733.64, "start": 1728.6, "text": " CIFAR-10 we imagine that is a bit too large as of yet for our compute but" }, { "end": 1739.28, "start": 1733.64, "text": " on these models that have clean accuracies that are pretty high on CIFAR-10" }, { "end": 1745.32, "start": 1739.28, "text": " plus this this kind of toy network here we're able to reconstruct the original" }, { "end": 1751.76, "start": 1745.32, "text": " class so basically this is defense against adversarial examples by by" }, { "end": 1757.8, "start": 1751.76, "text": " getting to almost clean accuracy back so this is a really surprising actually and" }, { "end": 1767.84, "start": 1757.8, "text": " kind of nice so we we do a bunch of other experiments including we defend" }, { "end": 1774.4, "start": 1767.84, "text": " against an attacker that's actually aware of this thing but the main the" }, { "end": 1779.8799999999999, "start": 1774.4, "text": " main point here is we don't say this is kind of the end-all method of defending" }, { "end": 1783.8400000000001, "start": 1779.88, "text": " against adversarial examples we simply want to kind of encourage the way of" }, { "end": 1790.0800000000002, "start": 1783.8400000000001, "text": " thinking of of these kind of noise what what if you what if you noise induce" }, { "end": 1797.16, "start": 1790.0800000000002, "text": " perturbations how does your network react to 
that can you can you detect" }, { "end": 1804.3600000000001, "start": 1797.16, "text": " these effects here can you detect effects like this and are these" }, { "end": 1809.24, "start": 1804.3600000000001, "text": " unavoidable or are there architectures are there architectures we can basically" }, { "end": 1814.84, "start": 1809.24, "text": " build such that adversarial examples have no chance except doing something" }, { "end": 1819.84, "start": 1814.84, "text": " like this which we can then easily detect all right so that was a bit of an" }, { "end": 1840, "start": 1819.84, "text": " introduction if you like it check out the entire paper and goodbye" } ]
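The detection procedure walked through in the segments above — class scores are inner products between the feature vector and the columns of the last-layer weight matrix, and under small input noise the alignment with an adversarial example's original (source) class rises back up much faster than the alignments with unrelated classes — can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: `phi` is a stand-in for the network's feature extractor, `W` for its last-layer weight matrix, and the per-class-pair `thresholds` would have to be estimated from the clean-sample statistics described in the talk; the talk additionally aggregates several noise types with a voting scheme, which is omitted here.

```python
import numpy as np

def logits(W, f):
    # Score for class i is the inner product of weight column w_i with the feature vector f.
    return W.T @ f

def noise_induced_shift(phi, W, x, sigma=0.05, n_samples=64, rng=None):
    # For the currently predicted class y_hat, measure how the logit gaps
    # z_i - z_{y_hat} move on average when small Gaussian noise is added to x.
    # For an adversarial input, the gap towards the source class tends to rise
    # distinctly, mirroring the histograms discussed in the transcript.
    rng = np.random.default_rng() if rng is None else rng
    z_clean = logits(W, phi(x))
    y_hat = int(np.argmax(z_clean))
    gaps_clean = z_clean - z_clean[y_hat]
    shifts = np.zeros_like(gaps_clean)
    for _ in range(n_samples):
        x_noisy = x + sigma * rng.standard_normal(x.shape)
        z_noisy = logits(W, phi(x_noisy))
        shifts += (z_noisy - z_noisy[y_hat]) - gaps_clean
    return shifts / n_samples, y_hat

def detect(shifts, y_hat, thresholds):
    # Flag the input as adversarial if, for some class i != y_hat, the noise-induced
    # shift exceeds a threshold calibrated on clean data; that class i is then the
    # candidate source class the adversarial example was derived from.
    for i, delta in enumerate(shifts):
        if i != y_hat and delta > thresholds[i, y_hat]:
            return True, i
    return False, y_hat
```

A typical use would be to calibrate `thresholds` per (source, predicted) class pair on held-out clean images and then run `noise_induced_shift` followed by `detect` on each incoming input.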
jltgNGt8Lpg
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Neural Ordinary Differential Equations
[ "Science & Technology" ]
[]
https://arxiv.org/abs/1806.07366 Abstract: We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models. Authors: Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud
Hello and welcome. Today we're going to look at Neural Ordinary Differential Equations by Ricky Chen, Yulia Rubanova, Jesse Bettencourt and David Duvenaud. This has been quite an interesting kind of paper to see because it's a bit special. We're going to go over parts of it, not the full paper, just the important parts, because the paper is quite packed and we'd rather explain it in parts and kind of get the gist of it. So basically what they do is they say we introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black box differential equation solver. These continuous depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. It sounds awesome, honestly. It sounds really cool and it sounds really new. Let's jump in. What they say is let's look at classic neural networks, especially residual neural networks. What residual neural networks do is in each hidden layer they kind of have a representation. This is their hidden representation at layer t. What they do is they then add something. If you don't know, a residual neural network is where you have, let's say this is your hidden state ht, and in a classic neural network you would just have a weight matrix here. You do a matrix multiplication to get ht plus 1. So to get the next hidden state you do a matrix multiplication by a big weight matrix here w. In a residual neural network what you do is you have this weight matrix w, you multiply it to get delta ht plus 1 and you take ht and you add the two. You add ht and delta ht plus 1 to arrive at ht plus 1. That's a residual network. It basically doesn't learn the transformation to the next layer but it learns how the next representation is different from this representation. What do I need to add to this representation to get to the next representation? It's reasoned that for deep networks, since each layer only does a little bit of transformation, we should basically bias it towards keeping the representation the same and just changing it a little bit. So this is the inherent bias, the identity transform. So that's a residual network. This here is characterized by f of theta and ht. So this is what we called delta h. It's now called f. So this f would be the neural network layer and theta would be the parameters of it. So the weight matrix in our case. They say okay, what if you do many of those? So they say basically what this is is kind of a time process. You have a state and the next state and the next state, and you always learn how to go from one state to the next state and so on. What if you go very deep, and what if you look at this as a time process and make these steps very small? Make these super small, and basically what if you have many, many, infinitely many layers? They say well okay, this then becomes a dynamic process, basically an ordinary differential equation, where I say okay, my time is now continuous and I look at it as a local linearization basically, and I say okay, I basically specify how to get from this time to the next instance of time.
The next instant the next infinitesimally small instance of time by specifying this f and in the continuous case this is to say that the derivative of the hidden state is now parameterized by a neural network. So if you know what a differential equation is it has like a start state and then what you do is you specify how at each point in time so that's t at each point in time how does the gradient look so maybe the gradient looks like this and then what an ODE solver will do is the ODE solver will say okay the gradients we're gonna do an infinite small step in this direction and then it goes back to f. What's the gradient at this infinitely small step next in time and then f would say well the gradient is like this and then the ODE solver will go like okay I need to be a little bit flatter so I go here so what's the gradient at this time okay maybe it's up this I need to go up here so the ODE solver will kind of construct a curve and at each point it needs to look that whatever f says is the gradient is actually the gradient right if this is the gradient this is the gradient this is the gradient so that's that's kind of how an ODE works and that's they say okay you can actually look at residual networks here as being a discrete time analog to such an ODE so what we want to do is actually we want to specify we want to actually and this is the the crazy part right or the cool part is we want to do this for neural networks basically we simply specify an ODE and the start state here the start state is let's say if you want to build an MNIST classifier it's our it's our image right the start state is our MNIST image and we're simply training a neural network such that the ODE that the equation if you solve it the curve at the end will arrive at the correct class I mean that's that's I'm skipping a few parts here about dimensionalities and so on right because you need to keep in the same dimension but in essence they say here we start out with our input and we train the neural network to give us the correct gradients the correct derivatives of this curve at each point in time such that when you solve the ODE at the end point you are going to be at the correct label so that's this is the input to your task basically and this is the output right but instead of having a neural network go from input to output you have a neural network that parameterizes how you go from each step in time to the next one what's what's the gradient at each point in time that's that's the kind of gist of it and that's that's kind of really cool it's a really new approach alright so they give various advantages of this and so here is this demonstrated again right you are here this is your input and you want to go to the output and then the loss of the loss that you specify it can depend on kind of either on the output as in like an image classifier or it can depend on intermediate states this is it's kept general right so the way they go about it is they say well okay but so the neural network now specifies how to get from one step to the next right here and the neural network has parameters right so we we need to train this network such that the correct output is given to some input right we actually need to train it so we need to we need to some how way to train these parameters theta and they say okay we do gradient descent on theta like in a classic neural network but now we need it's not it's not so easy right it's not one pass through this function it's like infinitely many passes through this function until you 
arrive here, and then you basically need to somehow get a gradient with respect to these parameters here. So they say this again: the loss, this is the loss of the end state, right, is the loss of the start state plus the integral over time of this derivative, which is basically this curve, and the curve is given by an ODE solver where we input all these things. So we need gradients with respect to that. How do we do that? And they give a way here of saying okay, we could back propagate through the ODE solver, but that would, you know, depend on the ODE solver and so on, but there's another method, what's called the adjoint. So this is reverse mode differentiation of an ODE solution: the adjoint sensitivity method solves an augmented ODE backwards in time. So basically what you need to do is you forward propagate, you come here, right, and then what you can do is you can solve a second ODE, so you can generate a second curve here, this one, and don't worry about these little jumps here, you can solve the second curve, and together with the first curve you can then compute the gradients you need. Right, so the second curve is basically simply something like the application of the chain rule to the continuous domain, and you need to adjust these jumps here only when your loss depends on intermediate states; this is kind of the offset caused by including or not including the loss. So let's dive a bit further into this adjoint state. What's the red curve? The red curve is called a. And what's a? a is a curve, and this is the differential equation for it. Again, we specify the curve a by specifying its start state and its derivative, and from its start state and its derivative at each time the ODE solver is able to construct the curve entirely. So a of t, it says here, is del L to del z t. This means how does the loss depend on this z t, on the hidden state, right, how does the loss depend on the hidden state at time t. So it doesn't even have to be any of these points here, how does the loss depend on this hidden state here. And in order to find that out you would need to develop the curve until here, right, and then calculate the loss and then back propagate through here, but you can do this by calculating this adjoint thing. So as you can see, here is a demonstration, it's an example, right. So the start state here is simply given by the loss: how does the loss depend on this state? Well, simply by plugging it into the loss equation, right, so your loss might be a cross entropy loss or something. How does the loss depend on this state here? Well, we go from this state that we already know, and we know how, in reverse time, so backwards in time, this sensitivity of the loss develops. So we go and we develop this curve until here and we say aha, this point influences this loss by this much, basically, right. And if the loss explicitly depends on this point then we have to calculate in this offset, since this point here only depends on this time up till here and then it changes, so there is a discontinuity, but don't worry about that too much. Basically what we can do is we can calculate the curve and the loss in a forward pass, then we can do a second pass backward, again by an ODE solve, to say how does the loss depend on each one of the states here, of the hidden states, right. So that's the second point but
that's not all, because we're ultimately not interested in how the loss depends on the state, we're interested in how the loss depends on these parameters that tell us how to get from one hidden state to the next. But luckily we can then simply evaluate this integral that depends, as you can see here, on a and on z, we can evaluate this and get the gradients for the parameters, right. So I also have to say the parameters are static, so the parameters are given over the entire duration of this, they're the same, and simply what changes is time. Alright, so this is how you can get gradients with respect to parameters, and the cool thing is now you can actually train this neural network here that tells you how to go from one state to the next, such that if you input the digit 2 as an image, well, you can output 2, I mean not exactly, but that's the point, right, you can, by going through this motion, by going through this ODE solve. So that's, I mean that's immensely cool. They actually define how to do this here: in one forward, one kind of backward pass you can solve everything at the same time, it's pretty cool. And they evaluate their net and they compare it with a bunch of other nets, and they interestingly show that, so basically with an ODE solver you can never kind of tell how many evaluations it's going to do, because it's going to get increasingly accurate over time. So you let it run and maybe it's going to first generate a curve that's like something like this, right, and then it needs to say, crap, okay, I need to go back and refine, and then it maybe goes the curve like this and so on, so it gets continually closer over time, and for that it needs to kind of query this f, so you need to give it the function as a function that it can evaluate, and it goes, okay, I need to know it here, okay, I got it from here, okay, I need to know it here, okay, I got it from, oh no, I didn't get it, okay, I also need to know it here, alright. And so you can never know how much they will evaluate, but you basically have a parameter to trade off accuracy and how much they evaluate. That's what they show here: so the less error they want in their forward pass, the more forward passes they have to do, that's this curve; the more forward passes they do, the more time they have to invest, right, that's this curve. But interestingly, as the evaluations required for forward passes increase, the evaluations required for backward passes also increase, but not by much, so that the backward passes require about half the amount of evaluations as forward passes, which is encouraging, since the backward passes don't go kind of overboard like if you had to back propagate through the operations of the ODE solver itself. And they also show, as your training epoch continues, that the ODE solver requests more and more evaluations for the same samples within different epochs, which means as it gets more accurate it kind of needs to know more and more about the samples, basically about the training samples, which is all basically showing that this kind of works. Yeah, so then they kind of get into normalizing flows, which I don't want to get into here much because we haven't done a video on that yet, we'll do one, but they basically show that it's quite easy to do normalizing flows in a continuous
fashion and the topic normalizing flows it's in itself pretty cool what they do at the end is they say okay what we can now do is we can actually take sequential data so now we've just talked about let's input one data point get out let's say a label or something which we can actually do sequential data and let's for example have an RNN encoder for our sequential data so here here these are data points right these are measurements like a blood pressure of a of a person and what we can do is we can do a variational autoencoder we've talked about this we can have an RNN encoder parameterize a distribution and then as a decoder have this ODE neural network and basically what that allows us to do is that allows us to deal with time steps that are not regularly sampled and so we can extrapolate from the data point at time yeah times not regular samplings like or with RNNs you basically forced to have always the same time step difference otherwise you have a very tough time but with this since these are continuous flows you're basically you can basically unroll them and evaluate them at whatever time you want so they have pretty cool experiments here where they kind of try to learn these kind of spiraling behaviors and you see on top the RNN decoder will get all jaggy and so on where as the so so basically as the the neural ordinary differential equation will generate quite let's say smooth things and also it can extrapolate as you can see here it can it can go the red the red thing is the extrapolation only there's only data where the green dots are so that's pretty cool you can see the RNN sometimes isn't able to kind of continue the flow as you can see in here it extrapolates wrongly so the this kind of I mean it's toy it's a toy example but these kind of dynamics are pretty cool and they also show here when they learn the spirals and vary one dimension of the latent code that is given by the encoder then the flow goes from clockwise it goes from to to counter clockwise as you see here I've turned this in I've drawn this in wrong but so it's pretty pretty cool what these these things learn and since it's only small data right now small models but I'm pretty sure this is going to develop further and be a cool just a cool way cool alley of research cool idea and looking forward to what they come up next alright so that was it for today a bit shorter but I hope this was somewhat clear enough all right have a great day
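To make the residual-network-as-ODE picture in the transcript above concrete, here is a minimal numerical sketch, not the paper's implementation: a residual block computes h_{t+1} = h_t + f(h_t, t, theta), which is exactly one fixed-size (Euler) step of the differential equation dh/dt = f(h, t, theta), while the paper hands f to a black-box adaptive solver that picks its own evaluation points and obtains gradients by solving the adjoint ODE backwards in time instead of backpropagating through the solver's internal operations. The tiny tanh layer standing in for f and all names below are hypothetical.

```python
import numpy as np

def f(h, t, theta):
    # Learned derivative dh/dt = f(h, t, theta); a one-layer tanh network as a stand-in.
    W, b = theta
    return np.tanh(W @ h + b + t)

def resnet_forward(h0, theta, n_layers):
    # Residual network: one additive update per layer, h_{t+1} = h_t + f(h_t, t, theta).
    h = h0
    for t in range(n_layers):
        h = h + f(h, t, theta)
    return h

def euler_odeint(h0, theta, t0, t1, n_steps):
    # Continuous-depth view: integrate dh/dt = f(h, t, theta) from t0 to t1 with
    # fixed Euler steps. An adaptive solver (e.g. Runge-Kutta with error control)
    # would instead choose its own step sizes, hence its own number of f evaluations.
    h, t = h0, t0
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        h = h + dt * f(h, t, theta)
        t = t + dt
    return h

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 4
    theta = (0.1 * rng.standard_normal((d, d)), np.zeros(d))
    h0 = rng.standard_normal(d)
    # With step size dt = 1 the Euler integration reproduces the residual network exactly.
    print(resnet_forward(h0, theta, n_layers=10))
    print(euler_odeint(h0, theta, t0=0.0, t1=10.0, n_steps=10))
```

Shrinking the step size (more steps over the same interval) is the "infinitely many layers" limit the video describes; swapping the hand-rolled Euler loop for an off-the-shelf adaptive ODE solver gives the trade-off between numerical precision and number of function evaluations discussed above.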
[ { "end": 5, "start": 0, "text": " Hello and welcome. Today we're going to look at Neural Ordinary Differential" }, { "end": 13.280000000000001, "start": 5, "text": " Equations by Rick Chen, Julia Rubinova, Jesse Bettencourt and David Dovenoe." }, { "end": 17.56, "start": 13.280000000000001, "text": " This has been quite an interesting kind of paper to see because it's a bit special." }, { "end": 22.400000000000002, "start": 17.56, "text": " We're going to go over parts of it, not the full paper, just kind of the" }, { "end": 28.28, "start": 22.400000000000002, "text": " important parts because the paper is quite packed and we'd rather" }, { "end": 35.32, "start": 28.28, "text": " explain it in parts and kind of get the gist of it. So basically what they do is" }, { "end": 40.8, "start": 35.32, "text": " they say we introduce a new family of deep neural network models. Instead of" }, { "end": 44.64, "start": 40.8, "text": " specifying a discrete sequence of hidden layers we parameterize the" }, { "end": 49.24, "start": 44.64, "text": " derivative of the hidden state using a neural network. The output of the network" }, { "end": 53.44, "start": 49.24, "text": " is computed using a black box differential equation solver. These" }, { "end": 57.040000000000006, "start": 53.44, "text": " continuous depth models have constant memory cost, adapt their evaluation" }, { "end": 62.12, "start": 57.04, "text": " strategy to each input, can explicitly trade numerical precision for speed." }, { "end": 66.48, "start": 62.12, "text": " It sounds awesome, honestly. It sounds really cool and it sounds really new." }, { "end": 76, "start": 66.48, "text": " Let's jump in. What they say is let's look at kind of classic neural" }, { "end": 79.32, "start": 76, "text": " networks, especially residual neural networks. What residual neural" }, { "end": 85.16, "start": 79.32, "text": " networks do is in each hidden layer they kind of have a representation." }, { "end": 91.39999999999999, "start": 85.16, "text": " This is kind of their hidden representation at layer t. What they do" }, { "end": 98.24, "start": 91.39999999999999, "text": " is they then add something. If you don't know a recurrent neural network" }, { "end": 105.64, "start": 98.24, "text": " is where you have, let's say this is your hidden state ht, and in a classic neural" }, { "end": 110.67999999999999, "start": 105.64, "text": " network you would just have a weight matrix here, blah blah blah blah. You do a" }, { "end": 117.84, "start": 110.68, "text": " matrix multiplication to get ht plus 1. So to get the next kind of the next" }, { "end": 121.16000000000001, "start": 117.84, "text": " hidden state you do a matrix multiplication by a big weight matrix" }, { "end": 128.28, "start": 121.16000000000001, "text": " here w. In a residual neural network what you do is you have this" }, { "end": 139.20000000000002, "start": 128.28, "text": " weight matrix w, you multiply it to get delta ht plus 1 and you take ht and you" }, { "end": 146.04, "start": 139.2, "text": " add the two. You add ht and delta ht plus 1 to arrive at ht plus 1." }, { "end": 150.56, "start": 146.04, "text": " That's a residual network. It basically doesn't learn the transformation to the" }, { "end": 156.23999999999998, "start": 150.56, "text": " next layer but it learns how is the next representation different from" }, { "end": 160.78, "start": 156.23999999999998, "text": " this representation. 
What do I need to add to this representation to get to" }, { "end": 165.88, "start": 160.78, "text": " the next representation? It's reasoned that for deep networks" }, { "end": 170.56, "start": 165.88, "text": " since each layer only does a little bit of transformation we should basically" }, { "end": 175.35999999999999, "start": 170.56, "text": " bias it towards keeping the representation the same and just kind of" }, { "end": 179.84, "start": 175.35999999999999, "text": " changing it a little bit. So this is the inherent bias, the identity" }, { "end": 183.76, "start": 179.84, "text": " transform. So that's a residual" }, { "end": 193.12, "start": 183.76, "text": " network. This here is characterized by f of kind of theta and ht. So this" }, { "end": 202.24, "start": 193.12, "text": " is kind of the this is what we called delta h. It's now called f. So this" }, { "end": 207.64000000000001, "start": 202.24, "text": " f would be the kind of neural network layer and theta would be the" }, { "end": 215.52, "start": 207.64000000000001, "text": " parameters of it. So the weight matrix in our case. They say okay what if you do" }, { "end": 221.8, "start": 215.52, "text": " many of those? So they say basically what this is is kind of a time" }, { "end": 225.84, "start": 221.8, "text": " process. It's kind of you have a state and the next state and the next state" }, { "end": 230.32000000000002, "start": 225.84, "text": " and you always learn how to go to the next state to the next state and so on." }, { "end": 235.96, "start": 230.32000000000002, "text": " What if you go very deep and what if you look at this as a time process and" }, { "end": 244.72000000000003, "start": 235.96, "text": " kind of make these steps very small? Make these super small and basically" }, { "end": 252.96, "start": 244.72, "text": " what if you have many many infinitely many layers? I say well okay this" }, { "end": 257.72, "start": 252.96, "text": " then becomes a dynamic process. Basically an ordinary differential" }, { "end": 265.96, "start": 257.72, "text": " equation where I say okay my time is now continuous and I look at it as a" }, { "end": 276.64, "start": 265.96, "text": " linearization as a local linearization basically and I say okay I basically" }, { "end": 282.4, "start": 276.64, "text": " specify how to get from this time to the next instance of time. The next" }, { "end": 289.76, "start": 282.4, "text": " instant the next infinitesimally small instance of time by specifying this f" }, { "end": 296.64, "start": 289.76, "text": " and in the continuous case this is to say that the derivative of the hidden" }, { "end": 305.48, "start": 296.64, "text": " state is now parameterized by a neural network. So if you know what a" }, { "end": 310.32, "start": 305.48, "text": " differential equation is it has like a start" }, { "end": 316.84, "start": 310.32, "text": " state and then what you do is you specify how at each point in time" }, { "end": 321.47999999999996, "start": 316.84, "text": " so that's t at each point in time how does the gradient look so maybe the" }, { "end": 328.2, "start": 321.47999999999996, "text": " gradient looks like this and then what an ODE solver will do is the ODE solver" }, { "end": 332.73999999999995, "start": 328.2, "text": " will say okay the gradients we're gonna do an infinite small step in this" }, { "end": 337.96, "start": 332.73999999999995, "text": " direction and then it goes back to f. 
What's the gradient at this" }, { "end": 344.84, "start": 337.96, "text": " infinitely small step next in time and then f would say well the gradient is" }, { "end": 349.47999999999996, "start": 344.84, "text": " like this and then the ODE solver will go like okay I need to be a little bit" }, { "end": 355.23999999999995, "start": 349.47999999999996, "text": " flatter so I go here so what's the gradient at this time okay maybe it's up" }, { "end": 360.88, "start": 355.23999999999995, "text": " this I need to go up here so the ODE solver will kind of construct a curve" }, { "end": 370.23999999999995, "start": 360.88, "text": " and at each point it needs to look that whatever f says is the gradient is" }, { "end": 375.2, "start": 370.24, "text": " actually the gradient right if this is the gradient this is the gradient this" }, { "end": 383, "start": 375.2, "text": " is the gradient so that's that's kind of how an ODE works and that's they say" }, { "end": 389.68, "start": 383, "text": " okay you can actually look at residual networks here as being a discrete time" }, { "end": 395.8, "start": 389.68, "text": " analog to such an ODE so what we want to do is actually we want to specify we" }, { "end": 400.72, "start": 395.8, "text": " want to actually and this is the the crazy part right or the cool part is we" }, { "end": 406.68, "start": 400.72, "text": " want to do this for neural networks basically we simply specify an ODE and" }, { "end": 416.96000000000004, "start": 406.68, "text": " the start state here the start state is let's say if you want to build an MNIST" }, { "end": 422.56, "start": 416.96000000000004, "text": " classifier it's our it's our image right the start state is our MNIST image and" }, { "end": 430.64, "start": 422.56, "text": " we're simply training a neural network such that the ODE that the equation if" }, { "end": 436.12, "start": 430.64, "text": " you solve it the curve at the end will arrive at the correct class I mean" }, { "end": 440.36, "start": 436.12, "text": " that's that's I'm skipping a few parts here about dimensionalities and so on" }, { "end": 445.76, "start": 440.36, "text": " right because you need to keep in the same dimension but in essence they say" }, { "end": 451.88, "start": 445.76, "text": " here we start out with our input and we train the neural network to give us the" }, { "end": 456.12, "start": 451.88, "text": " correct gradients the correct derivatives of this curve at each point" }, { "end": 461.76, "start": 456.12, "text": " in time such that when you solve the ODE at the end point you are going to be at" }, { "end": 467.8, "start": 461.76, "text": " the correct label so that's this is the input to your task basically and" }, { "end": 473.6, "start": 467.8, "text": " this is the output right but instead of having a neural network go from input" }, { "end": 479.2, "start": 473.6, "text": " to output you have a neural network that parameterizes how you go from each step" }, { "end": 484.84, "start": 479.2, "text": " in time to the next one what's what's the gradient at each point in time" }, { "end": 492.03999999999996, "start": 484.84, "text": " that's that's the kind of gist of it and that's that's kind of really cool it's" }, { "end": 500.2, "start": 492.03999999999996, "text": " a really new approach alright so they give various advantages of this and so" }, { "end": 506.28, "start": 500.2, "text": " here is this demonstrated again right you are here this is your input and you" }, { "end": 513.28, "start": 506.28, "text": " want to go 
to the output and then the loss of the loss that you specify it can" }, { "end": 518.56, "start": 513.28, "text": " depend on kind of either on the output as in like an image classifier or it can" }, { "end": 525.56, "start": 518.56, "text": " depend on intermediate states this is it's kept general right so the way they" }, { "end": 530.28, "start": 525.56, "text": " go about it is they say well okay but so the neural network now specifies how to" }, { "end": 535.04, "start": 530.28, "text": " get from one step to the next right here and the neural network has parameters" }, { "end": 540.92, "start": 535.04, "text": " right so we we need to train this network such that the correct output is" }, { "end": 546.28, "start": 540.92, "text": " given to some input right we actually need to train it so we need to we need" }, { "end": 550.28, "start": 546.28, "text": " to some how way to train these parameters theta and they say okay we do" }, { "end": 553.8399999999999, "start": 550.28, "text": " gradient descent on theta like in a classic neural network but now we need" }, { "end": 561.12, "start": 553.8399999999999, "text": " it's not it's not so easy right it's not one pass through this function it's like" }, { "end": 569.2, "start": 561.12, "text": " infinitely many passes through this function until you arrive here and then" }, { "end": 576.48, "start": 569.2, "text": " if you basically need to somehow get a gradient with respect to these" }, { "end": 580.92, "start": 576.48, "text": " parameters here so they say this again the loss of the this is the loss of the" }, { "end": 589.52, "start": 580.92, "text": " end state right is the loss of the start state plus the the integral over time of" }, { "end": 596.52, "start": 589.52, "text": " this is derivative which is basically this curve and the curve is given by an" }, { "end": 601.76, "start": 596.52, "text": " ODE solver where we input all these things so we need gradients with respect" }, { "end": 607.6, "start": 601.76, "text": " to that how do we do that and they give away here of saying okay we could either" }, { "end": 613.4, "start": 607.6, "text": " kind of back propagate through the ODE solver but that would you know depend on" }, { "end": 619.92, "start": 613.4, "text": " the ODE solver and so on but there's another method there's called what's called the we" }, { "end": 624.64, "start": 619.92, "text": " need the what's called the adjoint so this is reverse mode differentiation of" }, { "end": 629.88, "start": 624.64, "text": " an ODE solution adjoint sensitivity method solves an augmented ODE" }, { "end": 634.84, "start": 629.88, "text": " backwards in time so basically what you need to do is you forward propagate you" }, { "end": 640.88, "start": 634.84, "text": " come here right and then what you can do is you can solve the second ODE so you" }, { "end": 645.56, "start": 640.88, "text": " can generate a second curve here this one and don't worry about these little" }, { "end": 651.2, "start": 645.56, "text": " jumps here you can solve the second curve and the second curve together with" }, { "end": 657.72, "start": 651.2, "text": " the first and second curve you can then compute the gradients you need right so" }, { "end": 664.04, "start": 657.72, "text": " the second curve is is basically simply something like the application of the" }, { "end": 671.68, "start": 664.04, "text": " chain rule to the continuous domain and you need to you need to adjust these" }, { "end": 677.04, "start": 671.68, "text": " jumps here only when your 
loss depends on intermediate states this is this is" }, { "end": 685.1999999999999, "start": 677.04, "text": " kind of the offset caused by including or not including the loss so let's dive" }, { "end": 690.04, "start": 685.1999999999999, "text": " a bit further into this adjoint state what's the red curve the red curve is" }, { "end": 698.8399999999999, "start": 690.04, "text": " called a and what's a a is a curve and this is the differential equation for it" }, { "end": 704.36, "start": 698.8399999999999, "text": " again we specify the curve a by specifying its start state and its" }, { "end": 708.92, "start": 704.36, "text": " derivative and from its start state and its derivative at each time the ODE" }, { "end": 722.3199999999999, "start": 708.92, "text": " solver is able to construct the curve entirely so a t it says here is del L to" }, { "end": 731.52, "start": 722.3199999999999, "text": " del ZT this means how does the loss depend on this ZT on the hidden state" }, { "end": 738.16, "start": 731.52, "text": " right how does the loss depend on the hidden state at time T so it doesn't" }, { "end": 743.36, "start": 738.16, "text": " even have to be any of these points here how does the loss depend on this hidden" }, { "end": 747.6, "start": 743.36, "text": " state here and in order to find that out you would need to go you would need to" }, { "end": 752.4399999999999, "start": 747.6, "text": " develop the the curve until here right and then calculate the loss and then" }, { "end": 758.68, "start": 752.4399999999999, "text": " back propagate through here but you can do this by calculating this adjoint" }, { "end": 765.9599999999999, "start": 758.68, "text": " thing so as you can see here is a demonstration it's an example right so" }, { "end": 773.52, "start": 765.96, "text": " the start state here is simply given by the loss how does the loss of this state" }, { "end": 779.76, "start": 773.52, "text": " how does the loss depend on this state well simply by plugging it into the into" }, { "end": 783.64, "start": 779.76, "text": " the loss equation right so your losses might be a cross entropy loss or" }, { "end": 790.72, "start": 783.64, "text": " something how does the loss do that depend on this state here well we go we" }, { "end": 797.76, "start": 790.72, "text": " go from this state that we already know and we know how in reverse time so" }, { "end": 804.88, "start": 797.76, "text": " backwards in time this sensitivity of the loss develops so we go and we" }, { "end": 813.36, "start": 804.88, "text": " develop this curve until here and we say aha this point influences this loss in" }, { "end": 824, "start": 813.36, "text": " this much basically right so so and if the loss explicitly depends on this" }, { "end": 828.2, "start": 824, "text": " point then we have to we have to calculate in this offset since this" }, { "end": 834.88, "start": 828.2, "text": " point here only depends on this time up till here and then it changes so there" }, { "end": 839.8000000000001, "start": 834.88, "text": " is there's a discontinuation but don't worry about that too much basically what" }, { "end": 851.92, "start": 839.8, "text": " we can do is we can calculate the curve in a forward pass curve and the loss in" }, { "end": 859.12, "start": 851.92, "text": " the forward pass then we can do a second pass backward again by an ODE solve to" }, { "end": 867.52, "start": 859.12, "text": " say how does the how does the loss depend on each one of the states here" }, { "end": 873.28, "start": 867.52, 
"text": " of the hidden states right so that's the second point but that's not all because" }, { "end": 879.12, "start": 873.28, "text": " we're ultimately not interested in the how the loss depends on the state where" }, { "end": 883.84, "start": 879.12, "text": " the we're interested in how the loss depends on these parameters that tell us" }, { "end": 891.04, "start": 883.84, "text": " how to get from one hidden state to the next but luckily we can then simply" }, { "end": 899.92, "start": 891.04, "text": " evaluate this integral that depends as you can see here on a and on Z we can" }, { "end": 908.76, "start": 899.92, "text": " evaluate this and get the gradients for the the parameters right so I also have" }, { "end": 912.88, "start": 908.76, "text": " to say the parameters are static so the parameters are given over the entire" }, { "end": 917.4399999999999, "start": 912.88, "text": " duration of this they're they're the same and it's simply what changes is" }, { "end": 925.5200000000001, "start": 917.44, "text": " time alright so this is how you can get this is how you can get gradients with" }, { "end": 928.5600000000001, "start": 925.5200000000001, "text": " respect to parameters and the cool thing is now you can train these you can" }, { "end": 934.0400000000001, "start": 928.5600000000001, "text": " actually train this neural network here that tells you how to go from one state" }, { "end": 940.7600000000001, "start": 934.0400000000001, "text": " to the next such that if you input the digit 2 as an image well you can output" }, { "end": 948.4399999999999, "start": 940.76, "text": " to I mean not exactly but that's that's the point right you can by by going" }, { "end": 952.68, "start": 948.4399999999999, "text": " through this motion by going through this od solve so that's I mean that's" }, { "end": 957.96, "start": 952.68, "text": " immensely cool they actually define how to do this here in one forward one kind" }, { "end": 961.92, "start": 957.96, "text": " of backward pass you can solve everything at the same time it's it's" }, { "end": 969.24, "start": 961.92, "text": " pretty cool and they evaluate their their net and they compare it with a" }, { "end": 976.32, "start": 969.24, "text": " different bunch of other nets and they interestingly show that so basically" }, { "end": 982.5600000000001, "start": 976.32, "text": " with an od solver you can never kind of tell how many evaluations it's going to" }, { "end": 988.84, "start": 982.5600000000001, "text": " do because it's going to get increasing like it's increasingly accurate over" }, { "end": 994.48, "start": 988.84, "text": " time so you let it run and maybe it's going to first generate a curve that's" }, { "end": 1001.72, "start": 994.48, "text": " like something like this right and then it needs to say crap okay I need to go" }, { "end": 1005.64, "start": 1001.72, "text": " back and refine and then it maybe goes the curve like this and so on so it gets" }, { "end": 1011.6, "start": 1005.64, "text": " continually closer over time and for that it needs to kind of query it's like" }, { "end": 1015.44, "start": 1011.6, "text": " a query it needs to query this this F so you need to give it the function as an" }, { "end": 1020, "start": 1015.44, "text": " invaluable function and it goes and just okay I need to I need to know it here" }, { "end": 1023.64, "start": 1020, "text": " okay I got it from here okay I need to know it here okay I got it from oh no I" }, { "end": 1029.84, "start": 1023.64, "text": " didn't get it 
okay I need also need to know it here all right and so you can" }, { "end": 1034, "start": 1029.84, "text": " never know how much they will evaluate but you basically have a parameter to" }, { "end": 1038.08, "start": 1034, "text": " trade off accuracy and how much they evaluate that's what they show here so" }, { "end": 1043.96, "start": 1038.08, "text": " the the less error they want in their forward pass the more forward passes" }, { "end": 1049.28, "start": 1043.96, "text": " they have to do that's this curve the more forward passes they do the more" }, { "end": 1054.16, "start": 1049.28, "text": " time they have to invest right that's this curve but interestingly the more" }, { "end": 1060.76, "start": 1054.16, "text": " forward passes the time required for forward passes or the evaluations" }, { "end": 1065.6, "start": 1060.76, "text": " required for passes increases also the evaluation required for backward passes" }, { "end": 1069.8, "start": 1065.6, "text": " but not by much so that the backward passes require about half the amount of" }, { "end": 1076.52, "start": 1069.8, "text": " evaluations that's forward passes which is encouraging since the the backward" }, { "end": 1082.8799999999999, "start": 1076.52, "text": " passes don't go kind of overboard like if you had to back propagate through" }, { "end": 1089.56, "start": 1082.8799999999999, "text": " the operations of the ODE solver itself and they also show as your training epoch" }, { "end": 1097.4, "start": 1089.56, "text": " continues that the ODE solver requests more and more evaluations for so for the" }, { "end": 1101.52, "start": 1097.4, "text": " same epoch basically or the same samples within different epochs which" }, { "end": 1107, "start": 1101.52, "text": " means as it gets more accurate kind of needs to know more and more and more" }, { "end": 1112.8799999999999, "start": 1107, "text": " about the the samples basically about the test the training samples which is" }, { "end": 1121.16, "start": 1112.8799999999999, "text": " all basically showing that this kind of works yeah so they they kind of to get" }, { "end": 1125.52, "start": 1121.16, "text": " into normalizing flows which I don't want to get into here much because we" }, { "end": 1129.4, "start": 1125.52, "text": " haven't done a video on that yet we'll do one but they basically show that it's" }, { "end": 1138.44, "start": 1129.4, "text": " it's quite easy to do normalizing flows in a continuous fashion and the topic" }, { "end": 1142.64, "start": 1138.44, "text": " normalizing flows it's in itself pretty cool what they do at the end is they say" }, { "end": 1147.8000000000002, "start": 1142.64, "text": " okay what we can now do is we can actually take sequential data so now" }, { "end": 1151.96, "start": 1147.8000000000002, "text": " we've just talked about let's input one data point get out let's say a label or" }, { "end": 1160.04, "start": 1151.96, "text": " something which we can actually do sequential data and let's for example" }, { "end": 1165.96, "start": 1160.04, "text": " have an RNN encoder for our sequential data so here here these are data points" }, { "end": 1170.1200000000001, "start": 1165.96, "text": " right these are measurements like a blood pressure of a of a person and what" }, { "end": 1174.3600000000001, "start": 1170.1200000000001, "text": " we can do is we can do a variational autoencoder we've talked about this we" }, { "end": 1180.72, "start": 1174.3600000000001, "text": " can have an RNN encoder parameterize a distribution and 
then as a decoder have" }, { "end": 1186.48, "start": 1180.72, "text": " this ODE neural network and basically what that allows us to do is that allows" }, { "end": 1192.96, "start": 1186.48, "text": " us to deal with time steps that are not regularly sampled and so we can" }, { "end": 1202, "start": 1192.96, "text": " extrapolate from the data point at time yeah times not regular samplings like" }, { "end": 1208.44, "start": 1202, "text": " or with RNNs you basically forced to have always the same time step" }, { "end": 1213.68, "start": 1208.44, "text": " difference otherwise you have a very tough time but with this since these are" }, { "end": 1218.3200000000002, "start": 1213.68, "text": " continuous flows you're basically you can basically unroll them and evaluate" }, { "end": 1222.8400000000001, "start": 1218.3200000000002, "text": " them at whatever time you want so they have pretty cool experiments here where" }, { "end": 1228.6000000000001, "start": 1222.8400000000001, "text": " they kind of try to learn these kind of spiraling behaviors and you see on top" }, { "end": 1241.8, "start": 1228.6, "text": " the RNN decoder will get all jaggy and so on where as the so so basically as the" }, { "end": 1249.24, "start": 1241.8, "text": " the neural ordinary differential equation will generate quite let's say" }, { "end": 1256.1999999999998, "start": 1249.24, "text": " smooth things and also it can extrapolate as you can see here it can it" }, { "end": 1261.8400000000001, "start": 1256.2, "text": " can go the red the red thing is the extrapolation only there's only data" }, { "end": 1268.44, "start": 1261.8400000000001, "text": " where the green dots are so that's pretty cool you can see the RNN" }, { "end": 1273.68, "start": 1268.44, "text": " sometimes isn't able to kind of continue the flow as you can see in here it" }, { "end": 1282.68, "start": 1273.68, "text": " extrapolates wrongly so the this kind of I mean it's toy it's a toy example but" }, { "end": 1285.8400000000001, "start": 1282.68, "text": " these kind of dynamics are pretty cool and they also show here when they learn" }, { "end": 1293.1599999999999, "start": 1285.84, "text": " the spirals and vary one dimension of the latent code that is given by the" }, { "end": 1302.8, "start": 1293.1599999999999, "text": " encoder then the flow goes from clockwise it goes from to to counter" }, { "end": 1307.6399999999999, "start": 1302.8, "text": " clockwise as you see here I've turned this in I've drawn this in wrong but so" }, { "end": 1313.56, "start": 1307.6399999999999, "text": " it's pretty pretty cool what these these things learn and since it's only small" }, { "end": 1317.1599999999999, "start": 1313.56, "text": " data right now small models but I'm pretty sure this is going to develop" }, { "end": 1325, "start": 1317.1599999999999, "text": " further and be a cool just a cool way cool alley of research cool idea and" }, { "end": 1329.9199999999998, "start": 1325, "text": " looking forward to what they come up next alright so that was it for today a" }, { "end": 1344.1200000000001, "start": 1329.92, "text": " bit shorter but I hope this was somewhat clear enough all right have a great day" } ]
u1_qMdb0kYU
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
GPT-2: Language Models are Unsupervised Multitask Learners
[ "Science & Technology" ]
[ "gpt2", "transformer", "language model", "deep learning", "nlp", "openai", "security", "translation", "neural network", "attention", "attention mechanism", "unsupervised learning", "controversy" ]
A look at OpenAI's new GPT-2 model and the surrounding controversy. https://blog.openai.com/better-language-models/ Abstract: Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on taskspecific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset - matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations. Authors: Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever
Hi, today we're looking at Language Models are Unsupervised Multitask Learners by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever from OpenAI. This paper has generated a bit of hype in the last few days, so I wanted to go over it, basically take a look at it and at the surrounding, let's say, controversy. So let's actually have a look at the blog post that OpenAI released along with this paper. They say: we've trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering and summarization, all without task-specific training. So this sounds quite suspicious at the beginning, but we're actually going to have a look at how they do this. It sounds really good, being able to do rudimentary translation without any training on translation itself, just by learning a language model. But this continues a trend we've seen recently: the better your language model gets, the better your model basically gets on all these kinds of language tasks. Alright, they go into this and we'll see how they do it. So basically what they've done is they've taken a bigger language modeling dataset, about 40 gigabytes of internet text, as they say here on top, so it's one of the largest unlabeled text datasets there is. And they've also taken one of the largest language models: their largest transformer-based model has 1.5 billion parameters. So they take a huge amount of data and a huge model, they train the model on the data, and what comes out is a giant language model that is able to perform all these cool tasks. So here they have a bit of a sample. The way a language model works is you always try to predict the next word based on what you've already seen, so you query it by giving it some starting words and it needs to continue the text from there. So here is the system prompt on top: "In a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English." And then the model continues: "The scientists named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science. Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved." I mean, you can read on, it's really, really coherent text and it's quite surprising. I think it's slightly cherry-picked, but still, the fact that a model is able to do this is unprecedented, especially since it's a general language model, not one specific to the task of continuing news articles about unicorns or anything.
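To make that "give it some starting words and it continues" mechanic concrete, here is a minimal sketch of sampling a continuation from the small released checkpoint. This assumes the Hugging Face transformers library and the publicly released small GPT-2 model (the `gpt2` checkpoint), neither of which is part of the paper itself, and the sampling settings are illustrative rather than OpenAI's exact ones.

```python
# Minimal sketch: continue a text prompt with the small released GPT-2.
# Assumes: pip install torch transformers (not part of the paper's own code).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # small released checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = ("In a shocking finding, scientists discovered a herd of unicorns "
          "living in a remote, previously unexplored valley in the Andes Mountains.")
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    # Sample a continuation token by token (top-k sampling, similar in spirit
    # to how the blog-post samples were generated).
    output_ids = model.generate(
        input_ids,
        max_length=input_ids.shape[1] + 100,  # generate roughly 100 more tokens
        do_sample=True,
        top_k=40,
        temperature=0.7,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```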
So yeah, they go into these findings, we'll see them in the paper, and they also say that the model can now do all these kinds of tasks, like question answering and reading comprehension, in a zero-shot fashion. At the end they say what it's capable of: AI writing assistants, more capable dialogue agents, unsupervised translation, blah blah blah. They also list a few, let's say, bad implications: generating misleading news articles, impersonating others online, automating the production of abusive or fake content to post on social media, automating the production of spam or phishing content. They liken it to a system called deepfakes, which generates really well-crafted, let's say, videos of people. So they frame it in a way that this could be used for dangerous things, and they say they are only releasing the small version of GPT-2 along with the code. They're not releasing the dataset, the training code, or the weights of the big GPT-2 model, and they do this citing safety concerns. So the community is basically going nuts over this statement, this decision to not release the code or the model or the dataset to the world. If you search on Twitter for GPT-2, everyone basically has an opinion on whether or not this is a good thing, apart from the people testing it out; they've given a selected set of journalists access to an API where they can query the model, so there are some samples flying around, but mostly people are just debating whether or not this is a good thing, and it's just hilarious to go along with it and read all the people having opinions. I've given my opinion as well, just chime in, it's a fun ride. Especially this post here on the machine learning subreddit: "Should I release my MNIST model or keep it closed source, fearing malicious use? Today I trained a 23,000 layer ResNet, got 99% accuracy on MNIST. I'd love to share the model, but I fear it being used maliciously. What if it is used to read documents by the Russians? What are your thoughts?" I mean, in essence it's that, right? It's like, yeah, come on. So I can just give my opinion up front. I think a lot of things came together here, and I think that OpenAI being this kind of initiative, something like it hasn't really existed before: they're not an academic institution, they're not a company, but still, they're researchers, they want to have a career, so there are lots of pressures on them. There's pressure to keep publishing, so I think that's one underlying motivation to not release your model and your code and your dataset: there's a lot of work in this and you might want to get more than one paper out of it, so keeping these things to yourself is basically a surefire guarantee you're going to get another two, three, four, five papers out of this data or model or whatever. It's also a good way to generate press, if you basically say, oh, we're not releasing, but we have this really good model. And there's one thing on Twitter, I probably can't find it anymore, that says something like: step one, my model is state of the art; step two, my model is state of the art but generalizes better to other tasks; step three, my model does the same thing but with fewer parameters; step four, my model is so good I can't even talk about it. So basically I think a lot of things came together: the press this generates, the pressure to get more papers out of it, and genuine security concerns. I think OpenAI was established in a way where, let's say, their charter pretty clearly emphasizes open and ethical AI research, and you have backers like Elon Musk that talk all the time about safety-related issues in AI.
I think there's a lot of pressure on these people to have an ethical component to everything they do. Everything they do gets scrutinized: does this have ethical implications, and where can we stand out from the rest of the community by doing something? It doesn't need to be more ethical, it just needs to be different, with an ethical reasoning behind it. And I think this is it, I think there are a lot of things coming together. I don't think anyone maliciously thought, oh, we're going to do this, it's going to generate so much press, and I don't think anyone actively thought, ah, we'll just keep it secret, we're going to get so many more papers out of it. I honestly think the reasoning was that there's a lot of pressure to do these ethical things even when, if you think about it, there's not much at stake: it's a good language model and it can generate text, but you can also hire people to generate text, to generate fake news, to do phishing and spam, it's just a bit more efficient now. And it's unprecedented that you don't release this research, kind of Cold War style. So it's not really dangerous to release this, and not releasing is just delaying the inevitable, basically. But I think the pressure was on, they made a decision they thought was in line with their values, and I think this just neatly aligns with the other benefits to them. All right, so let's dive into the paper. The paper is actually not too heavy on content. What they basically say at the start is that for a lot of these papers on these tasks, the dominant approach to creating ML systems is to collect a dataset of training examples demonstrating correct behavior, train a system to imitate that behavior, and test its performance on IID samples. They say that this single-task training on single-domain datasets is a major contributor to the lack of generalization observed in current systems. So these language systems don't generalize: a QA system might be trained on a QA task, but as soon as the task is a little bit different, it can't handle it. And even multitask learning, they say, is a promising framework but still nascent, and there are only very few different tasks to train on. So they basically say you need a big unsupervised model, and it will implicitly learn all of these kinds of special tasks. And they say there are approaches that learn these language models but then still require supervised training, basically fine-tuning; this is for example the BERT paper we discussed one or two videos ago, which learns a giant language model but then does fine-tuning for each of these tasks and gets really good results. What they want to do here is simply learn a language model and then investigate whether or not the language model can perform downstream tasks in a zero-shot setting, meaning without any parameter or architecture modification, so no fine-tuning. All right, so what do they do? Basically, for those who don't know what a language model is: if you have a sequence of text, let's say a, b, c, d, e, these are words, or let's take some actual words, "the cat sat on the mat", then a language model is where you remove the end of the sentence at some point and ask the model what comes next. That's a language model; I mean, there are different, more specific kinds of language models, but that's the basic thing, you just ask the model what's next. And you can do a lot of unsupervised training this way, because you don't need a labeled dataset, you simply need a text corpus, and that's basically all they do. They use transformers, which we've also discussed in the Attention Is All You Need paper, so if you don't know what transformers are, go back and look at that.
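As a rough illustration of that training setup (this toy code is my own sketch, not from the paper): the model sees a chunk of text and is trained to predict each token from the tokens before it, which is just a cross-entropy loss over inputs shifted by one position.

```python
# Toy sketch of the language modeling objective: predict token t from tokens < t.
# `model` stands for any autoregressive network (like GPT-2's transformer)
# that maps token ids to next-token logits; its exact signature is assumed.
import torch
import torch.nn.functional as F

def language_model_loss(model, token_ids):
    """token_ids: LongTensor of shape (batch, seq_len) from an unlabeled corpus."""
    inputs = token_ids[:, :-1]    # everything except the last token
    targets = token_ids[:, 1:]    # everything except the first token (shifted by one)
    logits = model(inputs)        # assumed shape: (batch, seq_len - 1, vocab_size)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )

# No labels are needed: the "label" for each position is simply the next word
# in the raw text, which is why a plain 40 GB text dump is enough to train on.
```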
All right, so basically they say a lot of these special tasks, like translation and question answering, can be framed in a language model way. For example, if your text is simply "translate to French:", then the English text, and then the French text, then at test time you leave away the French text and simply ask the language model what comes next. If its input is "translate to French:" plus the English text, this is translation framed as a language model task, because you can specify the task you want the model to do as language itself. So this is quite an interesting approach and one they exploit here. And they basically say, well, in a large and diverse corpus of web pages like the one they collect, there are going to be some websites that do translation from English to French, and the model can learn from that. So in this paragraph they list examples of naturally occurring demonstrations of English-to-French and French-to-English translation found throughout the training dataset; this is how the model could learn. Let's just look at one: "I hate the word 'perfume'," Burr says. "It's somewhat better in French." So there is a way, in a purely unsupervised setting, that the language model could learn this: if you just cross out the French word at the end and ask the model what comes next, the model sees "I hate the word 'perfume'," Burr says. "It's somewhat better in French:", and it has to put something there, and the most logical continuation is the French word for perfume. So that's how they frame translation and these other tasks in a language model way.
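A minimal sketch of what that looks like in code, again assuming the Hugging Face transformers library and the small released checkpoint rather than anything from the paper itself; the exact prompt format below is my illustration, not the one the authors used. The point is that the "task specification" is nothing but text prepended to the input.

```python
# Sketch: zero-shot "translation" by framing the task as plain text continuation.
# Assumes the small released GPT-2 via Hugging Face transformers.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A couple of in-context examples plus the sentence we actually want translated;
# the model is only ever asked "what text comes next?".
prompt = (
    "English: The house is blue. French: La maison est bleue.\n"
    "English: I like cheese. French: J'aime le fromage.\n"
    "English: Where is the train station? French:"
)
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output_ids = model.generate(
    input_ids,
    max_length=input_ids.shape[1] + 20,
    do_sample=False,  # greedy continuation is enough for a short phrase
)
# Print only the newly generated part, i.e. the model's "translation".
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```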
All right, so they talk about the training dataset, which is a major component here. They make a new training dataset because all of the current ones aren't sufficient. They say the most prominent source of diverse, nearly unlimited text is web scrapes such as Common Crawl, and while these archives are many orders of magnitude larger than current language modeling datasets, they have significant data quality issues; a lot of the content is mostly unintelligible, and so on. So they describe how they make a new web scrape which emphasizes document quality: they go on Reddit, basically, and scrape all outbound links from Reddit that have received at least three karma, meaning three upvotes for a post of a link, which basically means that three humans agreed this was a good link. That's how they collect the dataset. The resulting dataset, WebText, contains the text subset of these 45 million links; they then clean it, filter it down, remove some stuff, and end up with a large corpus. Then they talk about how they represent the input, which is byte pair encoding style; it's not exactly byte pair encoding, it's a byte-pair-encoding-inspired encoding. We won't make a video about it by itself, even though it's really interesting, but basically you can think of it as tokenization and pre-processing. Then they show their models, the architectures and hyperparameters. These are their models: this is the smallest one; the second smallest one, they say, is the same size as BERT, the language model by Google that we've looked at; and then the largest one has 1.5 billion parameters, which is huge. They say it's ten times larger than their previous model, so the first one in the table is their previous model, and this now is the GPT-2 model that gets all these nice results. So they do experiments. First they do experiments on language modeling itself: they train on their corpus and then they evaluate on a bunch of other language modeling corpora. So these up here are language modeling corpora, the state of the art is in this row, and then you just compare with the bottom row, their largest model. This is perplexity where it says PPL, and I think this here is accuracy. For perplexity, lower is better, and you can see the previous state of the art was 39 on WikiText-2 and they get to 18. For accuracy, obviously higher is better, so the previous accuracy on LAMBADA was 59 and they get to 63. They basically improve everything except for this One Billion Word corpus, and they also explain why: they say this is the most heavily pre-processed text, and so on. So they are really good at language modeling even though they train on a different dataset. That's the point: they train on their own corpus, then just evaluate on the test sets of these other datasets, and they become better than the models that trained on the training set of that particular task.
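Since perplexity is the headline metric here, a quick worked note (my own addition, not from the video): perplexity is just the exponential of the average per-token negative log-likelihood, so "lower is better" means the model is, on average, less surprised by each next word.

```python
# Perplexity is exp(average negative log-likelihood per token):
# the lower it is, the less "surprised" the model is by held-out text.
import math

def perplexity(token_log_probs):
    """token_log_probs: natural-log probabilities the model assigned to each
    true next token in a held-out text."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# Intuition: a model that spreads its guess uniformly over 39 candidate words
# gets perplexity 39; one that narrows it to ~18 candidates gets 18.
print(perplexity([math.log(1 / 39)] * 100))  # ~39.0
print(perplexity([math.log(1 / 18)] * 100))  # ~18.0
```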
All right, so they do a number of further experiments where they show that the model has implicitly learned a number of different tasks. Let's look at summarization, for example; I just want to show how you can do this. The summarization task is: you get a long text, you need to produce a short text, and that short text is then compared to short texts that humans wrote when asked to summarize the long text, and you get points based on how much your text overlaps with these human texts. So they say: we test GPT-2's ability to perform summarization on the CNN and Daily Mail dataset. To induce summarization, and here's what I found interesting, we add the text "TL;DR:" after the article and generate 100 tokens. Then they say they need to reduce repetition and so on, but basically this is the way you can frame summarization purely by text input. I find this just a really nice way to think about these problems, the fact that instructions for the task can be given as text. This is a very nice example: as input you put the entire article, so here is the CNN article, blah blah blah, it's super long, and then here you put "TL;DR", which, for those who don't know, stands for "too long, didn't read". People use this phrase to indicate that they will write a short summary of whatever came before; they put it at the beginning or at the end of a long text to say to people, okay, if you don't want to read all of this, just read this part, it gives you the gist of it, which is exactly summarization. So if you then take that part away and ask the language model what goes there: throughout the training corpus the language model will have encountered such pieces of text with a TL;DR in them, and it might have learned that whatever comes after it is a short version of whatever is above. Thereby, if you ask the language model what comes next at that point, it might figure out, aha, I need to summarize whatever is above, that's my best shot at answering the question of what comes next. And they get, you know, surprisingly good results from this. They say that on the commonly reported ROUGE 1, 2 and L metrics the generated summaries only begin to approach the performance of classic neural baselines and just barely outperform selecting three random sentences from the article; and while qualitatively the generations resemble summaries, they often focus on recent content from the article or confuse specific details. So this is a task where it kind of worked, but not really. Still, I just find it really interesting how they frame the task and that it sort of works anyway. And that's the gist in all of these tasks, also with translation: they obviously don't get near the performance of a system specifically trained to do the task, but they always show that it kind of works, that it learns something. And their entire point of this paper is to say: well, look, the diversity of tasks the model is able to perform, I would say kind of perform, in a zero-shot setting suggests that high-capacity models trained to maximize the likelihood of a sufficiently varied text corpus begin to learn how to perform a surprising amount of tasks without the need for explicit supervision. So their entire point is: if we train on data varied enough that it spans the entire range of human language expression, the kinds of tasks we want these systems to do will be learned implicitly. Basically, it points to: let's get an even bigger corpus, let's get even bigger models, and we might get even better at these kinds of special tasks and at general language understanding in an unsupervised, zero-shot way. All right, so that was basically it. I've jumped over a lot of points, but I encourage you to look into this, to look into the specific experiments, they're really interesting, the way they framed things. And just shout your opinion around about whether or not this publishing decision is a good thing, it's really funny, I love it, and with that have a good day
[ { "end": 6.5200000000000005, "start": 0, "text": " Hi, today we're looking at language models are unsupervised multitask learners by Alec" }, { "end": 13.52, "start": 6.5200000000000005, "text": " Radford, Jeffrey Wu, Reverend Child, David Luan, Dario Amadai and Ilya Sotskyver from" }, { "end": 20.16, "start": 13.52, "text": " OpenAI. This paper has generated a bit of hype in the last few days, so I wanted to" }, { "end": 27.12, "start": 20.16, "text": " go over it, basically take a look and take a look at the surrounding, let's say controversy." }, { "end": 32.08, "start": 27.12, "text": " So let's actually have a look at the blog post that OpenAI released along with this" }, { "end": 40.22, "start": 32.08, "text": " paper. They say, we've trained a large scale unsupervised language model which generates" }, { "end": 44.96, "start": 40.22, "text": " coherent paragraphs of text, achieves state of the art performance on many language modeling" }, { "end": 50.08, "start": 44.96, "text": " benchmarks and performs rudimentary reading comprehension, machine translation, question" }, { "end": 56.040000000000006, "start": 50.08, "text": " answering and summarization all without task specific training. So this sounds quite suspicious" }, { "end": 61.68, "start": 56.04, "text": " at the beginning, but we're actually going to have to look at how they do this. It sounds" }, { "end": 67.88, "start": 61.68, "text": " really good being able to do a rudimentary translation without any training on translation" }, { "end": 74.92, "start": 67.88, "text": " itself, just learning a language model. But this has been continuing a trend in recent" }, { "end": 81.72, "start": 74.92, "text": " kind of time where we see that the better your language model gets, the better basically" }, { "end": 92.76, "start": 81.72, "text": " your model gets on all these kind of language tasks. Alright, they go into this and we'll" }, { "end": 101.2, "start": 92.76, "text": " see how they do it. So basically what they've done is they've taken kind of a bigger dataset" }, { "end": 107.56, "start": 101.2, "text": " of language model, of language model dataset, which is about 40 gigabytes of internet text," }, { "end": 114.32000000000001, "start": 107.56, "text": " I say this is here on the top. So it's one of the largest kind of text datasets there" }, { "end": 122.64, "start": 114.32000000000001, "text": " is unsupervised. And they also taken one of the largest language models. So they have" }, { "end": 130.88, "start": 122.64, "text": " their largest transformer based model has 1.5 billion parameters. So they take huge" }, { "end": 138.51999999999998, "start": 130.88, "text": " amount of data, huge model, they train this on, they train the model on the data and what" }, { "end": 146.12, "start": 138.51999999999998, "text": " comes out is like giant super language model that is able to perform all these cool tasks." }, { "end": 153.88, "start": 146.12, "text": " So here they have like a bit of a sample. So what they can do is they can basically," }, { "end": 158.12, "start": 153.88, "text": " so the way a language model works is you always try to predict the next word based on what" }, { "end": 164.44, "start": 158.12, "text": " you've already seen. So you kind of query it by giving it some starting words and it" }, { "end": 170.4, "start": 164.44, "text": " needs to continue the text from there. 
So here system prompt on top you see in a shocking" }, { "end": 175.56, "start": 170.4, "text": " finding scientists discovered a herd of unicorns living in a remote previously unexplored valley" }, { "end": 180.8, "start": 175.56, "text": " in the Andes mountains. Even more surprising to the researcher the fact that the unicorns" }, { "end": 190.8, "start": 180.8, "text": " spoke perfect English. And then the model continues. The scientists named their population" }, { "end": 195.84, "start": 190.8, "text": " the population after their distinctive horn, Ovitz unicorn. These four horns silver white" }, { "end": 200.28, "start": 195.84, "text": " unicorns were previously unknown to science. Now after almost two centuries the mystery" }, { "end": 205.48000000000002, "start": 200.28, "text": " of what sparked this odd phenomenon is finally solved. I mean you can even read this it's" }, { "end": 213.76, "start": 205.48, "text": " really, really coherent text and it's quite surprising. I think it's like slightly cherry" }, { "end": 223.48, "start": 213.76, "text": " picked but still the fact that a model is able to do this is unprecedented. Especially" }, { "end": 228.67999999999998, "start": 223.48, "text": " like since it's like a general language model not specific to the task of continuing news" }, { "end": 238.56, "start": 228.68, "text": " articles about unicorns or anything. So yeah they go into these findings we'll see them" }, { "end": 247.36, "start": 238.56, "text": " in the paper and they also say that yeah they can now do all these kind of tasks like question" }, { "end": 255.84, "start": 247.36, "text": " answering reading comprehension in a zero-shot fashion. So at the end here they say what" }, { "end": 262.68, "start": 255.84, "text": " it's capable of. So let's say AI writing assistants more capable dialogue agents unsupervised" }, { "end": 270.12, "start": 262.68, "text": " translation blah blah blah. They also say a few kind of let's say bad implications." }, { "end": 274.6, "start": 270.12, "text": " Generate misleading news articles impersonate others online automate the production of abusive" }, { "end": 281.68, "start": 274.6, "text": " or fake content to post on social media automate the production of spam or phishing content." }, { "end": 287.88, "start": 281.68, "text": " They liken it to a system called deep fakes which generates really well crafted let's" }, { "end": 299.12, "start": 287.88, "text": " say videos of people. So that the kind of they frame it in a way as this could be used" }, { "end": 307, "start": 299.12, "text": " for dangerous things and they say they aren't releasing they're only releasing the small" }, { "end": 317.32, "start": 307, "text": " version of GPT-2 along with the code. They're not releasing the data set training code or" }, { "end": 324.8, "start": 317.32, "text": " the GPT-2 this is the big model the model of weights right. And they do this they cite" }, { "end": 334, "start": 324.8, "text": " safety concerns. So I mean the community basically is going nuts over this statement this decision" }, { "end": 343.56, "start": 334, "text": " not release basically the code or the model or the data set to the world. And so if you" }, { "end": 352.68, "start": 343.56, "text": " search on Twitter for GPT-2 then everyone basically has an opinion of whether or not" }, { "end": 360.64, "start": 352.68, "text": " this is a good thing or not apart from people testing it out. 
So they've given access to" }, { "end": 370, "start": 360.64, "text": " like a selected set of journalists to an API where they can query the model. So there are" }, { "end": 378.91999999999996, "start": 370, "text": " some samples flying around but basically people are just debating whether or not this is a" }, { "end": 387.8, "start": 378.91999999999996, "text": " good thing or not and I mean it's just hilarious to go along with this and to just read all" }, { "end": 397.36, "start": 387.8, "text": " the people having opinions. I mean I've given my opinion as well just chime in it's a fun" }, { "end": 405.44, "start": 397.36, "text": " ride especially like this post here on reddit machine learning says should I release my" }, { "end": 413.88, "start": 405.44, "text": " NIST model or keep it closed source fearing malicious use. Today I trained a 23,000 layer" }, { "end": 421.56, "start": 413.88, "text": " ResNet got a 99% accuracy and MNIST. I'd love to share the model but I fear it being used" }, { "end": 428.08, "start": 421.56, "text": " maliciously. What if it is used to read documents by the Russians? What are your thoughts? I" }, { "end": 439.84, "start": 428.08, "text": " mean yeah this is in essence I mean in essence it's that right. It's like yeah come on. So" }, { "end": 450.4, "start": 439.84, "text": " I can just give my opinion up front. I think a lot of things came together here and I think" }, { "end": 456.17999999999995, "start": 450.4, "text": " that this being OpenAI being kind of this initiative I mean it's never been there before" }, { "end": 462.91999999999996, "start": 456.17999999999995, "text": " they're not an academic institution. They're not a company but still they you know they're" }, { "end": 467.76, "start": 462.91999999999996, "text": " researchers they want to have a career so there's lots of pressures on them. There's" }, { "end": 475.08, "start": 467.76, "text": " pressure to further publish so I think that that's kind of an underlying motivation to" }, { "end": 480.76, "start": 475.08, "text": " not release your model and your code and your data set is you actually you know there's" }, { "end": 485.28, "start": 480.76, "text": " a lot of work in this and you actually might want to get more than one paper out of it" }, { "end": 492.52, "start": 485.28, "text": " so keeping these things to yourself is basically a surefire guarantee you're going to get another" }, { "end": 501.47999999999996, "start": 492.52, "text": " two, three, four, five papers out of this data or model or whatever. It's also a good" }, { "end": 507.59999999999997, "start": 501.47999999999996, "text": " way to kind of generate press if you basically say oh we're not releasing but we have this" }, { "end": 512.8, "start": 507.59999999999997, "text": " really good model and there's one thing on Twitter right I mean you can't probably can't" }, { "end": 518.36, "start": 512.8, "text": " find it but says like step one my model is state of the art step two my model is state" }, { "end": 524.5600000000001, "start": 518.36, "text": " of the art but generalizes better to other tasks step three my model does the same thing" }, { "end": 535.48, "start": 524.5600000000001, "text": " but with fewer parameters and step four my model is so good I can't even talk about it." 
}, { "end": 546, "start": 535.48, "text": " So basically I think a lot of things came together this press generating the pressure" }, { "end": 554.56, "start": 546, "text": " to create more kind of papers out of it and genuinely security concerns. I think being" }, { "end": 562.2, "start": 554.56, "text": " open AI and open AI kind of was established as a way to let's say the demo like their" }, { "end": 568.88, "start": 562.2, "text": " statutes pretty clearly say we want to open AI and research it in ethical use and you" }, { "end": 575.32, "start": 568.88, "text": " have backers like Elon Musk that talk all the time about kind of safety related issues" }, { "end": 581.12, "start": 575.32, "text": " in AI. I think there's a lot of pressure on these people to with everything they do basically" }, { "end": 590.96, "start": 581.12, "text": " have an ethical component. So everything they do is kind of scrutiny to this does this have" }, { "end": 597.8000000000001, "start": 590.96, "text": " ethical implications and where can we basically stand out from the rest of the community by" }, { "end": 602, "start": 597.8000000000001, "text": " doing something it doesn't need to be more ethical just needs to be different with an" }, { "end": 607.52, "start": 602, "text": " ethical reason behind reasoning behind it and this I think this is it I think there's" }, { "end": 613.2, "start": 607.52, "text": " a lot of things coming together I don't I don't think anyone like maliciously thought" }, { "end": 619.04, "start": 613.2, "text": " oh this you know we're gonna do this it's gonna generate so much press or I don't think" }, { "end": 626.5, "start": 619.04, "text": " anyone actively thought ah we'll just keep it secret we're gonna get so much more papers" }, { "end": 632.56, "start": 626.5, "text": " out of it I honestly think that the reasoning was there's a lot of you know a lot of pressure" }, { "end": 638.64, "start": 632.56, "text": " to do this ethical things when there's there's not if you think about it it's yeah it's" }, { "end": 644.02, "start": 638.64, "text": " a good language model and it can generate text but you can also hire you know people" }, { "end": 649.92, "start": 644.02, "text": " to generate text to generate fake news to do phishing and spam it's just a bit more" }, { "end": 656.4399999999999, "start": 649.92, "text": " efficient now right and yeah so it's it's unprecedented that you don't you don't release" }, { "end": 666.0799999999999, "start": 656.4399999999999, "text": " this research kind of cold war style so it's not really dangerous to release this and it's" }, { "end": 671.92, "start": 666.0799999999999, "text": " just delaying the inevitable basically but I think that the pressure the pressure was" }, { "end": 677.88, "start": 671.92, "text": " on and they made a decision that they thought was in line with their values and I think" }, { "end": 686.88, "start": 677.88, "text": " the this just neatly aligns with the underlying the other benefits to them that yeah all right" }, { "end": 692.36, "start": 686.88, "text": " so let's dive into the paper the paper is actually not you know too too much content" }, { "end": 700.64, "start": 692.36, "text": " there what they basically say so far is that a lot of a lot of these kind of papers on" }, { "end": 706.32, "start": 700.64, "text": " these tasks they they say the dominant approach to creating ML systems is to collect a data" }, { "end": 711.2800000000001, "start": 706.32, "text": " set of training examples demonstrate 
correct behavior train a system to imitate test its" }, { "end": 719.2, "start": 711.2800000000001, "text": " performance on in IID samples so they basically say the there's kind of the single task training" }, { "end": 724.32, "start": 719.2, "text": " on single domain data sets is a major contributor to the lack of generalization observed in" }, { "end": 727.72, "start": 724.32, "text": " current systems so they basically say these language systems they don't generalize like" }, { "end": 734.1600000000001, "start": 727.72, "text": " a QA system might be trained on a QA task but it you know has nothing to do with the" }, { "end": 740.52, "start": 734.16, "text": " task is basically a little bit different and even in multitask learning they say multitask" }, { "end": 747.6, "start": 740.52, "text": " learning is a promising framework but also it's kind of say it's still nascent and there's" }, { "end": 754.24, "start": 747.6, "text": " only very few different tasks right to do so they basically say you need basically a" }, { "end": 763.64, "start": 754.24, "text": " big big unsupervised model and it will implicitly learn all of the kind of special tasks and" }, { "end": 773.88, "start": 763.64, "text": " yeah so they say there there are approaches that basically basically learn these language" }, { "end": 781.48, "start": 773.88, "text": " models but then still require supervised training so basically fine-tuning this has been the" }, { "end": 787.6, "start": 781.48, "text": " this is for example the bird paper we discussed in the in the last video or two two videos" }, { "end": 793.48, "start": 787.6, "text": " ago that learns a giant language model but then does fine-tuning for each of these tasks" }, { "end": 799.88, "start": 793.48, "text": " and gets really well what they want to do here is basically simply learn a language" }, { "end": 805.88, "start": 799.88, "text": " model and then investigate whether or not the language model can perform downstream" }, { "end": 812.12, "start": 805.88, "text": " tasks in a zero-shot setting that means without any parameter or architecture modification" }, { "end": 821.2, "start": 812.12, "text": " so no fine-tuning all right so what they do so basically what a language model is if for" }, { "end": 828.84, "start": 821.2, "text": " those who don't know it's it's if you have a sequence of text let's say a b c d e these" }, { "end": 837.96, "start": 828.84, "text": " are words let's act like some actual words the cat sat on the mat and so on and you and" }, { "end": 843.72, "start": 837.96, "text": " you a language model is you kind of remove the end of the sentence at some point and" }, { "end": 853, "start": 843.72, "text": " ask the model what comes next right that's a language model I mean there's different" }, { "end": 858.12, "start": 853, "text": " kinds of language models specific language ones but that's the basic the basic thing" }, { "end": 863.32, "start": 858.12, "text": " so the you just ask the model what's next so you can you can do a lot of unsupervised" }, { "end": 867.48, "start": 863.32, "text": " training because you don't need a label data set for this you simply need a text corpus" }, { "end": 872.48, "start": 867.48, "text": " and that's basically all they do they use transformers which we've also discussed in" }, { "end": 878.6, "start": 872.48, "text": " attention is all you need paper so if you if you don't know what transformers are go" }, { "end": 889, "start": 878.6, "text": " back and look at that yeah all right so 
basically they say a lot of these special tasks like" }, { "end": 895.26, "start": 889, "text": " translation and question answering can be framed in language model way for example if" }, { "end": 902.84, "start": 895.26, "text": " you simply input if this is your text translate to French and then the English text and then" }, { "end": 909.4399999999999, "start": 902.84, "text": " the French text right and then at at test time basically you leave away the French text" }, { "end": 917.96, "start": 909.4399999999999, "text": " you simply ask the language model what comes next right if and its input is translate to" }, { "end": 924.48, "start": 917.96, "text": " French and then English text this is the translation framed as a language model task because you" }, { "end": 931.24, "start": 924.48, "text": " can specify the task that the language allows to do also as language so this is quite this" }, { "end": 937.44, "start": 931.24, "text": " is quite an interesting approach and one they exploit here and they basically say well since" }, { "end": 944.88, "start": 937.44, "text": " in a large and diverse corpus of web pages that they collect here some there is going" }, { "end": 951.48, "start": 944.88, "text": " to be some websites that basically all do translation from English to French and the" }, { "end": 958.6, "start": 951.48, "text": " model can learn from that so here in this paragraph they basically list examples of" }, { "end": 963.4, "start": 958.6, "text": " naturally occurring demonstrations of English to French and French to English translation" }, { "end": 968.84, "start": 963.4, "text": " found throughout the training data set so basically this is this is how the model could" }, { "end": 977.12, "start": 968.84, "text": " learn let's just look at one I hate the word perfume Bursas it's somewhat better in French" }, { "end": 987, "start": 977.12, "text": " right so there's a way in just an unsupervised setting where the language model could learn" }, { "end": 993.36, "start": 987, "text": " right if you just cross out this word at the end and you just ask the model what comes" }, { "end": 1001.84, "start": 993.36, "text": " next right the model sees I hate the word perfume Bursas it's somewhat better in French" }, { "end": 1006.64, "start": 1001.84, "text": " period colon then the model has to put something there and the most logical continuation is" }, { "end": 1012.8, "start": 1006.64, "text": " to put the French word for perfume right so that that's kind of how they frame translation" }, { "end": 1019.56, "start": 1012.8, "text": " and these other tasks in language model way all right so they talk about the training" }, { "end": 1026.74, "start": 1019.56, "text": " data set which is a major component here they say they make a new training data set because" }, { "end": 1031.4, "start": 1026.74, "text": " all of the current ones aren't sufficient they say most prominent source of diverse" }, { "end": 1036.5600000000002, "start": 1031.4, "text": " nearly unlimited text is web scripts such as common crawl while these archives are many" }, { "end": 1040.72, "start": 1036.5600000000002, "text": " orders of magnitude larger than current language modeling datasets they have significant data" }, { "end": 1048.72, "start": 1040.72, "text": " quality issues so to say content are mostly unintelligible and so on so they basically" }, { "end": 1057.76, "start": 1048.72, "text": " describe here how they scrape a new web scrape which emphasizes document quality they go" }, { "end": 1066.64, 
"start": 1057.76, "text": " on reddit basically and scrape all outbound links from reddit that have received at least" }, { "end": 1074.28, "start": 1066.64, "text": " three karma which means that it yeah three up votes for a post of a link which basically" }, { "end": 1084.92, "start": 1074.28, "text": " means that three humans agreed that this was a good link so so they that's that's how they" }, { "end": 1091.16, "start": 1084.92, "text": " collect the data set resulting data set web stack web text contains text subset of the" }, { "end": 1101.1200000000001, "start": 1091.16, "text": " 45 million links they then kind of clean this and scrape it down and remove some stuff and" }, { "end": 1107.8000000000002, "start": 1101.1200000000001, "text": " they they end up with a large corpus right and then they talk about how they represent" }, { "end": 1113.88, "start": 1107.8000000000002, "text": " the input which is byte pair encoding style it's not exactly by parent coding it's a" }, { "end": 1124.2800000000002, "start": 1113.88, "text": " byte pair encoding inspired encoding we won't make a video about this by itself because" }, { "end": 1131.0800000000002, "start": 1124.2800000000002, "text": " it's really interesting but basically it's you can think of it as tokenization and pre-processing" }, { "end": 1139.68, "start": 1131.0800000000002, "text": " right then they say they they show their models so architecture hyperparameters basically" }, { "end": 1145.68, "start": 1139.68, "text": " these are these are their models this is the smallest one this second smallest one they" }, { "end": 1154.3200000000002, "start": 1145.68, "text": " say it's the same size as BERT so the the language model by google that we've looked" }, { "end": 1166.98, "start": 1154.3200000000002, "text": " at and then the largest one 1.5 billion parameters I mean that's huge and yeah they say it's" }, { "end": 1174.56, "start": 1166.98, "text": " ten times larger than the previous so the first one is their previous model and this" }, { "end": 1187.16, "start": 1174.56, "text": " now is this is the GPT-2 model that that gets all these these nice results so they do experiments" }, { "end": 1193.16, "start": 1187.16, "text": " first they do experiments on language modeling itself right so they train on their on their" }, { "end": 1199.72, "start": 1193.16, "text": " corpus and then they evaluate on a bunch of other language modeling corpus so these up" }, { "end": 1209.0800000000002, "start": 1199.72, "text": " here are language modeling corpuses and the state of the art is in this row and then you" }, { "end": 1219.16, "start": 1209.0800000000002, "text": " just look at basically the bottom row compare to their largest model this this is perplexity" }, { "end": 1233.92, "start": 1219.16, "text": " where it says PPL and the I think this here is is is accuracy so perplexity lower is better" }, { "end": 1240.2, "start": 1233.92, "text": " which you can you can see here the previous state of the art was 39 on wiki text 2 they" }, { "end": 1247.16, "start": 1240.2, "text": " get to 18 with accuracy obviously higher is better so the the kind of previous accuracy" }, { "end": 1256.2, "start": 1247.16, "text": " in Lombarda was 59 they get to 63 basically improve everything except for this 1 billion" }, { "end": 1262.3200000000002, "start": 1256.2, "text": " word corpus and they they also explain why they say this is the most heavily pre-processed" }, { "end": 1271.7, "start": 1262.3200000000002, "text": " text and so on so that 
basically they basically are really good at language modeling even" }, { "end": 1276.0800000000002, "start": 1271.7, "text": " though they train on a different data set that's the the point right the point is they" }, { "end": 1280.6799999999998, "start": 1276.08, "text": " train on their own corpus and then they go and just evaluate on the test set of these" }, { "end": 1288.1999999999998, "start": 1280.6799999999998, "text": " of these new of these new tasks and they become better basically than the models that trained" }, { "end": 1296.1599999999999, "start": 1288.1999999999998, "text": " on the training data set of that particular task all right so they they do a number of" }, { "end": 1304.06, "start": 1296.1599999999999, "text": " further experiments where they basically show that the model has now learned kind of implicitly" }, { "end": 1313.6, "start": 1304.06, "text": " learned a number of different tasks so let's look at for example summarization this just" }, { "end": 1318.76, "start": 1313.6, "text": " want to show an example of how you can do this so summarization summarization task is" }, { "end": 1326.48, "start": 1318.76, "text": " you get a long text you need to produce a short text and that short text is then compared" }, { "end": 1334, "start": 1326.48, "text": " to short texts that humans wrote when the task was to summarize the long text and you" }, { "end": 1339.28, "start": 1334, "text": " get points on how much your text overlaps with these human texts all right so they they" }, { "end": 1348.1200000000001, "start": 1339.28, "text": " say we test gpt2's ability to perform summarization on the cnn and daily mail data set to induce" }, { "end": 1356, "start": 1348.1200000000001, "text": " summarization here's what i found interesting we add the text tldr after the article and" }, { "end": 1362.44, "start": 1356, "text": " generate 100 tokens right then they say they need to reduce repetition and so on but basically" }, { "end": 1376.4, "start": 1362.44, "text": " this this this is right this is the way you can frame summarization by text input so i" }, { "end": 1384.48, "start": 1376.4, "text": " find this just kind of a really nice way to think about these problems the fact that instructions" }, { "end": 1390.28, "start": 1384.48, "text": " of the task can be given as text this is a very nice example here so basically you you" }, { "end": 1399.8, "start": 1390.28, "text": " put you as input you put the entire article right and so you here is the the cnn article" }, { "end": 1408.48, "start": 1399.8, "text": " blah blah blah it's super long right and then here you put tldr which is for those who don't" }, { "end": 1416.92, "start": 1408.48, "text": " know it's too long didn't read so people use this this phrase to indicate that then they" }, { "end": 1422.6, "start": 1416.92, "text": " will write a short summary of whatever was before here they will either put this at the" }, { "end": 1428.24, "start": 1422.6, "text": " beginning or at the end of a long text right to to say to people okay if you if you don't" }, { "end": 1432.84, "start": 1428.24, "text": " want to read all this just read this down here um gives you the gist of it which is" }, { "end": 1438.8799999999999, "start": 1432.84, "text": " exactly summarization so if you then take this away and ask the language model what's" }, { "end": 1445.24, "start": 1438.8799999999999, "text": " here right basically throughout the training corpus the language model will have encountered" }, { "end": 1452.04, "start": 
1445.24, "text": " such pieces of text with a tldr in it and the language model might have learned that" }, { "end": 1459.52, "start": 1452.04, "text": " whatever is down here is a short version of whatever is up here and thereby if you then" }, { "end": 1465.76, "start": 1459.52, "text": " ask the language model what comes next here right the language model might learn aha i" }, { "end": 1473.82, "start": 1465.76, "text": " need to summarize whatever is above and that's the my best shot at completing at at answering" }, { "end": 1484.76, "start": 1473.82, "text": " the question what comes next and yeah so they get you know surprisingly good results um" }, { "end": 1494.24, "start": 1484.76, "text": " from from this so they say on the commonly reported rouge 12l metrics the generated summaries" }, { "end": 1499.16, "start": 1494.24, "text": " only begin to approach the performance of classic neural baselines just barely outperforms" }, { "end": 1509.6, "start": 1499.16, "text": " selecting three random sentences from the article uh but but um still it it um while" }, { "end": 1516, "start": 1509.6, "text": " qualitatively the generations resemble summaries they often focus on recent content from there" }, { "end": 1520.8799999999999, "start": 1516, "text": " to color confuse specific details so this is kind of a task where it kind of worked but" }, { "end": 1527.56, "start": 1520.8799999999999, "text": " not really um but i just find it it's really interesting that that it it kind of how they" }, { "end": 1534.12, "start": 1527.56, "text": " frame the task and how this can still so it still kind of works and that's the the gist" }, { "end": 1539.8, "start": 1534.12, "text": " here in all of these tasks is also with like translation they obviously don't get near" }, { "end": 1547.1999999999998, "start": 1539.8, "text": " the performance of a system specifically trained to do this task but they all also always say" }, { "end": 1555.6799999999998, "start": 1547.1999999999998, "text": " it kind of works right it's sort of sort of it learns something and their entire point" }, { "end": 1573.6000000000001, "start": 1555.68, "text": " of this paper is to say well look um yeah the the the diversity of tasks the model is" }, { "end": 1578.52, "start": 1573.6000000000001, "text": " able to perform and i would say kind of perform in a zero shot setting suggests that high" }, { "end": 1584.42, "start": 1578.52, "text": " capacity models trained to maximize the likelihood of a sufficiently varied text corpus begin" }, { "end": 1589.64, "start": 1584.42, "text": " to learn how to perform a surprising amount of tasks without the need for explicit supervision" }, { "end": 1599.76, "start": 1589.64, "text": " so yeah their entire point is if we train on such varied data that kind of um that spans" }, { "end": 1606.3400000000001, "start": 1599.76, "text": " the entire range of human language expression the the kind of tasks we want these systems" }, { "end": 1613.88, "start": 1606.3400000000001, "text": " to do will be learned implicitly so basically it points to let's get an even bigger corpus" }, { "end": 1620.44, "start": 1613.88, "text": " let's get even bigger models and we might get even better unsupervised zero shot way" }, { "end": 1627.96, "start": 1620.44, "text": " in these kind of special tasks and general language understanding all right so that that" }, { "end": 1632.4, "start": 1627.96, "text": " was basically i've jumped over a lot of points but i encourage you to look into this to look" }, 
{ "end": 1637.48, "start": 1632.4, "text": " into the specific experiments they're really interesting the way how they framed things" }, { "end": 1645.56, "start": 1637.48, "text": " and um give just just shout your opinion around about whether or not the publishing is a good" }, { "end": 1671.8, "start": 1645.56, "text": " thing or not it's really funny i love it um and with that have a good day" } ]
OioFONrSETc
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
[ "Science & Technology" ]
[ "machine learning", "deep learning", "neural networks", "batch normalization", "batchnorm", "whitening", "data", "internal covariate shift", "deep neural networks", "deep nets", "mini-batch", "training" ]
https://arxiv.org/abs/1502.03167 Abstract: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters. Authors: Sergey Ioffe, Christian Szegedy
Hi, today we're looking at batch normalization. Accelerating deep network training by reducing internal covariate shift by Sergey Ioffe and Christian Szegedy. Yeah, not my best pronunciation. Szegedy. Close enough. Alright, so this is a bit of an older paper and I think it's still good to look at it. It's relevant and people just kind of throw batch normalization into networks and maybe don't really know what it's doing. So let's look at it. So what these people argue is that in a network usually you have structures like this. So if you have something like that, it means that your loss, this is a two layer network, your loss is a composition of the first layer on the input u with parameters theta 1 and the second layer with parameters theta 2. So conceptually that would look something like this. You have your input, maybe it's an image, right? And you put it through the network and it becomes some intermediate representation, right? That's X0, that's X1, or maybe we'll call it even H1, hidden representation, right? Then that becomes, through the next layer, H2 and so on, right? So this stuff here, these would be weight matrices, W1, W2, that transform the image into a new image or whatever. So what they're arguing is that, well, if you only consider a single layer, like the first layer here, it's kind of the same as if you only consider the second layer with the H1 now as the input, right? It's pretty natural to see each layer of the neural network as kind of its own transformation, taking inputs and producing some outputs. So what people usually do with the very first input here, with your data, in machine learning generally, is so-called whitening of the data, which means that they have this over here. Usually data is whitened, I can't find it, but what it means is you basically want to, if you have data, let's say here is a coordinate axis, you have 2D data, and you might want to do kind of a linear regression on it, and you have data that's kind of like that, right? It suits you to transform this data, first of all, by looking where its mean is, the mean is about here, and subtracting that, so here, here, and then kind of dividing by its standard deviation in each direction, so there's a standard deviation here, and there is a standard deviation here. So you would transform this data into something like, maybe something like this, so you see that the mean is now in the middle, and it's not so elongated anymore. So you have a much easier time to learn something on this data than on this data over here, simply because our classifiers usually tend to rely on inner products, and if you do an inner product here, you have one of these vectors here, and you do some inner product, it's always going to be far away from the mean, and thereby the inner products are going to be large no matter what, right? Whereas here, if you take a random one, and then another random one, so if you take two random points here, their two vectors from the mean are almost the same, whereas if you take two random points here, they tend to point uniformly in all directions. So in that sense we know that machine learning methods work better if we whiten the data first. So these people ask, hey, why do we only do this at the very beginning, right? If each layer basically takes its input and learns something, each layer is basically a machine learning method, so why don't we just whiten the data going into every single layer, or every single subcomponent of a deep network? And that's the kind of basic step here.
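Just to make the whitening idea concrete, here is a minimal numpy sketch of the per-dimension version described above: subtract each dimension's mean and divide by its standard deviation. This is only an illustration, the function name and the toy data are made up, and full whitening would additionally decorrelate the dimensions, which this sketch (and batch norm itself) does not do.

import numpy as np

def whiten(data, eps=1e-8):
    # Per-dimension whitening: subtract the mean and divide by the
    # standard deviation of each feature dimension.
    mean = data.mean(axis=0)            # mean of every dimension over the dataset
    std = data.std(axis=0)              # standard deviation of every dimension
    return (data - mean) / (std + eps)  # centered, roughly unit-variance data

# toy 2D example: an elongated, off-center cloud of points
rng = np.random.default_rng(0)
x = rng.normal(loc=[5.0, -3.0], scale=[10.0, 0.5], size=(1000, 2))
x_white = whiten(x)
print(x_white.mean(axis=0), x_white.std(axis=0))  # roughly [0, 0] and [1, 1]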
So they argue how this has been kind of tried before, or what kind of methods you would usually get, and why these aren't so good, mainly because you kind of need to intermingle this whitening with training the network, and thereby if you just go about this naively, then you would kind of produce artifacts from training. So that's this section here, where they argue that you can't really go about this super naively, but what they do isn't super complicated, they just do it in a smart way. So we'll jump directly to that. What they say is, okay, let's look at what they call normalization via mini-batch statistics. Let's say we have some d-dimensional input x, and we're just going to look at it per dimension. So we only care about per individual dimension normalization. So what are we going to do? We're going to take the kth dimension, we're going to subtract from it the mean of the kth dimension. Within a mini-batch, within a mini-batch of data. So a mini-batch may be something like 32 examples, or 100 examples, or something like this. And then we'll divide by the standard deviation, so the square root of the variance, of that mini-batch. So this is done over here, basically. So you compute mu of the mini-batch, which is simply the empirical mean of the data at that particular layer. And then you compute sigma squared b, which is simply the empirical estimate of the variance computed on that particular mini-batch. And then you transform your data by subtracting that and by dividing it by this. And this constant here is simply to prevent dividing by values that are too small, so you don't get numerical problems. So what does it do? It does basically what we did above. But now what they say is, okay, we want to make sure that this transformation can potentially represent the identity, because a natural baseline, if you had to do something to your input when giving it to the next layer, is to do nothing to it, to do the identity transform. But if you do this, you probably won't end up with the identity transform, except if the mean is exactly zero and the variance is exactly one. So what they say is, okay, we'll also introduce two new parameters to this. Here, this gamma and this beta here. And these are learned, like other parameters in the network. We learn the parameters gamma and beta. And gamma is simply a scalar that this transformed x is multiplied by, and beta is simply a scalar that is then added to it. So in each dimension of your hidden representation, you basically learn how to scale it and how to shift it, scale and shift, after you've done the normalization. So first, you do the normalization. First, you go from this type of data to this type of data. And then you say, well, maybe it's actually more beneficial to have it not centered, so that the network can actually learn to transform this somewhere. This might seem redundant, but it's really powerful, because what you're basically saying is that, okay, this probably isn't the best distribution. This probably is better, but if the network, if the backpropagation algorithm or the training algorithm decides that this first representation was actually useful, it has the option of going back. But it also has the option of going to any other kind of form of distribution. So it's pretty powerful in terms of what it does.
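To make the formulas concrete, here is a minimal numpy sketch of the forward computation just described, assuming a mini-batch x of shape (N, D). The function and variable names are my own, not from the paper.

import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Batch norm for a mini-batch x of shape (N, D): normalize each
    # dimension with the mini-batch statistics, then scale and shift.
    mu = x.mean(axis=0)                    # mini-batch mean, shape (D,)
    var = x.var(axis=0)                    # mini-batch variance, shape (D,)
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalized activations
    y = gamma * x_hat + beta               # learned per-dimension scale and shift
    return y

# gamma and beta start out at the identity transform: gamma = 1, beta = 0
N, D = 32, 100
x = np.random.randn(N, D) * 3.0 + 2.0
y = batchnorm_forward(x, gamma=np.ones(D), beta=np.zeros(D))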
It's not really correct here that it has the power to go to any distribution, because it's only kind of a per dimension scalar that it learns, but still, the potential to transform the distribution by these learned scalars is pretty big. All right. So basically, that's it. That's the whole shebang. You normalize your inputs to each layer by this formula, and then you introduce new parameters that you learn along with your network parameters. So this kind of has some implications. First of all, one implication is this here. If you build batch norm into your network, it kind of learns this plus beta, which is basically a bias parameter, if you think of a traditional kind of fully connected layer. This isn't a fully connected layer because this scalar here is only per dimension, but the bias in a fully connected layer is also just per dimension. So the beta is equal to a bias in a fully connected layer. So if you have a batch normalization after a fully connected or convolutional layer, or anything that can or sometimes has a bias parameter, it's almost not worth it to learn both. So you would rather just have the one from the batch normalization and use the convolution or fully connected layer without a bias. So that's kind of one implication. Another implication is we have just lost the ability to have deterministic test time inference. So much like dropout, which is kind of random dropping out of nodes, here we have quantities that depend on the mini-batch. Not only the individual sample, but they actually depend on what other samples are randomly selected to be trained with that particular sample. So that's kind of awkward if you want to have some deterministic, reproducible thing at test time. So what people do is... And here, this is discussed. What people do is, while training, they use these quantities, the quantities we just discussed, but they keep kind of a running average over them. So what I would do is, in each mini-batch, I would compute this mini-batch mean and this mini-batch variance, and I would keep running averages of them. And at test time, I'm going to plug in these running averages, so there's nothing dependent on the mini-batch anymore. So that's a pretty neat trick, I think. You can even imagine at the end of your network training, using these here to kind of fine-tune the weights to these exact parameters. So that's one thing that you have to pay attention to. So usually in neural network libraries, there are parameters you can set that determine whether this network is in train mode or in test mode. And depending on that, the batch norm layer will use the mini-batch statistics or will use the statistics aggregated over the whole dataset. Alright, the second thing is training. So how do you actually train this thing? Because now, you can't just... We started with our multi-layer network up here. F2, F1, right? First, I'm going to put my things through F1, and then I'm going to put my things through F2. And the backpropagation here is quite easy. So let me get rid of this. The backprop here is quite easy. You go to L, and maybe you want to take the derivative with respect to theta 1. So you first take the derivative with respect to the hidden representation 1, and then of the hidden representation 1 with respect to theta 1. So the hidden representation would be whatever comes out of here. H1, sorry, not I. And so on. So you kind of chain rule your way through here. But now in between these layers here, you have these batch norm things.
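Before getting to the backward pass, here is a rough sketch of the running-average bookkeeping mentioned above, the way common libraries tend to implement it. The class name, the momentum value and the exponential-moving-average form are illustrative assumptions, not prescribed by the paper (which estimates the population statistics over the training data for inference).

import numpy as np

class ToyBatchNorm:
    # Toy batch norm layer that keeps running statistics for test time.

    def __init__(self, dim, momentum=0.9, eps=1e-5):
        self.gamma = np.ones(dim)
        self.beta = np.zeros(dim)
        self.running_mean = np.zeros(dim)
        self.running_var = np.ones(dim)
        self.momentum = momentum
        self.eps = eps

    def __call__(self, x, training=True):
        if training:
            mu, var = x.mean(axis=0), x.var(axis=0)
            # exponential moving average of the mini-batch statistics
            self.running_mean = self.momentum * self.running_mean + (1 - self.momentum) * mu
            self.running_var = self.momentum * self.running_var + (1 - self.momentum) * var
        else:
            # test time: deterministic, uses the accumulated statistics instead
            mu, var = self.running_mean, self.running_var
        x_hat = (x - mu) / np.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta

In a library you would typically flip this training flag with something like a train/eval switch on the whole model, which is exactly the train mode versus test mode distinction mentioned above.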
And so the authors discuss how we now do backpropagation in the face of these things. So here is basically what they discuss. It actually pays to have a graph of what's going on. So here is x. This is the input to our layer. So what do we compute from x? We compute mu, let's just call it mu, or mu B as it's called here. This is the mean of all the x's. So these are the x's, x1 until xn. This is the mini-batch. We compute the mean, and then from this and from this, we can compute this estimate of the variance. We need both. So we now have the mean and the variance over the mini-batch. So we're going to take one of these x's, just the i-th one, and we're going to use this and this to compute, what is it called, x hat, right? Yeah, it's called x hat. So x hat i is xi minus mu B, divided by the square root of sigma squared B plus this kind of little constant here. We're going to leave away the little constant for clarity's sake. Actually, it's in the calculations here. So then we have a new parameter, gamma, right? We're going to use it and our x hat, and also this beta here, to compute y, just y. And of course this is i, this is i. And this here is our final output of the layer. You can see now the backpropagation paths if you go through here. So the backpropagation path, if we have some loss coming in here, we backprop through yi, right? So here is the derivative of the loss L with respect to yi. That's here. So if we want, for example, the backprop with respect to beta, what we do is we simply, and this is over the mini-batch of course, we simply backprop here through this path. So in our formula for beta, there should only be mention of yi. And that's what we see here, right? In our formula for gamma, there should only be mention of yi, because the path leads only through yi. Well, to be precise, what I mean is mention of the derivative with respect to yi. Of course, we also have to pay attention that this is multiplied here by this x hat i, whereas that's not the case when we just add something. Because the derivative of an addition like x plus b with respect to b disregards x, whereas if it's x times b, it doesn't disregard x. Alright, so you can go back. So the interesting bit basically comes when we want to find out, okay, how? Because here is another layer, right? Down here somewhere, there is another layer. And we basically want to know this input here to the next layer, how do we compute it in the face of this mess here? Because it's not so easy, right? So you have to see we have three paths here. We go back through x, and let me get rid of these blue lines. We go back through x hat directly to x, one path goes through this sigma squared, and one path goes through this mu. So basically you have to compute derivatives with respect to sigma squared and mu. And for that we need the derivative with respect to x hat. So basically the way backprop works is you just find all paths from where you are to where you want to go, and then you kind of iteratively compute this. So this one here is the easiest. As you see here, they did it on top. Well, first they did this one, which is simply going from y to x hat i. Then they go from x hat i to sigma squared, which simply involves kind of the reverse operations of how you got it. This is simply a derivative formula here of the division by the square root. Then you can use this quantity here to compute that.
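Putting all of these chain-rule pieces together, here is a numpy sketch of the backward pass through a batch norm layer, following the derivative formulas from the paper. Here dout stands for the incoming gradient of the loss with respect to the layer output y, and the naming is mine.

import numpy as np

def batchnorm_backward(dout, x, gamma, eps=1e-5):
    # Gradients of the loss with respect to x, gamma and beta, given dout = dL/dy.
    # Follows the chain-rule paths through x_hat, sigma^2 and mu described above.
    N = x.shape[0]
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    std = np.sqrt(var + eps)
    x_hat = (x - mu) / std

    dbeta = dout.sum(axis=0)             # only the additive path through beta
    dgamma = (dout * x_hat).sum(axis=0)  # multiplicative path, so x_hat shows up

    dx_hat = dout * gamma
    dvar = np.sum(dx_hat * (x - mu) * -0.5 * (var + eps) ** -1.5, axis=0)
    dmu = np.sum(-dx_hat / std, axis=0) + dvar * np.mean(-2.0 * (x - mu), axis=0)
    # three paths into x: directly through x_hat, through sigma^2, and through mu
    dx = dx_hat / std + dvar * 2.0 * (x - mu) / N + dmu / N
    return dx, dgamma, dbeta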
So basically you just go in reverse of how you computed the operations in the first place. We said we needed mu b to compute sigma squared b. Now we need the derivative with respect to sigma squared b in order to compute the derivative with respect to mu b. And once you have that, and you see the addition here, the add here is the fact that two things contribute to mu b. So two paths lead to mu b. One path is from here, and one path is through here. So here there should be a green arrow. Since there are two paths, you have two components to your derivative and you add them. So that's how that's going to be. And then this here, with respect to this x here, we have three paths. Because we have three arrows going out of xi. One here, one here, and one here. So you have to take into account all of them. This one is pretty easy, that's the first one. Then the second one goes through this mu b, which we've already computed, and the third one goes through the sigma, which we've also already computed. And these are added, because you have to add all the paths in the backprop algorithm. Maybe we'll do a video on backprop later to really dive into how this works. And finally, they compute these, which we've already discussed. So in essence, the whole thing is differentiable. You just have to kind of pay attention to how to do it, but the whole thing is differentiable. And thereby, you can basically backprop through a network that has these batch norm layers built in. So that's pretty cool. I just want to quickly jump over to the results. Keep in mind, this paper is from 2015, so networks weren't that big back then. We didn't know that much about training yet, but the interesting thing is they basically discovered, look, we can have drastically fewer steps in order to reach the same accuracies. And these are kind of the activations of the network over the course of training. So without batch norm, you see, especially at the beginning, there are large fluctuations in the activations. And because they use batch norm now, there's no such thing. So basically, the reason for that is pretty simple. While you learn your layered representation here, let's say there's X and X is fed through layers, and there are hidden representations in between. So you're trying to learn all these parameters. Let's say this one here, W3, but at the beginning of training, everything is kind of prone to shifting around a lot. So when you change W1, that kind of changes the entire distribution of your hidden representations after the fact. So basically, whatever you learn for W3 is now already almost obsolete because you've changed W1, and W3 was kind of assuming that its inputs would remain the same, because that's what you assume in machine learning, that your input distribution stays kind of the same. So that's why at the beginning of training, you see these kind of large variances. And with batch norm, this tends to go away. So that's pretty cool. They also show, they mainly show that they can reach the same accuracies as other training methods, but with much, much fewer steps, and they can go to much higher learning rates than others because of that. So that's pretty cool. I encourage you to check out the rest of the paper. Use batch norm in your network. Sometimes it works. It sometimes doesn't work, strangely enough. But I guess that's just a matter of experimentation. All right. That was it for me. Bye bye.
[ { "end": 5.3, "start": 0, "text": " Hi, today we're looking at batch normalization. Accelerating deep network" }, { "end": 12.76, "start": 5.3, "text": " training by reducing internal covariate shift by Sergey Ioff and Christian" }, { "end": 22.66, "start": 12.76, "text": " Skiddeds. Yeah, not my best pronouncer." }, { "end": 27.66, "start": 22.66, "text": " Segedi. Close enough." }, { "end": 30.66, "start": 27.66, "text": " Alright, so this is a bit of an older paper and" }, { "end": 35.66, "start": 30.66, "text": " I think it's still good to look at it." }, { "end": 39.66, "start": 35.66, "text": " It's relevant and people just kind of" }, { "end": 41.66, "start": 39.66, "text": " throw batch normalization into networks" }, { "end": 44.66, "start": 41.66, "text": " and maybe don't really know what it's doing." }, { "end": 47.66, "start": 44.66, "text": " So let's look at it." }, { "end": 50.66, "start": 47.66, "text": " So what these people argue is that in a" }, { "end": 53.66, "start": 50.66, "text": " network usually you have structures like this." }, { "end": 59.66, "start": 53.66, "text": " So if something like that, it means that" }, { "end": 61.66, "start": 59.66, "text": " your loss kind of, this is a two layer network," }, { "end": 63.66, "start": 61.66, "text": " your loss is a composition of the first" }, { "end": 66.66, "start": 63.66, "text": " layer on the input view with parameters" }, { "end": 70.66, "start": 66.66, "text": " theta 1 and the second layer with parameters" }, { "end": 72.66, "start": 70.66, "text": " theta 2. So conceptually that would look" }, { "end": 74.66, "start": 72.66, "text": " something like this. You have your input," }, { "end": 78.66, "start": 74.66, "text": " maybe it's an image, right? And you put it" }, { "end": 81.66, "start": 78.66, "text": " through the network and it becomes some" }, { "end": 83.66, "start": 81.66, "text": " intermediate representation, right?" }, { "end": 89.66, "start": 83.66, "text": " That's X0, that's X1, or maybe we'll call it" }, { "end": 93.66, "start": 89.66, "text": " even H1, hidden representation, right?" }, { "end": 96.66, "start": 93.66, "text": " Then that becomes, then through the layer" }, { "end": 101.66, "start": 96.66, "text": " becomes H2 and so on, right? So this stuff here," }, { "end": 105.66, "start": 101.66, "text": " these would be weight matrices, W1, W2," }, { "end": 109.66, "start": 105.66, "text": " that transform the image into a new image" }, { "end": 113.66, "start": 109.66, "text": " or whatever. So what they're arguing is that" }, { "end": 116.66, "start": 113.66, "text": " well, if you only consider a single layer," }, { "end": 122.66, "start": 116.66, "text": " like the first layer here, it's kind of the same" }, { "end": 124.66, "start": 122.66, "text": " if you only consider the second layer" }, { "end": 127.66, "start": 124.66, "text": " with the H1 now as the input, right?" }, { "end": 130.66, "start": 127.66, "text": " It's pretty natural to see each layer of the neural" }, { "end": 133.66, "start": 130.66, "text": " network is kind of like its own transformation," }, { "end": 137.66, "start": 133.66, "text": " taking inputs and producing some outputs." 
}, { "end": 141.66, "start": 137.66, "text": " So what people usually do with the very first" }, { "end": 145.66, "start": 141.66, "text": " input here with your data in machine learning" }, { "end": 148.66, "start": 145.66, "text": " generally is so called whitening the data," }, { "end": 156.66, "start": 148.66, "text": " which means that they have this over here." }, { "end": 160.66, "start": 156.66, "text": " Usually data is whitened, I can't find it," }, { "end": 164.66, "start": 160.66, "text": " but what it means is you basically want to," }, { "end": 169.66, "start": 164.66, "text": " if you have data, let's say here is a coordinated axis," }, { "end": 173.66, "start": 169.66, "text": " you have 2D data, and you might want to do" }, { "end": 176.66, "start": 173.66, "text": " kind of a linear regression on it, and you have data" }, { "end": 180.66, "start": 176.66, "text": " that's kind of like that, right?" }, { "end": 185.66, "start": 180.66, "text": " It suits you to transform this data into, by," }, { "end": 188.66, "start": 185.66, "text": " first of all, looking where its mean is," }, { "end": 191.66, "start": 188.66, "text": " mean is about here, and subtracting that," }, { "end": 197.66, "start": 191.66, "text": " so here, here, and then kind of dividing by" }, { "end": 200.66, "start": 197.66, "text": " its standard deviation in each direction," }, { "end": 202.66, "start": 200.66, "text": " so there's a standard deviation here," }, { "end": 204.66, "start": 202.66, "text": " and there is a standard deviation here." }, { "end": 211.66, "start": 204.66, "text": " So you would transform this data into something like," }, { "end": 217.66, "start": 211.66, "text": " maybe something like this, so you see that the mean" }, { "end": 225.66, "start": 217.66, "text": " is now in the middle, and it's not so elongated anymore." }, { "end": 229.66, "start": 225.66, "text": " So you have a much easier time to kind of learn" }, { "end": 232.66, "start": 229.66, "text": " something on this data than on this data over here," }, { "end": 235.66, "start": 232.66, "text": " simply because our classifiers usually tend to" }, { "end": 240.66, "start": 235.66, "text": " rely on inner products, and if you do an inner product here," }, { "end": 242.66, "start": 240.66, "text": " you have one of these vectors here," }, { "end": 244.66, "start": 242.66, "text": " and you do some inner product, it's always going to be" }, { "end": 249.66, "start": 244.66, "text": " far away from the mean, and thereby the inner products" }, { "end": 252.66, "start": 249.66, "text": " are going to be large no matter what, right?" }, { "end": 255.66, "start": 252.66, "text": " Whereas here, if you take a random one," }, { "end": 258.65999999999997, "start": 255.66, "text": " and then another random, so if you take two random points here," }, { "end": 263.65999999999997, "start": 258.65999999999997, "text": " there are two vectors from the mean are almost the same," }, { "end": 265.65999999999997, "start": 263.65999999999997, "text": " whereas if you take two random points here," }, { "end": 269.65999999999997, "start": 265.65999999999997, "text": " they tend to look uniformly in the directions," }, { "end": 271.65999999999997, "start": 269.65999999999997, "text": " so it's kind of the sense we know that machine learning" }, { "end": 274.66, "start": 271.66, "text": " methods work better if we whiten the data first." 
}, { "end": 277.66, "start": 274.66, "text": " So these people ask, hey, why do we only do this" }, { "end": 279.66, "start": 277.66, "text": " at the very beginning, right?" }, { "end": 286.66, "start": 279.66, "text": " If each layer basically takes its input and learns something," }, { "end": 288.66, "start": 286.66, "text": " each layer is basically a machine learning method," }, { "end": 293.66, "start": 288.66, "text": " why don't we just whiten the data to every single layer," }, { "end": 297.66, "start": 293.66, "text": " or every single subcomponent of a deep network?" }, { "end": 300.66, "start": 297.66, "text": " And that's the kind of basic step here." }, { "end": 303.66, "start": 300.66, "text": " So they argue how this has been kind of tried before," }, { "end": 306.66, "start": 303.66, "text": " or what kind of methods you would usually get," }, { "end": 312.66, "start": 306.66, "text": " and why these aren't so good, mainly because you kind of need" }, { "end": 316.66, "start": 312.66, "text": " to intermingle this whitening with training the network," }, { "end": 319.66, "start": 316.66, "text": " and thereby if you just go about this naively," }, { "end": 325.66, "start": 319.66, "text": " then you would kind of produce artifacts from training." }, { "end": 331.66, "start": 325.66, "text": " So that's this section here, where they argue that" }, { "end": 335.66, "start": 331.66, "text": " you can't really go about this super naively," }, { "end": 338.66, "start": 335.66, "text": " but what they do isn't super complicated," }, { "end": 340.66, "start": 338.66, "text": " but they just do it in a smart way." }, { "end": 344.66, "start": 340.66, "text": " So we'll jump directly to that." }, { "end": 350.66, "start": 344.66, "text": " What they say is, okay, let's look at what they call" }, { "end": 353.66, "start": 350.66, "text": " normalization via mini-batch statistics." }, { "end": 359.66, "start": 353.66, "text": " Let's say we have some d-dimensional input x," }, { "end": 363.66, "start": 359.66, "text": " and we're just going to look at per dimension." }, { "end": 370.66, "start": 363.66, "text": " So we only care about per individual dimension normalization." }, { "end": 374.66, "start": 370.66, "text": " So what are we going to do?" }, { "end": 377.66, "start": 374.66, "text": " We're going to take the kth dimension," }, { "end": 382.66, "start": 377.66, "text": " we're going to subtract from it the mean of the kth dimension." }, { "end": 387.66, "start": 382.66, "text": " Within a mini-batch, within a mini-batch of data." }, { "end": 391.66, "start": 387.66, "text": " So a mini-batch may be something like 32 examples," }, { "end": 393.66, "start": 391.66, "text": " or 100 examples, or something like this." }, { "end": 398.66, "start": 393.66, "text": " And then we'll divide by the variance of that mini-batch." }, { "end": 405.66, "start": 398.66, "text": " So this is done over here in BASIC." }, { "end": 408.66, "start": 405.66, "text": " So you compute mu of the mini-batch," }, { "end": 416.66, "start": 408.66, "text": " which is simply the empirical mean of the data at that particular layer." }, { "end": 419.66, "start": 416.66, "text": " And then you compute sigma squared b," }, { "end": 425.66, "start": 419.66, "text": " which is simply the empirical estimate of the variance" }, { "end": 429.66, "start": 425.66, "text": " computed on that particular mini-batch." 
}, { "end": 434.66, "start": 429.66, "text": " And then you transform your data by subtracting that" }, { "end": 437.66, "start": 434.66, "text": " and by dividing it by this." }, { "end": 446.66, "start": 437.66, "text": " And this constant here is simply to prevent from dividing by two small values." }, { "end": 450.66, "start": 446.66, "text": " So you get like numerical problems." }, { "end": 453.66, "start": 450.66, "text": " So what does it do?" }, { "end": 457.66, "start": 453.66, "text": " It does basically what we did above." }, { "end": 460.66, "start": 457.66, "text": " But now what they say is, okay," }, { "end": 465.66, "start": 460.66, "text": " we want to make sure that this transformation can potentially" }, { "end": 469.66, "start": 465.66, "text": " represent the identity, because sometimes," }, { "end": 474.66, "start": 469.66, "text": " or like a natural, natural, if you had to do something with your input" }, { "end": 476.66, "start": 474.66, "text": " when giving it to the next layer," }, { "end": 482.66, "start": 476.66, "text": " the very baseline is to do nothing to it, to do the identity transform." }, { "end": 489.66, "start": 482.66, "text": " But if you do this, you probably won't end up with the identity transform," }, { "end": 494.66, "start": 489.66, "text": " except if the mean is exactly zero and the variance is exactly one." }, { "end": 498.66, "start": 494.66, "text": " So what they say is, okay," }, { "end": 502.66, "start": 498.66, "text": " we'll also introduce two new parameters to this." }, { "end": 508.66, "start": 502.66, "text": " Here, this gamma and this beta here." }, { "end": 512.6600000000001, "start": 508.66, "text": " And these are learned, like other parameters in the network." }, { "end": 515.6600000000001, "start": 512.6600000000001, "text": " We learn the parameter gamma and beta." }, { "end": 523.6600000000001, "start": 515.6600000000001, "text": " And gamma and beta are simply a scalar that this transformed x is multiplied by." }, { "end": 527.66, "start": 523.66, "text": " And beta is simply a scalar that is then added to it." }, { "end": 531.66, "start": 527.66, "text": " So in each dimension of your hidden representation," }, { "end": 537.66, "start": 531.66, "text": " you basically learn how to scale it and how to shift it," }, { "end": 540.66, "start": 537.66, "text": " scale and shift, after you've done the normalization." }, { "end": 546.66, "start": 540.66, "text": " So first, you do the normalization." }, { "end": 551.66, "start": 546.66, "text": " First, you go from this type of data to this type of data." }, { "end": 558.66, "start": 551.66, "text": " And then you say, well, maybe it's actually more beneficial to have it not centered." }, { "end": 564.66, "start": 558.66, "text": " So that the network can actually learn then to transform this somewhere." }, { "end": 568.66, "start": 564.66, "text": " This might seem redundant, but it's really powerful," }, { "end": 573.66, "start": 568.66, "text": " because what you're basically saying is that, okay," }, { "end": 578.66, "start": 573.66, "text": " this probably isn't the best distribution." }, { "end": 582.66, "start": 578.66, "text": " This probably is better, but if the network," }, { "end": 586.66, "start": 582.66, "text": " if the backpropagation algorithm or the training algorithm decides" }, { "end": 589.66, "start": 586.66, "text": " that this first representation was actually useful," }, { "end": 591.66, "start": 589.66, "text": " it has the option of going back." 
}, { "end": 598.66, "start": 591.66, "text": " But it also has the option of going to any other kind of form of distribution." }, { "end": 603.66, "start": 598.66, "text": " So it's pretty powerful in terms of what it does." }, { "end": 607.66, "start": 603.66, "text": " It's not really correct here that it has the power to go to any distribution," }, { "end": 611.66, "start": 607.66, "text": " because it's only kind of a per dimension scalar that it learns," }, { "end": 617.66, "start": 611.66, "text": " but still, the potential to transform the distribution" }, { "end": 622.66, "start": 617.66, "text": " by these learned scalars is pretty big." }, { "end": 625.66, "start": 622.66, "text": " All right." }, { "end": 628.66, "start": 625.66, "text": " So basically, that's it." }, { "end": 631.66, "start": 628.66, "text": " That's the whole shebang." }, { "end": 636.66, "start": 631.66, "text": " You normalize your inputs to each layer by this formula," }, { "end": 643.66, "start": 636.66, "text": " and then you introduce new parameters that you learn along with your network parameters." }, { "end": 649.66, "start": 643.66, "text": " So this kind of has some implications." }, { "end": 656.66, "start": 649.66, "text": " First of all, one implication is this here." }, { "end": 660.66, "start": 656.66, "text": " If you build a batch norm into your network," }, { "end": 666.66, "start": 660.66, "text": " it kind of learns this plus beta, which is basically a bias parameter," }, { "end": 669.66, "start": 666.66, "text": " if you think of a traditional kind of fully connected layer." }, { "end": 673.66, "start": 669.66, "text": " This isn't a fully connected layer because this scalar here is only per dimension," }, { "end": 677.66, "start": 673.66, "text": " but the bias in a fully connected layer is also just per dimension." }, { "end": 680.66, "start": 677.66, "text": " So the beta is equal to a bias in a fully connected layer." }, { "end": 693.66, "start": 680.66, "text": " So if you have a batch normalization after a fully connected or convolutional layer," }, { "end": 697.66, "start": 693.66, "text": " or anything that can or sometimes has a bias parameter," }, { "end": 701.66, "start": 697.66, "text": " it's almost not worth it to kind of learn both." }, { "end": 705.66, "start": 701.66, "text": " So you would rather just only have the one from the batch normalization" }, { "end": 710.66, "start": 705.66, "text": " and leave and use the convolution or fully connected layer without a bias." }, { "end": 712.66, "start": 710.66, "text": " So that's kind of one implication." }, { "end": 722.66, "start": 712.66, "text": " Another implication is we have just lost the ability to have deterministic test time inference." }, { "end": 727.66, "start": 722.66, "text": " So much like dropout, which is kind of random dropping out of nodes," }, { "end": 733.66, "start": 727.66, "text": " here we have quantities that depend on the mini-batch." }, { "end": 738.66, "start": 733.66, "text": " Not only the individual sample, but they actually depend on what other samples" }, { "end": 743.66, "start": 738.66, "text": " are randomly selected to be trained with that particular sample." }, { "end": 751.66, "start": 743.66, "text": " So that's kind of awkward if you want to have some deterministic reproducible thing at test time." }, { "end": 754.66, "start": 751.66, "text": " So what people do is..." }, { "end": 760.66, "start": 754.66, "text": " And here, this is discussed." 
}, { "end": 771.66, "start": 760.66, "text": " What people do is, while training, they use these quantities," }, { "end": 778.66, "start": 771.66, "text": " the quantities we just discussed, but they keep kind of a running average over them." }, { "end": 785.66, "start": 778.66, "text": " So what I would do is in each mini-batch, I would compute this mini-batch mean and this mini-batch variance," }, { "end": 793.66, "start": 785.66, "text": " and I would keep running averages of them." }, { "end": 798.66, "start": 793.66, "text": " And at test time, I'm going to plug in these running averages," }, { "end": 802.66, "start": 798.66, "text": " so there's nothing dependent on the mini-batch anymore." }, { "end": 807.66, "start": 802.66, "text": " So that's a pretty neat trick, I think." }, { "end": 812.66, "start": 807.66, "text": " You can even imagine at the end of your network training," }, { "end": 819.66, "start": 812.66, "text": " using these here to kind of fine-tune the weights to these exact parameters." }, { "end": 826.66, "start": 819.66, "text": " So that's one thing that you have to pay attention to." }, { "end": 832.66, "start": 826.66, "text": " So usually in neural network libraries, there are parameters you can set" }, { "end": 836.66, "start": 832.66, "text": " whether or not this network is in train mode or in test mode." }, { "end": 843.66, "start": 836.66, "text": " And depending on that, the batch norm layer will use the mini-batch statistics" }, { "end": 849.66, "start": 843.66, "text": " or will use the kind of over-dataset statistics." }, { "end": 852.66, "start": 849.66, "text": " Alright, the second thing is training." }, { "end": 855.66, "start": 852.66, "text": " So how do you actually train this thing?" }, { "end": 857.66, "start": 855.66, "text": " Because now, you can't just..." }, { "end": 865.66, "start": 857.66, "text": " We started with our multi-layer network up here." }, { "end": 867.66, "start": 865.66, "text": " F2, F1, right?" }, { "end": 872.66, "start": 867.66, "text": " First, I'm going to put my things through F1, and then I'm going to put my things through F2." }, { "end": 876.66, "start": 872.66, "text": " And the backpropagation here is quite easy." }, { "end": 880.66, "start": 876.66, "text": " So let me get rid of this." }, { "end": 882.66, "start": 880.66, "text": " The backprop here is quite easy." }, { "end": 888.66, "start": 882.66, "text": " You go to L, and maybe you want to derive it by theta 1." }, { "end": 895.66, "start": 888.66, "text": " So you first go to derive it by the hidden representation 1," }, { "end": 899.66, "start": 895.66, "text": " and then the hidden representation 1 with respect to theta 1." }, { "end": 904.66, "start": 899.66, "text": " So the hidden representation would be whatever comes out of here." }, { "end": 908.66, "start": 904.66, "text": " H1, sorry, not I." }, { "end": 911.66, "start": 908.66, "text": " And so on. So you kind of chain rule your way through here." }, { "end": 917.66, "start": 911.66, "text": " But now in between these layers here, you have these batch norm things." }, { "end": 926.66, "start": 917.66, "text": " And so the authors discuss how we now do backpropagation in the face of these things." }, { "end": 932.66, "start": 926.66, "text": " So here is basically what they discuss." }, { "end": 937.66, "start": 932.66, "text": " It actually pays to have a graph of what's going on." }, { "end": 941.66, "start": 937.66, "text": " So here is x. This is the input to our layer." 
}, { "end": 943.66, "start": 941.66, "text": " So what do we compute from x?" }, { "end": 950.66, "start": 943.66, "text": " We compute mu, let's just call it mu, or mu B it's called here." }, { "end": 953.66, "start": 950.66, "text": " This is the mean of all the x's." }, { "end": 962.66, "start": 953.66, "text": " So this is x, xi until x, well, x1 until xn." }, { "end": 964.66, "start": 962.66, "text": " This is the mini-batch." }, { "end": 971.66, "start": 964.66, "text": " We compute the mean, and then from this and from this," }, { "end": 977.66, "start": 971.66, "text": " we can compute this estimate of the variance. We need both." }, { "end": 982.66, "start": 977.66, "text": " So we now have the mean and the variance over the mini-batch." }, { "end": 987.66, "start": 982.66, "text": " So we're going to take one of these x's, just the i-th one," }, { "end": 1003.66, "start": 987.66, "text": " and we're going to use this and this to compute x, what? Compute x, is it called hat?" }, { "end": 1006.66, "start": 1003.66, "text": " Yeah, probably. It's called x hat, right?" }, { "end": 1008.66, "start": 1006.66, "text": " Yeah, we saw about x hat." }, { "end": 1019.66, "start": 1008.66, "text": " So x hat i is xi minus mu B divided by sigma squared B," }, { "end": 1023.66, "start": 1019.66, "text": " the square root of it plus this kind of little constant here." }, { "end": 1027.6599999999999, "start": 1023.66, "text": " We're going to leave away the little constant for clarity's sake." }, { "end": 1030.6599999999999, "start": 1027.6599999999999, "text": " Actually, it's in the calculations here." }, { "end": 1036.6599999999999, "start": 1030.6599999999999, "text": " So then we have a new parameter, gamma, right?" }, { "end": 1043.66, "start": 1036.66, "text": " We're going to use it and our x hat to compute, and also this beta here," }, { "end": 1047.66, "start": 1043.66, "text": " to compute y hat." }, { "end": 1051.66, "start": 1047.66, "text": " Y or y, just y." }, { "end": 1056.66, "start": 1051.66, "text": " And of course this is i, this is i." }, { "end": 1060.66, "start": 1056.66, "text": " And this here is our final output of the layer." }, { "end": 1064.66, "start": 1060.66, "text": " You can see now the backpropagation paths if you go through here." }, { "end": 1068.66, "start": 1064.66, "text": " So the backpropagation path, if we have some loss coming in here," }, { "end": 1073.66, "start": 1068.66, "text": " we backprop through yi, right?" }, { "end": 1080.66, "start": 1073.66, "text": " So here is the L, the loss to yi. That's here." }, { "end": 1087.66, "start": 1080.66, "text": " So if we want, for example, the backprop with respect to beta," }, { "end": 1092.66, "start": 1087.66, "text": " what we do is we simply, and this is over the mini-batch of course," }, { "end": 1095.66, "start": 1092.66, "text": " we simply backprop here through this path." }, { "end": 1101.66, "start": 1095.66, "text": " So in our formula for beta, there should be only mention yi." }, { "end": 1104.66, "start": 1101.66, "text": " And that's what we see here, right?" }, { "end": 1108.66, "start": 1104.66, "text": " In our formula for gamma, there should only be mention of yi." }, { "end": 1114.66, "start": 1108.66, "text": " So because the path leads only through yi." }, { "end": 1119.66, "start": 1114.66, "text": " Oh, no, I'm sorry. Actually, because of the," }, { "end": 1122.66, "start": 1119.66, "text": " what I mean is of the derivative with respect to yi." 
}, { "end": 1128.66, "start": 1122.66, "text": " Of course, we also have to pay attention that this is multiplied here" }, { "end": 1133.66, "start": 1128.66, "text": " by this x hat i, where of course that's not the case when we just add something." }, { "end": 1143.66, "start": 1133.66, "text": " Because the derivative of an addition like x plus b with respect to b" }, { "end": 1150.66, "start": 1143.66, "text": " disregards x, whereas if it's x times b, it doesn't disregard x." }, { "end": 1156.66, "start": 1150.66, "text": " Alright, so if we, yeah, so you can go back." }, { "end": 1162.66, "start": 1156.66, "text": " So the interesting bit basically comes when we want to find out, okay, how?" }, { "end": 1166.66, "start": 1162.66, "text": " Because here is another layer, right?" }, { "end": 1169.66, "start": 1166.66, "text": " Down here somewhere, there is another layer." }, { "end": 1174.66, "start": 1169.66, "text": " And we basically want to know this input here to the next layer," }, { "end": 1178.66, "start": 1174.66, "text": " how do we compute it in the face of this mess here?" }, { "end": 1181.66, "start": 1178.66, "text": " Because it's not so easy, right?" }, { "end": 1183.66, "start": 1181.66, "text": " So you have to see we have three paths here." }, { "end": 1188.66, "start": 1183.66, "text": " We go back through x, and let me get rid of these blue lines." }, { "end": 1195.66, "start": 1188.66, "text": " We go back through x hat directly to x." }, { "end": 1203.66, "start": 1195.66, "text": " We go one path is through here, and one path is through this mu." }, { "end": 1208.66, "start": 1203.66, "text": " So basically you have to compute derivatives with respect to sigma squared and mu." }, { "end": 1213.66, "start": 1208.66, "text": " And for that we need the derivative with respect to x hat." }, { "end": 1218.66, "start": 1213.66, "text": " So basically the way backprop works is you just find all paths from where you are" }, { "end": 1223.66, "start": 1218.66, "text": " to where you want to go, and then you kind of iteratively compute this." }, { "end": 1228.66, "start": 1223.66, "text": " So this one here is the easiest." }, { "end": 1231.66, "start": 1228.66, "text": " As you see here they did it on top." }, { "end": 1240.66, "start": 1231.66, "text": " Well first they did this one, which is simply going from y to x hat i." }, { "end": 1245.66, "start": 1240.66, "text": " Then they go from x hat i to sigma squared," }, { "end": 1252.66, "start": 1245.66, "text": " which simply involves kind of the reverse operations of how you got it." }, { "end": 1259.66, "start": 1252.66, "text": " This is simply a derivative formula here of the division by square root." }, { "end": 1266.66, "start": 1259.66, "text": " Then you can use this quantity here to compute that." }, { "end": 1271.66, "start": 1266.66, "text": " So basically you just go in reverse of how you computed the operations in the first place." }, { "end": 1275.66, "start": 1271.66, "text": " We said we needed mu b to compute sigma squared b." }, { "end": 1282.66, "start": 1275.66, "text": " Now we need the derivative with respect to sigma squared b in order to compute the derivative to mu b." }, { "end": 1288.66, "start": 1282.66, "text": " And once you have that, and you see the addition here," }, { "end": 1297.66, "start": 1288.66, "text": " the add here is the fact that two things contribute to mu b." }, { "end": 1303.66, "start": 1297.66, "text": " So two paths lead to mu b." 
}, { "end": 1311.66, "start": 1303.66, "text": " One path is from here, and one path is through here." }, { "end": 1314.66, "start": 1311.66, "text": " So here there should be a green." }, { "end": 1321.66, "start": 1314.66, "text": " Since two paths, you have two components to your derivative and you add each of them." }, { "end": 1323.66, "start": 1321.66, "text": " So that's how that's going to be." }, { "end": 1331.66, "start": 1323.66, "text": " And then this here, with respect to this x here, we have three paths." }, { "end": 1334.66, "start": 1331.66, "text": " Because we have three arrows going out of xi." }, { "end": 1338.66, "start": 1334.66, "text": " One here, one here, and one here." }, { "end": 1341.66, "start": 1338.66, "text": " So you have to take into account all of them." }, { "end": 1345.66, "start": 1341.66, "text": " This one is pretty easy, that's the first one." }, { "end": 1354.66, "start": 1345.66, "text": " Then the second one goes through this mu b, which we've already computed," }, { "end": 1359.66, "start": 1354.66, "text": " and the third one goes through the sigma, which we've also already computed." }, { "end": 1368.66, "start": 1359.66, "text": " And these are added, because you have to add all the paths in the backprop algorithm." }, { "end": 1376.66, "start": 1368.66, "text": " Maybe we'll do a video on backprop later to really dive into how this works." }, { "end": 1379.66, "start": 1376.66, "text": " And finally, they compute these, these we've already discussed." }, { "end": 1384.66, "start": 1379.66, "text": " So in essence, the whole thing is differentiable." }, { "end": 1391.66, "start": 1384.66, "text": " You just have to kind of pay attention how to do it, but the whole thing is differentiable." }, { "end": 1400.66, "start": 1391.66, "text": " And thereby, you can basically backprop through a network that has these batch normal layers built in." }, { "end": 1403.66, "start": 1400.66, "text": " So that's pretty cool." }, { "end": 1407.66, "start": 1403.66, "text": " I just want to quickly jump over to the results." }, { "end": 1415.66, "start": 1407.66, "text": " Keep in mind, this paper is from 2015, so networks weren't that big back then." }, { "end": 1419.66, "start": 1415.66, "text": " We didn't know that much about training yet, but the interesting thing is they basically discovered," }, { "end": 1426.66, "start": 1419.66, "text": " look, we can have drastically fewer steps in order to reach the same accuracies." }, { "end": 1431.66, "start": 1426.66, "text": " And these are kind of the activations of the network over the course of training." }, { "end": 1436.66, "start": 1431.66, "text": " So without patch norm, you see, especially at the beginning, there's large fluctuations in the activations." }, { "end": 1443.66, "start": 1436.66, "text": " And because they use batch norm now, there's no such thing." }, { "end": 1448.66, "start": 1443.66, "text": " So basically, the reason for that is pretty simple." }, { "end": 1455.66, "start": 1448.66, "text": " While you learn and you learn your layered representation here, let's say there's X and X is fed through layers," }, { "end": 1459.66, "start": 1455.66, "text": " and there's hidden representations, each in between." }, { "end": 1462.66, "start": 1459.66, "text": " So you're trying to learn all these parameters." }, { "end": 1470.66, "start": 1462.66, "text": " Let's say this one here, W3, but at the beginning of training, everything is kind of prone to shifting around a lot." 
}, { "end": 1479.66, "start": 1470.66, "text": " So when you change W1, that kind of changes the entire distribution of your hidden representations after the fact." }, { "end": 1487.66, "start": 1479.66, "text": " So basically, whatever you learn for W3 is now already almost obsolete because you've changed W1 basically," }, { "end": 1494.66, "start": 1487.66, "text": " and W3 was kind of assuming that its inputs would remain the same because that's what you assume in machine learning." }, { "end": 1497.66, "start": 1494.66, "text": " Your input distribution is kind of the same." }, { "end": 1503.66, "start": 1497.66, "text": " So that's why at the beginning of training, you see these kind of large variances." }, { "end": 1506.66, "start": 1503.66, "text": " And with batch norm, this tends to go away." }, { "end": 1508.66, "start": 1506.66, "text": " So that's pretty cool." }, { "end": 1516.66, "start": 1508.66, "text": " They also kind of show, they mainly show that they can reach the same accuracies as other training methods," }, { "end": 1522.66, "start": 1516.66, "text": " but with much, much fewer steps, and they can go much higher learning rates than others." }, { "end": 1525.66, "start": 1522.66, "text": " So because of that." }, { "end": 1527.66, "start": 1525.66, "text": " So that's pretty cool." }, { "end": 1530.66, "start": 1527.66, "text": " I encourage you to check out the rest of the paper." }, { "end": 1531.66, "start": 1530.66, "text": " Use batch norm in your network." }, { "end": 1532.66, "start": 1531.66, "text": " Sometimes it works." }, { "end": 1536.66, "start": 1532.66, "text": " It sometimes doesn't work, strangely enough." }, { "end": 1540.66, "start": 1536.66, "text": " But I guess that's just a matter of experimentation." }, { "end": 1542.66, "start": 1540.66, "text": " All right. That was it for me." }, { "end": 1547.66, "start": 1542.66, "text": " Bye bye." } ]
-9evrZnBorM
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
[ "Science & Technology" ]
[ "bert", "deep learning", "attention", "unsupervised", "nlp", "transformer", "squad", "wordpiece", "embeddings", "language", "language modeling", "attention layers", "bidirectional", "elmo", "natural language processing", "machine learning", "word vectors", "pretrained", "fine tuning" ]
https://arxiv.org/abs/1810.04805 Abstract: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7 (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%. Authors: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
Hello everyone, today we're looking at BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. These are people from Google AI Language, so you're about to see the most hyped model currently. So basically BERT is a model that takes language as input, so token sequences, and outputs various things. So it can be made to do various things, almost any NLP task, with basically little training, because the BERT model comes pre-trained on a very large corpus, and we're going to see how that's done. Alright, so the paper introduces basically the current state of the art of language models, and they say, okay, what they want to do that's new is bidirectional training. We're going to go down here and see their comparison. So here they compare three models, and these are representative of three types of models. So first, here is, for example, the OpenAI transformer. So this is one of the classic transformer models. We've talked about transformers before in the Attention Is All You Need video. So what a transformer does is it uses attention, and for those who forgot what attention is: if you have a token sequence A, B, C, D, E, then a classic model to use on that would be an LSTM. So the LSTM would go here. It would have a vector representation, a hidden state, and then it would take this A, it would take this hidden state and compute a new hidden state, and then it would go on and take the B and incorporate this into the hidden state. The hidden state kind of always stays the same size, but the recurrent model will update the hidden state as it goes over the input sequence. So this is one way of dealing with language, but people have kind of done it another way, and that's the attention-based mechanism, where basically for each of these you compute a vector independently of each other. So each one has a vector representation, and then you have a vector representation of what you want, which is called an attention head, and you can have multiple of these. But in the simplest case, let's just say we are looking for the subject in this sentence. So A, B, C, D, E is a sentence, and one of the words is the subject of the sentence. Then we could have a vector here that's called a query vector. So these are called values V, and this is called a query Q, and these vectors are the same size. I know I'm very poor at this. You're going to compute the inner product with each of these. So the inner product you want to do... okay, I already screwed this up: you're actually computing two vectors for each token. But this is not too important for this step. One is the key, and one is the value. This is called the key, and you have your query Q, and you compute the inner products actually with the key. The values aren't too important for what I want to demonstrate, but you compute key with query, and that gives you basically... for each key, it's going to give you an output. So for this A, B, C, D, E, you're going to have this much inner product, this much inner product, this much, this much, this much inner product. So after maybe a softmax, you have a nice distribution, and then you can say, aha, here, this is the biggest alignment of a particular key with my query, and my query is: which one is the subject? Of course, you're going to train all these query- and key-producing procedures. So this is an attention mechanism, and if you then want... That's where the value comes in.
If your query is not only which one is the subject, but it's actually a generic query that, okay, I'm going to extract some information from some token that I'm going to use later, then you would actually take B and say, ah, B is the best one. Okay, I'm going to take the value of B. You're basically going to take a weighted average of the values according to these values here. So this is very shortly what attention is. If you want a lengthy explanation, go to the Attention is All You Need video. So OpenAI GPT uses attention here, and it's a left-to-right transformer. That's what it says here. And what that means is it goes also step-by-step, but in each step it uses attention. So here is the input tokens, and as you can see, it goes in this direction. So each one of the... And these are multiple layers of attention, so you can also layer these, of course. So each one of the attention intermediate steps can only attend to whatever is on to the left of it. You can see this here. So it goes step-by-step, and it goes left to right. So it can take the sequence in as a left-to-right input. Basically what that means is whenever you interpret a particular token, your context is only to the left of that token. You don't know what's coming yet. It's like when you read a sentence from left to right, but then as humans, unconsciously, we probably go and at the end of the sentence kind of make sense of the thing as a whole. But here the model is forced to make sense of the thing only from whatever is to the left of it. So that's a basic limitation of these left-to-right models. Then there's another approach, which is called ELMO, which has been popular recently as a substitute for word vectors. So if you know word vectors, word vectors are basically the kind of first stage in most language processing tasks, where for each word, say the cat sat on something, for each word you have a big giant table, and for each word you associate a vector of fixed size dimension. So you place every word in a vector space, and these vectors you pre-compute with something like word2vec or GloVe. That gives you a nice way to basically deal with these words in a canonical way. You can pre-train the word vectors. That's already nice. But people have realized, okay, words can have multiple meanings, and words can kind of slightly change meaning depending on words around them and so on. So what ELMO does is ELMO uses two LSTMs. One LSTM goes into this direction, one LSTM goes into this direction. And basically a single LSTM, as we saw before, it takes in the input sequence one by one. So here E1, then E2, then E3, then E4. It produces hidden states at each step. It produces a hidden state that is a result of a previous hidden state and the current token. And then what it says is, okay, now these hidden states here, basically, these are now the embeddings of the token E1, E3, and so on. These are the embeddings. So the word vectors, as to say, are no longer just one vector per word. So they're not in isolation anymore. But basically you need the entire sequence to compute the word vectors as a result of this LSTM. This is more powerful because it can give individual words multiple or each word has kind of a unique embedding depending on the surrounding words. You would still hope that a given word would have similar embedding or similar word vector all across the language. But you can kind of fine tune it to the particular sentence it is in. 
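To make that query/key/value computation a bit more concrete before we move on, here is a tiny numpy sketch. To be clear, this is not the authors' code: the toy sentence length, the dimensions and the single attention head are made up for illustration. The optional left-to-right mask corresponds to the restriction of the OpenAI-style left-to-right transformer mentioned above, while BERT, as discussed below, lets every position attend in both directions.

```python
import numpy as np

def attention(Q, K, V, causal=False):
    """Single-head scaled dot-product attention over one sequence.
    Q, K, V: arrays of shape (seq_len, d). Toy sketch, not the paper's code."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)            # query-key inner products, shape (seq_len, seq_len)
    if causal:                                # left-to-right model: position i may only look at j <= i
        future = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(future, -1e9, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)      # softmax over the keys
    return weights @ V                        # weighted average of the values

# toy "sentence" A B C D E, each token as a random 8-dimensional vector
np.random.seed(0)
x = np.random.randn(5, 8)
print(attention(x, x, x, causal=False).shape)   # bidirectional: every token sees left and right
print(attention(x, x, x, causal=True).shape)    # left-to-right: every token sees only the left
```

In a real transformer the queries, keys and values are themselves produced by learned linear maps from the token vectors, and there are many such heads stacked in every layer.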
And, still with ELMO, you can also completely change a word's meaning if it's a word that has a completely new meaning in that sentence. So basically it uses two LSTMs, one, as I said, forward, and one backward. These also have multiple layers and so on. And each of these produces one such hidden vector per token, and you simply concatenate the two: the LSTM going from the left produces one, the LSTM going from the right produces maybe here another one, and you simply concatenate the two to get the final embedding, the final word vector, for each token. So the fundamental limitation here is that you have information from the left and you have information from the right, so other than the original transformer you actually can condition on the left context and the right context, but it's very shallow, because it's simply a concatenation of the left-facing LSTM and the right-facing LSTM, and these ultimately, intrinsically, have nothing to do with each other. You simply concatenate the two things, but the left-facing LSTM still can only see to the left, and the right-facing LSTM still can only see to the right. So you basically have two half-blind models, and then you kind of concatenate. So it's still suboptimal, because what you want is a single model that outputs your word vectors, or interprets the language, and that can look at both the left and the right at the same time and incorporate information from both of them simultaneously, not just at the end by concatenation. This is what BERT does. So BERT here, and this is kind of what they claim is the new contribution: BERT, in each layer of the model — let's look at this — for a particular token, looks at all of the context. So every other token in the input, it looks at that. And so it seems kind of obvious, but there are actually reasons why these other models don't do this. This is the entire point of BERT: each layer in this transformer architecture is still an attention mechanism, by the way — the mechanism of attention here and here is exactly the same, or almost the same; they actually keep it close on purpose in order to compare — but now we have attention not only to the left, but also to the right, to everything. Right. So why do these other models, for example the OpenAI transformer, only look to the left? That's because somehow you need a task to train on. Right. And most of the time, especially if you want unsupervised training, you're going to do something like language modeling. And in language modeling, what you have is a sentence A, B, C, D, and you're asking what comes next here. Right. So by the definition of the task, you can only look to the left. That's just how the task works. So it makes sense that these other models kind of do this, because they pre-train on this task; BERT has a different pre-training, because it has to look to the left and the right. And the other thing is what you want to use the model for. The good thing is, if you go left to right, you can use the model for generating language. In the same vein, if you have A, B, C, D, and you ask, and the model is trained to produce the next character only looking to the left, right, then you can say: what's the next character? The model says E. And then you can feed the same thing into the model and say, okay, what's now the next character? And so on: what's now the next character? G. So that's pretty useful: if you only look to the left, you can actually use the model for generating language, which is something you can't do with BERT — or it's not really obvious how to do it with BERT. I know people are investigating producing entire sequences of language with BERT, but as yet it's not super clear how to do this with this model. That being said, the model is pretty good at pretty much everything else. So let's jump into how they train. Let's see where we are here. They train using, basically, masked language modeling. So I want to actually go into that first, masked language modeling. What they do is they basically replace some words by the mask token. They don't have a nice... all right, they have one here. Here, if you just look at kind of the top sentence: the man went to [MASK] store — don't worry about the [SEP] and so on, just this — the man went to [MASK] store, and the model is simply asked to predict what's here, which word is there. So it needs to incorporate information from the right and from the left to do this. So that's basically how you train it. They simply drop out some of the words some of the time, and they have different techniques — so you can clearly tell a lot of work has gone into kind of fine-tuning everything in this model, like how to train it and so on — so let's say we don't always do this, sometimes we do this other thing and sometimes we do that; there are several ways of biasing this model. But basically you do this masked language modeling. And then, because they also want to evaluate on, let's say, entire-sequence tasks, or tasks that span multiple sentences, what they do is a second pre-training task at the same time, as you can see here, where they feed two sentences. So that's the first sentence, that's the second sentence, and they feed these two sentences as an input. So at first they have this token, and these separate the sentences, and then they ask the model to predict a label, IsNext. And IsNext is true if the second sentence follows the first sentence, so if it's like a logical continuation. And the way you do this unsupervised is really easy: you take a big giant corpus, and you take a sentence as the first sentence, and then 50 percent of the time you take the next sentence in the corpus and the label is true, and 50 percent of the time you take some random sentence. Here you say, for example, the man [MASK] to the store, and the next sentence is penguin [MASK] are flightless birds, and that's kind of a random sentence, so the model is asked to predict: well, that's probably not the next sentence following this first sentence. So you do these two tasks. You pre-train, and you can do this unsupervised — you don't need supervised data for that, you just need a corpus — and they do this for a long time with a lot of data. And the model itself is giant. It has 24, I think, of these transformer layers. So it's giant, and then you kind of pre-train this model. Here is an illustration of some extra things. So, this is the input up here: the first token is this [CLS] token, which is kind of the start token, and then this is the first sentence, and then the [SEP] token is the separator of the two sentences.
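Just to illustrate how such pre-training examples can be generated from a plain corpus, here is a rough sketch. The 15% masking rate and the 80/10/10 split between [MASK], a random token and the unchanged token are the recipe reported in the paper; the whitespace tokenizer, the tiny corpus and the helper name are simplifications of my own for illustration.

```python
import random

def make_pretraining_example(corpus, idx, vocab, mask_prob=0.15):
    """Build one (tokens, masked-LM labels, IsNext) example from a list of sentences.
    Toy sketch with a whitespace tokenizer, not the actual BERT preprocessing."""
    sent_a = corpus[idx].split()                     # idx must not be the last sentence in this sketch
    if random.random() < 0.5:                        # 50%: the real next sentence -> IsNext = True
        sent_b, is_next = corpus[idx + 1].split(), True
    else:                                            # 50%: a random sentence -> IsNext = False
        sent_b, is_next = random.choice(corpus).split(), False

    tokens = ["[CLS]"] + sent_a + ["[SEP]"] + sent_b + ["[SEP]"]
    labels = [None] * len(tokens)                    # None = no masked-LM loss at this position
    for i, tok in enumerate(tokens):
        if tok in ("[CLS]", "[SEP]") or random.random() > mask_prob:
            continue
        labels[i] = tok                              # the model must predict the original token here
        r = random.random()
        if r < 0.8:
            tokens[i] = "[MASK]"                     # 80%: replace with the mask token
        elif r < 0.9:
            tokens[i] = random.choice(vocab)         # 10%: replace with a random token
        # remaining 10%: keep the original token unchanged
    return tokens, labels, is_next

corpus = ["the man went to the store", "he bought a gallon of milk",
          "penguins are flightless birds"]
vocab = sorted({w for s in corpus for w in s.split()})
print(make_pretraining_example(corpus, 0, vocab))
```

Back to the illustration of the input.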
And this is the second sentence. And then again, we'll get to these hashtags in a second. But first, they say, OK, first we have the token embeddings. So they kind of start with the original concept of word vectors at the very basis because you need to start with actually going into a vector space to use these models. But they then kind of transform these through the transformer layers. They also use segment embeddings. Segment embeddings, as you can see here, is simply kind of a binary label. E, A being the label for the first sentence and E, B being the label for the second sentence. So just the model can differentiate which one is the first and which one is the second because it's kind of hard to learn for a transformer architecture that the set tokens kind of separate the sentences. So you kind of want to help it. And the last thing is positional embeddings. And we've already talked about these in Attention is All You Need. This is where you can kind of, the model, since it's a transformer, it doesn't go step by step. It doesn't go one, done, done, done, done. So it's kind of hard for the model to make out how far two things are apart from each other, how far two tokens, if they're neighbors or if they're really far apart. And these positional embeddings kind of help the model decide if two tokens are close to each other in input, if they're just neighbors or if they are actually really far apart. All right. So this is how the kind of first input is constructed out of these embeddings and then it's fed through these transformer layers, as we saw, with the mask-dllm task and the is-next task. I want to quickly get to these hashtags, what they mean. So the input here is separated into word pieces, so-called word pieces. And what that is, is so in language processing tasks, you have kind of a choice. You have a choice of how to tokenize your input. So let's look at a sentence here. Subscribe to PewDiePie. So this is a sentence and the sentence is rather, let's say, word-wise complicated. So why might a language model have a problem with this? So first you need to tokenize this sentence. So what most people do is they say, okay, here are the word boundaries. We're going to tokenize this into three segments. First is subscribe to PewDiePie. Okay, so three things and each of these now needs a word vector associated with it. Now the thing is, the word vectors, let's assume you have them pre-trained or something. In any case, you need a big table, a big, big table, and this goes down here, where for each word, a, the, to, I, you, you have a vector associated with it, right? So you need to keep this in your model. And as you know, English has a lot of words here. So this table is going to be really big. And the problem is how do you make this table, right? Okay, you could make it kind of dynamically and so on, but in general you're going to create this table with all the words you know, and that's going to be too big because English has so many words. And then you can say, all right, we'll only take the top, whatever is used in 90% of the language, which turns out to be this kind of burrito distributed. So it turns out to be like 5% of the words are used in 90% of the language. So you just take these, but then you're going to have the problem. Okay, here, two, two is not a problem. Why not? Two is used super often. We're going to have it at the very top somewhere, and we're going to have a vector for it. Subscribe is already, it's not so common, right? 
So maybe you have a word for it somewhere down. But then PewDiePie is a name. So there is no, there's not even a word like, that's not even a word. It's just, so what you usually do, what people usually do is they have this out of vocabulary token, and then they have a vector associated somewhere here with the out of vocabulary token. Is it whatever? And I don't know what it is. I just know that I don't have it in my vocabulary, and the model kind of deals with that. That's kind of, it's not really ideal, especially if you then want to generate language. Also, your model tends to generate out of vocabulary tokens. If you allow that, if you don't allow that, you have a problem during training. So it's all kind of messy. What's the alternative? The alternative is to go character level. So let's look at character level. In character level, you say, all right, my words are obviously made of characters. And characters, I'm just going to split at each character, right? And here the white space can be a character too. So I'm going to split at each character, and then I'm simply going to have one vector for each character. And there's only like 20 something, six of those. And so I can keep 26 vectors. But this tends to be rather problematic because a character by itself having a meaning that can be encapsulated by a vector is kind of shady because a character by itself usually doesn't mean any, doesn't have a meaning. So what's the solution here? The solution is to go in between. The solution is to say, well, let's actually go for word pieces. And you can kind of think of them as syllables, but you can split, you can make them in a way that you have a fixed size vocabulary. Say, okay, I have 4,000 entry places in my big table. I can afford 4,000 size table. So first of all, I'm going to have for each character, A, B, C, D, E, and so on. I'm going to have a vector. But then I only have 26. I have 3,000 some left. I'm going to have also the most common words. Now, A is already here, but maybe I can have to and from. And so the most common words, they also get there. And then for the other things, I'm going to split the words maybe in sub scribe. So these are two syllables and sub can be kind of a prefix to many things. And I only need then one, one. So I have sub here, sub. I only need one vector for that. And then the rest, if scribe, scribe is by the way also a word, so I can have that. But if scribe weren't in my vocabulary, I can divide scribe then up into characters and then describe them with the character level. So basically I can mix and match here. I can sub, that's, I have that. And then scribe, I don't have it. I don't have any of the pieces, so I can just use the character. So this would be sub and then S-C-R-I-B-E. So these would be the tokens that I work with now as my input. And these tags here, so this is what would happen to PewDiePie. You could simply split along each character. So you basically, this is kind of an interpolation between the token model and the character model. And it's really neat and it usually works quite well. As I said, the hashtag sign here simply means that these two have originally been one word. And now this in here is just a word piece token. This is a really good example where word piece come in. Because play by itself is a word and I can make play in instead of having an own vector for that. I can divide it into play, which already has a meaning. And presumably play in and play would have similar meanings. 
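As a toy version of this word-piece idea, here is a greedy longest-match tokenizer sketch. The tiny hand-written vocabulary and the greedy matching are simplifications for illustration; the real WordPiece vocabulary is learned from a corpus and is far larger (around 30,000 entries for BERT).

```python
def wordpiece_tokenize(word, vocab):
    """Greedily split one word into the longest pieces found in `vocab`.
    Continuation pieces carry a leading '##'. Toy sketch only."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end] if start == 0 else "##" + word[start:end]
            if piece in vocab:
                pieces.append(piece)
                break
            end -= 1
        else:                           # nothing matched: fall back to an unknown marker
            return ["[UNK]"]
        start = end
    return pieces

vocab = {"sub", "##scribe", "play", "##ing", "to",
         "##s", "##c", "##r", "##i", "##b", "##e"}
print(wordpiece_tokenize("subscribe", vocab))   # ['sub', '##scribe']
print(wordpiece_tokenize("playing", vocab))     # ['play', '##ing']
```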
So it makes sense to have play as the token singled out here, and then -ing as a suffix — it also makes sense to have a token for that in my table — and then I simply have these two tokens here. That probably already gives me more information than simply having the word playing. By the way, you should subscribe to PewDiePie. Just FYI. Alright, let's go on. So we do word piece tokenization, we do the masked language model, we do the next sentence prediction pre-training. What do we have now? We have a model that can really, really well predict some masked words. Now how do we use it? Now they evaluate on these, I believe it's 11 tasks — or is it... I don't know how many it is, it is a lot — with the same model. So this pre-trained model, they now claim, can be fine-tuned to do all of these tasks, and it gets, like, state of the art on every one of them. It's crazy. So how do they fine-tune it? The easiest tasks are the so-called sequence-level tasks, where you basically have the sequence and you're about to predict one class label for the entire sequence. So here we have the sentence pair classification tasks, for example the task we saw before, the IsNext task. There are more sophisticated tasks that you need kind of supervised data for, and with the supervised data you'd have a class label that you could train on. So what you do is... let's look at one of them. MNLI. They had it up here. Nope. Here: multi-genre natural language inference. And that's an entailment classification task. So given a pair of sentences, the goal is to predict whether the second sentence is an entailment, contradiction or neutral with respect to the first one. Alright, two sentences, and you're about to predict which one of these three labels it is. So you put the two sentences here — BERT can already take two sentences as an input, as we saw. The embeddings — the A and B segment embeddings and the position embeddings — are left out of the picture here, but they would be added to it. And these would be the embeddings for it. And then you pass this through the BERT model, and this is the final layer. And what they do is they simply take now the final embedding for this first position, corresponding to this start token, and they simply put a single layer of classification, so basically a logistic regression, on it. And that's how they then get a class label. So if this gives you here a hidden vector of 512 dimensions, and you have three labels to output here — one, two, three — you simply need a matrix of size 512 by 3, and these are the weights that you would then have to train in addition to BERT. So BERT is pre-trained, and you only have to learn these weights now. Of course they also kind of fine-tune the entire BERT model, but that's really fine-tuning; the only thing you have to learn from scratch is these weights here. That's pretty neat, first of all, because you can be very quick at learning new tasks, because you simply start from the pre-trained BERT and then you go and learn a single classifier layer on top. And astonishingly, this works extremely well for these tasks. A bit of a more challenging task is this here: SQuAD, which is a question answering task. And we're going to jump down here where they explain the task. So you have an input question — oops — you have an input question, and the input question is: where do water droplets collide with ice crystals to form precipitation?
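Before getting into SQuAD, a quick sketch of how small that sentence-classification head really is: one weight matrix on top of the vector for the start token. The 512 hidden dimensions and the three MNLI labels come from the discussion above; the random array stands in for the pre-trained encoder's final-layer output, which is assumed here and not shown.

```python
import numpy as np

hidden, num_classes = 512, 3                       # e.g. entailment / contradiction / neutral
rng = np.random.default_rng(0)
W = rng.normal(scale=0.02, size=(hidden, num_classes))   # the only weights learned from scratch
b = np.zeros(num_classes)

def classify(sequence_output):
    """sequence_output: (seq_len, hidden) final-layer vectors from the pre-trained encoder."""
    cls_vec = sequence_output[0]                   # the vector sitting on top of the [CLS] token
    logits = cls_vec @ W + b
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                     # distribution over the three labels

fake_output = rng.normal(size=(11, hidden))        # stand-in for running BERT on a sentence pair
print(classify(fake_output))
```

During fine-tuning you would train W and b with cross-entropy on the labeled pairs, and optionally let gradients flow into the pre-trained encoder as well. Back to the question answering task: the question above comes together with a paragraph.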
And you have an input paragraph, which is kind of a paragraph from a Wikipedia page, and you know that the answer is somewhere in this paragraph, right? The data set is constructed such that the answer is in the paragraph. So the input paragraph reads: precipitation forms as smaller droplets coalesce via collision with other raindrops or ice crystals within a cloud. So the question is: where do water droplets collide to form precipitation? The answer here is: within a cloud. So that's this thing here. So usually what SQuAD models do is they predict the span — they predict where's the start of the answer and where's the end of the answer — and that's also what BERT is trained to do here. So in order to do this, what you do is, again, you already have the ability to input two sequences. We've trained with two sentences, but here they simply say: oh well, the first sequence is going to be the question, and our second sequence is going to be the entire paragraph from Wikipedia. And then for the output of each token — remember, there are as many outputs as there are inputs, because the transformer will always transform to the same length of sequence — for each token in the output, we classify it: is this token the start token, or is this token the end token, or is this token neither? What they do effectively is that here, each output is a vector, and, as we said at the beginning when finding out which one's the subject, here we have two queries, namely query S, which is: is this the start?, and query E, which is: is this the end token? So these are two queries, and I'm going to just compute the inner product of each query with each of these outputs. And over my sequence here, this is going to give me a distribution. So for the start, maybe this token is not much and this token is a lot, and so on — there are five tokens — and for the end: not so probable, not so probable, very probable, not so probable. So what you're going to get from these inner products is a distribution over which one's the start and which one's the end, and you're going to say, okay, this one's probably the start and this one's probably the end. So that's how you predict the span. And again, what you ultimately have to learn is these queries here, so not that much. And this is named entity recognition. In named entity recognition, you have a sentence and you're supposed to recognize named entities. Like up here, we saw subscribe to PewDiePie, and the named entity would be PewDiePie, right — this is a name, and you're supposed to recognize that this is a name. And they do it the same way that they do SQuAD, basically, or a similar way: for each of the outputs here, they simply classify whether or not it's part of an entity. And they also have different labels for which kind of entity it is — this is like a person, and this is no entity — so if you have 10 labels, then for each token you would classify it into one of 10 classes. You need a classifier of input size versus number of classes. That's all you have to train in addition to fine-tuning BERT itself. All right. So they kind of evaluate on all of these tasks, and they get super duper numbers on all of them here. BERT large wins on pretty much everything. And this model is big, just saying. And they trained it on TPUs, which are available in Google Cloud infrastructure.
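Before the ablations, one more sketch: the span-prediction head just described really is only two extra learned vectors, one for "start" and one for "end". As before, the sizes are made up and the random matrix stands in for the encoder's output over [CLS] question [SEP] paragraph [SEP]; this is an illustration, not the reference implementation.

```python
import numpy as np

hidden, seq_len = 512, 20
rng = np.random.default_rng(0)
q_start = rng.normal(size=hidden)                  # "is this token the start of the answer?"
q_end   = rng.normal(size=hidden)                  # "is this token the end of the answer?"

def softmax(z):
    z = np.exp(z - z.max())
    return z / z.sum()

# stand-in for the final-layer vectors over [CLS] question [SEP] paragraph [SEP]
H = rng.normal(size=(seq_len, hidden))
p_start = softmax(H @ q_start)                     # one distribution over positions for the start
p_end   = softmax(H @ q_end)                       # and one for the end
start, end = int(p_start.argmax()), int(p_end.argmax())
print("predicted span:", start, "to", end)         # at inference you'd also require end >= start
```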
So it's trained on a lot of data, so in a way it's kind of expected that you would outperform, but it's very surprising that you outperform everyone else by this much. And they've done a lot of ablation studies, where they show that it's really due to the fact that they use this left and right context — they take into account the left and the right context of a given token when doing the attention — and that's why it's better. So here, for example, they compare the BERT base model and they say, okay, what if we don't do the NSP, the next sentence prediction task? Then you can see the numbers: they already kind of drop on these tasks. And what if we then additionally do only left-to-right training? And the numbers drop pretty seriously again. You see, sometimes, here for example, you see a pretty serious drop in the number, also here. So there really seems to be real value in doing this kind of left-and-right context attention. So it's not just about the model size and the amount of data — that's basically what they show here. And it's really cool that the paper actually shows this, because usually people have an idea, they throw a lot more resources at it, and they're better, and you never know why. So this is pretty cool that they actually show it. All right, so this is all I have to say about this paper. Check it out. The models here are pre-trained, you can actually download them, and you can fine-tune them for yourself, for your own task, and they're pretty, pretty powerful. There are also smaller pre-trained models, for if you don't have a TPU. So check these out as well. And thanks a lot for listening.
[ { "end": 14, "start": 0, "text": " Hello everyone, today we're looking at BERT pre-training of deep bidirectional transformers for language understanding by Jacob Devlin and Min-Wai Chung, Kenton Lee, Kristina Tatanova." }, { "end": 23, "start": 14, "text": " These are people from Google AI language, so you're about to see the most hyped model currently." }, { "end": 34, "start": 23, "text": " So basically BERT is a model that takes as an input language, so token sequences, and outputs various things." }, { "end": 49, "start": 34, "text": " So it can be made to do various things, almost any NLP task, with basically little training because the BERT model comes pre-trained on a very large corpus, and we're going to see how that's done." }, { "end": 67, "start": 49, "text": " Alright, so the paper introduces basically the current state of the art of language models, and they say, okay, what they want to do new is they want to do bidirectional training." }, { "end": 81, "start": 67, "text": " We're going to go down here and see their comparison. So here they compare three models, and these are representative of three types of models." }, { "end": 99, "start": 81, "text": " So first, here is, for example, the OpenAI transformer. So this is one of the classic transformer models. We've talked about transformers before in the attention is all you need video." }, { "end": 118, "start": 99, "text": " So what a transformer does is it uses attention, and for those who forgot what attention is, if you have a token sequence A, B, C, D, E, then a classic model to use that would be an LSTM." }, { "end": 136, "start": 118, "text": " So the LSTM would go here. It would have a vector representation, a hidden state, and then it would take this A, it would take this hidden state and compute a new hidden state, and then it would go on and take the B and incorporate this into the hidden state." }, { "end": 147, "start": 136, "text": " The hidden state kind of always stays the same size, but the recurrent model will update the hidden state as it goes over the input sequence." }, { "end": 167, "start": 147, "text": " So this is one way of dealing with language, but people have kind of done another way, and that's the attention-based mechanism, where basically for each of these you compute a vector independently of each other." }, { "end": 182, "start": 167, "text": " So each one has a vector representation, and then you have a vector representation of what you want, which is called an attention head, and you can have multiple of these." }, { "end": 201, "start": 182, "text": " But in the simplest case, let's just say we are looking for the subject in this sentence. So A, B, C, D, E is a sentence, and one of the words is the subject of the sentence. Then we could have a vector here that's called a query vector." }, { "end": 214, "start": 201, "text": " So these are called values V, and this is called a query Q, and then these vectors are the same size. I know I'm very poor at this. You're going to compute the inner product with each of these." }, { "end": 232, "start": 214, "text": " So the inner product you want to do... Okay, I already screwed this up. You're actually computing two vectors for each token. But this is not too important for this step." }, { "end": 245, "start": 232, "text": " One is the key, and one is the value. This is called the key, and you have your query Q, and you compute the inner products actually with the key." 
}, { "end": 258, "start": 245, "text": " The values aren't too important for what I want to demonstrate, but you compute key with query, and that gives you basically... For each key, it's going to give you an output." }, { "end": 275, "start": 258, "text": " So for this A, B, C, D, E, you're going to have this much inner product, this much inner product, this much, this much, this much inner product." }, { "end": 290, "start": 275, "text": " So after maybe a softmax, you have a nice distribution, and then you can say, aha, here, this is the biggest alignment of the particular key with my query, and my query is which one is the subject." }, { "end": 301, "start": 290, "text": " Of course, you're going to train all these queries and keys producing procedures. So this is a tension mechanism, and if you then want... That's where the value comes in." }, { "end": 314, "start": 301, "text": " If your query is not only which one is the subject, but it's actually a generic query that, okay, I'm going to extract some information from some token that I'm going to use later," }, { "end": 319, "start": 314, "text": " then you would actually take B and say, ah, B is the best one. Okay, I'm going to take the value of B." }, { "end": 325, "start": 319, "text": " You're basically going to take a weighted average of the values according to these values here." }, { "end": 334, "start": 325, "text": " So this is very shortly what attention is. If you want a lengthy explanation, go to the Attention is All You Need video." }, { "end": 346, "start": 334, "text": " So OpenAI GPT uses attention here, and it's a left-to-right transformer. That's what it says here." }, { "end": 351, "start": 346, "text": " And what that means is it goes also step-by-step, but in each step it uses attention." }, { "end": 356, "start": 351, "text": " So here is the input tokens, and as you can see, it goes in this direction." }, { "end": 363, "start": 356, "text": " So each one of the... And these are multiple layers of attention, so you can also layer these, of course." }, { "end": 375, "start": 363, "text": " So each one of the attention intermediate steps can only attend to whatever is on to the left of it." }, { "end": 386, "start": 375, "text": " You can see this here. So it goes step-by-step, and it goes left to right. So it can take the sequence in as a left-to-right input." }, { "end": 394, "start": 386, "text": " Basically what that means is whenever you interpret a particular token, your context is only to the left of that token." }, { "end": 399, "start": 394, "text": " You don't know what's coming yet. It's like when you read a sentence from left to right," }, { "end": 408, "start": 399, "text": " but then as humans, unconsciously, we probably go and at the end of the sentence kind of make sense of the thing as a whole." }, { "end": 416, "start": 408, "text": " But here the model is forced to make sense of the thing only from whatever is to the left of it." }, { "end": 420, "start": 416, "text": " So that's a basic limitation of these left-to-right models." }, { "end": 430, "start": 420, "text": " Then there's another approach, which is called ELMO, which has been popular recently as a substitute for word vectors." 
}, { "end": 440, "start": 430, "text": " So if you know word vectors, word vectors are basically the kind of first stage in most language processing tasks," }, { "end": 452, "start": 440, "text": " where for each word, say the cat sat on something, for each word you have a big giant table," }, { "end": 457, "start": 452, "text": " and for each word you associate a vector of fixed size dimension." }, { "end": 465, "start": 457, "text": " So you place every word in a vector space, and these vectors you pre-compute with something like word2vec or GloVe." }, { "end": 472, "start": 465, "text": " That gives you a nice way to basically deal with these words in a canonical way." }, { "end": 475, "start": 472, "text": " You can pre-train the word vectors. That's already nice." }, { "end": 479, "start": 475, "text": " But people have realized, okay, words can have multiple meanings," }, { "end": 484, "start": 479, "text": " and words can kind of slightly change meaning depending on words around them and so on." }, { "end": 489, "start": 484, "text": " So what ELMO does is ELMO uses two LSTMs." }, { "end": 494, "start": 489, "text": " One LSTM goes into this direction, one LSTM goes into this direction." }, { "end": 501, "start": 494, "text": " And basically a single LSTM, as we saw before, it takes in the input sequence one by one." }, { "end": 504, "start": 501, "text": " So here E1, then E2, then E3, then E4." }, { "end": 508, "start": 504, "text": " It produces hidden states at each step." }, { "end": 514, "start": 508, "text": " It produces a hidden state that is a result of a previous hidden state and the current token." }, { "end": 529, "start": 514, "text": " And then what it says is, okay, now these hidden states here, basically, these are now the embeddings of the token E1, E3, and so on." }, { "end": 531, "start": 529, "text": " These are the embeddings." }, { "end": 539, "start": 531, "text": " So the word vectors, as to say, are no longer just one vector per word." }, { "end": 541, "start": 539, "text": " So they're not in isolation anymore." }, { "end": 548, "start": 541, "text": " But basically you need the entire sequence to compute the word vectors as a result of this LSTM." }, { "end": 560, "start": 548, "text": " This is more powerful because it can give individual words multiple or each word has kind of a unique embedding depending on the surrounding words." }, { "end": 570, "start": 560, "text": " You would still hope that a given word would have similar embedding or similar word vector all across the language." }, { "end": 574, "start": 570, "text": " But you can kind of fine tune it to the particular sentence it is in." }, { "end": 581, "start": 574, "text": " And also you can completely change its meaning if it's kind of a word that has a completely new meaning in that sentence." }, { "end": 587, "start": 581, "text": " So basically it uses two LSTMs, one, as I said here, forward, one backward." }, { "end": 589, "start": 587, "text": " These also have multipliers and so on." }, { "end": 594, "start": 589, "text": " And each of these produce one such hidden vector per token." }, { "end": 605, "start": 594, "text": " And you simply concatenate the two from the LSTM on the left produces one, this LSTM on the right produces maybe here another one." }, { "end": 615, "start": 605, "text": " And you simply concatenate the two to get the final embedding, the final word vector for each token." 
}, { "end": 627, "start": 615, "text": " So the fundamental limitation here is that this is kind of you have information from the left end, you have information from the right." }, { "end": 635, "start": 627, "text": " So other than here the original transformer, you actually have you actually can condition on the left context and the right context." }, { "end": 644, "start": 635, "text": " But it's very it's very shallow because it's simply a concatenation of the left facing LSTM and the concatenation of the right facing LSTM." }, { "end": 650, "start": 644, "text": " And these ultimately intrinsically they have nothing to do with each other." }, { "end": 661, "start": 650, "text": " So you simply concatenate the two things that the left facing LSTM still can only see to the left and the right facing LSTM still can only see to the right." }, { "end": 667, "start": 661, "text": " So you basically have two half blind models and then you kind of concatenate." }, { "end": 689, "start": 667, "text": " So the it's still suboptimal because of what you want is you want a single model to output your word vectors or to interpret the language that can look at both the left and the right at the same time and then incorporate information from both of them simultaneously and not just at the end by concatenation." }, { "end": 691, "start": 689, "text": " This is what BERT does." }, { "end": 697, "start": 691, "text": " So BERT here and this is kind of what they claim is the new contribution." }, { "end": 701, "start": 697, "text": " BERT at each in each layer here of the model." }, { "end": 704, "start": 701, "text": " The the let's look at this." }, { "end": 709, "start": 704, "text": " And for a particular token, they look at all of the context." }, { "end": 717, "start": 709, "text": " So every every other token in the in the input, they look at that." }, { "end": 731, "start": 717, "text": " And so the the basically it seems kind of it seems kind of obvious, but it's it's actually there's reasons why these other models don't do this." }, { "end": 748, "start": 731, "text": " But so this is the entire point of BERT is at each layer in this in this transformer architecture is still an attention mechanism, by the way, so that there's there's the mechanism of attention here and here is exactly the same or almost the same." }, { "end": 752, "start": 748, "text": " They actually keep it close on purpose in order to compare." }, { "end": 761, "start": 752, "text": " But now we have attention not only to the left, but also to the right to everything." }, { "end": 768, "start": 761, "text": " Right. So why do these other model whether, for example, the OpenAI transformer only look to the left." }, { "end": 772, "start": 768, "text": " That's because somehow you need a task to train on." }, { "end": 781, "start": 772, "text": " Right. And most of the time, if you especially if you want unsupervised training, you going to do something like language modeling." }, { "end": 791, "start": 781, "text": " And language modeling, what you have is a sentence A, B, C, D, and you're asking what comes next here." }, { "end": 797, "start": 791, "text": " Right. So by by the definition of the task, you can only look to the left." }, { "end": 803, "start": 797, "text": " That's that's just how these like how the task works." 
}, { "end": 818, "start": 803, "text": " So it makes sense that that these other models kind of do this because they pre train on this number has a different pre training because they can they can only they have to look to the left and the right." }, { "end": 822, "start": 818, "text": " And the other thing is what you want to use the model for." }, { "end": 830, "start": 822, "text": " So the good thing if you if you go left to right, you can use the model now for generating language in the same vein." }, { "end": 838, "start": 830, "text": " If if you have a B, C, D, and you ask and the model is trained to produce the next character only looking to the left." }, { "end": 848, "start": 838, "text": " Right. Then you can you can say what's the next character of the model says E and then you can feed the same thing into the model and say OK, what's now the next character?" }, { "end": 853, "start": 848, "text": " Well, says what's now the next character G." }, { "end": 866, "start": 853, "text": " So there's pretty useful if you only look to the left, you can actually use the model then for generating language, which is something you can't do with BERT or it's not it's not really obvious now how to do it with BERT." }, { "end": 875, "start": 866, "text": " People are I know people are investigating into language producing producing entire sequences with BERT." }, { "end": 881, "start": 875, "text": " But as yet, it's not super clear how to do this with this model." }, { "end": 885, "start": 881, "text": " That being said, the model is pretty good at pretty much everything else." }, { "end": 889, "start": 885, "text": " So let's jump in to how they train." }, { "end": 892, "start": 889, "text": " They train. Let's see where we are here." }, { "end": 898, "start": 892, "text": " They train using masked basically masked language modeling." }, { "end": 906, "start": 898, "text": " So I want to actually go into that first mask language modeling." }, { "end": 915, "start": 906, "text": " What they do is they basically replace some words by the mask token and they don't have a good." }, { "end": 917, "start": 915, "text": " They don't have a nice." }, { "end": 920, "start": 917, "text": " All right. They have they have one here." }, { "end": 922, "start": 920, "text": " All right." }, { "end": 927, "start": 922, "text": " Here, if you just look at kind of the top sentence here." }, { "end": 930, "start": 927, "text": " The man went to mask store." }, { "end": 936, "start": 930, "text": " Don't don't don't worry about the set and so on. Just this." }, { "end": 943, "start": 936, "text": " The man went to mask store and the model simply asked to predict what's here, which word is there." }, { "end": 948, "start": 943, "text": " So it needs to incorporate information from the right and from the left to do this." }, { "end": 951, "start": 948, "text": " So that's basically how you train it." }, { "end": 958, "start": 951, "text": " They simply drop out some of the words some of the time and they have different techniques." }, { "end": 966, "start": 958, "text": " So you can clearly tell a lot of work has gone into kind of fine tuning everything in this model, like how to train it and so on." }, { "end": 968, "start": 966, "text": " So let's say we don't always do this." }, { "end": 971, "start": 968, "text": " Sometimes we do this other thing and sometimes we do that." }, { "end": 973, "start": 971, "text": " And there's several ways of biasing this model." 
}, { "end": 977, "start": 973, "text": " But basically you do this masked language modeling." }, { "end": 986, "start": 977, "text": " And then because they also want to evaluate on, let's say, entire sequence tasks or tasks that span multiple sentences." }, { "end": 995, "start": 986, "text": " What they do is the second pre-training task at the same time, as you can see here, where they feed two sentences." }, { "end": 998, "start": 995, "text": " So that's the first sentence. That's the second sentence." }, { "end": 1001, "start": 998, "text": " They feed these two sentences as an input." }, { "end": 1006, "start": 1001, "text": " So at first they have this token and these separate the sentences." }, { "end": 1011, "start": 1006, "text": " And then they ask the model to predict a label is next." }, { "end": 1018, "start": 1011, "text": " And is next is true if the second sentence follows the first sentence." }, { "end": 1020, "start": 1018, "text": " So if it's like a logical continuation." }, { "end": 1023, "start": 1020, "text": " And the way you do this on supervised is really easy." }, { "end": 1029, "start": 1023, "text": " You take a big giant corpus and you take a sentence for the first sentence." }, { "end": 1035, "start": 1029, "text": " And then 50 percent of the time you take the next sentence in the corpus and the label is true." }, { "end": 1040, "start": 1035, "text": " And 50 percent of the time you take some random sentence." }, { "end": 1049, "start": 1040, "text": " Here you say, for example, the man mask to the store." }, { "end": 1056, "start": 1049, "text": " And the next sentence is penguin mask or flightless birds." }, { "end": 1059, "start": 1056, "text": " And that's kind of a random sentence." }, { "end": 1061, "start": 1059, "text": " So the model is asked to predict." }, { "end": 1066, "start": 1061, "text": " Well, that's probably not the next sentence following this first sentence." }, { "end": 1068, "start": 1066, "text": " So you do these two tasks." }, { "end": 1071, "start": 1068, "text": " You pre-train and you can do this on supervised." }, { "end": 1073, "start": 1071, "text": " You don't need supervised data for that." }, { "end": 1075, "start": 1073, "text": " You just need a corpus." }, { "end": 1080, "start": 1075, "text": " And they do this for a long time with a lot of data." }, { "end": 1082, "start": 1080, "text": " And the model itself is giant." }, { "end": 1086, "start": 1082, "text": " It has 24, I think, of these transformer layers." }, { "end": 1088, "start": 1086, "text": " So it's giant." }, { "end": 1092, "start": 1088, "text": " And then you kind of pre-train this model." }, { "end": 1097, "start": 1092, "text": " Here is an illustration of some extra things." }, { "end": 1103, "start": 1097, "text": " So what they do is they first." }, { "end": 1105, "start": 1103, "text": " This is the input up here." }, { "end": 1110, "start": 1105, "text": " So the first token is this CLS token, which is kind of the start token." }, { "end": 1113, "start": 1110, "text": " And then this is the first sentence." }, { "end": 1118, "start": 1113, "text": " Then the set is the separator of two sentences." }, { "end": 1120, "start": 1118, "text": " And this is the second sentence." }, { "end": 1125, "start": 1120, "text": " And then again, we'll get to these hashtags in a second." }, { "end": 1129, "start": 1125, "text": " But first, they say, OK, first we have the token embeddings." 
}, { "end": 1136, "start": 1129, "text": " So they kind of start with the original concept of word vectors at the very basis" }, { "end": 1143, "start": 1136, "text": " because you need to start with actually going into a vector space to use these models." }, { "end": 1149, "start": 1143, "text": " But they then kind of transform these through the transformer layers." }, { "end": 1151, "start": 1149, "text": " They also use segment embeddings." }, { "end": 1156, "start": 1151, "text": " Segment embeddings, as you can see here, is simply kind of a binary label." }, { "end": 1163, "start": 1156, "text": " E, A being the label for the first sentence and E, B being the label for the second sentence." }, { "end": 1168, "start": 1163, "text": " So just the model can differentiate which one is the first and which one is the second" }, { "end": 1172, "start": 1168, "text": " because it's kind of hard to learn for a transformer architecture" }, { "end": 1176, "start": 1172, "text": " that the set tokens kind of separate the sentences." }, { "end": 1178, "start": 1176, "text": " So you kind of want to help it." }, { "end": 1181, "start": 1178, "text": " And the last thing is positional embeddings." }, { "end": 1185, "start": 1181, "text": " And we've already talked about these in Attention is All You Need." }, { "end": 1191, "start": 1185, "text": " This is where you can kind of, the model, since it's a transformer," }, { "end": 1195, "start": 1191, "text": " it doesn't go step by step. It doesn't go one, done, done, done, done." }, { "end": 1201, "start": 1195, "text": " So it's kind of hard for the model to make out how far two things are apart from each other," }, { "end": 1204, "start": 1201, "text": " how far two tokens, if they're neighbors or if they're really far apart." }, { "end": 1212, "start": 1204, "text": " And these positional embeddings kind of help the model decide if two tokens are close to each other in input," }, { "end": 1218, "start": 1212, "text": " if they're just neighbors or if they are actually really far apart." }, { "end": 1226, "start": 1218, "text": " All right. So this is how the kind of first input is constructed out of these embeddings" }, { "end": 1230, "start": 1226, "text": " and then it's fed through these transformer layers, as we saw," }, { "end": 1234, "start": 1230, "text": " with the mask-dllm task and the is-next task." }, { "end": 1240, "start": 1234, "text": " I want to quickly get to these hashtags, what they mean." }, { "end": 1247, "start": 1240, "text": " So the input here is separated into word pieces, so-called word pieces." }, { "end": 1252, "start": 1247, "text": " And what that is, is so in language processing tasks, you have kind of a choice." }, { "end": 1259, "start": 1252, "text": " You have a choice of how to tokenize your input." }, { "end": 1264, "start": 1259, "text": " So let's look at a sentence here." }, { "end": 1275, "start": 1264, "text": " Subscribe to PewDiePie." }, { "end": 1281, "start": 1275, "text": " So this is a sentence and the sentence is rather, let's say, word-wise complicated." }, { "end": 1285, "start": 1281, "text": " So why might a language model have a problem with this?" }, { "end": 1288, "start": 1285, "text": " So first you need to tokenize this sentence." }, { "end": 1293, "start": 1288, "text": " So what most people do is they say, okay, here are the word boundaries." }, { "end": 1296, "start": 1293, "text": " We're going to tokenize this into three segments." 
}, { "end": 1299, "start": 1296, "text": " First is subscribe to PewDiePie." }, { "end": 1305, "start": 1299, "text": " Okay, so three things and each of these now needs a word vector associated with it." }, { "end": 1313, "start": 1305, "text": " Now the thing is, the word vectors, let's assume you have them pre-trained or something." }, { "end": 1319, "start": 1313, "text": " In any case, you need a big table, a big, big table, and this goes down here," }, { "end": 1330, "start": 1319, "text": " where for each word, a, the, to, I, you, you have a vector associated with it, right?" }, { "end": 1334, "start": 1330, "text": " So you need to keep this in your model." }, { "end": 1339, "start": 1334, "text": " And as you know, English has a lot of words here." }, { "end": 1344, "start": 1339, "text": " So this table is going to be really big." }, { "end": 1350, "start": 1344, "text": " And the problem is how do you make this table, right?" }, { "end": 1353, "start": 1350, "text": " Okay, you could make it kind of dynamically and so on," }, { "end": 1358, "start": 1353, "text": " but in general you're going to create this table with all the words you know," }, { "end": 1361, "start": 1358, "text": " and that's going to be too big because English has so many words." }, { "end": 1366, "start": 1361, "text": " And then you can say, all right, we'll only take the top," }, { "end": 1370, "start": 1366, "text": " whatever is used in 90% of the language," }, { "end": 1373, "start": 1370, "text": " which turns out to be this kind of burrito distributed." }, { "end": 1379, "start": 1373, "text": " So it turns out to be like 5% of the words are used in 90% of the language." }, { "end": 1382, "start": 1379, "text": " So you just take these, but then you're going to have the problem." }, { "end": 1384, "start": 1382, "text": " Okay, here, two, two is not a problem." }, { "end": 1388, "start": 1384, "text": " Why not? Two is used super often." }, { "end": 1392, "start": 1388, "text": " We're going to have it at the very top somewhere, and we're going to have a vector for it." }, { "end": 1398, "start": 1392, "text": " Subscribe is already, it's not so common, right?" }, { "end": 1402, "start": 1398, "text": " So maybe you have a word for it somewhere down." }, { "end": 1405, "start": 1402, "text": " But then PewDiePie is a name." }, { "end": 1411, "start": 1405, "text": " So there is no, there's not even a word like, that's not even a word." }, { "end": 1415, "start": 1411, "text": " It's just, so what you usually do," }, { "end": 1420, "start": 1415, "text": " what people usually do is they have this out of vocabulary token," }, { "end": 1425, "start": 1420, "text": " and then they have a vector associated somewhere here with the out of vocabulary token." }, { "end": 1428, "start": 1425, "text": " Is it whatever? And I don't know what it is." }, { "end": 1432, "start": 1428, "text": " I just know that I don't have it in my vocabulary, and the model kind of deals with that." }, { "end": 1436, "start": 1432, "text": " That's kind of, it's not really ideal," }, { "end": 1439, "start": 1436, "text": " especially if you then want to generate language." }, { "end": 1442, "start": 1439, "text": " Also, your model tends to generate out of vocabulary tokens." }, { "end": 1445, "start": 1442, "text": " If you allow that, if you don't allow that, you have a problem during training." }, { "end": 1448, "start": 1445, "text": " So it's all kind of messy." 
}, { "end": 1452, "start": 1448, "text": " What's the alternative? The alternative is to go character level." }, { "end": 1455, "start": 1452, "text": " So let's look at character level." }, { "end": 1462, "start": 1455, "text": " In character level, you say, all right, my words are obviously made of characters." }, { "end": 1467, "start": 1462, "text": " And characters, I'm just going to split at each character, right?" }, { "end": 1471, "start": 1467, "text": " And here the white space can be a character too." }, { "end": 1473, "start": 1471, "text": " So I'm going to split at each character," }, { "end": 1478, "start": 1473, "text": " and then I'm simply going to have one vector for each character." }, { "end": 1482, "start": 1478, "text": " And there's only like 20 something, six of those." }, { "end": 1486, "start": 1482, "text": " And so I can keep 26 vectors." }, { "end": 1493, "start": 1486, "text": " But this tends to be rather problematic because a character by itself having a meaning" }, { "end": 1499, "start": 1493, "text": " that can be encapsulated by a vector is kind of shady" }, { "end": 1503, "start": 1499, "text": " because a character by itself usually doesn't mean any, doesn't have a meaning." }, { "end": 1508, "start": 1503, "text": " So what's the solution here? The solution is to go in between." }, { "end": 1513, "start": 1508, "text": " The solution is to say, well, let's actually go for word pieces." }, { "end": 1517, "start": 1513, "text": " And you can kind of think of them as syllables," }, { "end": 1524, "start": 1517, "text": " but you can split, you can make them in a way that you have a fixed size vocabulary." }, { "end": 1530, "start": 1524, "text": " Say, okay, I have 4,000 entry places in my big table." }, { "end": 1534, "start": 1530, "text": " I can afford 4,000 size table." }, { "end": 1541, "start": 1534, "text": " So first of all, I'm going to have for each character, A, B, C, D, E, and so on." }, { "end": 1542, "start": 1541, "text": " I'm going to have a vector." }, { "end": 1546, "start": 1542, "text": " But then I only have 26. I have 3,000 some left." }, { "end": 1549, "start": 1546, "text": " I'm going to have also the most common words." }, { "end": 1555, "start": 1549, "text": " Now, A is already here, but maybe I can have to and from." }, { "end": 1558, "start": 1555, "text": " And so the most common words, they also get there." }, { "end": 1566, "start": 1558, "text": " And then for the other things, I'm going to split the words maybe in sub scribe." }, { "end": 1571, "start": 1566, "text": " So these are two syllables and sub can be kind of a prefix to many things." }, { "end": 1576, "start": 1571, "text": " And I only need then one, one." }, { "end": 1580, "start": 1576, "text": " So I have sub here, sub. I only need one vector for that." }, { "end": 1586, "start": 1580, "text": " And then the rest, if scribe, scribe is by the way also a word, so I can have that." }, { "end": 1593, "start": 1586, "text": " But if scribe weren't in my vocabulary, I can divide scribe then up into characters" }, { "end": 1595, "start": 1593, "text": " and then describe them with the character level." }, { "end": 1597, "start": 1595, "text": " So basically I can mix and match here." }, { "end": 1600, "start": 1597, "text": " I can sub, that's, I have that." }, { "end": 1602, "start": 1600, "text": " And then scribe, I don't have it." }, { "end": 1606, "start": 1602, "text": " I don't have any of the pieces, so I can just use the character." 
}, { "end": 1615, "start": 1606, "text": " So this would be sub and then S-C-R-I-B-E." }, { "end": 1622, "start": 1615, "text": " So these would be the tokens that I work with now as my input." }, { "end": 1627, "start": 1622, "text": " And these tags here, so this is what would happen to PewDiePie." }, { "end": 1632, "start": 1627, "text": " You could simply split along each character." }, { "end": 1640, "start": 1632, "text": " So you basically, this is kind of an interpolation between the token model and the character model." }, { "end": 1647, "start": 1640, "text": " And it's really neat and it usually works quite well." }, { "end": 1654, "start": 1647, "text": " As I said, the hashtag sign here simply means that these two have originally been one word." }, { "end": 1658, "start": 1654, "text": " And now this in here is just a word piece token." }, { "end": 1662, "start": 1658, "text": " This is a really good example where word piece come in." }, { "end": 1669, "start": 1662, "text": " Because play by itself is a word and I can make play in instead of having an own vector for that." }, { "end": 1672, "start": 1669, "text": " I can divide it into play, which already has a meaning." }, { "end": 1676, "start": 1672, "text": " And presumably play in and play would have similar meanings." }, { "end": 1684, "start": 1676, "text": " So it makes sense to have play as the token singled out here and then ing as a suffix." }, { "end": 1688, "start": 1684, "text": " Also makes sense to have a token for that in my table." }, { "end": 1690, "start": 1688, "text": " And then I simply have these two tokens here." }, { "end": 1697, "start": 1690, "text": " That probably already gives me more information than simply having the word playing." }, { "end": 1703, "start": 1697, "text": " By the way, you should subscribe to PewDiePie." }, { "end": 1706, "start": 1703, "text": " Just FYI." }, { "end": 1710, "start": 1706, "text": " Alright, let's go on." }, { "end": 1714, "start": 1710, "text": " So we do word piece tokenization." }, { "end": 1716, "start": 1714, "text": " We do the masked language model." }, { "end": 1719, "start": 1716, "text": " We do the next sentence prediction pre-training." }, { "end": 1721, "start": 1719, "text": " What do we have now?" }, { "end": 1727, "start": 1721, "text": " We have a model that can really, really well predict some masked words." }, { "end": 1728, "start": 1727, "text": " Now how do we use it?" }, { "end": 1734, "start": 1728, "text": " Now they evaluate on these, I believe it's 11 tasks." }, { "end": 1739, "start": 1734, "text": " 11 different tasks of..." }, { "end": 1741, "start": 1739, "text": " Or is it..." }, { "end": 1742, "start": 1741, "text": " I don't know how many it is." }, { "end": 1744, "start": 1742, "text": " It is a lot with the same model." }, { "end": 1751, "start": 1744, "text": " So this pre-trend model, they now claim, can be fine-tuned to do all of these tasks." }, { "end": 1754, "start": 1751, "text": " And it gets up, it's like state of the art on everyone." }, { "end": 1757, "start": 1754, "text": " It's crazy." }, { "end": 1760, "start": 1757, "text": " So how do they fine-tune it?" }, { "end": 1767, "start": 1760, "text": " So the easiest tasks are the so-called sequence level task." }, { "end": 1774, "start": 1767, "text": " Where you basically have the sequence and you're about to predict one class label for the entire sequence." }, { "end": 1778, "start": 1774, "text": " So here we have the sentence pair classification tasks." 
}, { "end": 1782, "start": 1778, "text": " For example, the task we saw before, the isNext task." }, { "end": 1788, "start": 1782, "text": " There is more sophisticated tasks that you need kind of supervised data for." }, { "end": 1793, "start": 1788, "text": " And so with the supervised data you'd have a class label that you could train on." }, { "end": 1796, "start": 1793, "text": " So what you do is..." }, { "end": 1798, "start": 1796, "text": " Let's look at one of them." }, { "end": 1800, "start": 1798, "text": " M-L-I." }, { "end": 1804, "start": 1800, "text": " They had it up here." }, { "end": 1807, "start": 1804, "text": " Nope." }, { "end": 1808, "start": 1807, "text": " Here." }, { "end": 1811, "start": 1808, "text": " Multi-genre natural language inference." }, { "end": 1814, "start": 1811, "text": " And that's our entailment classification task." }, { "end": 1822, "start": 1814, "text": " So given a pair of sentences, the goal is to predict whether the second sentence is an entailment, contradiction or neutral with respect to the first one." }, { "end": 1828, "start": 1822, "text": " Alright, two sentences and you're about to predict which one of these three labels it is." }, { "end": 1831, "start": 1828, "text": " So you put the two sentences here." }, { "end": 1835, "start": 1831, "text": " Bert can already take two sentences as an input, as we saw." }, { "end": 1847, "start": 1835, "text": " The embeddings are... the A and B embeddings and the position embeddings are left out of the picture here, but they would be added to it." }, { "end": 1850, "start": 1847, "text": " And these would be the embeddings for it." }, { "end": 1855, "start": 1850, "text": " And then you pass this through the Bert model and this is the final layer." }, { "end": 1864, "start": 1855, "text": " And what they do is they simply take now the embedding, the final embedding for this first one corresponding to this start token." }, { "end": 1874, "start": 1864, "text": " And they simply put a single layer of classification, so basically a logistic regression on it." }, { "end": 1877, "start": 1874, "text": " And that's how they then get a class label." }, { "end": 1884, "start": 1877, "text": " So if this is whatever... let's say this is... this gives you here a hidden vector of 512 dimensions." }, { "end": 1886, "start": 1884, "text": " 512." }, { "end": 1889, "start": 1886, "text": " And you have three labels to output here." }, { "end": 1890, "start": 1889, "text": " One, two, three." }, { "end": 1900, "start": 1890, "text": " You simply need a matrix that's 512 by 3 of size." }, { "end": 1907, "start": 1900, "text": " And these are the weights that you would then have to train in addition to Bert." }, { "end": 1913, "start": 1907, "text": " So Bert is pre-trained and you have to simply only now learn these weights." }, { "end": 1920, "start": 1913, "text": " Of course they also kind of fine-tune the entire Bert model, but that's really fine-tuning." }, { "end": 1925, "start": 1920, "text": " The only thing you have to learn from scratch is this, these weights here." }, { "end": 1931, "start": 1925, "text": " That's pretty... first of all it's pretty neat because you can be very quick at learning new tasks." }, { "end": 1939, "start": 1931, "text": " Because you simply start from the pre-trained Bert and then you go and learn a single class for a layer on top." }, { "end": 1946, "start": 1939, "text": " And astonishingly this works extremely well for these tasks." 
}, { "end": 1951, "start": 1946, "text": " A bit of a more challenging task is this here." }, { "end": 1956, "start": 1951, "text": " Squat is a question answering task." }, { "end": 1959, "start": 1956, "text": " And we're going to jump down here where they explain the task." }, { "end": 1964, "start": 1959, "text": " So you have an input question." }, { "end": 1965, "start": 1964, "text": " Oops." }, { "end": 1973, "start": 1965, "text": " You have an input question and the input question is where do water droplets collide with ice crystals to form precipitation?" }, { "end": 1979, "start": 1973, "text": " And you have an input paragraph which is kind of a paragraph from Wikipedia page." }, { "end": 1984, "start": 1979, "text": " And you know that the answer is somewhere in this paragraph, right?" }, { "end": 1988, "start": 1984, "text": " The data set is constructed such that the answer is in the paragraph." }, { "end": 1999, "start": 1988, "text": " So the input paragraph reads, precipitation forms as smaller droplets coalesce via collision with other raindrops or ice crystals within a cloud." }, { "end": 2008, "start": 1999, "text": " So the question is where do water droplets collide to form precipitation?" }, { "end": 2011, "start": 2008, "text": " The answer here is within a cloud." }, { "end": 2013, "start": 2011, "text": " So that's this thing here." }, { "end": 2018, "start": 2013, "text": " So usually what squad models do is they predict the span." }, { "end": 2022, "start": 2018, "text": " They predict where's the start of the answer and where's the end of the answer." }, { "end": 2027, "start": 2022, "text": " That's also what kind of BERT's trained to do." }, { "end": 2036, "start": 2027, "text": " So in order to do this, what you do is again, you already have the ability to input two sequences." }, { "end": 2042, "start": 2036, "text": " So we've trained with two sentences, but here they simply say, oh well, the first sequence is going to be the question." }, { "end": 2047, "start": 2042, "text": " Our second sequence is going to be the entire paragraph from Wikipedia." }, { "end": 2063, "start": 2047, "text": " And then for each output, for the output of each token, remember there's as many outputs as there's inputs because the transformer will always transform to the same length of sequence." }, { "end": 2069, "start": 2063, "text": " For each token in the output, we classify it." }, { "end": 2079, "start": 2069, "text": " Is this token the start token or is this token the end token or is this token none of all?" }, { "end": 2086, "start": 2079, "text": " Now, what they do effectively is that here each one outputs, each one is a vector." }, { "end": 2098, "start": 2086, "text": " And they, as we said at the beginning of finding out which one's the subject, now here we have two queries, namely query one, which is, is this the start?" }, { "end": 2103, "start": 2098, "text": " Let's call it query S and query E is, is this the end token?" }, { "end": 2112, "start": 2103, "text": " So these are two queries and I'm going to just produce, compute the inner product of each query with each of these outputs." }, { "end": 2119, "start": 2112, "text": " And over my sequence here, this is going to give me a distribution." }, { "end": 2127, "start": 2119, "text": " So start for start, maybe this token is not much and this token is a lot and so on." 
}, { "end": 2138, "start": 2127, "text": " There's five tokens and for the end, not so much, not so probable, not so probable, very probable, not so probable." }, { "end": 2147, "start": 2138, "text": " So what you get, going to get is from these inner products is a distribution over which one's the start and which one's the end." }, { "end": 2152, "start": 2147, "text": " And you're going to say, okay, this one's probably the start and this one's probably the end." }, { "end": 2161, "start": 2152, "text": " So that's how you predict the span. And again, what you have to ultimately learn is these, these queries here." }, { "end": 2166, "start": 2161, "text": " And so not that much." }, { "end": 2177, "start": 2166, "text": " And this is named entity recognition and named entity recognition, you have a sentence and you're supposed to recognize named entities." }, { "end": 2187, "start": 2177, "text": " Like up here, we saw subscribe to PewDiePie and the named entity would be PewDiePie." }, { "end": 2193, "start": 2187, "text": " Right. This is a name and you're supposed to recognize that this is a name." }, { "end": 2201, "start": 2193, "text": " And they do it the same, same way that they do the squat basically or a similar way." }, { "end": 2214, "start": 2201, "text": " Sorry. They basically for each of the outputs here, they simply classify whether or not it's part of an entity or not." }, { "end": 2223, "start": 2214, "text": " So what they have to do is they have to simply train if they also have different labels for which kind of entity is this." }, { "end": 2228, "start": 2223, "text": " This is like a person and this is this is no entity." }, { "end": 2236, "start": 2228, "text": " So if you have 10 of the labels, then each for each thing, you would classify it into one of 10 classes." }, { "end": 2243, "start": 2236, "text": " You need a classifier of input size versus number of classes." }, { "end": 2250, "start": 2243, "text": " That's all you have to train in addition to pre to fine tuning BERT itself." }, { "end": 2259, "start": 2250, "text": " All right. So they kind of evaluate on all of these tasks. They get super duper numbers on all of them here." }, { "end": 2264, "start": 2259, "text": " BERT large wins on pretty much everything." }, { "end": 2270, "start": 2264, "text": " And this model is big. Just saying." }, { "end": 2279, "start": 2270, "text": " And they trained it on TPUs, which is available in kind of Google Cloud infrastructure." }, { "end": 2285, "start": 2279, "text": " So far, it's trained it on a lot of data." }, { "end": 2292, "start": 2285, "text": " So to to away, it's it's kind of expected that you would outperform," }, { "end": 2297, "start": 2292, "text": " but it's very surprising that you outperform everyone else by this much." }, { "end": 2308, "start": 2297, "text": " And they've done a lot of kind of ablation studies where they show that it's really due to the fact that they do this left and right context." }, { "end": 2320, "start": 2308, "text": " They take into account the left and right context of a given token when doing the attention that it's that that's why it's better." }, { "end": 2332, "start": 2320, "text": " So here, for example, they compare the BERT base model and they say, OK, what if we don't do the NSP, the next sentence prediction task?" }, { "end": 2338, "start": 2332, "text": " Then you can see the numbers, they already kind of they drop on these tasks." 
}, { "end": 2349, "start": 2338, "text": " And what if we then additionally do only left to right training and the numbers, they drop pretty seriously again, you see, sometimes here, for example," }, { "end": 2353, "start": 2349, "text": " you see a pretty serious drop in the number also here." }, { "end": 2365, "start": 2353, "text": " So there really seems to be a real value in doing this kind of left and right context attention." }, { "end": 2369, "start": 2365, "text": " So it's not just about the model size and the amount of data." }, { "end": 2371, "start": 2369, "text": " That's basically what they show here." }, { "end": 2378, "start": 2371, "text": " And it's really cool that the paper actually shows this, because usually people have an idea and they throw a lot more resources at it and they're better." }, { "end": 2383, "start": 2378, "text": " You'd never know why. And this is pretty cool that they actually show." }, { "end": 2388, "start": 2383, "text": " All right. So this is all I have to say about this paper." }, { "end": 2392, "start": 2388, "text": " Check it out. The models are here pre trained." }, { "end": 2397, "start": 2392, "text": " You can actually download them. You can fine tune in for yourself, for your own task." }, { "end": 2401, "start": 2397, "text": " And they're pretty, pretty powerful." }, { "end": 2408, "start": 2401, "text": " There are smaller models for if you don't have a TPU that are also pre trained." }, { "end": 2410, "start": 2408, "text": " So check these out as well." }, { "end": 2438, "start": 2410, "text": " And thanks a lot for listening." } ]
nPB0ppcnzZA
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
What’s in a name? The need to nip NIPS
[ "Science & Technology" ]
[ "NIPS", "NeurIPS", "nips 2018", "neurips 2018", "nips name change", "machine learning", "deep learning", "community", "sexism", "diversity", "inclusion", "bias", "gender", "women", "tech", "women in tech", "women in stem", "majority vote", "minorities", "statistics", "computer science", "harassment" ]
http://tensorlab.cms.caltech.edu/users/anima/pubs/NIPS_Name_Debate.pdf Abstract: There has been substantial recent controversy surrounding the use of the acronym "NIPS" for the Neural Information Processing Systems conference, stemming from the fact that the word "nips" is common slang for nipples, and has historically been used as a racial slur targeting people of Japanese origin. Here, we outline the ways in which this acronym has contributed to a hostile environment towards women in machine learning. We argue that an October 2018 decision by the Neural Information Processing Systems board not to change the name of the conference was based on a misunderstanding of the issues that women face in STEM fields, a poorly-designed survey, and a faulty statistical analysis. We applaud the board for a more recent announcement of the new abbreviation "NeurIPS", and emphasize that this name change is an important first step towards the creation of a more inclusive environment in machine learning. Authors: Daniela M. Witten, Elana J. Fertig, Animashree Anandkumar, Jeff Dean References: https://medium.com/@kristianlum/statistics-we-have-a-problem-304638dc5de5 https://nips.cc/Conferences/2018/News https://twitter.com/AnimaAnandkumar/status/1055278000588021762 https://www.change.org/p/members-of-nips-board-protestnips-nips-acronym-encourages-sexism-and-is-a-slur-change-the-name https://twitter.com/AnimaAnandkumar/status/1056971852248018944
Hello and welcome. Today we're going to look at What's in a Name? The Need to Nip NIPS by Daniela Witten, Elana Fertig, Animashree Anandkumar and Jeff Dean. This is a bit of a special paper, as it's not about an academic topic. The paper in fact is about the change of name, or rather the change in acronym, for the conference Neural Information Processing Systems, previously abbreviated NIPS, but now for the first year hosted under the acronym NeurIPS. The people on the paper are not the organizers of the conference, they are advocates for the name change, and the paper basically outlines their arguments and gives a bit of a description of what happened. They're also pretty big names in the community, so it should be interesting to see what they have to say. The paper is pretty short, it's three parts, three pages, and we're going to go through it, so let's jump into it. So I have it over here. Alright, so the first part of the paper, called What's All the Fuss About?, basically describes why a name change was necessary from their perspective. So they say machine learning, like the rest of the STEM fields, suffers from severe gender imbalance, low retention rates for women and so on. They also describe the MeToo movement, the increased awareness of sexual harassment faced by many female researchers, the pervasiveness of sexual harassment at computational conferences, and they reference an article here. I want to kind of show you this article. It's this article here. So if you haven't seen this yet, I encourage you to read it. It's pretty horrifying to read, but it gives you an idea of what people talk about when they say sexual harassment is a problem and is pervasive at conferences and so on. So I don't want to go into this specifically, just go ahead, read it, and see what people are talking about. I think it's important context to understand where people are coming from. They go on to say that, however, more subtle acts of gender harassment, as defined in this report, which include sexist hostility, crude behavior and so on, have gotten less public attention. Nonetheless, gender harassment is extremely pervasive and is a direct contributor to the challenges faced by women in the STEM fields. In this article we argue that NIPS, the former acronym of the Neural Information Processing Systems conference, constituted gender harassment towards women. So that's what their argument is basically about: the acronym had its part in gender harassment towards women, and basically led to an environment where women could not feel comfortable at this conference. So here's their description. In popular slang the word NIPS is an abbreviation for nipples. Furthermore, it has historically been used as a racial slur targeting people of Japanese origin, but we'll not go into this deeper, because that's kind of a historic use of the word. The current use of the word is in fact the slang for nipples, and so we'll focus on that. They say at first glance the fact that a major machine learning conference shared its name with this slang is an unfortunate but unimportant coincidence. And it really is a coincidence. I think the conference name has been around for longer than the slang word has been popular, so it really is a coincidence. Many other conferences have the same kind of coincidence, like COLT for example. Maybe actually that's even less of a coincidence than here.
They say, in fact, one might hope that members of the machine learning community are sufficiently mature that the conference's name is unimportant. That's basically what everyone would hope. Maybe people don't even notice, and if they notice, maybe they have a two-second "oh, that's, you know, the other word, haha", but then we basically just go on with our lives and no one cares too much. So that's kind of the ideal scenario, and they acknowledge that here. But importantly, they say: unfortunately, this appears not to be the case. They detail a few examples here: at the 2017 conference Elon Musk made inappropriate jokes about the acronym, participants wore lewd t-shirts. I think one said "my NIPS are NP-hard", which is kind of a double computer science joke, I guess. There was a pre-conference event named a word I probably can't say out loud without getting some sort of strike. You can clearly see that even though the original name is coincidental, and one would hope that people would just shrug it off and be adult about it, there have been jokes, there have been t-shirts made, and you can say the name collision is unintended, but I think this word here is very intended. So I think the main argument here, or one of the main arguments, is that this, first of all, creates an environment where certain people don't feel comfortable, it creates kind of a sexualized environment. Second of all, in the broader sense, it's just unprofessional as a community, especially since the community is booming and we want to represent machine learning to the wider world. One can say, okay, it's just unprofessional that we kind of intertwine these things, it doesn't make a good impression. They say furthermore, reminders of the unfortunate acronym are everywhere. Online searches for the acronym lead to not-safe-for-work content, the hashtag NIPS is devoted to pornography, and if you misspell the conference website you get to an adult site. And I think this further goes into the argument that it's just an unprofessional appearance towards the outside. It's unfortunate, the conference has been here longer, but still there's a need to do something about it, and I largely agree that these are good arguments to make for a change of name. This paragraph down here, we'll go into that later. It's not very connected to the arguments made here, it's more connected to what's been happening, so we'll go into that later. People have been circulating these arguments and calling for a name change for a while, and then the board of the conference, the NIPS board, made a survey, surveying the attendees of the last five years' conferences on whether or not the conference should change its name. The next section is dedicated to how the survey turned out and what the response of the board was. So actually let's first go to the decision by the board. So here is the press release. This is a press release after the survey results had been collected. They said our survey was returned by about 2200 people who, as I said, have attended NIPS in the last five years. Of the male respondents about 28% are in favor of the conference name change, of the female respondents about 44% are in favor of a name change, 40% prefer the existing name and 16% expressed no preference. In fact, let's go look at the detailed results, which they have down here.
So you can see overall there is a big a big slant towards not agree. So negative 2 is strongly disagree with the name change while positive 2 is strongly agree. So you can see there's a big slant towards the not agree. If you split this by gender of respondents then you can see the basically the male distribution is that slant while the female distribution is a bit different as you can see here. The first thing it's mostly towards the extremes. So there are more people strongly saying something than non-strongly saying something to either side. And the second of all it seems very divided and very evenly divided. So in fact if you look at the numbers if you count the disagrees and agrees you'll find there's a slight majority in the agrees. There is a slight majority in the disagrees if you only consider the strongs. But ultimately these numbers are pretty close so that there's people on either side feeling strongly and there's about in this survey about as many on either side. So that's basically the outcome of this. Here I find very interesting some quotes from respondents. So you had the opportunity to put quotes to put like a comment and these are quoted from these comments. So they say for example this thanks for considering a name change. I'm not personally bothered by the current name but I think the gesture will send a much-needed inclusive vibe in the right direction. One person says if you were up to me I'd call off this nice but symbolic gesture. Use whatever time money and energy to make actual changes. Then someone says please please please change the name it is sexist and racist slur. I'm embarrassed every time I have to say the name of the conference. This feeds into the unprofessionalism argument. The next one I find very interesting. It says as a woman I find it offensive that the board is seriously considering changing the name of the meeting because of an adolescent reference to a woman's body. From my point of view it shows that the board does not see me as an equal member of the community but as a woman first and a scientist second. This is extremely interesting. So this is one of the people who was a female respondent and said strongly disagree with the name change or disagree with the name change. I mean I can guess. So we've only heard so far that the name or the acronym is offensive to women but here we have a woman saying that the consideration to change the acronym is actually offensive to her. That's very special and understandable. I can understand why that happens. I can understand the argument made here. This woman feels like okay it shows me that basically my gender is important and not really my being scientist. It's an argument. The next one goes into the same direction. It says I'm a woman. I've experienced being harassed by male academics and I would like this problem to be discussed and addressed but not in this frankly almost offensive way. Another person saying basically that's changing the name is almost offensive and it's not the right way to go to achieve these results. There's another one saying I'm in favor of the name change but this is cosmetic. So you have basically people coming from all angles giving their opinions and you can clearly see why there is especially in the female respondent group why there is a divide. So the board overall said the following. The board overall said the following. After extensive discussions the NIPS board has decided not to change the name of the conference for now. 
The poll itself did not yield a clear consensus on a name change or a well-regarded alternative name. Further they state instead we ask the community support in implementing concrete steps to improve the inclusiveness of the conference. So these are described down here. They have a number of changes to make the conference basically more inclusive. So they basically said okay so the name change survey was inconclusive and they clearly say whatever we do here regardless of which decision we take we're failing to accommodate the opinions about half the women in the community. Which is true this is clearly what you can see from the results from the quotes. So basically what they say is we'll not change the conference name for now. We'll implement these steps because what they I can guess what they felt was okay even the people against the name change were in support of making the conference more inclusive. So they basically say okay we do these things we strengthen their code of conduct. We have two inclusion diversity chairs. We have an inclusion town hall. We have childcare support. Gender-inclusive restrooms and so on and so on. Mentoring breakfasts for women and other minorities. So they take these steps concretely. They say this is what we do and even further if you look at their page on diversity and inclusion which I have here. They say here on the top in addition to hosting diversity related event the conference also making consider structural changes include a new code of conduct we've already seen and in-depth discussion of the potential of changing the name of the conference. So in total what they're saying is we've done this poll. It came back inconclusive which you've I think has been clearly demonstrated. We'll not change the name of the conference for now and we'll do all of these other things right down there and at the conference we'll hold a meeting and discuss the name change so we could maybe potentially change it in upcoming years. I think this is a really sensible decision by the board. I mean given this data given all of that this is probably the most sensible decision. Let's take concrete steps. The name change seems to be you know debatable so let's actually debate it at the conference with the actual community. That was the basically result of the poll. Let's now go back to what the paper has to say about this. Here's the paper again and they say in order to collect data about the machine learning community's feelings about the conference name the conference board sent out a survey to people who have attended the conference during the past five years. However serving conference attendees results in a very biased sample of a much larger community of potential machine learning researchers. Bias arises due to the fact that some people who are made uncomfortable by the name or by other aspects of the machine learning culture may have decided not to enter or to remain in the or not to remain in the field have chosen not to attend the conference. So basically you're saying well if you only ask this one group of people right then this other group of people you know doesn't have a chance to make their voice heard and there is basically bias because in this other group of people the people who have not attended the conference they would would have a severely different opinion from the people who have attended the conference. 
So first of all I think this can be a valid point here of course all the ways if you ask one group of people and exclude another one you there's there's if the if the group you ask and the target group which here it's really unclear what it is I guess it's the machine learning community considering going to the conference if those don't overlap then you you will introduce some sort of bias and they say okay bias could come from the fact you know some people who actually are affected by these problems of which this name is one they may have you know not attended the conference because they may have left the field because the the gender harassment is so pervasive and they just didn't didn't stay and so on. So I think this can be a good point but the problem I have with it here is that it's simply stated without anything it's simply said okay there is bias, bias arises and my question would be how much is that bias of any data like any data on this you can't just criticize these that the survey for being biased and and then not provide actual data like how many people are there who are made uncomfortable by the name or have left the field in who have left the field because of these things and is it really viable to to count them in I guess okay we can argue it is but how would they have responded to this we've clearly seen that a lot of affected people that even have experienced harassment are not in favor of the name change so in this case I would really like to see some data on how much this bias is right and I cannot also say it's not it's not that bad of a decision to what the board did to send the survey to the last five years attendees I think is a very sensible choice if you want to gather the community's feelings towards these kind of things I mean you you can't just ask the entire world because the entire world is not the machine learning community so I think the this is a very sensible decision to ask last five years attendees and if you have real evidence that this causes a notifiable like a significant bias then we could potentially correct for that bias but without any data on that I think the the asking last five years participants was completely reasonable and one of I don't really see how you can do a much better job without much much more manual work and I want to make this point a bit clearer on how hard it actually is to do that by pointing to the response to this so here is a tweet thread by one of the authors of this paper after the conference decision came out she basically tweeted out this protest nips I am starting this new hashtag please retweet if you're in support of the next conference changing its name so basically kind of launching a a Twitter campaign a Twitter hashtag under this to come you know get into a conversation with people about this people could express their support she also that was a misclick she also here made a change dot org petition to change the name so a petition basically petition is here the text of the petition basically says something similar to the to the what we've already seen including there is a the criticism of the survey and as you can see here about 2,000 people have signed it so I mean a Twitter hashtag is all good you know you can do that a petition is all good you can do that but it's a bit ironic because a change that org petition literally anyone can sign this and in addition to that there's only one option you can only say yes you can't even say no right so and even more who's gonna see the change that org petition it's gonna 
be the social media followers of these people right so basically you have now a you have it now what's basically a survey of the social media network of people in favor of changing the name where there's only one option to respond I I find it and so I've gone through here the people who actually publicly associate their name give a reason for signing a lot of these they you know they give some argument why they've signed the petition but I've tried searching these people for any sort of academic track record and in my sample I've come up with between 10 and 20 percent of people who somehow have an academic track record so this is I mean certainly a valid thing to make your voice heard and to show your numbers and but I mean look at this there's a bot signing twice hello Jack Nelson and Richard Chi very nice but so basically I'm not here to criticize petitions but what I want to say is you can't like criticize this this poll so hard for being biased and then launching basically an own poll that's even more biased and even more non-representative of the community to me that's that's kind of ironic and just goes to show how hard this is and my argument would be it's actually not that unsensible of a decision of the board the way they did it and if you have again if you have data to actually quantify the bias here then it's viable to go and correct for that all right so to they go on to analyze the survey results conference board simply noted that of the 294 women surveyed the number who strongly support or support the name change is comparable to the number of women who are strongly opposed or opposed however this analysis implicitly assumes that one person's feeling of discomfort or marginalization as a result of the name should be given the same weight as another person's preference for the status quo this amounts to giving the same way to false positives and false negatives of course we learn in an introductory statistics course that false positives and false negatives should be assigned weights dependent on context in this context we feel that a much greater weight should be given to the views of a person who feels marginalized as a result of the name so up here I find this a bit strange they say this amounts to giving the same way to false positives and false negatives to me the false is here a bit confusing because it seems to me it's it's simply giving the same weight to negatives and positives there's I don't think there's a need to dress this up in statistical lingo here it simply we give the same weight to people who responded positively and to people who responded negatively I think that's that's it there's no false of course we learn in a truck see this is class that false positives and false negatives should be assigned weights dependent on context in this context we feel that a much greater weight should be given to the views of person who feels marginalized as a result of the name I would I would say to this it's the problem for me it's these are this is one of the things that where you at you read it first and you say like oh yeah this makes sense but first of all it's framed extremely one-sided it's framed as all the people who are for the name change like they they feel discomforted they feel marginalized and the people who are against the name change they simply and here specifically they they they talk about the women group so in argument they're all affected the people against it simply prefer the status quo but we've clearly seen in the in the in the press release and 
we'll go over to that now these quotes here we've clearly seen that the the offense and the marginalization happens on both sides so here this as a woman I find it offensive that the board is considering changing the name it shows that the board does not see me as an equal member of the community but as a woman first and the scientists second I mean this is almost a textbook definition of marginalization and this is clearly happening on the other side as well so I think the framing here is extremely dishonest and one-sided and there is given basically the the side that we just seen in this quote is given absolutely no not even a mention that it exists it's simply framed as this side is marginalized and oppressed and discomforted and the other side simply prefers the status quo but we've clearly seen that yeah it's almost a this fits exactly this definition it's just one person's feeling or discomfort or marginalization as a result of the name it's just as a result of the name change second of all I think the the bigger problem and this goes into the statement down here to state this last point more explicitly an issue adversely affecting the minority of participants should not be decided by a majority vote again something at first you say oh yeah that makes sense but if you think about it this is a really really outrageous statement and the reason is it's it's it's outrageous is if the mud if it's not majority vote if it's not one person one vote then someone has to decide who gets to vote and who doesn't and more so specifically here someone basically needs to decide who should be given what weight in the vote right you need someone to decide this and here you can say well it's easy it's just the the women right because they're affected I this but they go further they say well it's the women who feel discomforted and marginalized who should be given more weight than the ones who simply prefer the status quo but then you have to have someone assessing whether someone is really marginalized and discomforted or simply prefers the status quo and it's not like an environment where there is kind of a sexist undertone isn't also discomforting or can't also be discomforting to men to men of any sort or people of of any sort of gender it's just not clear that the fact that people should be given different weight in in crafting an opinion I mean this this can be true if you have like some clear area of expertise but in this case it's really unclear and the fact is if it's not majority vote you need someone deciding the weight and the someone deciding the weights automatically decides on the outcome of the vote and then why do you need a vote in the first place basically up here they say yeah we feel the great weights should be aligned like this and down here there is no more we feel it's be an issue at worst affecting the minority of participants should not be decided by majority vote they're basically calling for a dictatorship in this case and I'm gonna guess like everyone has the opinion the dictatorship would be an awesome idea if the dictator were me right that's that's what everyone thinks of course and that's basically the argument made here but it's not it's not true and there's some really really disturbing implicit things in here and maybe I want to quickly go over how I think a democratic decision works so imagine you have a person and the person has decision to make for or against in this case the name change right and the person must decide on one of these two things on a let's say on 
a continuous scale but it doesn't matter what what this what this stuff up here basically implicitly assumes is that the person looks at themselves and they think well am I personally discomforted or marginalized by the name or the climate it creates no then I'm obviously against the name change because it doesn't help me or another person go am I personally affected yes well I feel discomforted or marginalized well then I'm obviously for a name change so the basic assumption here is that people simply vote purely their own egotistical interests and that's that's it so basically if you're in one of these minorities then you'll vote for the name change because it affects you which we've already seen is not it's not a given that people vote that way and if you're not in this then you know you you'd vote against but you're not affected so your vote shouldn't count it's completely untrue what people do especially smart people and I believe the machine learning community consists largely of these what they do is they'll make a list of arguments argument one argument two argument three argument for everyone has the same arguments everyone's hurt the same arguments if not then maybe there's some work to do in actually getting arguments to people but that's not the same as weighing the people differently you get the arguments to the people and then you weigh each of them equally why because what every person does is they say okay argument one is maybe it's unprofessional right name is unprofessional alright how important is that to me give it a weight weight one cool that's really important to me I'll give it a big weight argument two some people feel really discomfort like discomforted if you're marginalized by the name creates a bad environment for them how much weight am I gonna give to that right so people can actually consider other people's feelings and other people's problems and decide on what's the best also for them in their own mind so they give it a weight two and then there's maybe two arguments against some given these weight three weight four at the end what you have is you have argument I you will sum it up by the weights W I J you will sum it up over all people so basically now and this will give you like a final number a which is either positive or negative if it's positive you do the name change if it's negative you don't do the name change if you do this over all people what you've basically done is you have just determined these weightings here by a democratic process you've crowd sourced the weighting this is exactly what these people say up here right we feel we feel that you're not false false positives false we feel that positives and negatives should be assigned weights dependent on context so the positive and negative arguments in this case are assigned weights dependent on context but the weights are crowd sourced to the community right and each person this who participates in that each person who participates is one more brain power in a complicated decision that no one basically no one has the authority just to just decide for themselves so these people are calling for different weighting this is the way to do it the democratic majority vote is the exact way to determine these weights what these people basically are no no no no no we should determine the weights we who know I'm a bit corny here but this is basically it's still it's two alternatives either you do democratic process one person one brain one vote and that will give you a crowd sourced crowd sourced true 
weighting of the arguments what the community feels or someone needs to decide some one needs to side by force basically and that's a dictatorship so these are the choices you have and clearly now you can maybe understand why I say this is an outrageous statement because to me the dictatorship option is not an option note that I'm not saying that democracy can never be wrong or the majority can never be wrong but in fact it's the best system there is can be wrong but anything else will undoubtedly go more wrong so that's my point here alright so that was a maybe a bit ranty but let's go on a false choice and a minimization of a real issue so they go on to say what they think of the decision that the board made in response to this so up was how they analyzed the poll and now it's the decision in announcing their decision not to change the conference name conference board expressed commitment to implement concrete steps to improve the inclusiveness of the conference and they list them here and they say we sincerely applaud the conference board for these efforts okay I yeah I think the community feels like that as well however the wording of the decision implied the need to choose between changing the name of the conference and taking concrete steps to improve its inclusiveness I don't see that at all say this was a false choice there's no reason that the board could not do both yes there's no reason that they couldn't do both and I believe we've read this together before I don't think the board ever said that there was a choice between one or the other I think they've said very much the opposite let's go back I think what they mean here is the word instead so here they say we won't change the name and then here's they say instead we ask for the community support and implementing creed steps I think this this must be it because I don't really see any other way you would ever think that and the reason is this here they say will not change the name of the conference for now on another page they say it will discuss the name change at the conference and then here the instead I think what is meant is instead what we will do right now is these things we'll discuss about the name change but what we will do right now which was basically not the the real problem in the first place the real issue raised was the name so instead of that issue we'll do these other things which we feel the community wants I think that's the I think there's no I think everyone reading this comes to the same conclusion after after reading that but so I really don't see how you you can say that this is kind of presented as an either or by the board I don't think that at all and but you decide for yourself I believe the real real real crocs here is the for now and the promise to discuss at the conference which if you can see here in the paper is never ever ever touched right this they make it basically seem that the board has decided to not change the name and that's it which is completely wrong they've clearly stated their openness to a name change they want to discuss it it was just inconclusive so they want to basically not do anything rash and then half the community is against it anyway so they want to discuss it I to say that this is the basically that that the wording implied the need to choose I don't see that um but you know you decide for yourselves the board suggested a name change would only be symbolic and so on would have no real consequences so that this this these are some of the arguments basically made in the 
quotes as well. But you know, the fact that the name change would only be symbolic and so on, these are all things you could actually discuss at this conference meeting. You could even correct for your poll, right? You could invite people who have left the community to represent those, you could invite new potential researchers, you could give everyone their voice and then actually listen to all of them. I think that's a very sensible decision by the board, and I think this is misrepresented here. Lastly, another argument, though not explicitly mentioned: a number of machine learning researchers told us that changing the name of the conference would lead to too much confusion in the community; while we understand, we respectfully do not share it. I mean, this is basically an argument against the name change. I think it's also a point worthy of discussion, right? They say they respectfully do not share this point. Okay, they don't share it, other people do, it's a point of discussion, you could actually discuss it at the conference. But I actually agree with the authors here: I think changing the name will not have a big impact on the recognizability of the conference, especially now. Down here we'll actually get into what actually happened. In November, in response to extensive public backlash, the conference board announced a change of the official conference acronym to NeurIPS. They say they are pleased, this provides a reasonable compromise. So in my opinion, as far as solutions go, this is a good solution, right? The NeurIPS acronym, I think it's cool. You don't have to change the name of the conference itself, you simply change the acronym, which, you know, was the reported problem in the first place. I think people will still recognize the old NIPS acronym in the new conference name, it will be clear that it's the same thing, and I think this is a very good new name and people will get used to it pretty quickly. Also, saying NeurIPS rolls off the tongue easily, so as far as solutions go, I like it. Further they say, however, the work for the conference board is far from done, we encourage the board to continue its efforts, and so on. So they say, okay, you have to do more than just change the name. They say together these steps will help ensure that the NeurIPS conference retains its place in the forefront of machine learning research while also creating a welcoming environment for women and members of other underrepresented groups. We all hope that. To me the problem is a bit how this went down. If we go back and look at the actual press release of the name change, they say here: dear members of the Neural Information Processing Systems community, something remarkable has happened in our community, the name NeurIPS has sprung up organically as an alternative acronym, and we're delighted to see it being adopted. Indeed, one forward-thinking member of the community purchased neurips.com, describing its purpose as hosting conference content under a different acronym until the board catches up. We've caught up. We were considering alternative acronyms when the community support for NeurIPS became apparent. We ask all attendees to respect this solution from the community and use the new acronym. So basically they've rebranded the entire conference about a month before the actual meeting, asked all sponsors, all invited companies, asked all
invited papers to rebrand the acronym. To me the wording here is a bit funny: "something remarkable has happened in our community", the name "has sprung up organically" and now we'll just adopt it. What actually happened seems much less like the fairy tale described here and much more like there's a mob with pitchforks around your house and this is the first straw you can grab to make them calm down. Note also that some companies had begun pulling funding for the conference, so I think this was really backed much more by force, by what the paper itself calls "extensive public backlash", so loud screaming basically, rather than a name that sprung up organically and was simply adopted. It seems much more forceful to me. It would still have been a viable path, in fact the most valuable path, to actually wait for the conference, have that discussion there, and then, if the name NeurIPS were indeed presented as a good alternative and people were fine with it, make the name change for next year. I think that would have been a good alternative. My fear now is that this has been extremely rash and extremely forceful, as I've said, accompanied by withdrawal of funding, and I believe such things usually provoke a backlash, which is really something I wouldn't look forward to. So I hope that the paragraph down here is true and that we will actually see a more welcoming environment for everyone, but I believe things like this in society sometimes tend to have the very opposite effect of what was intended, so I hope this does not produce a backlash. I think having the actual discussion and doing things non-rashly would have done much more in the direction of preventing such a backlash.

So this is the end of the paper. To recap, they basically say the acronym was inappropriate, which I agree with. They say the survey was bad, which I could believe if there were data for it. They say that an issue adversely affecting a minority of participants should not be decided by majority vote, which I absolutely disagree with. And they say the board basically presented this as an either-or decision, which I believe is not true and is misrepresenting the situation, though maybe I've missed something; that's always possible.

Lastly I want to get to this paragraph: in recent months a number of women, including some of the authors of this article, who publicly expressed support for a change of the conference name have been relentlessly trolled, harassed, verbally abused and even physically threatened on Twitter, Reddit and other online forums. Much of this harassment, they say, has been anonymous and has typically had an extremely gendered tone. Furthermore, some students have reached out to the authors lamenting the fact that they felt unable to openly express their support for renaming the conference due to fear of bullying or retaliation by faculty advisors or others in positions of power. This I believe is really bad. The fact that people can't speak out about something like this without being bullied or harassed, or having to fear for their careers, is bad, and I would really discourage everyone from engaging in such behavior. Verbal abuse, physical threats: to one point you can say, all right, if you've been on the internet for longer than a week and have had any sort of serious discussion there, this has probably happened to you, but you can also say that doesn't make it right. So I believe it's really important to separate harassment from actual disagreement and criticism: please engage in the latter, and do not engage in the former.

My problem with this paragraph is that, again, it's very one-sided. It states that some students reached out to the authors lamenting that they felt unable to openly express their support for renaming the conference due to fear of bullying or retaliation by faculty advisors or others in positions of power. I'm going to say this probably happens on both sides, one could argue on which side it happens more, but it very much happens on both sides of this issue, and that is a real shame for both sides. I think anyone should be able to express their opinion. To demonstrate that, I'm going to show another Twitter thread by one of the authors of this paper, a thread where she posts screenshots of conversations, basically people reaching out to her and saying exactly that: I have trouble sharing my opinion, I get mocked for my opinion, I can't express it publicly because I fear repercussions from my faculty, and so on. But there's also this one, where a person wrote an email to the author saying they disagree with her. I've read this email, and I don't agree with the arguments made in it, but I can say that it is not verbal abuse, it is not a personal attack, and it is not physically threatening. It's actually quite respectful disagreement; the person goes to some length to say how respectfully it is meant and that it is intended as a disagreement on factual terms. They also say that they want to remain anonymous; you can see it at the very bottom, for example: I haven't done too much to anonymize myself, but I ask you to respect my wishes of remaining anonymous, don't try to figure out who I am. Further up they state that they want to remain anonymous because they fear for their later career, they fear a backlash: they wish to remain anonymous as they are early in their career and someday the two of them may work together. So basically they say: I disagree, here is why I disagree, and I wish to remain anonymous because I fear for my career. This is very much a case of someone feeling unable to openly express their opposition, in this case, to renaming the conference due to fear of bullying or retaliation by faculty advisors or others in positions of power. The author here is obviously a person in a position of power, a very famous senior researcher, and this person basically says: I'm afraid, and that's why I'm anonymous. The way the author responded, as you can read, is "what an anonymous coward, of course I will do everything to guess you". It's difficult to put this off as harmless. I don't know exactly how it was meant, but "I will do everything to guess you" at the very least means she will try to figure out who that person is. She doesn't go as far as saying that she will then remember that name for any future occasion, or share it, or whatnot, but you certainly can't argue that this isn't a real deterrent for other people to voice their opinion even anonymously. If this person announces "I will do everything to guess you", to me that shows that the fear we discussed here is very much present on both sides. And it's absolutely not okay if either side reacts with retaliation, or even with the possibility of retaliation. I believe everyone should be able to state their opinion. I really respect everyone here; even these authors clearly put in a lot of effort and took a lot of beating, they say they've been relentlessly trolled, harassed, verbally abused, even physically threatened, which is just really bad, and I have a lot of respect for them stating their opinions anyway. I think everyone should be able to do that without these things happening. So to everyone watching, I encourage you not to engage in these things, and that alone will probably make the environment much more inclusive and nice for everybody, regardless of affiliation. So that was it for me for this paper. It's a bit longer, it's a bit ranty. If you agree or disagree, let me know in the comments, and other than that, have a nice week, weekend, whatever you do. Bye.
[ { "end": 4.86, "start": 0, "text": " Hello and welcome. Today we're going to look at what's in a name, the need to nip" }, { "end": 10.68, "start": 4.86, "text": " NIPS by Daniela Witten, Alina Oferdig, Anima Shri Anand Kumar and Jeff Dean." }, { "end": 17.080000000000002, "start": 10.68, "text": " This is a bit of a special paper as it's not an academic topic. The paper in fact" }, { "end": 22.52, "start": 17.080000000000002, "text": " is about the change of name or rather change in acronym for the conference" }, { "end": 28.48, "start": 22.52, "text": " Neural Information Processing Systems, previously abbreviated NIPS, but now for" }, { "end": 34, "start": 28.48, "text": " the first year this conference has been hosted under the acronym NURIPS. The" }, { "end": 39.2, "start": 34, "text": " people here on the paper are not the organizers of the conference, they are" }, { "end": 45.2, "start": 39.2, "text": " advocates for the name change and the paper basically outlines their arguments" }, { "end": 52.08, "start": 45.2, "text": " and a bit of description of what happened. So they're also pretty big names" }, { "end": 55.56, "start": 52.08, "text": " in the community so it should be interesting to see what they have to say." }, { "end": 61.96, "start": 55.56, "text": " The paper is pretty short, it's three parts, three pages and we're going to go" }, { "end": 71.48, "start": 61.96, "text": " through it and yeah let's jump into it. So I have it over here. Alright so the" }, { "end": 75.68, "start": 71.48, "text": " first part of the paper basically describes, it's called What's all the Fuzz" }, { "end": 81.36, "start": 75.68, "text": " About? It basically describes why a name change was necessary in their" }, { "end": 86.52, "start": 81.36, "text": " perspective. So they say in machine learning like the rest of them" }, { "end": 94.16, "start": 86.52, "text": " suffers from severe gender imbalance, low retention rates for women and so on." }, { "end": 100.76, "start": 94.16, "text": " They also describe the MeToo movement, increased awareness of sexual harassment" }, { "end": 106.16, "start": 100.76, "text": " faced by many female researchers, pervasiveness of sexual harassment at" }, { "end": 111.28, "start": 106.16, "text": " computational conferences and they reference an article here. I want to kind" }, { "end": 120.24000000000001, "start": 111.28, "text": " of show you this article. It's this article here. So if you haven't seen this" }, { "end": 126.28, "start": 120.24000000000001, "text": " yet I encourage you to read it. It's pretty horrifying to read but it gives" }, { "end": 130.84, "start": 126.28, "text": " you an idea of what people talk about when they say sexual harassment is a" }, { "end": 136.68, "start": 130.84, "text": " problem, is pervasive at conferences and so on. So yeah just I don't want to go" }, { "end": 143.96, "start": 136.68, "text": " into this specifically. Just go ahead read it and you know see what people are" }, { "end": 148.28, "start": 143.96, "text": " talking about. I think it's important context to understand where people are" }, { "end": 155.48000000000002, "start": 148.28, "text": " coming from. So they go on to say however more subtle acts of gender" }, { "end": 164.92000000000002, "start": 155.48000000000002, "text": " harassment defined in this report. This includes like sexist hostility, crude" }, { "end": 169.6, "start": 164.92, "text": " behavior and so on have gotten less public attention. 
Nonetheless gender" }, { "end": 173.88, "start": 169.6, "text": " harassment is extremely pervasive, is direct contributor to the challenges" }, { "end": 178, "start": 173.88, "text": " faced by women in the STEM field. In this article we argue that NIPS, the former" }, { "end": 182.04, "start": 178, "text": " acronym of the Neuro-Information Processing Systems Conference, constituted" }, { "end": 185.88, "start": 182.04, "text": " gender harassment towards women. So that's what their arguments basically" }, { "end": 194.2, "start": 185.88, "text": " about. So the acronym led to basically had its part in gender harassment" }, { "end": 199.6, "start": 194.2, "text": " towards women. Basically led to an environment where women could not feel" }, { "end": 209.95999999999998, "start": 199.6, "text": " comfortable at this conference. So here's their description. In popular" }, { "end": 216.35999999999999, "start": 209.95999999999998, "text": " slang the word NIPS is an abbreviation for nipples. Furthermore it has" }, { "end": 220.23999999999998, "start": 216.35999999999999, "text": " historically been used as a racial slur targeting people of Japanese origin but" }, { "end": 224.8, "start": 220.24, "text": " we'll not go into this deeper because that's kind of a historic use of the" }, { "end": 231.28, "start": 224.8, "text": " word. The current use of the word in fact is the slang for nipples and so" }, { "end": 236.8, "start": 231.28, "text": " we'll focus on that. They say at first glance the fact that a major" }, { "end": 241.28, "start": 236.8, "text": " machine learning conference shared its name with this slang is an unfortunate" }, { "end": 247.24, "start": 241.28, "text": " but unimportant coincidence. And it really is a coincidence. I think the" }, { "end": 252.68, "start": 247.24, "text": " the conference name has been around for longer than the slang has been kind of" }, { "end": 258.12, "start": 252.68, "text": " popular. The slang word has been popular so it really is a coincidence. Many other" }, { "end": 265.32, "start": 258.12, "text": " conferences have same coincidences like Colt for example. Maybe actually that's" }, { "end": 271.40000000000003, "start": 265.32, "text": " even less a coincidence than here. They say in fact one might hope that" }, { "end": 275.36, "start": 271.40000000000003, "text": " members of the machine learning community are sufficiently mature that" }, { "end": 279.32, "start": 275.36, "text": " the conference's name is unimportant. That's basically what everyone" }, { "end": 284.2, "start": 279.32, "text": " would hope. Maybe people don't even notice and if they notice maybe" }, { "end": 289.04, "start": 284.2, "text": " they'll have like a two-second oh that's you know that's the other word haha but" }, { "end": 294.96000000000004, "start": 289.04, "text": " then we basically just go on with our lives and no one cares too much. So that" }, { "end": 300, "start": 294.96000000000004, "text": " that's kind of the ideal scenario and they acknowledge that here. It's" }, { "end": 307.28, "start": 300, "text": " really important that they say unfortunately this appears" }, { "end": 312.12, "start": 307.28, "text": " not to be the case. They detail a few examples here at the 2017 conference" }, { "end": 316.08, "start": 312.12, "text": " Elon Musk made inappropriate jokes about the acronym participants wore loot" }, { "end": 322.16, "start": 316.08, "text": " t-shirts. 
I think one said my nips are NP hard which is kind of a double" }, { "end": 330.8, "start": 322.16, "text": " computer science joke I guess. There was a pre-conference event named" }, { "end": 338.24, "start": 330.8, "text": " word I can't probably say out loud without getting some sort of strike." }, { "end": 343.12, "start": 338.24, "text": " You can clearly see that even though the kind of original name is coincidental" }, { "end": 350, "start": 343.12, "text": " and you know one would hope that people are like you know just putting it off be" }, { "end": 354.36, "start": 350, "text": " adult about it. There have been jokes, there have been you know t-shirts" }, { "end": 360.32, "start": 354.36, "text": " made and you know you can say the name collision is not like is" }, { "end": 369.2, "start": 360.32, "text": " unintended but I think this word here is very intended. So I think the" }, { "end": 375.64, "start": 369.2, "text": " main argument here or one of the main arguments is this really first of all" }, { "end": 380.03999999999996, "start": 375.64, "text": " creates an environment where certain people don't feel comfortable. It creates" }, { "end": 384.88, "start": 380.03999999999996, "text": " kind of a sexualized environment. Second of all and the more broader sense it's" }, { "end": 391.03999999999996, "start": 384.88, "text": " just unprofessional as a community especially since the kind of community" }, { "end": 396.24, "start": 391.03999999999996, "text": " is booming. We want to represent machine learning to the wider world. One can" }, { "end": 403.8, "start": 396.24, "text": " say okay it's you know it's just in professional that we kind of bring" }, { "end": 409.40000000000003, "start": 403.8, "text": " intertwine these things. It doesn't make a good impression. They say furthermore" }, { "end": 412.96000000000004, "start": 409.40000000000003, "text": " reminders of the unfortunate acronym are everywhere. Online searches for the" }, { "end": 417.6, "start": 412.96000000000004, "text": " acronym led to not safer work content. The hashtag NIPS is devoted to" }, { "end": 423.76, "start": 417.6, "text": " pornography. If you misspell the conference website you get to an adult" }, { "end": 429.24, "start": 423.76, "text": " site and I think this yeah this further goes into the argument that it's just an" }, { "end": 433.16, "start": 429.24, "text": " unprofessional appearance towards the outside. It's unfortunate the conference" }, { "end": 437.92, "start": 433.16, "text": " has been here longer but you know still there's a need to do something about it" }, { "end": 442.64000000000004, "start": 437.92, "text": " and I largely agree with these arguments that these are good arguments to make" }, { "end": 450.48, "start": 442.64000000000004, "text": " for a change of name. This paragraph down here it's a bit of a" }, { "end": 456.96000000000004, "start": 450.48, "text": " we'll go into that later. It's not very connected to the arguments made here so" }, { "end": 461.08000000000004, "start": 456.96000000000004, "text": " well it's more like connected to what's been happening so we'll go into that" }, { "end": 466.24, "start": 461.08, "text": " later. 
People have been circulating these arguments and calling for a name" }, { "end": 471.88, "start": 466.24, "text": " change for a while and then the the board of the conference the NIPS board" }, { "end": 477.32, "start": 471.88, "text": " made a survey surveying the attendance of the last five years conferences" }, { "end": 485.26, "start": 477.32, "text": " whether or not the conference should change its name. The next section" }, { "end": 489.2, "start": 485.26, "text": " is dedicated to how the survey turned out and what the response of the board" }, { "end": 501.68, "start": 489.2, "text": " was. So actually let's first go to the decision by the board." }, { "end": 508.24, "start": 501.68, "text": " So here is the press release. This is a press release after the survey results" }, { "end": 517.36, "start": 508.24, "text": " had been collected. So they said our survey was returned by about 2200" }, { "end": 525.52, "start": 517.36, "text": " people here and as I said have attended NIPS in the last five years. Of the male" }, { "end": 529.36, "start": 525.52, "text": " respondents about 28% are in favor of the conference name change of the female" }, { "end": 535.24, "start": 529.36, "text": " respondents about 44% are in favor of a name change. 40% prefer the existing" }, { "end": 540.6800000000001, "start": 535.24, "text": " name 16% expressed no preferences. In fact let's go look at the detailed" }, { "end": 546.52, "start": 540.6800000000001, "text": " results which they have down here. So you can see overall there is a big a big" }, { "end": 552.52, "start": 546.52, "text": " slant towards not agree. So negative 2 is strongly disagree with the name change" }, { "end": 557.24, "start": 552.52, "text": " while positive 2 is strongly agree. So you can see there's a big slant towards" }, { "end": 564.8, "start": 557.24, "text": " the not agree. If you split this by gender of respondents then you can see" }, { "end": 571.16, "start": 564.8, "text": " the basically the male distribution is that slant while the female" }, { "end": 577.56, "start": 571.16, "text": " distribution is a bit different as you can see here. The first thing it's" }, { "end": 583.68, "start": 577.56, "text": " mostly towards the extremes. So there are more people strongly saying" }, { "end": 588.88, "start": 583.68, "text": " something than non-strongly saying something to either side. And the" }, { "end": 594.24, "start": 588.88, "text": " second of all it seems very divided and very evenly divided. So in fact if you" }, { "end": 599.52, "start": 594.24, "text": " look at the numbers if you count the disagrees and agrees you'll find there's" }, { "end": 606.28, "start": 599.52, "text": " a slight majority in the agrees. There is a slight majority in the disagrees if" }, { "end": 611, "start": 606.28, "text": " you only consider the strongs. But ultimately these numbers are pretty" }, { "end": 615, "start": 611, "text": " close so that there's people on either side feeling strongly and" }, { "end": 621.84, "start": 615, "text": " there's about in this survey about as many on either side. So that's basically" }, { "end": 630.24, "start": 621.84, "text": " the outcome of this. Here I find very interesting some quotes from" }, { "end": 634.76, "start": 630.24, "text": " respondents. So you had the opportunity to put quotes to put like a" }, { "end": 639.84, "start": 634.76, "text": " comment and these are quoted from these comments. 
So they say for example this" }, { "end": 643.44, "start": 639.84, "text": " thanks for considering a name change. I'm not personally bothered by the current" }, { "end": 649.0400000000001, "start": 643.44, "text": " name but I think the gesture will send a much-needed inclusive vibe in the right" }, { "end": 657.04, "start": 649.04, "text": " direction. One person says if you were up to me I'd call off this nice" }, { "end": 663.4399999999999, "start": 657.04, "text": " but symbolic gesture. Use whatever time money and energy to make actual changes." }, { "end": 668.1999999999999, "start": 663.4399999999999, "text": " Then someone says please please please change the name it is sexist and racist" }, { "end": 672.4399999999999, "start": 668.1999999999999, "text": " slur. I'm embarrassed every time I have to say the name of the conference." }, { "end": 678.24, "start": 672.4399999999999, "text": " This feeds into the unprofessionalism argument. The next one I find very" }, { "end": 682.12, "start": 678.24, "text": " interesting. It says as a woman I find it offensive that the board is seriously" }, { "end": 685.88, "start": 682.12, "text": " considering changing the name of the meeting because of an adolescent" }, { "end": 689.8, "start": 685.88, "text": " reference to a woman's body. From my point of view it shows that the board" }, { "end": 693.8, "start": 689.8, "text": " does not see me as an equal member of the community but as a woman first and" }, { "end": 699.64, "start": 693.8, "text": " a scientist second. This is extremely interesting. So this is one of the" }, { "end": 707.12, "start": 699.64, "text": " people who was a female respondent and said strongly disagree with the name" }, { "end": 714.2, "start": 707.12, "text": " change or disagree with the name change. I mean I can guess. So we've only" }, { "end": 720.6, "start": 714.2, "text": " heard so far that the name or the acronym is offensive to women but here" }, { "end": 725.6, "start": 720.6, "text": " we have a woman saying that the consideration to change the acronym is" }, { "end": 731.6800000000001, "start": 725.6, "text": " actually offensive to her. That's very special and" }, { "end": 738.64, "start": 731.68, "text": " understandable. I can understand why that happens. I can" }, { "end": 745.2399999999999, "start": 738.64, "text": " understand the argument made here. This woman feels like okay it shows" }, { "end": 751.92, "start": 745.2399999999999, "text": " me that basically my gender is important and not really my being scientist." }, { "end": 758.16, "start": 751.92, "text": " It's an argument. The next one goes into the same direction. It says I'm" }, { "end": 762.0799999999999, "start": 758.16, "text": " a woman. I've experienced being harassed by male academics and I would like this" }, { "end": 766.56, "start": 762.0799999999999, "text": " problem to be discussed and addressed but not in this frankly almost offensive" }, { "end": 772.4, "start": 766.56, "text": " way. Another person saying basically that's changing the name is" }, { "end": 779.52, "start": 772.4, "text": " almost offensive and it's not the right way to go to achieve" }, { "end": 784.64, "start": 779.52, "text": " these results. There's another one saying I'm in favor of the name change but this" }, { "end": 790.24, "start": 784.64, "text": " is cosmetic. 
So you have basically people coming from all angles" }, { "end": 795.84, "start": 790.24, "text": " giving their opinions and you can clearly see why there is especially in" }, { "end": 805.3199999999999, "start": 795.84, "text": " the female respondent group why there is a divide. So the board" }, { "end": 814.16, "start": 805.3199999999999, "text": " overall said the following. The board overall said the following." }, { "end": 821.12, "start": 814.16, "text": " After extensive discussions the NIPS board has decided not to change the" }, { "end": 826.28, "start": 821.12, "text": " name of the conference for now. The poll itself did not yield a clear consensus" }, { "end": 832.1999999999999, "start": 826.28, "text": " on a name change or a well-regarded alternative name. Further they state" }, { "end": 836.64, "start": 832.1999999999999, "text": " instead we ask the community support in implementing concrete steps to improve" }, { "end": 841.9599999999999, "start": 836.64, "text": " the inclusiveness of the conference. So these are described down here. They have" }, { "end": 846.24, "start": 841.96, "text": " a number of changes to make the conference basically more inclusive. So" }, { "end": 855.72, "start": 846.24, "text": " they basically said okay so the name change survey was" }, { "end": 862.9200000000001, "start": 855.72, "text": " inconclusive and they clearly say whatever we do here regardless of which" }, { "end": 866.4000000000001, "start": 862.9200000000001, "text": " decision we take we're failing to accommodate the opinions about half the" }, { "end": 870.4000000000001, "start": 866.4000000000001, "text": " women in the community. Which is true this is clearly what you can see from" }, { "end": 874.68, "start": 870.4, "text": " the results from the quotes. So basically what they say is we'll not" }, { "end": 880.3199999999999, "start": 874.68, "text": " change the conference name for now. We'll implement these steps because what they" }, { "end": 885.4399999999999, "start": 880.3199999999999, "text": " I can guess what they felt was okay even the people against the name change were" }, { "end": 890.0799999999999, "start": 885.4399999999999, "text": " in support of making the conference more inclusive. So they basically say okay we" }, { "end": 894.3199999999999, "start": 890.0799999999999, "text": " do these things we strengthen their code of conduct. We have two inclusion" }, { "end": 900.12, "start": 894.3199999999999, "text": " diversity chairs. We have an inclusion town hall. We have childcare support." }, { "end": 905.4, "start": 900.12, "text": " Gender-inclusive restrooms and so on and so on. Mentoring breakfasts for women and" }, { "end": 911.5600000000001, "start": 905.4, "text": " other minorities. So they take these steps concretely. They say this is what" }, { "end": 918.08, "start": 911.5600000000001, "text": " we do and even further if you look at their page on diversity and inclusion" }, { "end": 926.04, "start": 918.08, "text": " which I have here. They say here on the top in addition to hosting diversity" }, { "end": 929.8, "start": 926.04, "text": " related event the conference also making consider structural changes include a" }, { "end": 934.4, "start": 929.8, "text": " new code of conduct we've already seen and in-depth discussion of the potential" }, { "end": 942.7199999999999, "start": 934.4, "text": " of changing the name of the conference. 
So in total what they're saying is we've" }, { "end": 949.24, "start": 942.7199999999999, "text": " done this poll. It came back inconclusive which you've I think" }, { "end": 953.88, "start": 949.24, "text": " has been clearly demonstrated. We'll not change the name of the" }, { "end": 959.3599999999999, "start": 953.88, "text": " conference for now and we'll do all of these other things" }, { "end": 965.08, "start": 959.36, "text": " right down there and at the conference we'll hold a meeting and discuss the" }, { "end": 969.36, "start": 965.08, "text": " name change so we could maybe potentially change it in upcoming years." }, { "end": 974.76, "start": 969.36, "text": " I think this is a really sensible decision by the board. I mean given" }, { "end": 979.8000000000001, "start": 974.76, "text": " this data given all of that this is probably the most sensible decision." }, { "end": 985.2, "start": 979.8000000000001, "text": " Let's take concrete steps. The name change seems to be you know debatable so" }, { "end": 991.6, "start": 985.2, "text": " let's actually debate it at the conference with the actual community." }, { "end": 997.5600000000001, "start": 991.6, "text": " That was the basically result of the poll. Let's now go back to what the paper" }, { "end": 1003.6800000000001, "start": 997.5600000000001, "text": " has to say about this. Here's the paper again and they say in order to collect" }, { "end": 1007.2800000000001, "start": 1003.6800000000001, "text": " data about the machine learning community's feelings about the" }, { "end": 1011.2800000000001, "start": 1007.2800000000001, "text": " conference name the conference board sent out a survey to people who have" }, { "end": 1018.3199999999999, "start": 1011.28, "text": " attended the conference during the past five years. However serving" }, { "end": 1023.24, "start": 1018.3199999999999, "text": " conference attendees results in a very biased sample of a much larger community" }, { "end": 1027.44, "start": 1023.24, "text": " of potential machine learning researchers. Bias arises due to the fact" }, { "end": 1031.04, "start": 1027.44, "text": " that some people who are made uncomfortable by the name or by other" }, { "end": 1037, "start": 1031.04, "text": " aspects of the machine learning culture may have decided not to enter or to" }, { "end": 1040.56, "start": 1037, "text": " remain in the or not to remain in the field have chosen not to attend the" }, { "end": 1045.84, "start": 1040.56, "text": " conference. So basically you're saying well if you only ask this one group of" }, { "end": 1050.52, "start": 1045.84, "text": " people right then this other group of people you know doesn't have a chance" }, { "end": 1055.56, "start": 1050.52, "text": " to make their voice heard and there is basically bias because in this other" }, { "end": 1061.36, "start": 1055.56, "text": " group of people the people who have not attended the conference they would would" }, { "end": 1065.04, "start": 1061.36, "text": " have a severely different opinion from the people who have attended the" }, { "end": 1070.44, "start": 1065.04, "text": " conference. 
So first of all I think this can be a valid point here of course all" }, { "end": 1075.52, "start": 1070.44, "text": " the ways if you ask one group of people and exclude another one you there's" }, { "end": 1083.76, "start": 1075.52, "text": " there's if the if the group you ask and the target group which here it's really" }, { "end": 1087.72, "start": 1083.76, "text": " unclear what it is I guess it's the machine learning community considering" }, { "end": 1095.92, "start": 1087.72, "text": " going to the conference if those don't overlap then you you will introduce some" }, { "end": 1100.5600000000002, "start": 1095.92, "text": " sort of bias and they say okay bias could come from the fact you know some" }, { "end": 1106.2, "start": 1100.5600000000002, "text": " people who actually are affected by these problems of which this name is one" }, { "end": 1110.28, "start": 1106.2, "text": " they may have you know not attended the conference because they may have left" }, { "end": 1114.24, "start": 1110.28, "text": " the field because the the gender harassment is so pervasive and they just" }, { "end": 1119.6000000000001, "start": 1114.24, "text": " didn't didn't stay and so on. So I think this can be a good point but the problem" }, { "end": 1125.64, "start": 1119.6000000000001, "text": " I have with it here is that it's simply stated without anything it's simply said" }, { "end": 1133.0800000000002, "start": 1125.64, "text": " okay there is bias, bias arises and my question would be how much is that bias" }, { "end": 1141.48, "start": 1133.0800000000002, "text": " of any data like any data on this you can't just criticize these that the" }, { "end": 1147.48, "start": 1141.48, "text": " survey for being biased and and then not provide actual data like how many people" }, { "end": 1152.8000000000002, "start": 1147.48, "text": " are there who are made uncomfortable by the name or have left the field in who" }, { "end": 1158.8799999999999, "start": 1152.8, "text": " have left the field because of these things and is it really viable to to" }, { "end": 1163.84, "start": 1158.8799999999999, "text": " count them in I guess okay we can argue it is but how would they have responded" }, { "end": 1170.8, "start": 1163.84, "text": " to this we've clearly seen that a lot of affected people that even have" }, { "end": 1177.44, "start": 1170.8, "text": " experienced harassment are not in favor of the name change so in this case I" }, { "end": 1188.44, "start": 1177.44, "text": " would really like to see some data on how much this bias is right and I cannot" }, { "end": 1196.48, "start": 1188.44, "text": " also say it's not it's not that bad of a decision to what the board did to send" }, { "end": 1200.56, "start": 1196.48, "text": " the survey to the last five years attendees I think is a very sensible" }, { "end": 1206, "start": 1200.56, "text": " choice if you want to gather the community's feelings towards these kind" }, { "end": 1211.48, "start": 1206, "text": " of things I mean you you can't just ask the entire world because the entire" }, { "end": 1217.76, "start": 1211.48, "text": " world is not the machine learning community so I think the this is a very" }, { "end": 1223.68, "start": 1217.76, "text": " sensible decision to ask last five years attendees and if you have real evidence" }, { "end": 1230.2, "start": 1223.68, "text": " that this causes a notifiable like a significant bias then we could" }, { "end": 1237.0800000000002, "start": 1230.2, "text": " potentially correct for that bias but 
without any data on that I think the the" }, { "end": 1244.92, "start": 1237.0800000000002, "text": " asking last five years participants was completely reasonable and one of I don't" }, { "end": 1251.48, "start": 1244.92, "text": " really see how you can do a much better job without much much more manual work" }, { "end": 1257.72, "start": 1251.48, "text": " and I want to make this point a bit clearer on how hard it actually is to do" }, { "end": 1266.4, "start": 1257.72, "text": " that by pointing to the response to this so here is a tweet thread by one of the" }, { "end": 1271.1200000000001, "start": 1266.4, "text": " authors of this paper after the conference decision came out she" }, { "end": 1277.08, "start": 1271.1200000000001, "text": " basically tweeted out this protest nips I am starting this new hashtag please" }, { "end": 1281.6000000000001, "start": 1277.08, "text": " retweet if you're in support of the next conference changing its name so" }, { "end": 1287.08, "start": 1281.6000000000001, "text": " basically kind of launching a a Twitter campaign a Twitter hashtag under this to" }, { "end": 1291.3999999999999, "start": 1287.08, "text": " come you know get into a conversation with people about this people could" }, { "end": 1301.72, "start": 1291.3999999999999, "text": " express their support she also that was a misclick she also here made a change" }, { "end": 1309.6399999999999, "start": 1301.72, "text": " dot org petition to change the name so a petition basically petition is here the" }, { "end": 1316.72, "start": 1309.6399999999999, "text": " text of the petition basically says something similar to the to the what" }, { "end": 1326.68, "start": 1316.72, "text": " we've already seen including there is a the criticism of the survey and as you" }, { "end": 1337.04, "start": 1326.68, "text": " can see here about 2,000 people have signed it so I mean a Twitter hashtag is" }, { "end": 1341.64, "start": 1337.04, "text": " all good you know you can do that a petition is all good you can do that but" }, { "end": 1346.64, "start": 1341.64, "text": " it's a bit ironic because a change that org petition literally anyone can" }, { "end": 1352, "start": 1346.64, "text": " sign this and in addition to that there's only one option you can only say" }, { "end": 1360.2, "start": 1352, "text": " yes you can't even say no right so and even more who's gonna see the change" }, { "end": 1364.4, "start": 1360.2, "text": " that org petition it's gonna be the social media followers of these people" }, { "end": 1370.24, "start": 1364.4, "text": " right so basically you have now a you have it now what's basically a survey of" }, { "end": 1375.92, "start": 1370.24, "text": " the social media network of people in favor of changing the name where there's" }, { "end": 1383.92, "start": 1375.92, "text": " only one option to respond I I find it and so I've gone through here the people" }, { "end": 1388.72, "start": 1383.92, "text": " who actually publicly associate their name give a reason for signing a lot of" }, { "end": 1394.6000000000001, "start": 1388.72, "text": " these they you know they give some argument why they've signed the petition" }, { "end": 1400.04, "start": 1394.6000000000001, "text": " but I've tried searching these people for any sort of academic track record and" }, { "end": 1405.8799999999999, "start": 1400.04, "text": " in my sample I've come up with between 10 and 20 percent of people who somehow" }, { "end": 1418.12, "start": 1405.8799999999999, "text": " have an academic track 
record so this is I mean certainly a valid thing to make" }, { "end": 1424.1599999999999, "start": 1418.12, "text": " your voice heard and to show your numbers and but I mean look at this there's a" }, { "end": 1435.64, "start": 1424.16, "text": " bot signing twice hello Jack Nelson and Richard Chi very nice but so basically" }, { "end": 1441.88, "start": 1435.64, "text": " I'm not here to criticize petitions but what I want to say is you can't like" }, { "end": 1450.48, "start": 1441.88, "text": " criticize this this poll so hard for being biased and then launching basically" }, { "end": 1456.56, "start": 1450.48, "text": " an own poll that's even more biased and even more non-representative of the" }, { "end": 1463.6, "start": 1456.56, "text": " community to me that's that's kind of ironic and just goes to show how hard" }, { "end": 1468.52, "start": 1463.6, "text": " this is and my argument would be it's actually not that unsensible of a" }, { "end": 1473.32, "start": 1468.52, "text": " decision of the board the way they did it and if you have again if you have" }, { "end": 1479.52, "start": 1473.32, "text": " data to actually quantify the bias here then it's viable to go and correct for" }, { "end": 1486.92, "start": 1479.52, "text": " that all right so to they go on to analyze the survey results conference" }, { "end": 1492.48, "start": 1486.92, "text": " board simply noted that of the 294 women surveyed the number who strongly" }, { "end": 1498.48, "start": 1492.48, "text": " support or support the name change is comparable to the number of women who" }, { "end": 1503.84, "start": 1498.48, "text": " are strongly opposed or opposed however this analysis implicitly assumes that" }, { "end": 1508.28, "start": 1503.84, "text": " one person's feeling of discomfort or marginalization as a result of the name" }, { "end": 1513.24, "start": 1508.28, "text": " should be given the same weight as another person's preference for the" }, { "end": 1519.92, "start": 1513.24, "text": " status quo this amounts to giving the same way to false positives and false" }, { "end": 1524.6399999999999, "start": 1519.92, "text": " negatives of course we learn in an introductory statistics course that" }, { "end": 1529.28, "start": 1524.6399999999999, "text": " false positives and false negatives should be assigned weights dependent on" }, { "end": 1534.2, "start": 1529.28, "text": " context in this context we feel that a much greater weight should be given to" }, { "end": 1540.44, "start": 1534.2, "text": " the views of a person who feels marginalized as a result of the name so" }, { "end": 1546.88, "start": 1540.44, "text": " up here I find this a bit strange they say this amounts to giving the same way" }, { "end": 1554.8, "start": 1546.88, "text": " to false positives and false negatives to me the false is here a bit confusing" }, { "end": 1559.32, "start": 1554.8, "text": " because it seems to me it's it's simply giving the same weight to negatives and" }, { "end": 1565.04, "start": 1559.32, "text": " positives there's I don't think there's a need to dress this up in statistical" }, { "end": 1570.54, "start": 1565.04, "text": " lingo here it simply we give the same weight to people who responded" }, { "end": 1576.08, "start": 1570.54, "text": " positively and to people who responded negatively I think that's that's it" }, { "end": 1583.8, "start": 1576.08, "text": " there's no false of course we learn in a truck see this is class that false" }, { "end": 1587.12, "start": 1583.8, "text": " positives and 
false negatives should be assigned weights dependent on context in" }, { "end": 1590.7199999999998, "start": 1587.12, "text": " this context we feel that a much greater weight should be given to the views of" }, { "end": 1596.1599999999999, "start": 1590.7199999999998, "text": " person who feels marginalized as a result of the name I would I would say" }, { "end": 1601.1999999999998, "start": 1596.1599999999999, "text": " to this it's the problem for me it's these are this is one of the things that" }, { "end": 1605.28, "start": 1601.1999999999998, "text": " where you at you read it first and you say like oh yeah this makes sense but" }, { "end": 1611.4399999999998, "start": 1605.28, "text": " first of all it's framed extremely one-sided it's framed as all the people" }, { "end": 1616.36, "start": 1611.4399999999998, "text": " who are for the name change like they they feel discomforted they feel" }, { "end": 1622.28, "start": 1616.36, "text": " marginalized and the people who are against the name change they simply and" }, { "end": 1629.1599999999999, "start": 1622.28, "text": " here specifically they they they talk about the women group so in argument" }, { "end": 1634.9599999999998, "start": 1629.1599999999999, "text": " they're all affected the people against it simply prefer the status quo but" }, { "end": 1641.04, "start": 1634.9599999999998, "text": " we've clearly seen in the in the in the press release and we'll go over to that" }, { "end": 1649.6, "start": 1641.04, "text": " now these quotes here we've clearly seen that the the offense and the" }, { "end": 1655.08, "start": 1649.6, "text": " marginalization happens on both sides so here this as a woman I find it" }, { "end": 1660.48, "start": 1655.08, "text": " offensive that the board is considering changing the name it shows that the" }, { "end": 1664.48, "start": 1660.48, "text": " board does not see me as an equal member of the community but as a woman first" }, { "end": 1669.24, "start": 1664.48, "text": " and the scientists second I mean this is almost a textbook definition of" }, { "end": 1675.08, "start": 1669.24, "text": " marginalization and this is clearly happening on the other side as well so I" }, { "end": 1682.04, "start": 1675.08, "text": " think the framing here is extremely dishonest and one-sided and there is" }, { "end": 1687.92, "start": 1682.04, "text": " given basically the the side that we just seen in this quote is given" }, { "end": 1693.36, "start": 1687.92, "text": " absolutely no not even a mention that it exists it's simply framed as this side" }, { "end": 1698.24, "start": 1693.36, "text": " is marginalized and oppressed and discomforted and the other side simply" }, { "end": 1704.32, "start": 1698.24, "text": " prefers the status quo but we've clearly seen that yeah it's almost a this fits" }, { "end": 1711.08, "start": 1704.32, "text": " exactly this definition it's just one person's feeling or discomfort or" }, { "end": 1718.56, "start": 1711.08, "text": " marginalization as a result of the name it's just as a result of the name change" }, { "end": 1725.1200000000001, "start": 1719.32, "text": " second of all I think the the bigger problem and this goes into the statement" }, { "end": 1730.84, "start": 1725.12, "text": " down here to state this last point more explicitly an issue adversely affecting" }, { "end": 1736.52, "start": 1730.84, "text": " the minority of participants should not be decided by a majority vote again" }, { "end": 1742.2399999999998, "start": 1736.52, "text": " something at 
first you say oh yeah that makes sense but if you think about it" }, { "end": 1749.3999999999999, "start": 1742.2399999999998, "text": " this is a really really outrageous statement and the reason is it's it's" }, { "end": 1758.3200000000002, "start": 1749.4, "text": " it's outrageous is if the mud if it's not majority vote if it's not one person" }, { "end": 1765.8400000000001, "start": 1758.3200000000002, "text": " one vote then someone has to decide who gets to vote and who doesn't and more so" }, { "end": 1771.24, "start": 1765.8400000000001, "text": " specifically here someone basically needs to decide who should be given what" }, { "end": 1777.4, "start": 1771.24, "text": " weight in the vote right you need someone to decide this and here you can" }, { "end": 1781.8000000000002, "start": 1777.4, "text": " say well it's easy it's just the the women right because they're affected I" }, { "end": 1788.16, "start": 1781.8000000000002, "text": " this but they go further they say well it's the women who feel discomforted" }, { "end": 1792.0800000000002, "start": 1788.16, "text": " and marginalized who should be given more weight than the ones who simply" }, { "end": 1796.24, "start": 1792.0800000000002, "text": " prefer the status quo but then you have to have someone assessing whether someone" }, { "end": 1800.52, "start": 1796.24, "text": " is really marginalized and discomforted or simply prefers the status quo and" }, { "end": 1808.36, "start": 1800.52, "text": " it's not like an environment where there is kind of a sexist undertone isn't" }, { "end": 1816.48, "start": 1808.36, "text": " also discomforting or can't also be discomforting to men to men of any sort" }, { "end": 1827.76, "start": 1816.48, "text": " or people of of any sort of gender it's just not clear that the fact that people" }, { "end": 1833.04, "start": 1827.76, "text": " should be given different weight in in crafting an opinion I mean this this can" }, { "end": 1839.16, "start": 1833.04, "text": " be true if you have like some clear area of expertise but in this case it's" }, { "end": 1845.12, "start": 1839.16, "text": " really unclear and the fact is if it's not majority vote you need someone" }, { "end": 1851.58, "start": 1845.12, "text": " deciding the weight and the someone deciding the weights automatically" }, { "end": 1857.16, "start": 1851.58, "text": " decides on the outcome of the vote and then why do you need a vote in the first" }, { "end": 1864.68, "start": 1857.16, "text": " place basically up here they say yeah we feel the great weights should be aligned" }, { "end": 1869.6000000000001, "start": 1864.68, "text": " like this and down here there is no more we feel it's be an issue at worst" }, { "end": 1873.3600000000001, "start": 1869.6000000000001, "text": " affecting the minority of participants should not be decided by majority vote" }, { "end": 1878.96, "start": 1873.3600000000001, "text": " they're basically calling for a dictatorship in this case and I'm gonna" }, { "end": 1885.0400000000002, "start": 1878.96, "text": " guess like everyone has the opinion the dictatorship would be an awesome idea if" }, { "end": 1891.76, "start": 1885.04, "text": " the dictator were me right that's that's what everyone thinks of course and that's" }, { "end": 1897.08, "start": 1891.76, "text": " basically the argument made here but it's not it's not true and there's some" }, { "end": 1904.96, "start": 1897.08, "text": " really really disturbing implicit things in here and maybe I want to quickly go" }, { "end": 
1912.8799999999999, "start": 1904.96, "text": " over how I think a democratic decision works so imagine you have a person and" }, { "end": 1918.4, "start": 1912.88, "text": " the person has decision to make for or against in this case the name change" }, { "end": 1926.0800000000002, "start": 1918.4, "text": " right and the person must decide on one of these two things on a let's say on a" }, { "end": 1933.2, "start": 1926.0800000000002, "text": " continuous scale but it doesn't matter what what this what this stuff up here" }, { "end": 1938.5600000000002, "start": 1933.2, "text": " basically implicitly assumes is that the person looks at themselves and they" }, { "end": 1945.32, "start": 1938.56, "text": " think well am I personally discomforted or marginalized by the name or the" }, { "end": 1950, "start": 1945.32, "text": " climate it creates no then I'm obviously against the name change because it" }, { "end": 1956.76, "start": 1950, "text": " doesn't help me or another person go am I personally affected yes well I feel" }, { "end": 1963.58, "start": 1956.76, "text": " discomforted or marginalized well then I'm obviously for a name change so the" }, { "end": 1969.36, "start": 1963.58, "text": " basic assumption here is that people simply vote purely their own egotistical" }, { "end": 1974.6399999999999, "start": 1969.36, "text": " interests and that's that's it so basically if you're in one of these" }, { "end": 1979.32, "start": 1974.6399999999999, "text": " minorities then you'll vote for the name change because it affects you which" }, { "end": 1985, "start": 1979.32, "text": " we've already seen is not it's not a given that people vote that way and if" }, { "end": 1989.24, "start": 1985, "text": " you're not in this then you know you you'd vote against but you're not" }, { "end": 1993.52, "start": 1989.24, "text": " affected so your vote shouldn't count it's completely untrue what people do" }, { "end": 1998.52, "start": 1993.52, "text": " especially smart people and I believe the machine learning community consists" }, { "end": 2005.68, "start": 1998.52, "text": " largely of these what they do is they'll make a list of arguments argument one" }, { "end": 2011.92, "start": 2005.68, "text": " argument two argument three argument for everyone has the same arguments" }, { "end": 2015.28, "start": 2011.92, "text": " everyone's hurt the same arguments if not then maybe there's some work to do" }, { "end": 2021.72, "start": 2015.28, "text": " in actually getting arguments to people but that's not the same as weighing the" }, { "end": 2026.64, "start": 2021.72, "text": " people differently you get the arguments to the people and then you weigh each of" }, { "end": 2032, "start": 2026.64, "text": " them equally why because what every person does is they say okay argument" }, { "end": 2037.3600000000001, "start": 2032, "text": " one is maybe it's unprofessional right name is unprofessional alright how" }, { "end": 2042.08, "start": 2037.3600000000001, "text": " important is that to me give it a weight weight one cool that's really important" }, { "end": 2048.36, "start": 2042.08, "text": " to me I'll give it a big weight argument two some people feel really" }, { "end": 2052.84, "start": 2048.36, "text": " discomfort like discomforted if you're marginalized by the name creates a bad" }, { "end": 2057.44, "start": 2052.84, "text": " environment for them how much weight am I gonna give to that right so people can" }, { "end": 2062.08, "start": 2057.44, "text": " actually consider other 
people's feelings and other people's problems and" }, { "end": 2068.08, "start": 2062.08, "text": " decide on what's the best also for them in their own mind so they give it a weight" }, { "end": 2074.08, "start": 2068.08, "text": " two and then there's maybe two arguments against some given these weight three" }, { "end": 2082.04, "start": 2074.08, "text": " weight four at the end what you have is you have argument I you will sum it up" }, { "end": 2092.16, "start": 2082.04, "text": " by the weights W I J you will sum it up over all people so basically now and this" }, { "end": 2096.7999999999997, "start": 2092.16, "text": " will give you like a final number a which is either positive or negative if" }, { "end": 2100.2999999999997, "start": 2096.7999999999997, "text": " it's positive you do the name change if it's negative you don't do the name" }, { "end": 2106.5600000000004, "start": 2100.3, "text": " change if you do this over all people what you've basically done is you have" }, { "end": 2113.84, "start": 2106.5600000000004, "text": " just determined these weightings here by a democratic process you've crowd sourced" }, { "end": 2121.52, "start": 2113.84, "text": " the weighting this is exactly what these people say up here right we feel we feel" }, { "end": 2127.2000000000003, "start": 2121.52, "text": " that you're not false false positives false we feel that positives and" }, { "end": 2133.16, "start": 2127.2, "text": " negatives should be assigned weights dependent on context so the positive and" }, { "end": 2138.2, "start": 2133.16, "text": " negative arguments in this case are assigned weights dependent on context" }, { "end": 2144.3199999999997, "start": 2138.2, "text": " but the weights are crowd sourced to the community right and each person this who" }, { "end": 2149.52, "start": 2144.3199999999997, "text": " participates in that each person who participates is one more brain power in" }, { "end": 2156.3599999999997, "start": 2149.52, "text": " a complicated decision that no one basically no one has the authority just" }, { "end": 2159.88, "start": 2156.36, "text": " to just decide for themselves so these people are calling for different" }, { "end": 2165.2000000000003, "start": 2159.88, "text": " weighting this is the way to do it the democratic majority vote is the exact" }, { "end": 2170, "start": 2165.2000000000003, "text": " way to determine these weights what these people basically are no no no no" }, { "end": 2179.6, "start": 2170, "text": " no we should determine the weights we who know I'm a bit corny here but this is" }, { "end": 2182.88, "start": 2179.6, "text": " basically it's still it's two alternatives either you do democratic" }, { "end": 2190.48, "start": 2182.88, "text": " process one person one brain one vote and that will give you a crowd sourced" }, { "end": 2195.4, "start": 2190.48, "text": " crowd sourced true weighting of the arguments what the community feels or" }, { "end": 2203.56, "start": 2195.4, "text": " someone needs to decide some one needs to side by force basically and that's a" }, { "end": 2211.6, "start": 2203.56, "text": " dictatorship so these are the choices you have and clearly now you can maybe" }, { "end": 2216.2799999999997, "start": 2211.6, "text": " understand why I say this is an outrageous statement because to me the" }, { "end": 2223.44, "start": 2216.2799999999997, "text": " dictatorship option is not an option note that I'm not saying that democracy" }, { "end": 2230.2799999999997, "start": 2223.44, "text": " can never be 
wrong or the majority can never be wrong but in fact it's the best" }, { "end": 2237.16, "start": 2230.2799999999997, "text": " system there is can be wrong but anything else will undoubtedly go more" }, { "end": 2245.68, "start": 2237.16, "text": " wrong so that's my point here alright so that was a maybe a bit ranty but let's" }, { "end": 2255.3199999999997, "start": 2245.68, "text": " go on a false choice and a minimization of a real issue so they go on to say" }, { "end": 2260.48, "start": 2255.3199999999997, "text": " what they think of the decision that the board made in response to this so up was" }, { "end": 2265.52, "start": 2260.48, "text": " how they analyzed the poll and now it's the decision in announcing their" }, { "end": 2268.72, "start": 2265.52, "text": " decision not to change the conference name conference board expressed" }, { "end": 2272.08, "start": 2268.72, "text": " commitment to implement concrete steps to improve the inclusiveness of the" }, { "end": 2276.4, "start": 2272.08, "text": " conference and they list them here and they say we sincerely applaud the" }, { "end": 2284.44, "start": 2276.4, "text": " conference board for these efforts okay I yeah I think the community feels like" }, { "end": 2289.88, "start": 2284.44, "text": " that as well however the wording of the decision implied the need to choose" }, { "end": 2295.44, "start": 2289.88, "text": " between changing the name of the conference and taking concrete steps to" }, { "end": 2304.16, "start": 2295.44, "text": " improve its inclusiveness I don't see that at all say this was a false choice" }, { "end": 2308.04, "start": 2304.16, "text": " there's no reason that the board could not do both yes there's no reason that" }, { "end": 2312.96, "start": 2308.04, "text": " they couldn't do both and I believe we've read this together before I don't" }, { "end": 2317.04, "start": 2312.96, "text": " think the board ever said that there was a choice between one or the other I" }, { "end": 2323.8, "start": 2317.04, "text": " think they've said very much the opposite let's go back I think what they" }, { "end": 2334.1600000000003, "start": 2323.8, "text": " mean here is the word instead so here they say we won't change the name and" }, { "end": 2338.5600000000004, "start": 2334.1600000000003, "text": " then here's they say instead we ask for the community support and implementing" }, { "end": 2343.44, "start": 2338.5600000000004, "text": " creed steps I think this this must be it because I don't really see any other way" }, { "end": 2350.96, "start": 2343.44, "text": " you would ever think that and the reason is this here they say will not change" }, { "end": 2354.7200000000003, "start": 2350.96, "text": " the name of the conference for now on another page they say it will discuss" }, { "end": 2358.92, "start": 2354.7200000000003, "text": " the name change at the conference and then here the instead I think what is" }, { "end": 2365.52, "start": 2358.92, "text": " meant is instead what we will do right now is these things we'll discuss about" }, { "end": 2369.56, "start": 2365.52, "text": " the name change but what we will do right now which was basically not the" }, { "end": 2374.96, "start": 2369.56, "text": " the real problem in the first place the real issue raised was the name so" }, { "end": 2379.32, "start": 2374.96, "text": " instead of that issue we'll do these other things which we feel the community" }, { "end": 2385.56, "start": 2379.32, "text": " wants I think that's the I think there's no I think 
everyone reading this comes" }, { "end": 2390.56, "start": 2385.56, "text": " to the same conclusion after after reading that but so I really don't see" }, { "end": 2396.1200000000003, "start": 2390.56, "text": " how you you can say that this is kind of presented as an either or by the board I" }, { "end": 2401.6000000000004, "start": 2396.1200000000003, "text": " don't think that at all and but you decide for yourself I believe the real" }, { "end": 2408.56, "start": 2401.6000000000004, "text": " real real crocs here is the for now and the promise to discuss at the" }, { "end": 2415.92, "start": 2408.56, "text": " conference which if you can see here in the paper is never ever ever touched" }, { "end": 2420.16, "start": 2415.92, "text": " right this they make it basically seem that the board has decided to not" }, { "end": 2425.56, "start": 2420.16, "text": " change the name and that's it which is completely wrong they've clearly stated" }, { "end": 2430.08, "start": 2425.56, "text": " their openness to a name change they want to discuss it it was just" }, { "end": 2434.94, "start": 2430.08, "text": " inconclusive so they want to basically not do anything rash and then half the" }, { "end": 2440.52, "start": 2434.94, "text": " community is against it anyway so they want to discuss it I to say that this is" }, { "end": 2450.7200000000003, "start": 2440.52, "text": " the basically that that the wording implied the need to choose I don't see" }, { "end": 2458.08, "start": 2450.7200000000003, "text": " that um but you know you decide for yourselves the board suggested a name" }, { "end": 2464, "start": 2458.08, "text": " change would only be symbolic and so on would have no real consequences so that" }, { "end": 2467.24, "start": 2464, "text": " this this these are some of the arguments basically made in the quotes" }, { "end": 2474.24, "start": 2467.24, "text": " as well but you know the fact that the name change would only be symbolic and" }, { "end": 2478.84, "start": 2474.24, "text": " so on these are all things you could actually discuss at the con at this" }, { "end": 2484.32, "start": 2478.84, "text": " conference meeting you could even correct for your for your poll right you" }, { "end": 2488.92, "start": 2484.32, "text": " could invite people who have left the community to represent those you could" }, { "end": 2493.96, "start": 2488.92, "text": " invite new potential researchers you could give everyone their voice and then" }, { "end": 2498.2000000000003, "start": 2493.96, "text": " actually listen to all of them I think that's a very sensible decision by the" }, { "end": 2505.56, "start": 2498.2000000000003, "text": " board and I think this is misrepresented here lastly let's say another argument" }, { "end": 2508.96, "start": 2505.56, "text": " though not explicitly mentioned a number of machine learning researchers told us" }, { "end": 2512.16, "start": 2508.96, "text": " that changing the name of the conference lead to too much confusion in the" }, { "end": 2516.4, "start": 2512.16, "text": " community while we understand we respectfully do not share it I mean this" }, { "end": 2519.92, "start": 2516.4, "text": " is it's basically an argument against the name change I think it's also a" }, { "end": 2526.7200000000003, "start": 2519.92, "text": " point worthy of discussion right that they say they say we respectfully do not" }, { "end": 2531.44, "start": 2526.7200000000003, "text": " share this point yeah okay they don't share it other people do it's a point" }, { "end": 
2535.44, "start": 2531.44, "text": " of discussion we could you know you could actually discuss it at the" }, { "end": 2539.7200000000003, "start": 2535.44, "text": " conference but I actually agree with the authors here I think changing the name" }, { "end": 2545.6800000000003, "start": 2539.7200000000003, "text": " will not have a big impact on the kind of recognizability of the conference" }, { "end": 2551.56, "start": 2545.68, "text": " especially now down here we'll actually get into what actually happened in" }, { "end": 2557.72, "start": 2551.56, "text": " November the in response to extensive public backlash the conference board" }, { "end": 2562.2799999999997, "start": 2557.72, "text": " announced a change to the official conference acronym to NRIPS they say we" }, { "end": 2570.2799999999997, "start": 2562.2799999999997, "text": " are pleased provides this provides a reasonable compromise so in in my opinion" }, { "end": 2576.0800000000004, "start": 2570.28, "text": " this is it as far as solutions go this is a good solution right the NRIPS" }, { "end": 2580.9, "start": 2576.0800000000004, "text": " acronym I think it's it's it's cool you don't have to change the name of the" }, { "end": 2586.2400000000002, "start": 2580.9, "text": " conference itself you simply change the acronym which you know was the the" }, { "end": 2592.2400000000002, "start": 2586.2400000000002, "text": " reported problem in the first place I think the all the new papers will like" }, { "end": 2598.28, "start": 2592.2400000000002, "text": " people will still recognize the old NIPS acronym or the new conference it will be" }, { "end": 2603.5600000000004, "start": 2598.28, "text": " clear that it's the same thing and I think this is a very good a very good" }, { "end": 2609.44, "start": 2603.5600000000004, "text": " new name and I think people will get used to it pretty quickly it also you" }, { "end": 2618.48, "start": 2609.44, "text": " know to say NRIPS it it's also rolls off the tongue easily so it's as far as" }, { "end": 2626.0400000000004, "start": 2618.48, "text": " solutions go I like it further they say however the work for the conference" }, { "end": 2631.68, "start": 2626.04, "text": " board is far from done oops we encourage the board to continue its efforts blah" }, { "end": 2638.2799999999997, "start": 2631.68, "text": " blah blah so they say okay you have to do more than just change the name and so" }, { "end": 2643.52, "start": 2638.2799999999997, "text": " on they say together these steps will help ensure that the NRIPS conference" }, { "end": 2646.2, "start": 2643.52, "text": " retains its place in the forefront of machine learning research while also" }, { "end": 2650, "start": 2646.2, "text": " creating a welcoming environment for women and members of other representative" }, { "end": 2659.2, "start": 2650, "text": " groups on other underrepresented groups we all hope that to me the problem is a" }, { "end": 2665.18, "start": 2659.2, "text": " bit how this how this went down and if we go back and look at the actual press" }, { "end": 2671.44, "start": 2665.18, "text": " release of the name change they say here dear members of the neural information" }, { "end": 2677.16, "start": 2671.44, "text": " processing systems community something remarkable has happened in our" }, { "end": 2681.7599999999998, "start": 2677.16, "text": " community the name NRIPS has sprung up organically as an alternative acronym" }, { "end": 2685.96, "start": 2681.7599999999998, "text": " we're delighted to see it 
being adopted indeed one forward-thinking member of" }, { "end": 2690.48, "start": 2685.96, "text": " the community purchased NRIPS comm described as purpose as hosting" }, { "end": 2694.2, "start": 2690.48, "text": " conference content under different acronym until the board catches up we've" }, { "end": 2700.44, "start": 2694.2, "text": " caught up we're considering alternative acronyms when the community support for" }, { "end": 2704.48, "start": 2700.44, "text": " NRIPS became apparent we ask all attendees to respect the solution from" }, { "end": 2710.04, "start": 2704.48, "text": " the community use the new acronym so basically they've rebranded the entire" }, { "end": 2715.96, "start": 2710.04, "text": " conference about a month before the actual meeting asked all sponsors all" }, { "end": 2723.64, "start": 2715.96, "text": " invited companies asked all invited papers to rebrand the acronym to me" }, { "end": 2728.92, "start": 2723.64, "text": " this the wording here is fit is a bit funny like something remarkable has" }, { "end": 2734.46, "start": 2728.92, "text": " happened in our community has sprung up organically and now we'll just adopt it" }, { "end": 2739.5, "start": 2734.46, "text": " it seems like it seems like much less of the fairy tale to describe here but the" }, { "end": 2745.32, "start": 2739.5, "text": " actual like there's a there's a mob with pitchforks around your house and this is" }, { "end": 2754.8, "start": 2745.32, "text": " like the first kind of straw that you can grab to to make them calm down and" }, { "end": 2759.56, "start": 2754.8, "text": " also know that some companies have begun pulling out funding for the conference" }, { "end": 2766.64, "start": 2759.56, "text": " so I think this is really this was really you know much more backed by" }, { "end": 2774.16, "start": 2766.64, "text": " force and and back yeah what they say in the paper extensive public backlash so" }, { "end": 2781, "start": 2774.16, "text": " loud screaming basically then this this kind of the name has sprung up" }, { "end": 2789.52, "start": 2781, "text": " organically and has been adopted and seems much more bit forceful to me it" }, { "end": 2795.16, "start": 2789.52, "text": " would have still been a viable path the most valuable path to actually wait for" }, { "end": 2800.7599999999998, "start": 2795.16, "text": " the conference and then have that discussion and then if indeed this name" }, { "end": 2805.56, "start": 2800.7599999999998, "text": " in the rips would be would be presented as a good alternative and you know" }, { "end": 2810.32, "start": 2805.56, "text": " people would be fine with that then you could still make the name change for" }, { "end": 2816.32, "start": 2810.32, "text": " last for next year I think this this would have been a good alternative my" }, { "end": 2823.6000000000004, "start": 2816.32, "text": " fear now is this has been extremely rash extremely forceful as as I've said also" }, { "end": 2831.6400000000003, "start": 2823.6000000000004, "text": " accompanied by with like by withdrawal of funding that I believe these things" }, { "end": 2836.96, "start": 2831.6400000000003, "text": " usually provoke a backlash and that's really something that I wouldn't look" }, { "end": 2841.4, "start": 2836.96, "text": " forward to so I hope that this con that this paragraph down here is true that" }, { "end": 2846.0800000000004, "start": 2841.4, "text": " actually we will see a more welcoming environment for everyone but I believe" }, { "end": 2852.72, "start": 
2846.08, "text": " things like this tend in society to have the sometimes very opposite effects of" }, { "end": 2862.16, "start": 2852.72, "text": " what's intended and so I hope this does not produce a backlash I think having" }, { "end": 2867.7599999999998, "start": 2862.16, "text": " had the actual discussion doing things non rashly would have done much more in" }, { "end": 2875.36, "start": 2867.7599999999998, "text": " the direction of preventing such a backlash so this is the end of the paper" }, { "end": 2883.4, "start": 2875.36, "text": " so to recap they basically say the acronym was was inappropriate which I" }, { "end": 2892.1200000000003, "start": 2883.4, "text": " agree with they say the survey was bad which I could believe if there was data" }, { "end": 2896.88, "start": 2892.1200000000003, "text": " they say that an issue adversely affecting the minority of participants" }, { "end": 2902.7200000000003, "start": 2896.88, "text": " should not be cited by majority vote which I absolutely disagree with and" }, { "end": 2909.64, "start": 2902.72, "text": " then they say the board has basically stated this as an either or decision" }, { "end": 2917.12, "start": 2909.64, "text": " which is I believe not true and misrepresenting or maybe I've missed" }, { "end": 2922.8799999999997, "start": 2917.12, "text": " something it's always possible lastly I want to get to this paragraph in recent" }, { "end": 2926.68, "start": 2922.8799999999997, "text": " months a number of women including some of the authors of this article who" }, { "end": 2930.68, "start": 2926.68, "text": " publicly expressed support for a change of the conference name have been" }, { "end": 2934.9199999999996, "start": 2930.68, "text": " relentlessly trolled harassed verbally abused and even physically threatened on" }, { "end": 2941.24, "start": 2934.9199999999996, "text": " Twitter reddit other online forums much of this harassment they say has been" }, { "end": 2947.44, "start": 2941.24, "text": " anonymous and typically has had an extremely gendered tone furthermore some" }, { "end": 2952.48, "start": 2947.44, "text": " students have reached out to us the authors lamenting the fact that they" }, { "end": 2956.96, "start": 2952.48, "text": " felt unable to openly express their support for renaming the conference due" }, { "end": 2961.8, "start": 2956.96, "text": " to fear of bullying or retaliation by faculty advisors or others in position" }, { "end": 2967.84, "start": 2961.8, "text": " of power this I believe is really bad the fact that people can't speak out" }, { "end": 2973, "start": 2967.84, "text": " about something like this without being bullied or harassed or having to fear" }, { "end": 2979.68, "start": 2973, "text": " for their careers basically is is bad and I would really discourage everyone" }, { "end": 2986.44, "start": 2979.68, "text": " from engaging in such behavior verbal abuse physically threaten I mean that's" }, { "end": 2991.2400000000002, "start": 2986.44, "text": " I mean to one point you can say all right if you've been on the internet for" }, { "end": 2995.8, "start": 2991.2400000000002, "text": " longer than a week then this probably has happened to you if you have had any" }, { "end": 2999.96, "start": 2995.8, "text": " sort of serious discussion on the internet but you can also say that" }, { "end": 3007.04, "start": 2999.96, "text": " doesn't make it right so I believe it's it's really important to separate what" }, { "end": 3013.2400000000002, "start": 3007.04, "text": " is you know 
harassment basically from actual disagreement and criticism and" }, { "end": 3021.04, "start": 3013.24, "text": " please engage in the latter do not engage in the former my problem with" }, { "end": 3027.9199999999996, "start": 3021.04, "text": " this paragraph it's again it's very one-sided it's basically stated here" }, { "end": 3032.04, "start": 3027.9199999999996, "text": " some students have reached out to us lamenting the fact that they felt unable" }, { "end": 3037.8799999999997, "start": 3032.04, "text": " to openly express their support for renaming the conference due to fear of" }, { "end": 3042.2799999999997, "start": 3037.8799999999997, "text": " bullying retaliation by faculty or advisors of other and others of position" }, { "end": 3055.28, "start": 3042.28, "text": " power to me I'm you know I'm gonna say this probably happens on both sides what" }, { "end": 3058.8, "start": 3055.28, "text": " you know one could argue where it happens more but this very much happens" }, { "end": 3064.36, "start": 3058.8, "text": " on both sides of this issue and it's real shame for both sides basically I" }, { "end": 3068.96, "start": 3064.36, "text": " think anyone should be able to express your opinion to to demonstrate that here" }, { "end": 3075.16, "start": 3068.96, "text": " I'm gonna show another Twitter thread by one of the authors of this paper where" }, { "end": 3080.32, "start": 3075.16, "text": " basically this is a thread where she posts screenshots of conversations" }, { "end": 3084.2, "start": 3080.32, "text": " basically people reaching out to her saying exactly that like I can't share" }, { "end": 3091.2, "start": 3084.2, "text": " my I have trouble sharing my opinion I get mocked for my opinion I can't do so" }, { "end": 3098.08, "start": 3091.2, "text": " publicly because I fear you know from my from my faculty and so on but then" }, { "end": 3103.52, "start": 3098.08, "text": " there's also this one here where a person wrote an email to the author" }, { "end": 3112.2799999999997, "start": 3103.52, "text": " basically saying they disagree with her and I I've read this email I don't you" }, { "end": 3119.4, "start": 3112.2799999999997, "text": " know I don't agree with the arguments here made but I can say that the this is" }, { "end": 3125.3199999999997, "start": 3119.4, "text": " not verbal abuse it's not personal attack it's not physically threatening" }, { "end": 3131.1600000000003, "start": 3125.32, "text": " it's actually quite respectful disagreement that the person actually" }, { "end": 3136.32, "start": 3131.1600000000003, "text": " goes through length to say how respectful they are how much you know how" }, { "end": 3145.28, "start": 3136.32, "text": " much this is meant as a as a disagreement on factual terms and further" }, { "end": 3152.44, "start": 3145.28, "text": " what they say is that they want to be anonymous maybe you see it on the very" }, { "end": 3156.04, "start": 3152.44, "text": " bottom for example I haven't done too much to anonymize myself but I ask you" }, { "end": 3159.6, "start": 3156.04, "text": " to respect my wishes of remaining anonymous don't try to figure out who I" }, { "end": 3165.44, "start": 3159.6, "text": " am further up they state basically they want to remain anonymous because they" }, { "end": 3171.04, "start": 3165.44, "text": " fear for their ladder for their later career right they fear of a backlash up" }, { "end": 3175.92, "start": 3171.04, "text": " here wish to remain anonymous as I'm an early in my career someday we may 
work" }, { "end": 3186.84, "start": 3175.92, "text": " together so basically they say here I disagree here's why I disagree and they" }, { "end": 3191.2200000000003, "start": 3186.84, "text": " wish to remain anonymous because they fear for their career right so this is" }, { "end": 3198.52, "start": 3191.2200000000003, "text": " almost like this is this is very much here feeling unable and will will go" }, { "end": 3205.36, "start": 3198.52, "text": " feeling unable to openly express their in the case support against renaming" }, { "end": 3211.6400000000003, "start": 3205.36, "text": " the conference to to fear of bullying or retaliation by faculty advisor others" }, { "end": 3216.7200000000003, "start": 3211.6400000000003, "text": " in position of power so this author here is obviously a real person in position" }, { "end": 3222, "start": 3216.7200000000003, "text": " of power and in very famous senior researcher and this person basically" }, { "end": 3226.6600000000003, "start": 3222, "text": " says I'm afraid and I can't you know that that's why I'm anonymous and the" }, { "end": 3233.04, "start": 3226.6600000000003, "text": " way the author responded here as you can read is what an anonymous coward of" }, { "end": 3240.92, "start": 3233.04, "text": " course I will do everything to guess you and it's it's difficult to to kind of" }, { "end": 3246.88, "start": 3240.92, "text": " put this off as I mean even if it's I don't know how it's meant right I will" }, { "end": 3251.44, "start": 3246.88, "text": " do everything to guess you and the least it means she will try to figure out who" }, { "end": 3257.16, "start": 3251.44, "text": " that is right and she doesn't go as far as saying that she will then basically" }, { "end": 3263.8799999999997, "start": 3257.16, "text": " either you know remember that name in case of any future thing or share it or" }, { "end": 3270.12, "start": 3263.8799999999997, "text": " whatnot but it's certainly you can't argue that this is a real deterrent for" }, { "end": 3277.3199999999997, "start": 3270.12, "text": " other people to even anonymously voice their opinion to if if this person" }, { "end": 3283.72, "start": 3277.3199999999997, "text": " announces I will do everything to guess you to me that that shows that this" }, { "end": 3289.2799999999997, "start": 3283.72, "text": " fear that we discuss here is very much present on both sides and it's" }, { "end": 3298.48, "start": 3289.2799999999997, "text": " absolutely not okay if if either side reacts by basically by basically" }, { "end": 3304.8399999999997, "start": 3298.48, "text": " retaliation or even even the the possibility of retaliation and I believe" }, { "end": 3309.24, "start": 3304.8399999999997, "text": " everyone should be able to say their opinion I respect really everyone even" }, { "end": 3314.72, "start": 3309.24, "text": " like these these authors here clearly took a lot of effort and a lot of a lot" }, { "end": 3319.2, "start": 3314.72, "text": " of beating basically they say they've been relentlessly trolled harassed" }, { "end": 3323.68, "start": 3319.2, "text": " verbally abused even physically threatened this is just really bad and" }, { "end": 3328.3999999999996, "start": 3323.68, "text": " have lots of respect for them saying their opinions stating their opinions" }, { "end": 3333.04, "start": 3328.3999999999996, "text": " anyway I think everyone should be able to do that without these things happening" }, { "end": 3340, "start": 3333.04, "text": " so to everyone watching I 
encourage you to not engage in these things and that" }, { "end": 3345.16, "start": 3340, "text": " alone will probably make the environment much much more inclusive and nice for" }, { "end": 3353.08, "start": 3345.16, "text": " everybody irregardless of of affiliation so that was it for me for this paper" }, { "end": 3360.16, "start": 3353.08, "text": " it's a bit longer it's a bit ranty if you agree disagree let me know in the" }, { "end": 3369.24, "start": 3360.16, "text": " comments I guess and other than that have a nice week weekend whatever you do" }, { "end": 3392.4399999999996, "start": 3369.24, "text": " bye" } ]
_PyusGsbBPY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Stochastic RNNs without Teacher-Forcing
[ "Science & Technology" ]
[ "NeurIPS2018", "NIPS2018", "NLP", "deep learning", "RNN" ]
We present a stochastic non-autoregressive RNN that does not require teacher-forcing for training. The content is based on our 2018 NeurIPS paper: Deep State Space Models for Unconditional Word Generation https://arxiv.org/abs/1806.04550
Hi everybody, my name is Florian and Yannic was nice enough to host me here as a guest to talk about Stochastic RNNs without teacher forcing. This is based on recent work, Deep State Space Models for Unconditional Word Generation, which we presented at this year's NeurIPS. And if you'd like any more details, please check out the paper. We focus on a de facto standard training hack for any RNNs that generate text. It's called teacher forcing and it's used in any model, whether unconditional or conditional, such as in a sentence autoencoder or in a translation model. To understand where teacher forcing comes from, we first need to understand where text generation comes from. For the good or the bad, and here we will focus on the bad, text generation has its roots in language modeling. So language modeling is the problem of predicting the next word, given all the previous words. People used to use n-gram models for this, but today people use recurrent neural networks to do that. Such recurrent neural networks or RNNs factorize the joint observation probability of a sequence that I here depict as W into independent softmax distributions over individual tokens. So for every time step, there's a softmax function. And the softmax is conditioned on a hidden state. And all the magic of the RNN goes into the function that gives you the new state, given the old hidden state. Usually this is called a transition function, F, and as an input it gets the last state and the last word. So F could be a GRU function or an LSTM function. Just like any other language model, you can turn this into a generative model of text. Let's look at the dependencies that you would have at test time. There's an initial hidden state H1. We sample a new word. We use our transition function F and it gives us the new state H2. Then we can sample a new word W2, feed it back, get a new state, sample a new word, feed it back. It's important to note that all the stochasticity in the output is solely due to the stochasticity in the sampling process, because the transition function is deterministic. So far there's nothing to complain about. But so far I've only talked about test time. At training time there is a catch. This is where teacher forcing kicks in. It turns out that you can't learn this model by basing the evolution of the hidden states on your own predictions. You have to use teacher forcing, and that means you substitute your own prediction by the ground truth. So at training time there's no sampling loop. You just take the ground truth token and feed it into your state transition function. So that feels unintuitive, because at test time we do something else than we do at training time. And it's also been known in the literature for a few years to cause biases. So why is that problematic? Remember we come from language modeling. In language modeling we could argue that if our only goal is to predict one word given the previous words, then of course we can use the ground truth context, the ground truth previous words. But if we're interested in generating longer sequences, then we need to learn what to memorize. And in particular we need to become robust against our own predictions, because we might make mistakes at test time and there's no ground truth at test time. Just to get this confirmed by somebody who has worked in the field for years, at the NeurIPS representation learning workshop Alex Graves mentioned teacher forcing as one of the big three problems for autoregressive models.
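To make the contrast just described concrete, here is a minimal sketch of a teacher-forced training step versus free-running sampling for a vanilla RNN language model. This is an illustrative reconstruction, not code from the paper; the GRUCell transition, the vocabulary size, and multinomial sampling are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

V, E, H = 100, 32, 64                    # vocabulary, embedding, hidden sizes (arbitrary)
emb = nn.Embedding(V, E)
f = nn.GRUCell(E, H)                     # the transition function F(h_{t-1}, w_{t-1})
head = nn.Linear(H, V)                   # produces the softmax logits

def teacher_forced_loss(words):
    # words: LongTensor of shape (T,). The ground-truth token is fed back at every step,
    # so there is no sampling loop during training.
    h = torch.zeros(1, H)
    loss = torch.zeros(())
    for t in range(len(words) - 1):
        h = f(emb(words[t]).unsqueeze(0), h)
        loss = loss + F.cross_entropy(head(h), words[t + 1].unsqueeze(0))
    return loss / (len(words) - 1)

def free_running_sample(T, w0=0):
    # Test time: the model's own sampled word is fed back into the transition.
    h, w = torch.zeros(1, H), torch.tensor([w0])
    seq = [w0]
    for _ in range(T):
        h = f(emb(w), h)
        w = torch.multinomial(F.softmax(head(h), dim=-1), 1).squeeze(1)
        seq.append(int(w))
    return seq

For example, teacher_forced_loss(torch.randint(0, V, (12,))) computes one training loss with ground-truth feedback, while free_running_sample(12) generates a sequence by feeding back its own predictions, which is exactly the mismatch between the two regimes.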
And in his own words, teacher forcing might lead to predicting one step ahead, not many, and potentially brittle generation and myopic representations. How have people addressed teacher forcing so far? There are approaches that try to mitigate the problem. For example, by blending together these two views, training time and test time, so that sometimes you use your own prediction during training, but sometimes you use the ground truth. We believe that for a rigorous model of text generation, we need a rigorous model of uncertainty. This should be an integral part of any generative model, and therefore it should be the same model both at training time and test time, without any hacks. We propose a fundamentally different approach by proposing a new transition function. The new transition function is non-autoregressive. That means it depends on the last state, ht-1, but it doesn't depend on the last word. That means teacher forcing is not an option anymore, but it also means teacher forcing is not a problem anymore. Instead, the transition function accepts a white noise vector as the second input. Now you might wonder, why do we need noise at all as an input to the transition function? Well, for a given prefix, there might be different continuations. So we need some source of entropy to model the entropy in different continuations. The rest of the paper pretty much focuses on the following two questions. First, which function f is powerful enough to turn the simplest noise source, just a standard Gaussian vector, into something that is powerful enough to replace the autoregressive feedback mechanism of a standard RNN? And the second question is, of course, how do we train this? What framework do we train this in? And it will turn out that variational flows are suitable functions f and variational inference is the right framework to train them. So here's the roadmap to complete the model. First, we need to cast the generative model as a probabilistic model, because so far I've only sketched a procedure that involves sampling some noise and then applying some function and then predicting observations. Then we need to propose a variational inference model so that we can do maximum likelihood training. We will derive an ELBO, which is our objective. Then in the paper, we also describe how the tightness of the ELBO can be improved. And here I will finish by talking a bit about the evaluation and what we do to inspect the model. Since this work is based a lot on variational flows, let me give you a quick summary of variational flows. A variational flow is a diffeomorphism f, which maps from what I will call a simple noise space, Xi, to a complex noise space, H. And here I'm already using the notation for our sequence model. Simply by the change of variables formula, we know that the probability of an event H in the complex space is simply the probability of the event in the simple noise space Xi, as given by the inverse of f, times a Jacobian term with respect to f evaluated at Xi. How can we use this in our sequential setting? First, let me fix some notation, because sequential models are pretty prone to overloaded notation. I'll write time as t running from 1 to capital T. And whenever I talk about a sequence of variables like w, I don't index them. I just write w without an index. And only when I need a specific element, I'll write it as wt. Let's formalize the generative model. We start out with the probability of observing a sequence w.
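Written out in the talk's notation, the change of variables identity just described is the standard one; this is a restatement from the description, not copied from the slides:

\log p_H(h) \;=\; \log p_\Xi\!\big(f^{-1}(h)\big) \;+\; \log\left|\det \frac{\partial f^{-1}(h)}{\partial h}\right|
\;=\; \log p_\Xi(\xi) \;-\; \log\left|\det \frac{\partial f(\xi)}{\partial \xi}\right|, \quad \text{with } \xi = f^{-1}(h).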
And since we use the latent variable model, we marginalize out the latent variables H. And then we will assume that the overall dependencies between hidden states H and observations w follow like an HMM type of dependency. That means the new state only depends on the last state and the current observation only depends on the current state. And now the question is how do we model these transitions? I've so far pitched the ideas of sampling noise and then using some transition function f. And we have seen flows already. Now we are ready to combine the two. We propose a transition function fg, which has the signature as I mentioned before. It gets a hidden state and noise vector as an input. And it gives you a new state as an output. This can be seen as a conditional flow because any ht minus 1, any last state, inserted as the first argument into fg, induces a flow which maps from the simple noise distribution to the space of new hidden states. And as I've said before, for the prior distribution in the simple noise space, we simply assume it's a standard Gaussian. Let's look at this graphically, because in the end this is a graphical model. I copied over the formulas from the last slide. And at the bottom you see the graphical model. First we have a sequence of stochastic variables Xi. Those deterministically induce via the transition function f, via the flow, a sequence of hidden states. And those independently predict the observations. All the magic is in the transition. So let me sketch this process here in the big circle. How do we get from the last state h2 to the new state h3? Let's say h2 encodes a prefix and there are two possible continuations. They're equally likely in the corpus, so there are two potential new states. The blue state h3 and the yellow state h3. I've sketched the standard Gaussian noise distribution at the top. There are yellow samples and there are blue samples. The flow realizes a mapping that takes any yellow sample and maps it to the yellow hidden state. And it maps any blue sample to the blue hidden state. So with probability one half in this situation, we either get a blue or a yellow sample from the simple noise distribution. And it will induce new states, blue h3 or the yellow h3. So far we have proposed the generative model. Now the question is how do we train it if we don't know the hidden states? The answer is variational inference and in particular, amortized variational inference. The key idea of variational inference is to introduce a parameterized approximate inference model. How do we propose such a model? Well, a good recipe is to first look at a true posterior. The probability of a state sequence given an observation sequence. The true posterior turns out to factorize into individual components, which give us the probability of a state given the last state and the future observations. It turns out that we can formulate this inference model using two ingredients that should be familiar. First, we use a transition function Fq, which induces a flow. It has the same signature as Fg for the generative model. And we use a noise source q. But now the noise source isn't uninformative anymore. In variational inference, the inference network is informed about the data. So there's a base distribution q of Xi t, which is allowed to look at the data Wt. Now compare this to teacher forcing. In teacher forcing, we substitute our own predictions by inserting ground truth information into the generative model. 
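As a compact summary of the pieces introduced so far, reconstructed from the description above (the exact conditioning set of the inference base distribution is an assumption based on the posterior factorization just mentioned):

p_\theta(w_{1:T}) \;=\; \int \prod_{t=1}^{T} p_\theta(w_t \mid h_t)\, p(h_t \mid h_{t-1}) \, dh_{1:T},
\qquad h_t = f_g(h_{t-1}, \xi_t), \;\; \xi_t \sim \mathcal{N}(0, I),

q(h_{1:T} \mid w_{1:T}) \;=\; \prod_{t=1}^{T} q(h_t \mid h_{t-1}, w_{t:T}),
\qquad h_t = f_q(h_{t-1}, \tilde{\xi}_t), \;\; \tilde{\xi}_t \sim q(\tilde{\xi}_t \mid w_{t:T}).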
In variational inference, it's very clear how to use the data. The data enters through the inference model, and it enters in the form of future observations, because the past observations we want to store in the hidden state. It remains to derive an ELBO, which is the usual evidence lower bound objective used for variational inference. Any ELBO, whether it's in a sequential setting or not, factorizes into two parts, a reconstruction loss and a model mismatch term. Here, reconstruction loss means the probability of an observation given a state. And the model mismatch is between the generative model P and the inference model q. This is what is usually written as a KL divergence. To derive our ELBO, we follow the literature on flows. In the first step, we introduce the flow of the inference model, Fq. We turn the expectation with respect to the complex state space H into an expectation with respect to the simple noise distribution. And then, of course, at the same time, the flow appears inside the expectation. And we get the log-determinant terms that I've mentioned before. In the second step, we introduce the generative flow Fg using the same change of variables technique. It's possible to write out the ELBO in a way so that there's only one Jacobian term for both flows and so that the generative model always appears as the inverse concatenated with the inference flow. In a second, I'll show you what the interpretation of that is. Let's quickly recap what we've seen so far. There's a generative model. It consists of a generative flow Fg and an uninformed noise source. There's an inference model, which contains an inference flow Fq and a simple base distribution across the noise variables q of xi. In the ELBO, the two flows appear concatenated, and we can interpret this in the following way. The inference model q proposes a noise vector, xi t, that is informed about the future. The inference flow maps this to a hidden state. At the hidden state, the reconstruction loss lives. This is where we pay a price for making a bad prediction. However, the inference model cannot encode all the possible information about the future into the hidden state, ht, because the mapping continues to the simple noise space of the generative model. And the inference model must make sure that the proposal also covers significant probability mass under the uninformed prior. This trade-off between reconstruction and model mismatch is common to all ELBOs. But here we highlight the special situation where we have two flows, one for the inference model and one for the generative model. In our paper, we also show how we can use the recently proposed importance weighted autoencoder to improve the tightness of our bound, but I'll skip those steps here. Instead, let's quickly talk about evaluation. We apply our model to unconditional generation. So why in hell would somebody look into unconditional generation? Well, actually, it turns out it's harder than conditional generation. If you know what the French sentence looks like, it's much easier to continue a partial English translation. But it's not only harder, it's also more interesting to inspect which information a sequence model needs to store and which information it can forget. We use two metrics to evaluate our model. First, we look at sequence cross entropy. So we compare the model's sequence distribution to the data sequence distribution. Usually estimating the data distribution is impossible.
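One way to write the resulting per-sequence ELBO with the single Jacobian term and the concatenated flows described above is the following sketch; it is reconstructed from the description, and the paper's exact expression may differ in detail:

\mathcal{L}(w) \;=\; \mathbb{E}_{q}\!\left[\,\sum_{t=1}^{T} \log p_\theta(w_t \mid h_t)
\;+\; \log p\big(\hat{\xi}_t\big)
\;+\; \log\left|\det \frac{\partial\,(f_g^{-1}\!\circ f_q)(\xi_t)}{\partial \xi_t}\right|
\;-\; \log q\big(\xi_t \mid w_{t:T}\big)\right],

\text{where } h_t = f_q(h_{t-1}, \xi_t) \text{ and } \hat{\xi}_t = f_g^{-1}(h_{t-1}, h_t) = (f_g^{-1}\!\circ f_q)(\xi_t).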
You don't want to say that the probability of a sentence is how many times the sentence has appeared in the training data. However, for words, we can use unigram frequencies of words in a corpus as a pretty reliable estimate. Also, we can get an estimate of our model's probability assigned to a sequence by using MC sampling. We take the marginal likelihood, sample k trajectories, and assess the probability that the trajectories assign to the given sequence. Since our model is not autoregressive, the sequence isn't tied to an observation. So we can actually use the same sequences of hidden states to evaluate probabilities for all the words in the vocabulary. Since we've pitched our noise model as the key contribution to our generative model, we want to empirically verify that the model is being used. Working with a clean probabilistic model allows us to use tools from probability theory to assess that. We use the mutual information between a noise vector at time t and the observation at time t. So this measures how much information in the output is actually due to the noise model. Before showing you the numbers, let's quickly go across the parameterization of our model. For the flows, we look at shift-scaling transformations. And if the scaling g is lower triangular, we can compute the Jacobian determinant efficiently. We also look at Real NVP, and we compose flows by concatenation. The base distribution of our inference model depends on the future observations, which we summarize using a GRU RNN. The base distribution itself is a diagonal Gaussian. We use a state size of 8 and also run some experiments for 16 and 32. All the numbers are in the paper, so here are just the take-home messages. We are on par or better than a stochastic RNN with teacher forcing trained at the same state size. Also, we observed that a powerful generative flow is essential to achieve good performance. Furthermore, we can confirm that the importance weighted ELBO improved the results. This is the first model applying generative flows to sequence modeling. So naturally, we are interested in comparing the expressiveness of fg and fq. Our paper has a table that compares four choices for both flows. Our findings are that the generative flow should be powerful and the inference flow should be slightly less powerful. To understand our noise model, we look at the mutual information at every time step and show a box plot for all of them. Initially, the mutual information is highest, which means the initial character is most important to remember. The noise model is never being ignored, and we see increased variance in the remaining time steps because we are averaging here across different sequences. A non-autoregressive model needs to have lower entropy in the observation model, because any entropy under the observation model is forgotten, since there is no feedback. The purple line shows you the observation model entropy during training. The dashed red line shows you the entropy of the observation model of a baseline. So indeed, we have lower entropy in the observation model, and at the same time, in green, you see the mutual information increasing. Let's summarize our findings. Using variational flows, non-autoregressive modeling of sequences is possible and teacher forcing is not necessary. At the same time, we get a noise model that is the driving factor of the sequence model and is easy to interpret. For any details, please check out the paper and for any questions, shoot me an email.
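To illustrate the Monte Carlo estimate of the sequence probability described above, here is a minimal sketch; the transition f_g and the observation head emit are placeholders for the learned components, not the paper's implementation:

import math
import torch

def log_prob_mc(words, f_g, emit, state_size, K=100):
    # words: LongTensor of shape (T,); f_g(h, xi) maps (K, state_size) states and noise
    # to new states; emit(h) returns (K, V) logits of the observation softmax.
    # Sample K trajectories from the uninformed prior (no feedback from the words)
    # and average the probability each trajectory assigns to the given sequence.
    h = torch.zeros(K, state_size)
    logp = torch.zeros(K)
    for t in range(len(words)):
        xi = torch.randn(K, state_size)            # standard Gaussian prior noise
        h = f_g(h, xi)                             # non-autoregressive transition
        log_probs = emit(h).log_softmax(dim=-1)    # observation model, shape (K, V)
        logp = logp + log_probs[:, words[t]]       # score the observed word at step t
    return torch.logsumexp(logp, dim=0) - math.log(K)

Because the sampled trajectories never see the observed words, the same K trajectories can be reused to score every word in the vocabulary at each position, which is the point made above about the sequence not being tied to an observation.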
[ { "end": 6, "start": 0, "text": " Hi everybody, my name is Florian and Janik was nice enough to host me here as a guest to talk about" }, { "end": 14, "start": 6, "text": " Stochastic RNNs without teacher forcing. This is based on recent work, deep state space models for" }, { "end": 21, "start": 14, "text": " unconditional word generation, which we presented at this year's New RIPs. And if you feel like any more details," }, { "end": 29, "start": 21, "text": " please check out the paper. We focus on a de facto standard training hack for any RNNs that generate" }, { "end": 37, "start": 29, "text": " text. It's called teacher forcing and it's used in any model, whether unconditional or conditional," }, { "end": 45, "start": 37, "text": " such as in a sentence autoencoder or in a translation model. To understand where teacher forcing comes from," }, { "end": 52, "start": 45, "text": " we first need to understand where text generation comes from. For the good or the bad, and here we will focus on the bad," }, { "end": 60, "start": 52, "text": " text generation has its roots in language modeling. So language modeling is the problem of predicting the next word," }, { "end": 69, "start": 60, "text": " given all the previous words. People used to use ANGRA models for this, but today people use recurrent neural networks to do that." }, { "end": 78, "start": 69, "text": " Such recurrent neural networks or RNNs factorize the joint observation probability of a sequence that I here depict as W" }, { "end": 86, "start": 78, "text": " into independent softmax distributions over individual tokens. So for every time step, there's a softmax function." }, { "end": 93, "start": 86, "text": " And the softmax is conditioned on a hidden state. And all the magic of the RNN goes into the function that gives you the new state," }, { "end": 101, "start": 93, "text": " given the old hidden state. Usually this is called a transition function, F, and as an input it gets the last state and the last word." }, { "end": 111, "start": 101, "text": " So F could be a GUO function or an LSTM function. Just like any other language model, you can turn this into a generative model of text." }, { "end": 118, "start": 111, "text": " Let's look at the dependencies that you would have at test time. There's initial hidden state H1. We sample a new word." }, { "end": 126, "start": 118, "text": " We use our transition function F and it gives us the new state H2. Then we can sample a new word W2, feed it back," }, { "end": 135, "start": 126, "text": " get a new state, sample a new word, feed it back. It's important to note that all the stochasticity in the output is solely due to the stochasticity" }, { "end": 142, "start": 135, "text": " in the sampling process, because the transition function is deterministic. So far there's nothing to complain about." }, { "end": 149, "start": 142, "text": " But so far I've only talked about test time. At training time there is a catch. This is where teacher forcing kicks in." }, { "end": 156, "start": 149, "text": " It turns out that you can't learn this model by basing the evolution of the hidden states on your own predictions." }, { "end": 161, "start": 156, "text": " You have to use teacher forcing and that means you substitute your own prediction by the ground truth." }, { "end": 168, "start": 161, "text": " So at training time there's no sampling loop. You just take the ground truth token and feed it into your state transition function." 
}, { "end": 173, "start": 168, "text": " So that feels unintuitive because at test time we do something else than we do at training time." }, { "end": 179, "start": 173, "text": " And it's also known in the literature for a few years to cause biases. So why is that problematic?" }, { "end": 188, "start": 179, "text": " Remember we come from language modeling. In language modeling we could argue that if our only goal is to predict one word given the previous words," }, { "end": 192, "start": 188, "text": " then of course we can use the ground truth context to ground truth previous words." }, { "end": 198, "start": 192, "text": " But if we're interested in generating like longer sequences, then we need to learn what to memorize." }, { "end": 207, "start": 198, "text": " And in particular we need to become robust against our own predictions because we might make mistakes at test time and there's no ground truth at test time." }, { "end": 215, "start": 207, "text": " Just to get this confirmed by somebody who has worked in the field for years, at the NeurIPS representation learning workshop Alex Grave mentioned" }, { "end": 220, "start": 215, "text": " teacher forcing as one of the big three problems for autoregressive models." }, { "end": 230, "start": 220, "text": " And in his own words, teacher forcing might lead to predict one step ahead, not many and potentially brittle generation and myopic representations." }, { "end": 235, "start": 230, "text": " How have people addressed teacher forcing so far? There are approaches to try to mitigate the problem." }, { "end": 242, "start": 235, "text": " For example, by blending together these two views, training time and test time, so that sometimes you use your own prediction during training," }, { "end": 249, "start": 242, "text": " but sometimes you use the ground truth. We believe for a rigorous model of text generation, we need a rigorous model of uncertainty." }, { "end": 258, "start": 249, "text": " This should be an integral part of any generative model and therefore it should be the same model both at training time and test time without any hacks." }, { "end": 263, "start": 258, "text": " We propose a fundamentally different approach by proposing a new transition function." }, { "end": 273, "start": 263, "text": " The new transition function is non autoregressive. That means it depends on the last stage, ht-1, but it doesn't depend on the last word." }, { "end": 279, "start": 273, "text": " That means teacher forcing is not an option anymore, but it also means teacher forcing is not a problem anymore." }, { "end": 284, "start": 279, "text": " Instead, the transition function accepts a white noise vector as the second input." }, { "end": 289, "start": 284, "text": " Now you might wonder why do we need noise at all as an input to the transition function?" }, { "end": 293, "start": 289, "text": " Well, for a given prefix, there might be different continuations." }, { "end": 298, "start": 293, "text": " So we need some source of entropy to model the entropy in different continuations." }, { "end": 303, "start": 298, "text": " The rest of the paper pretty much focuses on the following two questions." }, { "end": 311, "start": 303, "text": " A. Which function f is powerful enough to turn the most simple noise source, just the standard Gaussian vector," }, { "end": 317, "start": 311, "text": " into something that is powerful enough to replace the autoregressive feedback mechanism of a standard RNN?" 
}, { "end": 322, "start": 317, "text": " And the second question is, of course, how do we train this? What framework do we train this in?" }, { "end": 331, "start": 322, "text": " And it will turn out that variational flows are suitable functions f and variational inference is the right framework to train them." }, { "end": 334, "start": 331, "text": " So here's the roadmap to complete the model." }, { "end": 340, "start": 334, "text": " First, we need to cast the generative model as a probabilistic method because so far I've only sketched a procedure" }, { "end": 346, "start": 340, "text": " that involves sampling some noise and then applying some function and then predicting observations." }, { "end": 351, "start": 346, "text": " Then we need to propose a variational inference model so that we can do maximum likelihood training." }, { "end": 354, "start": 351, "text": " We will derive an elbow, which is our objective." }, { "end": 359, "start": 354, "text": " Then in the paper, we also describe how the tightness of the elbow can be improved." }, { "end": 365, "start": 359, "text": " And here I will finish by talking a bit about the evaluation and what we do to inspect the model." }, { "end": 372, "start": 365, "text": " Since this work is based a lot on variational flows, let me give you a quick summary of variational flows." }, { "end": 382, "start": 372, "text": " A variational flow is a diffeomorphism f, which maps from what I will call a simple noise space, Xi, to a complex noise space, H." }, { "end": 386, "start": 382, "text": " And here I'm already using the notation for our sequence model." }, { "end": 395, "start": 386, "text": " Simply by the change of variable formula, we know that the probability of an event H in the complex space is simply the probability of the event" }, { "end": 404, "start": 395, "text": " in the simplest space Xi as given by the inverse of f times a Jacobian term with respect to f evaluated at Xi." }, { "end": 407, "start": 404, "text": " How can we use this in our sequential setting?" }, { "end": 413, "start": 407, "text": " First, let me fix some notation because sequential models are pretty prone to overloaded notation." }, { "end": 418, "start": 413, "text": " I'll write time as t running from 1 to capital T." }, { "end": 425, "start": 418, "text": " And whenever I talk about a sequence of variables like w, I don't index them. I just write w without an index." }, { "end": 431, "start": 425, "text": " And only when I need a specific element, I'll write it as wt." }, { "end": 434, "start": 431, "text": " Let's formalize the generative model." }, { "end": 438, "start": 434, "text": " We start out with the probability of observing a sequence w." }, { "end": 443, "start": 438, "text": " And since we use the latent variable model, we marginalize out the latent variables H." }, { "end": 453, "start": 443, "text": " And then we will assume that the overall dependencies between hidden states H and observations w follow like an HMM type of dependency." }, { "end": 459, "start": 453, "text": " That means the new state only depends on the last state and the current observation only depends on the current state." }, { "end": 462, "start": 459, "text": " And now the question is how do we model these transitions?" }, { "end": 467, "start": 462, "text": " I've so far pitched the ideas of sampling noise and then using some transition function f." }, { "end": 472, "start": 467, "text": " And we have seen flows already. Now we are ready to combine the two." 
}, { "end": 478, "start": 472, "text": " We propose a transition function fg, which has the signature as I mentioned before." }, { "end": 481, "start": 478, "text": " It gets a hidden state and noise vector as an input." }, { "end": 484, "start": 481, "text": " And it gives you a new state as an output." }, { "end": 494, "start": 484, "text": " This can be seen as a conditional flow because any ht minus 1, any last state, inserted as the first argument into fg," }, { "end": 502, "start": 494, "text": " induces a flow which maps from the simple noise distribution to the space of new hidden states." }, { "end": 510, "start": 502, "text": " And as I've said before, for the prior distribution in the simple noise space, we simply assume it's a standard Gaussian." }, { "end": 514, "start": 510, "text": " Let's look at this graphically, because in the end this is a graphical model." }, { "end": 517, "start": 514, "text": " I copied over the formulas from the last slide." }, { "end": 519, "start": 517, "text": " And at the bottom you see the graphical model." }, { "end": 523, "start": 519, "text": " First we have a sequence of stochastic variables Xi." }, { "end": 530, "start": 523, "text": " Those deterministically induce via the transition function f, via the flow, a sequence of hidden states." }, { "end": 533, "start": 530, "text": " And those independently predict the observations." }, { "end": 536, "start": 533, "text": " All the magic is in the transition." }, { "end": 540, "start": 536, "text": " So let me sketch this process here in the big circle." }, { "end": 545, "start": 540, "text": " How do we get from the last state h2 to the new state h3?" }, { "end": 549, "start": 545, "text": " Let's say h2 encodes a prefix and there are two possible continuations." }, { "end": 554, "start": 549, "text": " They're equally likely in the corpus, so there are two potential new states." }, { "end": 558, "start": 554, "text": " The blue state h3 and the yellow state h3." }, { "end": 562, "start": 558, "text": " I've sketched the standard Gaussian noise distribution at the top." }, { "end": 565, "start": 562, "text": " There are yellow samples and there are blue samples." }, { "end": 570, "start": 565, "text": " The flow realizes a mapping that takes any yellow sample and maps it to the yellow hidden state." }, { "end": 574, "start": 570, "text": " And it maps any blue sample to the blue hidden state." }, { "end": 580, "start": 574, "text": " So with probability one half in this situation, we either get a blue or a yellow sample from the simple noise distribution." }, { "end": 586, "start": 580, "text": " And it will induce new states, blue h3 or the yellow h3." }, { "end": 589, "start": 586, "text": " So far we have proposed the generative model." }, { "end": 593, "start": 589, "text": " Now the question is how do we train it if we don't know the hidden states?" }, { "end": 598, "start": 593, "text": " The answer is variational inference and in particular, amortized variational inference." }, { "end": 604, "start": 598, "text": " The key idea of variational inference is to introduce a parameterized approximate inference model." }, { "end": 606, "start": 604, "text": " How do we propose such a model?" }, { "end": 610, "start": 606, "text": " Well, a good recipe is to first look at a true posterior." }, { "end": 614, "start": 610, "text": " The probability of a state sequence given an observation sequence." 
}, { "end": 619, "start": 614, "text": " The true posterior turns out to factorize into individual components," }, { "end": 625, "start": 619, "text": " which give us the probability of a state given the last state and the future observations." }, { "end": 631, "start": 625, "text": " It turns out that we can formulate this inference model using two ingredients that should be familiar." }, { "end": 636, "start": 631, "text": " First, we use a transition function Fq, which induces a flow." }, { "end": 639, "start": 636, "text": " It has the same signature as Fg for the generative model." }, { "end": 642, "start": 639, "text": " And we use a noise source q." }, { "end": 646, "start": 642, "text": " But now the noise source isn't uninformative anymore." }, { "end": 650, "start": 646, "text": " In variational inference, the inference network is informed about the data." }, { "end": 656, "start": 650, "text": " So there's a base distribution q of Xi t, which is allowed to look at the data Wt." }, { "end": 659, "start": 656, "text": " Now compare this to teacher forcing." }, { "end": 666, "start": 659, "text": " In teacher forcing, we substitute our own predictions by inserting ground truth information into the generative model." }, { "end": 669, "start": 666, "text": " In variational inference, it's very clear how to use the data." }, { "end": 675, "start": 669, "text": " The data enters through the inference model and it enters in the form of future observation" }, { "end": 679, "start": 675, "text": " because the past observation we want to store in the hidden state." }, { "end": 686, "start": 679, "text": " It remains to derive an elbow, which is the usual evidence lower bound objective used for variational inference." }, { "end": 691, "start": 686, "text": " Any elbow, whether it's in a sequential setting or not, factorizes into two parts," }, { "end": 694, "start": 691, "text": " a reconstruction loss and a model mismatch term." }, { "end": 699, "start": 694, "text": " Here, reconstruction loss means probability of observation given a state." }, { "end": 704, "start": 699, "text": " And model mismatch is between the generative model P and the inference model q." }, { "end": 708, "start": 704, "text": " This is what is usually written as a KL divergence." }, { "end": 713, "start": 708, "text": " To derive our elbow, we follow the literature on flows." }, { "end": 718, "start": 713, "text": " In the first step, we introduced the flow on the inference model Fq." }, { "end": 727, "start": 718, "text": " We turn the expectation with respect to the complex state space H into an expectation with respect to the simple noise distribution." }, { "end": 732, "start": 727, "text": " And then, of course, at the same time, the flow appears inside the expectation." }, { "end": 736, "start": 732, "text": " And we get the log-determinant terms that I've mentioned before." }, { "end": 743, "start": 736, "text": " In the second step, we introduced the generative flow Fg using the same change of variable technique." }, { "end": 748, "start": 743, "text": " It's possible to write out the elbow in a way so that there's only one Jacobian term for both flows" }, { "end": 754, "start": 748, "text": " and so that the generative model always appears as the inverse concatenated with the inference flow." }, { "end": 757, "start": 754, "text": " In a second, I'll show you what the interpretation of that is." }, { "end": 760, "start": 757, "text": " Let's quickly recap what we've seen so far." 
}, { "end": 762, "start": 760, "text": " There's a generative model." }, { "end": 767, "start": 762, "text": " It consists of a generative flow Fg and an uninformed noise source." }, { "end": 772, "start": 767, "text": " There's an inference model, which contains an inference flow Fq" }, { "end": 777, "start": 772, "text": " and a simple base distribution across the noise variables q of xi." }, { "end": 783, "start": 777, "text": " In the elbow, the two flows appear concatenated, and we can interpret this in the following way." }, { "end": 789, "start": 783, "text": " The inference model q proposes a noise vector, xi t, that is informed about the future." }, { "end": 792, "start": 789, "text": " The inference flow maps this to a hidden state." }, { "end": 796, "start": 792, "text": " At the hidden state, the reconstruction loss lives." }, { "end": 799, "start": 796, "text": " This is where we pay a price for making a bad prediction." }, { "end": 806, "start": 799, "text": " However, the inference model cannot encode all the possible information about the future into the hidden state, ht," }, { "end": 811, "start": 806, "text": " because the mapping continues to the simple noise space of the generative model." }, { "end": 818, "start": 811, "text": " And the inference model must make sure that the proposal also covers significant probability mass under the uninformed prior." }, { "end": 823, "start": 818, "text": " This trade-off between reconstruction and model mismatch is common to all elbows." }, { "end": 830, "start": 823, "text": " But here we highlight the special situation where we have two flows, one for the inference model and one for the generative model." }, { "end": 839, "start": 830, "text": " In our paper, we also show how we can use the recently proposed important weighted autoencoder to improve the tightness of our bound, but I'll skip those steps here." }, { "end": 843, "start": 839, "text": " Instead, let's quickly talk about evaluation." }, { "end": 846, "start": 843, "text": " We apply our model to unconditional generation." }, { "end": 849, "start": 846, "text": " So why in hell would somebody look into unconditional generation?" }, { "end": 853, "start": 849, "text": " Well, actually, it turns out it's harder than conditional generation." }, { "end": 859, "start": 853, "text": " If you know what the French sentence looks like, it's much easier to continue a partial English translation." }, { "end": 869, "start": 859, "text": " But it's not only harder, it's also more interesting to inspect which information does a sequence model need to store and which information can it forget." }, { "end": 871, "start": 869, "text": " We use two metrics to evaluate our model." }, { "end": 873, "start": 871, "text": " First, we look at sequence cross entropy." }, { "end": 879, "start": 873, "text": " So we compare the model's sequence distribution to the data sequence distribution." }, { "end": 883, "start": 879, "text": " Usually estimating the data distribution is impossible." }, { "end": 889, "start": 883, "text": " You don't want to say that the probability of a sentence is how many times the sentence has appeared in the training data." }, { "end": 895, "start": 889, "text": " However, for words, we can use unigram frequencies of words in a corpus as a pretty reliable estimate." }, { "end": 902, "start": 895, "text": " Also, we can get an estimate of our model's probability assigned to a sequence by using MC sampling." 
}, { "end": 910, "start": 902, "text": " We take the marginal likelihood, sample k trajectories, and assess the probability that the trajectories assigned to the given sequence." }, { "end": 914, "start": 910, "text": " Since our model is not autoregressive, the sequence isn't tied to an observation." }, { "end": 921, "start": 914, "text": " So we can actually use the same sequences of hidden states to evaluate probabilities for all the words in the vocabulary." }, { "end": 930, "start": 921, "text": " Since we've pitched our noise model as the key to contribution to our generative model, we want to empirically verify that the model is being used." }, { "end": 936, "start": 930, "text": " Working with a clean probabilistic model allows us to use tools from probability theory to assess that." }, { "end": 942, "start": 936, "text": " We use the mutual information between a noise vector at time t and the observation of time t." }, { "end": 947, "start": 942, "text": " So this measures how much information in the output is actually due to the noise model." }, { "end": 952, "start": 947, "text": " Before showing you the numbers, let's quickly go across the parameterization of our model." }, { "end": 956, "start": 952, "text": " For the flows, we look at shift scaling transformations." }, { "end": 962, "start": 956, "text": " And if the scaling g is lower triangular, we can compute efficiently the Jacobian determinant." }, { "end": 967, "start": 962, "text": " We also look at real NVP and we compose flows by concatenation." }, { "end": 974, "start": 967, "text": " The base distribution of our inference model depends on the future observations, which we summarize using a GRU RNN." }, { "end": 977, "start": 974, "text": " The base distribution itself is a diagonal Gaussian." }, { "end": 982, "start": 977, "text": " We use a state size of 8 and also run some experiments for 16 and 32." }, { "end": 986, "start": 982, "text": " All the numbers are in the paper, so here are just the take-home messages." }, { "end": 992, "start": 986, "text": " We are on par or better than a domestic RNN with teacher forcing trained at the same state size." }, { "end": 997, "start": 992, "text": " Also, we observed that a powerful generative flow is essential to achieve good performance." }, { "end": 1003, "start": 997, "text": " Furthermore, we can confirm that important weightless elbow improved the results." }, { "end": 1007, "start": 1003, "text": " This is the first model applying generative flows to sequence modeling." }, { "end": 1012, "start": 1007, "text": " So naturally, we are interested in comparing the expressiveness of fg and fq." }, { "end": 1016, "start": 1012, "text": " Our paper has a table that compares four choices for both flows." }, { "end": 1024, "start": 1016, "text": " Our findings are that the generative flow should be powerful and the inference flow should be slightly less powerful." }, { "end": 1031, "start": 1024, "text": " To understand our noise model, we look at the mutual information at every time step and show a box spot for all of them." }, { "end": 1037, "start": 1031, "text": " Initially, the mutual information is highest, which means the initial character is most important to remember." }, { "end": 1046, "start": 1037, "text": " The noise model is never being ignored and we see increased variance in the remaining time steps because we are averaging here across different sequences." 
}, { "end": 1057, "start": 1046, "text": " A non-autoregressive model needs to have lower entropy in the observation model because any underentropy under the observation model is being forgotten because there is no feedback." }, { "end": 1062, "start": 1057, "text": " The purple line shows you the observation model entropy during training." }, { "end": 1067, "start": 1062, "text": " The dashed red line shows you the entropy on the observation model of a baseline." }, { "end": 1075, "start": 1067, "text": " So indeed, we have lower entropy in the observation model and at the same time in green, you see the mutual information increasing." }, { "end": 1078, "start": 1075, "text": " Let's summarize our findings." }, { "end": 1085, "start": 1078, "text": " Using variational flows, non-autoregressive modeling of sequences is possible and teacher forcing is not necessary." }, { "end": 1092, "start": 1085, "text": " At the same time, we get a noise model that is the driving factor of the sequence model and is easy to interpret." }, { "end": 1120, "start": 1092, "text": " For any details, please check out the paper and for any questions, shoot me an email." } ]
WYrvh50yu6s
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
[ "Science & Technology" ]
[ "ai", "deep learning", "variational", "autoencoders", "vae", "disentanglement", "representation learning", "machine learning", "unsupervised", "arxiv", "google", "google ai", "mpi", "eth", "eth zurich", "ethz" ]
https://arxiv.org/abs/1811.12359 Abstract: In recent years, the interest in unsupervised learning of disentangled representations has significantly increased. The key assumption is that real-world data is generated by a few explanatory factors of variation and that these factors can be recovered by unsupervised learning algorithms. A large number of unsupervised learning approaches based on auto-encoding and quantitative evaluation metrics of disentanglement have been proposed; yet, the efficacy of the proposed approaches and utility of proposed notions of disentanglement has not been challenged in prior work. In this paper, we provide a sober look on recent progress in the field and challenge some common assumptions. We first theoretically show that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data. Then, we train more than 12000 models covering the six most prominent methods, and evaluate them across six disentanglement metrics in a reproducible large-scale experimental study on seven different data sets. On the positive side, we observe that different methods successfully enforce properties "encouraged" by the corresponding losses. On the negative side, we observe in our study that well-disentangled models seemingly cannot be identified without access to ground-truth labels even if we are allowed to transfer hyperparameters across data sets. Furthermore, increased disentanglement does not seem to lead to a decreased sample complexity of learning for downstream tasks. These results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision, investigate concrete benefits of enforcing disentanglement of the learned representations, and consider a reproducible experimental setup covering several data sets. Authors: Francesco Locatello, Stefan Bauer, Mario Lucic, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
All right, hello everyone. Today we're going to look at the paper "Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations" by Francesco Locatello and a number of other people at Google AI, ETH Zurich and MPI. Full disclaimer: I know these people and I've talked to them about this work, so you know where I'm coming from. It's a good paper and it's fairly short to explain, so let's go over it. The main thing here is what's called disentanglement. Disentanglement is a property, not of the data, but of your model, that you would like to have in unsupervised learning, especially in generative models. What the paper focuses on is autoencoding. In an autoencoder I have some data point, which could be an image, and I compress it into a vector with a couple of dimensions. This vector is a representation of the data, and from this representation I can produce an image again. When I train an autoencoder, I enforce that my model (the first part is called the encoder, the second part the decoder) produces a final image that looks like the original image. So an autoencoder is basically a compression algorithm that tries to find representations from which it can reconstruct the original image. Here we go a little further and use what's called variational autoencoders; all of the experiments here use variants of the variational autoencoder. A variational autoencoder is the same thing as an autoencoder, except it is a probabilistic framework. At the bottom you can see the equation that is the objective for a VAE. I have an image, I use an encoder just as in an autoencoder, and that gives me a representation. But now I don't use this representation directly to decode; the representation is simply the parameters of a bunch of distributions. Let's say I want four latent factors, the latent variables that best describe the image. The images could be images of cats, and four latent factors could be the color of the fur, the size of the cat, the position in the image and, let's say, the general lighting, how bright the image is. These would be four latent factors from which the image could best be reconstructed. We treat these four latent factors as probability distributions, so the encoder needs to produce eight numbers in this case. Why eight? Because for each of the four distributions we want a mean and a standard deviation. In each pair of numbers, one is the mean and the other is the standard deviation, and from these we construct a distribution and then sample from it. One sample could land here, another there; near the middle we will of course get more samples.
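To make that step concrete (the encoder emits eight numbers, we read them as four means and four standard deviations, and we draw one sample per factor), here is a minimal sketch in Python/NumPy. It only illustrates the reparameterized sampling, with a random linear map standing in for the real encoder network; none of the names come from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_latent = 4
# Stand-in for the trained encoder: it only has to map an image to
# 2 * n_latent numbers, a mean and a log standard deviation per factor.
W = rng.standard_normal((2 * n_latent, 64 * 64)) * 0.01   # hypothetical weights

def encode(x):
    out = W @ x.ravel()
    mu, log_sigma = out[:n_latent], out[n_latent:]
    return mu, np.exp(log_sigma)

def sample_latent(mu, sigma):
    """Reparameterization: draw eps ~ N(0, I), then shift and scale it,
    so the latent sample differs on every call even for the same image."""
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

x = rng.random((64, 64))        # a fake 64x64 "image"
mu, sigma = encode(x)           # 8 numbers -> 4 means, 4 standard deviations
z1 = sample_latent(mu, sigma)
z2 = sample_latent(mu, sigma)   # a different sample for the same image
print(z1)
print(z2)
```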
So whereas the plain autoencoder uses the encoding directly to reproduce the image, in the variational autoencoder what the encoder produces is just a parameterization of a distribution, and that distribution is then sampled. There are several of those distributions, four of them from the eight numbers, so we sample four numbers, a new vector with four entries. Because these are sampled, they will be different every time, even if we feed in the same image. From this sample the decoder tries to reproduce the image, and again the output image and the input image are forced to be close to each other. But since this is a probabilistic framework, we also need a different loss function than for the plain autoencoder, where you could simply penalize how far apart the images are in, say, L2 norm. Here the loss has two distinct parts, and everything is probabilistic, so let's walk through it. In the second term, q is the distribution of z conditioned on x; z is always the representation and x is the data point itself, so q takes the data point and produces z. By z what's meant here is the latent code, whereas x is the input and x tilde is whatever the decoder produces. We punish the KL distance, which is a probabilistic distance measure, between the distribution of z given x and the prior over z, p(z). The prior over z in VAEs is often taken to be a Gaussian. So our default assumption on the z variables is that they are Gaussians, and this second term forces the encoder to come up with encodings, over the whole data set, that conform to that specific prior p(z), for instance zero-mean, unit-variance Gaussians.
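For this KL term between the encoder's diagonal Gaussian q(z|x) and a zero-mean, unit-variance Gaussian prior there is a standard closed form, which is how this part of the loss is typically computed in practice. A small sketch of just that formula (not code from the paper):

```python
import numpy as np

def kl_to_standard_normal(mu, sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) )
       = 0.5 * sum( sigma^2 + mu^2 - 1 - log(sigma^2) )."""
    return 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - np.log(sigma**2))

# Zero exactly when the encoder already outputs the prior ...
print(kl_to_standard_normal(np.zeros(4), np.ones(4)))                     # 0.0
# ... and it grows as the means drift from zero or the variances from one.
print(kl_to_standard_normal(np.array([1.0, 0.0, 0.0, 0.0]), np.ones(4)))  # 0.5
```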
The first term is different: it makes the image that was put into the variational autoencoder and the image that comes out close together. Again this is a probabilistic loss, so we take expectations; the KL distance is also an expectation, by the way. We take expectations over p(x), the distribution of the data, and also over q, our encoding mechanism, and we maximize the log probability of the data given the z variables, which is equivalent to minimizing the negative log likelihood you might be familiar with. This is an expectation over q(z|x), which means: I put x into q, I get out z, and the likelihood that from this particular z the model reproduces x, the original image, should be high. So that's a variational autoencoder: I encourage the latent representations to be close to my prior, which is often a Gaussian, and I encourage the output to be similar to the input, which I do by encouraging the likelihood that the output is the input. All right, so what does that have to do with disentanglement? Disentanglement is a property I would now like my model to have, namely that the latent variables my encoder outputs give me information about the data in a way that is disentangled. I already gave an example that is disentangled: say we have images of cats, and the fur color is one variable, the eye color of the cat is another one, and the position in the image is yet another. These are all fairly independent, so if I change one latent factor I can change it pretty much independently of the others, and the cat simply gets a different fur. What would a non-disentangled representation be? Say one latent encodes the fur of the cat and another one encodes the species of the cat. These are highly entangled, because the fur color depends strongly on what species the cat is; you can picture it as these factors being correlated, although it's slightly different. There isn't really an agreement on what entanglement means exactly; we just imagine that the data is somehow entangled and we want to pull out the disentangled factors. The easiest measure of disentanglement used here is the following. The assumption is that there is data x, a random variable, and we assume this data is generated by a bunch of latent variables z1, z2, z3.
These latent variables are independent, which technically means that p(z), the joint over all of them, factorizes into the product of the individual p(z_i), and they independently determine the data x. Now, what does it mean that my model has produced a disentangled representation? I have some model m which gives me a representation of x; specifically, what these people do is take the mean of the distribution that the encoder outputs as the representation of x, from which you might then want to reconstruct x. So the important question is: when is the representation disentangled? In the easiest sense, the representation is disentangled if the following holds: when I change some z_i, adding a delta to any one of the three, then exactly one factor of the representation of x changes (say z has three dimensions, which we assume we know, and we also make the representation three-dimensional). In other words, if I change one factor of the true underlying generative process, in which all latent factors are independent, then only one factor of my representation changes. If that's the case, I can be fairly sure that I have captured the true latent structure of the data. Say I change z3: I have access to the true underlying distribution, I ask the world to give me a picture of a cat where only the fur color is different, I get that data point, I put it through my model, I get a representation, and compared to the cat I had before only one of the factors of my representation changes, say r3. Then I call it disentangled, and I can be fairly sure that this dimension of my representation captures the fur color independently of the other factors. That is disentanglement, and notice that it requires access to the true distribution of how the data is generated by the world, which is something you generally don't have. But it's a technical notion, so you can certainly postulate it.
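To make this notion concrete, here is a schematic version of that check in Python: intervene on a single ground-truth factor and count how many dimensions of the learned representation move. This is only an illustration of the simple definition above (the paper evaluates several more refined disentanglement metrics), and generate_image and encode are hypothetical stand-ins for the ground-truth simulator and the trained encoder.

```python
import numpy as np

def count_changed_dims(generate_image, encode, z, factor_idx, delta=1.0, tol=1e-3):
    """Perturb one ground-truth factor and count how many representation
    dimensions change. Under the simple definition above, a disentangled
    representation changes in exactly one dimension."""
    z_perturbed = np.array(z, dtype=float)
    z_perturbed[factor_idx] += delta
    r_before = encode(generate_image(np.asarray(z, dtype=float)))
    r_after = encode(generate_image(z_perturbed))
    return int(np.sum(np.abs(r_after - r_before) > tol))

# Toy "world" and "model": identity maps, so the check trivially returns 1.
generate_image = lambda z: z     # hypothetical ground-truth generator
encode = lambda x: x             # hypothetical trained encoder (means of q)
print(count_changed_dims(generate_image, encode, np.zeros(3), factor_idx=2))  # 1
```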
It's a nice framework, and this paper basically proves that learning disentangled representations in that way is impossible if you don't make some a priori assumptions on your data and your model. That's the theorem here: let p be any generative model which admits this factorization; that's what we just talked about, the true underlying generative process is independent in its constituents, a bunch of latent variables independently produce a data point, and x is the observed data. Then there exists an infinite family of bijective functions such that the stated conditions hold. What does that mean? The first condition basically means that the distributions agree: the overall data distribution, what comes out of the process, looks exactly the same. So there are functions that transform the latent distribution into some other latent distribution while, cumulatively, the data looks identical. The second part says that the derivative of f_i(u) with respect to u_j is non-zero for i different from j, which means the dimensions are entangled: u_j influences f_i. In other words, I can take the z where everything is independent, where the i-th dimension has no influence on the j-th dimension, and transform it into something where the i-th and j-th dimensions very much depend on each other, where they are entangled or covary. They give a nice example. Say we have two independent Gaussians in two dimensions. The overall distribution then has iso-lines like circles: it gives you a hump in the middle, two-dimensionally you can imagine a bit of a mountain. This is the output distribution you see if you don't know about the underlying factors; you simply see the cumulative distribution, which would be the big p here. Now we transform this with f, where f is simply a rotation by 45 degrees, giving two new axes, and the two Gaussians are rotated along. In terms of the original coordinate system these are not disentangled anymore (that's not exactly how the notion is stated, but it's the easiest way to say it): the dimensions now depend on each other, because if I sample from one of the rotated Gaussians I need both coordinates to describe where the sample is. But the cumulative distribution is still exactly the same: it's an isotropic hump, the same in every direction, so if I rotate it, it looks exactly the same; this is the p here. Yet the i-th and the j-th dimension now very much influence each other. And interestingly, if you now look at disentanglement: I produce data x1 under the one parameterization and data x2 under the other, both go through my model and give me a representation, and without seeing the underlying structure I have no idea which of the two parameterizations the data comes from; I have basically zero chance.
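The rotation example can be checked numerically. A small NumPy sketch: start from two independent unit-variance Gaussians, rotate by 45 degrees, and the overall distribution is unchanged, even though each rotated coordinate now depends on both original latents (the numbers and variable names are just for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)

z = rng.standard_normal((100_000, 2))      # two independent N(0, 1) latents

theta = np.pi / 4                          # rotation by 45 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
u = z @ R.T                                # the transformed "latents"

# The cumulative picture is unchanged: both empirical covariances are ~identity,
# so from samples alone the two parameterizations are indistinguishable.
print(np.cov(z, rowvar=False).round(2))
print(np.cov(u, rowvar=False).round(2))

# But the map entangles the dimensions: the Jacobian of the rotation (here R
# itself) has non-zero off-diagonal entries, i.e. d u_i / d z_j != 0 for i != j.
print(R.round(2))
```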
Picking the right parameterization is just a lucky guess, and since there is an infinite family of them, I will never find the true underlying distribution. Thereby I will never be able to satisfy the property that when one of the z's changes, only one factor of my representation changes: if I commit to one parameterization and build my model around it, it could just as well have been another one. With two candidates I'm bound to be wrong 50% of the time; with an infinite family I'm bound to be wrong essentially every time. That's what the theorem says: I cannot decide on the true underlying distribution. There is an infinite family that transforms every such distribution into some other distribution with basically the opposite entanglement properties, I have to choose one, and I will never choose the right one, because I'm not that lucky, and thereby I can't do representation learning that is disentangled. All right, so that's the main claim of the paper. There are also a lot of experiments: the paper produces some new data sets and tests a lot of architectures. The point is that just because it's theoretically impossible in general, it's not impractical, because we can actually make some of these underlying assumptions, in particular assumptions on the data, and then attempt disentanglement learning. So they build these data sets, test different VAE architectures on them, and basically establish where more work should go. That's roughly the rest of the paper, and I encourage you to look at it. I just wanted to give a quick introduction to VAEs and to disentangled representation learning. I wasn't technically correct in every detail, but I hope it's enough. Have fun.
[ { "end": 2, "start": 0, "text": " All right, hello everyone" }, { "end": 11.92, "start": 2.84, "text": " Today we're gonna look at this paper challenging common assumptions in the unsupervised learning of disentangled representations by Francesca Locutello and" }, { "end": 17.3, "start": 12.92, "text": " a bunch of other people at Google AI, ETH Zurich and MPI" }, { "end": 22.28, "start": 18.36, "text": " Full disclaimer, I know these people and I've" }, { "end": 26.92, "start": 23.2, "text": " Talked to them about this work. So just so you know where I'm coming from" }, { "end": 33.24, "start": 26.92, "text": " It's a good paper and it's fairly short to explain. So let's go over it" }, { "end": 36.760000000000005, "start": 34.760000000000005, "text": " The main thing here is" }, { "end": 42.300000000000004, "start": 36.800000000000004, "text": " What's called disentanglement? So disentanglement is kind of a property of data in" }, { "end": 48.56, "start": 42.96, "text": " unsupervised learning or not data of your model that you would like to have" }, { "end": 53.84, "start": 49.24, "text": " In unsupervised learning in here, especially in generative models" }, { "end": 56.84, "start": 53.84, "text": " so" }, { "end": 59.760000000000005, "start": 57.52, "text": " What they focus on is like" }, { "end": 63.28, "start": 61.28, "text": " Auto encoding here and" }, { "end": 70.24000000000001, "start": 63.800000000000004, "text": " What that means is I have some data point which could be an image. Let's draw an image here and" }, { "end": 73.44, "start": 71.44, "text": " I" }, { "end": 77.80000000000001, "start": 73.44, "text": " compress this usually into a vector and" }, { "end": 84.8, "start": 77.8, "text": " The vector has a couple of dimensions. This is a representation of the" }, { "end": 94.08, "start": 86.52, "text": " Data and from this representation what I can do is I can produce an image again and" }, { "end": 101, "start": 94.88, "text": " If I train an autoencoder, I will enforce that my model. So both of these are my model" }, { "end": 105, "start": 101, "text": " This is called an encoder and this is called a decoder" }, { "end": 107, "start": 105, "text": " That" }, { "end": 113.84, "start": 107, "text": " What they do is that the final image then looks like the original image" }, { "end": 118.4, "start": 114.64, "text": " This is an autoencoder basically a compression algorithm that" }, { "end": 124.36, "start": 119.64, "text": " Tries to find representations such that it can reconstruct the original image again" }, { "end": 131.56, "start": 125.16, "text": " Here we go a little further in that we use what's called variational autoencoders. So" }, { "end": 135.32, "start": 131.56, "text": " All of these all of these experiments here use" }, { "end": 139.08, "start": 136.04, "text": " variants of the variational autoencoder and" }, { "end": 142.04, "start": 140.04, "text": " What a variational autoencoder?" }, { "end": 145.56, "start": 143.56, "text": " Let's skip some here" }, { "end": 152.04, "start": 147, "text": " A variational autoencoder is the same thing as an autoencoder except" }, { "end": 157, "start": 155, "text": " It's a probabilistic framework, so" }, { "end": 159, "start": 157, "text": " What you do is here?" 
}, { "end": 167.64, "start": 160.84, "text": " On the bottom you can see an equation that basically is the objective for a VAE and" }, { "end": 171.84, "start": 168.76, "text": " What it does is it says okay, I have an image" }, { "end": 174.44, "start": 172.44, "text": " Let's say this is my image and" }, { "end": 178.6, "start": 175.32, "text": " I use an encoder like in an autoencoder" }, { "end": 183.24, "start": 181.24, "text": " And that gives me an image" }, { "end": 187.88, "start": 183.24, "text": " And that gives me an autoencoder and that gives me a representation" }, { "end": 190.60000000000002, "start": 189, "text": " Okay" }, { "end": 191.8, "start": 190.60000000000002, "text": " but" }, { "end": 197.34, "start": 191.8, "text": " Now I don't use this representation directly to decode but this representation" }, { "end": 203.58, "start": 198.84, "text": " Is simply the parameters from a bunch of distributions" }, { "end": 207.96, "start": 205, "text": " Right, so here let's say I have" }, { "end": 215.16, "start": 207.96, "text": " Four four I want four latent factors and the latent factors are basically the latent variables that describe" }, { "end": 221.88, "start": 215.72, "text": " This image so the images could be images of let's say cats and four latent factors could be" }, { "end": 228.60000000000002, "start": 222.44, "text": " The color of the fur of the cat the size of the cat the position in the image and" }, { "end": 230.44, "start": 229.4, "text": " the" }, { "end": 232.44, "start": 230.44, "text": " let's say the" }, { "end": 238.84, "start": 232.44, "text": " General lighting of how bright the image is so these could be four latent factors that would" }, { "end": 241.8, "start": 239.8, "text": " explain" }, { "end": 246.92, "start": 241.8, "text": " Best the the image and from that and if the image could be best reconstructed, let's say" }, { "end": 251.48, "start": 247.64, "text": " So the the four latent factors we consider as probability distributions" }, { "end": 253.07999999999998, "start": 252.12, "text": " so" }, { "end": 258.68, "start": 253.07999999999998, "text": " What our encoder needs to do our encoder needs to produce eight numbers in this case" }, { "end": 267, "start": 258.68, "text": " Eight numbers why because for each of these four distributions we want a mean?" }, { "end": 271.24, "start": 269.24, "text": " And a standard deviation" }, { "end": 275.4, "start": 273.40000000000003, "text": " So these eight numbers here" }, { "end": 277.32, "start": 275.72, "text": " each one" }, { "end": 284.92, "start": 277.32, "text": " Or each pair of numbers one of them is going to be the mean and the other one is going to be the standard deviation" }, { "end": 287.4, "start": 285.4, "text": " of a distribution" }, { "end": 289.08, "start": 287.4, "text": " and then" }, { "end": 293.41999999999996, "start": 289.08, "text": " From these we're going to construct a distribution" }, { "end": 298.62, "start": 294.44, "text": " Like so like okay. Here's the mean here's the standard deviation" }, { "end": 308.44, "start": 299.64, "text": " So the distribution somehow looks like this and then we're going to sample from this distribution. So one sample could be" }, { "end": 312.67999999999995, "start": 309.32, "text": " Here one sample could be here one sample could be here here" }, { "end": 319.40000000000003, "start": 312.68, "text": " So of course in the middle here, we're going to have more samples. 
But so the whereas the autoencoder directly uses the encoding" }, { "end": 323.24, "start": 319.72, "text": " to reproduce the image the variational autoencoder the" }, { "end": 326.12, "start": 324.68, "text": " the" }, { "end": 330.7, "start": 326.12, "text": " What the output what the encoder produces here is simply a parameterization" }, { "end": 336.12, "start": 331.88, "text": " for a disk for a distribution and" }, { "end": 343.88, "start": 336.12, "text": " And that distribution then is sampled so we're going to take one sample" }, { "end": 347, "start": 345, "text": " here" }, { "end": 353.32, "start": 348.12, "text": " So from from each of these so there's going to be multiple of those distributions because we have" }, { "end": 358.06, "start": 354.52, "text": " Eight numbers we are going to produce four distributions" }, { "end": 361.08, "start": 359, "text": " in particular" }, { "end": 367.47999999999996, "start": 361.08, "text": " So we're going to sample four different numbers. So we're going to sample a new vector" }, { "end": 369.56, "start": 368.03999999999996, "text": " with four" }, { "end": 373.96, "start": 369.56, "text": " One two, three four. Well, I didn't have eight at the beginning, but never mind. So here" }, { "end": 379.15999999999997, "start": 374.59999999999997, "text": " This gives us four numbers, but these are sampled. So these are going to be different every time" }, { "end": 381.71999999999997, "start": 379.71999999999997, "text": " Even if we feed the same image" }, { "end": 384.78, "start": 382.2, "text": " and from this the decoder" }, { "end": 389.5, "start": 384.78, "text": " Is going to try to reproduce the image and then" }, { "end": 399.41999999999996, "start": 391.5, "text": " Again the images the end image and the beginning image are going to be forced to be close to each other" }, { "end": 406.7, "start": 401.82, "text": " But also now since this is a probabilistic framework we also kind of need" }, { "end": 414.05999999999995, "start": 407.34, "text": " We need a different loss function for the autoencoder. You can simply penalize how far the images are in let's say l2 norm" }, { "end": 416.06, "start": 414.06, "text": " but here" }, { "end": 419.74, "start": 416.38, "text": " We have two distinct parts to the loss term. So" }, { "end": 425.98, "start": 421.26, "text": " And everything is probabilistic. So let's walk through this here. 
The first part" }, { "end": 431.5, "start": 427.98, "text": " Of the so we have two parts of the loss term and" }, { "end": 435.98, "start": 432.86, "text": " Here in particular q" }, { "end": 442.14000000000004, "start": 435.98, "text": " Is you can see here it takes as an is it is the distribution of z" }, { "end": 447.18, "start": 442.54, "text": " Conditional x and z will always be related representation of the" }, { "end": 453.74, "start": 447.82, "text": " Of the data and x will be the the data itself the data point" }, { "end": 457.58000000000004, "start": 454.3, "text": " So q will take the data point and produce" }, { "end": 459.74, "start": 458.70000000000005, "text": " z" }, { "end": 462.70000000000005, "start": 459.74, "text": " And the z specifically here what's meant is" }, { "end": 465.66, "start": 463.82, "text": " this" }, { "end": 467.42, "start": 465.66, "text": " This thing here" }, { "end": 469.42, "start": 467.42, "text": " This is z" }, { "end": 471.26000000000005, "start": 469.58000000000004, "text": " Whereas" }, { "end": 473.26000000000005, "start": 471.26000000000005, "text": " This is this is x" }, { "end": 475.58000000000004, "start": 473.82000000000005, "text": " And this is also" }, { "end": 477.58000000000004, "start": 475.58000000000004, "text": " Well, this is x" }, { "end": 481.36, "start": 478.54, "text": " Tilde or something whatever is produced by the decoder" }, { "end": 490.22, "start": 485.82000000000005, "text": " So basically what we're gonna do is" }, { "end": 496.62, "start": 490.22, "text": " We're going to punish the kl distance, which is a probabilistic distance measure. We're gonna" }, { "end": 506.06, "start": 499.58000000000004, "text": " Measure the distance between the distribution of z under x" }, { "end": 511.66, "start": 507.98, "text": " With the prior over z so p of z here" }, { "end": 513.74, "start": 512.38, "text": " This here" }, { "end": 515.98, "start": 513.74, "text": " Is the prior distribution" }, { "end": 522.62, "start": 515.98, "text": " Prior distribution over z and the prior distribution in va is is often to be taken as a" }, { "end": 525.24, "start": 523.24, "text": " Gaussian so" }, { "end": 527.26, "start": 525.74, "text": " We'll say all right" }, { "end": 534.0600000000001, "start": 527.26, "text": " So the our our kind of default assumption on the z variables is that they're that they're gaussians here" }, { "end": 537.4200000000001, "start": 535.98, "text": " And" }, { "end": 543.5, "start": 537.4200000000001, "text": " We're gonna force basically we're gonna force the encoder to come up with" }, { "end": 551.74, "start": 543.5, "text": " With encodings generally over the data set that are gaussians that are conformal to our prior" }, { "end": 559.42, "start": 554.38, "text": " So here we say specific prior pz I didn't mean to cross that out" }, { "end": 568.14, "start": 561.26, "text": " Right, so this second term enforces the the encoder to produce things that are" }, { "end": 570.06, "start": 568.76, "text": " Gaussian" }, { "end": 574.14, "start": 570.06, "text": " Um, it's specifically with our if our prior is let's say" }, { "end": 577.0999999999999, "start": 575.0999999999999, "text": " um" }, { "end": 584.8599999999999, "start": 577.66, "text": " Zero zero mean unit variance gaussians. 
It's gonna enforce that the first term here" }, { "end": 593.18, "start": 586.3, "text": " Is different the first term makes the image that has been input to the variational encoder and the image that has been output" }, { "end": 596.3199999999999, "start": 593.8199999999999, "text": " Close together again. This is a probabilistic" }, { "end": 598.32, "start": 596.32, "text": " Loss so" }, { "end": 604.08, "start": 598.4000000000001, "text": " What we're gonna do here is we're gonna take expectations. So the KL distance is also an expectation by the way" }, { "end": 609.36, "start": 606.88, "text": " We're gonna take expectations over" }, { "end": 615.2, "start": 610.08, "text": " Px which is the distribution of the data and also" }, { "end": 619.2, "start": 615.6800000000001, "text": " Over Q and Q is again our encoding" }, { "end": 621.84, "start": 619.84, "text": " mechanism" }, { "end": 627.44, "start": 621.84, "text": " Mechanism and we're simply going to punish the" }, { "end": 631.36, "start": 628.48, "text": " Or we're gonna here maximize the the log" }, { "end": 636.24, "start": 632.22, "text": " Probability which is equivalent to minimizing the negative log likelihood" }, { "end": 641.84, "start": 636.24, "text": " Which you might be familiar with of the data given the the z variables" }, { "end": 644.5600000000001, "start": 642.5600000000001, "text": " so" }, { "end": 648.08, "start": 645.6, "text": " And this is an expectation over q" }, { "end": 652.96, "start": 648.08, "text": " given x so what that means is basically we want the" }, { "end": 654.96, "start": 653.9200000000001, "text": " the" }, { "end": 656.96, "start": 654.96, "text": " probability" }, { "end": 662, "start": 658, "text": " Of this original data point we want" }, { "end": 665.44, "start": 663.44, "text": " Here we output x tilde" }, { "end": 667.84, "start": 666.88, "text": " We" }, { "end": 673.5400000000001, "start": 667.84, "text": " We want this to be close to x here. So what we can say is we want the probability" }, { "end": 676.4000000000001, "start": 674.6400000000001, "text": " that our model" }, { "end": 678.4, "start": 676.4, "text": " outputs x" }, { "end": 687.4399999999999, "start": 679.84, "text": " Which has been the original input right given this particular z that it produced to be high" }, { "end": 693.92, "start": 690, "text": " As an expectation of q" }, { "end": 699.92, "start": 697.92, "text": " Of z given x" }, { "end": 707.92, "start": 699.92, "text": " So as a bit cryptic, but it means here I input x into q I get out z" }, { "end": 710.0799999999999, "start": 708.64, "text": " and when I" }, { "end": 713.92, "start": 710.0799999999999, "text": " Have the z what I produce here is what I produce" }, { "end": 723.1999999999999, "start": 715.4399999999999, "text": " The likelihood that x the original image these are the same is produced should be high" }, { "end": 729.12, "start": 723.2, "text": " So that's a variational autoencoder. I simply encourage the latent representations to be" }, { "end": 732.48, "start": 729.36, "text": " close to my prior which is often Gaussian and I" }, { "end": 738.1600000000001, "start": 733.0400000000001, "text": " Encourage the output to be similar to the input which I do by" }, { "end": 741.5200000000001, "start": 738.6400000000001, "text": " Encouraging the likelihood that the output is the input" }, { "end": 748.24, "start": 742.32, "text": " All right, so cool. 
So what's that have to do with disentanglement disentanglement is property" }, { "end": 755.04, "start": 748.24, "text": " That now I would like to have in my model which is that" }, { "end": 757.84, "start": 755.84, "text": " these" }, { "end": 759.6, "start": 757.84, "text": " These things here" }, { "end": 765.52, "start": 759.6, "text": " Um, or we can also focus on these things here, however, you want to view it or these things here" }, { "end": 767.1800000000001, "start": 766.16, "text": " these" }, { "end": 774.32, "start": 767.1800000000001, "text": " Latent things that my encoder outputs somehow give me information about the data in a way" }, { "end": 779.94, "start": 774.32, "text": " That's disentangled what that means is I've already I've made an example that's already disentangled" }, { "end": 785.7600000000001, "start": 780.24, "text": " where I said, let's let's say we have images of a cat of cats and" }, { "end": 794.8000000000001, "start": 786.48, "text": " the fur color is going to be one variable and the color of the eyes of the cat is going to be another one and" }, { "end": 800.5600000000001, "start": 795.5200000000001, "text": " The position in the image is going to be another one. So these are all fairly independent, right?" }, { "end": 803.12, "start": 801.12, "text": " and so I" }, { "end": 805.12, "start": 803.12, "text": " if I change some" }, { "end": 811.04, "start": 805.6, "text": " Latent factor I can change them pretty much independently. So here this could be the fur color" }, { "end": 816.4, "start": 811.6, "text": " I can change it pretty much independently and cat will just have a different fur and so on" }, { "end": 819.68, "start": 816.64, "text": " What would be non disentangled representations?" }, { "end": 822.4, "start": 820.4, "text": " would be" }, { "end": 826.24, "start": 822.48, "text": " Let's say one encodes the fur of the cat" }, { "end": 829.76, "start": 826.8, "text": " and the other one encodes the" }, { "end": 836.56, "start": 829.76, "text": " Encodes the the species of cat because these are these are highly let's say entangled" }, { "end": 841.12, "start": 836.56, "text": " so the fur color is highly dependent on what species the cat is and" }, { "end": 849.6, "start": 842.72, "text": " It's not really so they kind of you can you can imagine it as these things being correlated, but it's slightly different" }, { "end": 857.4399999999999, "start": 851.04, "text": " And there are there's not an agreement on what this entanglement means really we just kind of imagine data is somehow" }, { "end": 861.3000000000001, "start": 857.44, "text": " Entangled and we want to kind of pull out these disentangled factors" }, { "end": 866.82, "start": 861.62, "text": " So what they focus on here and the easiest the easiest measure here" }, { "end": 868.58, "start": 867.46, "text": " is" }, { "end": 871.7800000000001, "start": 868.58, "text": " the following um, I might want to have some" }, { "end": 874.2600000000001, "start": 873.22, "text": " Space" }, { "end": 880.9000000000001, "start": 874.2600000000001, "text": " All right. So the easiest measure of disentanglement that is come up with here is the following" }, { "end": 886.34, "start": 881.7800000000001, "text": " Um, it's an assumption. 
The assumption is let's say there's data x" }, { "end": 888.34, "start": 886.34, "text": " right" }, { "end": 892.1800000000001, "start": 889.5400000000001, "text": " We'll call it random variable and we know" }, { "end": 895.14, "start": 893.14, "text": " We know we assume" }, { "end": 896.26, "start": 895.14, "text": " that" }, { "end": 898.26, "start": 896.26, "text": " This data is generated" }, { "end": 900.6600000000001, "start": 898.6600000000001, "text": " by a bunch of" }, { "end": 903.86, "start": 901.14, "text": " Latent variables z1 z2 z3" }, { "end": 907.3000000000001, "start": 905.3000000000001, "text": " Which are?" }, { "end": 910.5, "start": 907.36, "text": " Independent which means that and the technical" }, { "end": 918.26, "start": 910.5, "text": " In this is that the p of z which is all of them can be factorized" }, { "end": 921.86, "start": 919.54, "text": " into p of z i" }, { "end": 925.86, "start": 923.62, "text": " So they are independent" }, { "end": 929.54, "start": 927.54, "text": " Um and these" }, { "end": 932.84, "start": 930.74, "text": " Kind of determine independently" }, { "end": 936.1, "start": 934.02, "text": " the data x" }, { "end": 937.62, "start": 936.1, "text": " now" }, { "end": 945.94, "start": 937.62, "text": " What does that disentanglement of when my model has produced a disentangled representation means I now have a model some model" }, { "end": 948.26, "start": 946.98, "text": " m" }, { "end": 951.78, "start": 948.26, "text": " Which is going to give me a representation of x" }, { "end": 957.3, "start": 954.02, "text": " And the representation as we saw before" }, { "end": 960.02, "start": 958.02, "text": " um" }, { "end": 963.22, "start": 961.22, "text": " Could be" }, { "end": 965.3, "start": 963.22, "text": " these things here, that's the" }, { "end": 966.9, "start": 965.3, "text": " the" }, { "end": 973.62, "start": 966.9, "text": " Representation specifically what these people do is they say okay the mean of the distribution that my encoder gives me" }, { "end": 975.9399999999999, "start": 973.9399999999999, "text": " That's the representation of x" }, { "end": 989.2199999999999, "start": 981.78, "text": " All right, so this gives you a representation of x from which you then might want to you know reconstruct x" }, { "end": 991.78, "start": 990.0999999999999, "text": " over here" }, { "end": 992.9799999999999, "start": 991.78, "text": " x" }, { "end": 1001.38, "start": 992.98, "text": " So then but so the important thing is when is the representation disentangled the representation is disentangled in the easiest sense" }, { "end": 1004.58, "start": 1002.1, "text": " If the following holds when I change" }, { "end": 1007.78, "start": 1005.78, "text": " um" }, { "end": 1011.0600000000001, "start": 1008.66, "text": " When I change z i" }, { "end": 1017.62, "start": 1012.26, "text": " So I introduce a delta to z i to any of these three that means" }, { "end": 1021.14, "start": 1017.62, "text": " That in the representation of x" }, { "end": 1024.66, "start": 1022.66, "text": " Which we're just going to say" }, { "end": 1032.58, "start": 1025.54, "text": " So if there's three dimensions of z we just assume kind of we know that and we also make the representation three-dimensional" }, { "end": 1034.34, "start": 1033.22, "text": " then" }, { "end": 1036.34, "start": 1034.34, "text": " exactly one" }, { "end": 1041.78, "start": 1037.46, "text": " Factor in this is going to change so if I change one" }, { "end": 1045.22, "start": 1042.5, 
"text": " factor of the true underlying distribution" }, { "end": 1047.22, "start": 1045.22, "text": " um" }, { "end": 1051.38, "start": 1047.22, "text": " Which is independently which all the latent factors are independent then" }, { "end": 1056.98, "start": 1051.8600000000001, "text": " Only one factor in my representation changes. So if that's the case then" }, { "end": 1065.7, "start": 1057.54, "text": " Kind of I can be fairly sure that i've captured the the true latent structure of the data, right if one if if one of the" }, { "end": 1069.06, "start": 1066.5, "text": " Of the if I change one of the the z here" }, { "end": 1072.5, "start": 1070.5, "text": " Let's say I change the z3" }, { "end": 1075.06, "start": 1072.5, "text": " and only then uh" }, { "end": 1077.86, "start": 1075.86, "text": " r3" }, { "end": 1084.66, "start": 1078.66, "text": " So I change z3 let's say I have access to the true underlying distribution I ask the the world" }, { "end": 1091.7, "start": 1085.22, "text": " Ask the world to give me a picture of a cat that where the fur color is different and then I put it" }, { "end": 1094.34, "start": 1092.34, "text": " I get a data point" }, { "end": 1098.26, "start": 1094.74, "text": " and then I put it through my model I get a representation and" }, { "end": 1100.82, "start": 1099.3, "text": " only" }, { "end": 1106.4199999999998, "start": 1100.82, "text": " From the cat that I had before only one of the factors of my representation changes" }, { "end": 1113.9399999999998, "start": 1106.8999999999999, "text": " Then I call it disentangled then I can be fairly sure. Okay my representation this dimension of my representation captures the fur color" }, { "end": 1116.98, "start": 1114.4199999999998, "text": " independently of the other factors" }, { "end": 1125.9399999999998, "start": 1118.4199999999998, "text": " All right, so that's disentanglement and you notice it requires actually access here to the true" }, { "end": 1128.5, "start": 1127.22, "text": " distribution" }, { "end": 1132.66, "start": 1128.5, "text": " Distribution of how the data is generated by the world" }, { "end": 1137.86, "start": 1133.22, "text": " So this is something you generally don't have but um, it's a technical notion" }, { "end": 1140.26, "start": 1138.26, "text": " So you can you can certainly postulate it" }, { "end": 1142.9, "start": 1140.9, "text": " And it's it" }, { "end": 1148.1, "start": 1143.62, "text": " It's a nice framework and this paper basically proves that" }, { "end": 1153.54, "start": 1149.84, "text": " Generally learning disentangled representation in that way is impossible" }, { "end": 1155.46, "start": 1154.18, "text": " um" }, { "end": 1162.18, "start": 1155.46, "text": " If you don't have some if you don't make some assumptions some a priori assumptions on your data and your model" }, { "end": 1165.14, "start": 1163.7, "text": " so" }, { "end": 1166.98, "start": 1165.14, "text": " This is a theorem here" }, { "end": 1168.66, "start": 1166.98, "text": " and we" }, { "end": 1171.22, "start": 1168.66, "text": " See here p is any generative model" }, { "end": 1173.94, "start": 1171.94, "text": " Which admits this factorization" }, { "end": 1180.26, "start": 1174.74, "text": " Right does that that's what we talked about the true underlying generative process is" }, { "end": 1184.9, "start": 1180.26, "text": " Is independent in so" }, { "end": 1188.34, "start": 1186.34, "text": " In its constituents" }, { "end": 1193.22, "start": 1188.66, "text": " That means there's a 
bunch of latent variables. They independently from each other produce a data point" }, { "end": 1196.02, "start": 1194.58, "text": " right" }, { "end": 1198.02, "start": 1196.02, "text": " X is the data observations" }, { "end": 1200.82, "start": 1198.42, "text": " Then there exists an infinite family" }, { "end": 1203.7, "start": 1201.7, "text": " of bijective functions" }, { "end": 1205.78, "start": 1203.78, "text": " right such that" }, { "end": 1209.3, "start": 1205.78, "text": " This and this and this and this" }, { "end": 1211.3, "start": 1210.34, "text": " Okay" }, { "end": 1212.66, "start": 1211.3, "text": " What that means?" }, { "end": 1215.3799999999999, "start": 1212.66, "text": " is so this thing here" }, { "end": 1218.1, "start": 1216.1, "text": " basically just means that the" }, { "end": 1226.26, "start": 1218.8999999999999, "text": " um the distributions agree so that the the the overall distributions the let's say the" }, { "end": 1229.22, "start": 1227.22, "text": " it's not exactly that but the" }, { "end": 1232.26, "start": 1230.26, "text": " posterior distributions" }, { "end": 1235.62, "start": 1232.26, "text": " Um, let's say the data looks the same right" }, { "end": 1239.86, "start": 1236.58, "text": " That what comes out of the process looks the same" }, { "end": 1245.3799999999999, "start": 1241.22, "text": " So there is there is functions that transform" }, { "end": 1246.98, "start": 1246.02, "text": " the" }, { "end": 1251.3, "start": 1246.98, "text": " latent distribution into some other distribution, but they" }, { "end": 1254.26, "start": 1252.26, "text": " look the same in" }, { "end": 1257.14, "start": 1255.14, "text": " cumulatively" }, { "end": 1260.5, "start": 1258.42, "text": " All right, and then we have the" }, { "end": 1263.46, "start": 1260.5, "text": " All right, and then this part here" }, { "end": 1269.3, "start": 1264.42, "text": " Means you'll see the derivative of fi of u with respect to" }, { "end": 1271.62, "start": 1270.42, "text": " some" }, { "end": 1275.54, "start": 1271.62, "text": " Uj which you'll notice i and j are different. Um, this" }, { "end": 1277.7, "start": 1276.26, "text": " this means" }, { "end": 1279.46, "start": 1277.7, "text": " that" }, { "end": 1281.46, "start": 1279.46, "text": " basically the dimensions" }, { "end": 1283.78, "start": 1282.5, "text": " are" }, { "end": 1285.86, "start": 1283.78, "text": " Entangled it means that if I" }, { "end": 1288.58, "start": 1286.58, "text": " take the derivative of" }, { "end": 1290.58, "start": 1288.58, "text": " one entry" }, { "end": 1293.3, "start": 1290.6599999999999, "text": " In the in the f in the function" }, { "end": 1295.9399999999998, "start": 1293.9399999999998, "text": " output and I derive it" }, { "end": 1302.34, "start": 1296.34, "text": " By another entry then I get a non-zero derivative which means that this" }, { "end": 1304.6599999999999, "start": 1303.22, "text": " Uj" }, { "end": 1306.6599999999999, "start": 1304.6599999999999, "text": " influences fi" }, { "end": 1314.1, "start": 1307.22, "text": " Which basically means that I can produce I can take the z I can transform it in" }, { "end": 1320.4199999999998, "start": 1314.1, "text": " In so z is independent. 
So it means the i-th dimension has no influence on the j-th dimension" }, { "end": 1324.4199999999998, "start": 1320.98, "text": " Of the of the output and I can transform it into something" }, { "end": 1329.3, "start": 1324.8999999999999, "text": " Where that's no longer the case where the i-th and the j-th dimension very much" }, { "end": 1331.3, "start": 1329.9399999999998, "text": " uh" }, { "end": 1333.06, "start": 1331.3, "text": " Kind of are" }, { "end": 1334.8999999999999, "start": 1333.06, "text": " entangled or covariate" }, { "end": 1335.9399999999998, "start": 1334.8999999999999, "text": " so" }, { "end": 1338.1799999999998, "start": 1335.9399999999998, "text": " This means I can take the z that" }, { "end": 1344.74, "start": 1338.18, "text": " That's kind of everything is independent. I can transform it into something where everything is dependent and they give a nice example here" }, { "end": 1347.14, "start": 1344.74, "text": " So they say let's say we have" }, { "end": 1349.0600000000002, "start": 1347.78, "text": " Gaussians" }, { "end": 1352.18, "start": 1349.0600000000002, "text": " In two dimensions, so we have one Gaussian here" }, { "end": 1355.54, "start": 1352.74, "text": " And let me see if I can draw this one Gaussian here" }, { "end": 1358.66, "start": 1356.18, "text": " Right in two dimensions. They're completely independent" }, { "end": 1362.42, "start": 1359.46, "text": " um what you'll find is that the kind of" }, { "end": 1365.38, "start": 1363.38, "text": " distribution overall has" }, { "end": 1367.7, "start": 1365.38, "text": " Iso lines like this" }, { "end": 1373.8600000000001, "start": 1367.7, "text": " Right, it gives you kind of a hump in the middle two-dimensionally. You can maybe imagine like a bit of a mountain in the middle" }, { "end": 1376.1000000000001, "start": 1374.8200000000002, "text": " um" }, { "end": 1379.3000000000002, "start": 1376.1000000000001, "text": " All right. So this is what you this is the kind of output distribution" }, { "end": 1386.42, "start": 1379.38, "text": " If you if you don't know about the underlying factors, you simply see the cumulative distribution, which would be the the big p here" }, { "end": 1388.42, "start": 1387.14, "text": " um" }, { "end": 1391.6200000000001, "start": 1388.42, "text": " All right. Now we transform this into with f" }, { "end": 1394.18, "start": 1392.18, "text": " And f is simply a rotation" }, { "end": 1396.18, "start": 1394.18, "text": " by 45 degrees" }, { "end": 1398.74, "start": 1396.18, "text": " right, so two new axes this" }, { "end": 1401.38, "start": 1399.38, "text": " and that and again" }, { "end": 1405.14, "start": 1402.1000000000001, "text": " Our two gaussians are going to be transformed these" }, { "end": 1412.18, "start": 1405.94, "text": " Right. So these are not these are not disentangled anymore. 
Well in the in the notion" }, { "end": 1417.3, "start": 1413.22, "text": " I can't say it like this, but this is easiest to say so these are these are kind of" }, { "end": 1422.8200000000002, "start": 1418.26, "text": " Now that it's rotated in terms of the original coordinate system, which would go like this" }, { "end": 1430.34, "start": 1422.82, "text": " These very much depend on each other right the jth dimension the if dimension depend on each other because if I sample from one of the gaussians" }, { "end": 1434.26, "start": 1430.34, "text": " I need now basically two coordinates to describe" }, { "end": 1436.98, "start": 1434.98, "text": " where it is or" }, { "end": 1439.3, "start": 1437.3, "text": " Yeah, one isn't just" }, { "end": 1444.26, "start": 1440.34, "text": " So if I sample from one Gaussian I need both the coordinates" }, { "end": 1447.9399999999998, "start": 1444.8999999999999, "text": " but the cumulative distribution or the" }, { "end": 1451.06, "start": 1449.06, "text": " That is still the same" }, { "end": 1454.5, "start": 1451.06, "text": " That is still going to look exactly the same" }, { "end": 1457.3, "start": 1455.78, "text": " so" }, { "end": 1463.46, "start": 1457.3, "text": " It's again a hump. So it's basically an isometric hump in every direction if I rotate that the" }, { "end": 1467.54, "start": 1464.1799999999998, "text": " It looks exactly the same. This is the p here" }, { "end": 1473.46, "start": 1468.58, "text": " But now the the if dimension and the jth dimension very much influence each other" }, { "end": 1477.06, "start": 1474.4199999999998, "text": " um, and yeah, interestingly the" }, { "end": 1482.5, "start": 1477.06, "text": " If you now look at disentanglement if I just have if if I now produce" }, { "end": 1485.3799999999999, "start": 1483.3799999999999, "text": " data" }, { "end": 1487.1399999999999, "start": 1485.86, "text": " x" }, { "end": 1488.1, "start": 1487.1399999999999, "text": " here" }, { "end": 1491.22, "start": 1488.1, "text": " x1 and here I produce data" }, { "end": 1493.54, "start": 1491.86, "text": " x2" }, { "end": 1495.3, "start": 1493.54, "text": " and both" }, { "end": 1497.3, "start": 1495.3, "text": " go through my model" }, { "end": 1500.34, "start": 1497.54, "text": " and give me our representation" }, { "end": 1502.4199999999998, "start": 1500.8999999999999, "text": " of x1" }, { "end": 1504.4199999999998, "start": 1502.4199999999998, "text": " and the representation" }, { "end": 1508.18, "start": 1504.42, "text": " of x1 and the representation of x2" }, { "end": 1510.8200000000002, "start": 1509.22, "text": " I have" }, { "end": 1515.38, "start": 1510.8200000000002, "text": " Without seeing the underlying structure. I have no idea which one of those two" }, { "end": 1522.42, "start": 1516.26, "text": " It comes from and thereby I have zero chance basically. It's a luck lucky guess" }, { "end": 1524.1000000000001, "start": 1523.14, "text": " um" }, { "end": 1529.8600000000001, "start": 1524.1000000000001, "text": " Which one it comes from and there's an infinite family. 
So I will never find the true underlying" }, { "end": 1533.8, "start": 1529.86, "text": " distribution here and thereby I will never" }, { "end": 1535.9599999999998, "start": 1534.76, "text": " um" }, { "end": 1540.12, "start": 1535.9599999999998, "text": " I will never be able to satisfy this property that if one of the z changes" }, { "end": 1544.9199999999998, "start": 1540.6, "text": " Then only one of the factors of my representation will change because if I" }, { "end": 1548.28, "start": 1545.56, "text": " Say, oh, well, obviously this is the case" }, { "end": 1552.52, "start": 1548.76, "text": " Then i'm going to make a different model and if I say well, this is the case" }, { "end": 1556.12, "start": 1553.08, "text": " I'm going to make a different model. I don't know which one it is" }, { "end": 1560.6, "start": 1556.12, "text": " So I have to choose one and it could be the other one. So i'm bound to be wrong in this case" }, { "end": 1564.04, "start": 1560.84, "text": " 50% of the time, but if it's an infinite family i'm bound to be wrong" }, { "end": 1566.36, "start": 1564.6799999999998, "text": " every time" }, { "end": 1568.12, "start": 1566.36, "text": " basically, so" }, { "end": 1570.6799999999998, "start": 1568.12, "text": " That's what the theorem basically says I can't" }, { "end": 1576.04, "start": 1571.32, "text": " Decide on the true underlying distribution. Um, there's an infinite family that" }, { "end": 1579.58, "start": 1576.6599999999999, "text": " Transforms it into it. It transforms every distribution" }, { "end": 1585.58, "start": 1580.04, "text": " into some other distribution that has basically complete opposite properties of entanglement" }, { "end": 1591.5, "start": 1585.58, "text": " And I need to choose one and I will never choose the right one because i'm not that lucky" }, { "end": 1596.32, "start": 1592.22, "text": " And thereby I can't do representation learning that's disentangled" }, { "end": 1602.62, "start": 1597.74, "text": " All right, so that's the main claim of the paper and um" }, { "end": 1605.74, "start": 1603.74, "text": " There is a lot of experiments here" }, { "end": 1609.6599999999999, "start": 1606.22, "text": " so what the paper also does is they produce some new" }, { "end": 1616, "start": 1609.66, "text": " Data sets and they test a lot of a lot of architectures basically they say just because it's theoretically impossible" }, { "end": 1621.44, "start": 1616.48, "text": " It's not impractical because we can actually make these underlying assumptions" }, { "end": 1625.1200000000001, "start": 1621.92, "text": " like we can make some assumptions on the data and then and then" }, { "end": 1627.52, "start": 1625.8400000000001, "text": " we kind of" }, { "end": 1628.5600000000002, "start": 1627.52, "text": " can" }, { "end": 1631.44, "start": 1628.5600000000002, "text": " attempt to do disentanglement learning so they do these" }, { "end": 1638.16, "start": 1632.4, "text": " data sets and they test different VAE's architectures on it and they basically" }, { "end": 1640.16, "start": 1638.16, "text": " Um establish where" }, { "end": 1644.24, "start": 1640.96, "text": " More work should go. 
So that's that's kind of the rest of the paper" }, { "end": 1647.3600000000001, "start": 1644.4, "text": " I encourage you to look at the rest of the paper" }, { "end": 1651.52, "start": 1647.3600000000001, "text": " I just wanted to give a quick introduction to VAEs and to disentanglement" }, { "end": 1654.16, "start": 1652.16, "text": " to entangle representation learning" }, { "end": 1655.68, "start": 1654.48, "text": " I" }, { "end": 1657.68, "start": 1655.68, "text": " Wasn't technically correct" }, { "end": 1668.4, "start": 1657.68, "text": " Uh in every detail, but I hope that it's enough and have fun" } ]
dPsXxLyqpfs
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
World Models
[ "Science & Technology" ]
[ "deep learning", "reinforcement learning", "deep reinforcement learning", "deep rl", "schmidhuber", "environment model", "imagination", "vae", "rnn", "lstm" ]
Authors: David Ha, Jürgen Schmidhuber Abstract: We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment. https://arxiv.org/abs/1803.10122
Hi, today we're looking at World Models by David Ha and Jürgen Schmidhuber. This is a paper that's concerned with reinforcement learning and especially with the problem of, say, you have an environment that you interact with and you kind of need to learn to act in it, but it could be, for example, very expensive to always query the environment. So let's say you have a robot and it needs to do something in the world, and you kind of, to have a robot execute something and then observe it, is quite expensive, costs electricity and so on. So you would like to sort of minimize how many times this happens. So here, searching for a good picture, they're concerned with problems, for example, like this. This is a race car simulator. There's an OpenAI gym environment for that. The other one that they use is a so-called like a doom experiment where, as you look at this, there's a couple of monsters and they're shooting fireballs at you and the task is just to kind of avoid the fireballs. So the entire point of the paper is that I don't actually need to interact with the environment in order to learn it. I can simply kind of learn a model of the environment and then learn using that model. So basically, I can learn how the environment works and then simply use my imagination of the environment, my model, in order to learn from that so I don't have to interact with the real environment anymore. So how do they do this? They do it in multiple stages. Here, first thing they do is they collect a bunch of samples from the environment. So they go to the environment, they simply do a random policy and then they collect a bunch of samples. I think the process is outlined down here somewhere. We saw it before. Here, collect 10,000 rollouts from a random policy. Next, they train a VAE here to kind of learn the environment. So that's where that comes in. This is all done in stages, not end-to-end. The VAE is simply a model that takes, in this case, a video frame here. It sends it through an encoder neural network to obtain what's called a latent representation, which is a much smaller dimensional representation. So if the image is 64 by 64 pixels, then the latent code could be as little as 100 or even 10 dimensional. So you see that there's quite a bit of compression going on. This is a variational autoencoder. It's not really important here that it's variational since the difference is the variational autoencoder is kind of a stochastic process, whereas the regular autoencoder isn't. But they introduce stochasticity later again. So it's not particularly important. So it's a variational autoencoder, which means they obtain a latent representation that defines distribution over outputs. So they send this sample from this latent distribution that they obtain, and then they feed this to the decoder. And the decoder kind of gives back what it thinks the encoder encoded. So the decoder tries to reconstruct as close as possible this original frame that was given to the encoder. But of course it can't because we've compressed it so much to this lower dimensional representation here. So it kind of does its best effort. So what you hope to achieve with this is that kind of the decoder learns, for example, there's always here. This is the ceiling right here. It's always gray. So basically, you shouldn't actually need to encode this in your Z. If it's always gray, the decoder should learn this by itself. 
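To make the compression step concrete, here is a rough PyTorch sketch of what such a convolutional VAE could look like. The 64x64 input resolution, the layer sizes and the 32-dimensional latent are illustrative choices for this sketch, not necessarily the exact numbers from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    """Sketch of a VAE that compresses a 64x64 RGB frame into a small latent code z."""
    def __init__(self, z_dim=32):
        super().__init__()
        # Encoder: frame -> mean and log-variance of the latent Gaussian.
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),    # 64x64 -> 31x31
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),   # -> 14x14
            nn.Conv2d(64, 128, 4, stride=2), nn.ReLU(),  # -> 6x6
            nn.Conv2d(128, 256, 4, stride=2), nn.ReLU(), # -> 2x2
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(256 * 2 * 2, z_dim)
        self.fc_logvar = nn.Linear(256 * 2 * 2, z_dim)
        # Decoder: latent z -> reconstructed frame.
        self.fc_dec = nn.Linear(z_dim, 1024)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(1024, 128, 5, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 5, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 6, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 6, stride=2), nn.Sigmoid(),  # back to 64x64x3
        )

    def forward(self, frame):
        h = self.enc(frame)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterisation trick: sample z from the predicted Gaussian.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        recon = self.dec(self.fc_dec(z).view(-1, 1024, 1, 1))
        return recon, mu, logvar

def vae_loss(recon, frame, mu, logvar):
    # Reconstruction error plus KL divergence to the unit-Gaussian prior.
    recon_err = F.mse_loss(recon, frame, reduction="sum")
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl
```

Training on the randomly collected frames just minimises this reconstruction-plus-KL objective; the encoder half is then reused everywhere a frame has to be turned into a z.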
So your hope is that the Z, the latent representation, will simply end up containing just the information that's kind of different or between the individual frames, which here I guess would be kind of the fireballs coming and your position relative to them. That's what's changing if you think about this environment. So your hope is that the latent representation captures only that, whereas all the static parts that are irrelevant or never change are kind of captured by the encoder and the decoder architecture by itself. So yeah, it's important to note the encoder and decoder are obviously always the same for all the frames, whereas the Z representation, of course, is there is one per frame, so each frame will give you a different Z. And that's so you can imagine how that works or how that's going to be useful. So they train this on like a randomly collected sample of the environment until they're confident they now have a good model of the environment. And then what they do next is they use this in order to train an RNN. So again, they kind of have their compression model of the environment. What they do now is they use these Z states you see here, here, here, here that they get from that. And they train how these latent representations evolve over time. So with an RNN here goes over time. So the RNN will always kind of predict what's the next state of the environment going to be. But importantly, maybe compared to environment models that we've discussed before in the, for example, imagination augmented agent paper, there we always try to directly predict the future pixels, so to say, of the future frame. Here, the environment model is over the latent representation. Of course, this means that the this is a much smaller space. So if your compression model is good, then this should be much easier to learn than, say, like a full end to end environment model. So this model learns how your latent states evolve over time, given your actions. So you can imagine the Z being an abstract representation of your state and then your action. And then this goes into the RNN and the RNN will predict what's the next latent representation. And there is what's called a temperature parameter to control the stochasticity. I've already told you this, there is a stochasticity built into this. So the RNN will simply output like some vector, what it thinks is the next thing going to be. And they don't use this directly as the next step, but they parameterize a kind of a mixture of Gaussian distributions coupled with a decoder here in order to give a random distribution over the next state. And they control the amount of randomness with the temperature parameter. They argue that this comes in handy later. So all right, so what do we have? We have a system that can compress the environment into what we would call an essential part. Every frame we extract what's important in that frame. Then next we have a model that can predict, given a state and an action, what's the next state going to be, the next latent state. So technically we now have an environment model, right, given a state. We can simply, given a state and a policy, we can simply use this model to roll forward. So the last component is the actual policy. And the actual policy here, as you can see, is in their case simply a linear model. The linear model will take the z, which is the latent representation of the current state, and the h, which is the current state of the RNN that models the environment over time. 
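Here is a hedged sketch of that recurrent world model. The paper outputs a mixture of Gaussians (an MDN); for brevity this sketch predicts a single Gaussian over the next latent code, and the temperature simply scales its standard deviation.

```python
import torch
import torch.nn as nn

class LatentRNN(nn.Module):
    """Sketch of the world model: (z_t, a_t, hidden) -> distribution over z_{t+1}.
    The paper uses a mixture-density output; a single Gaussian is used here for brevity."""
    def __init__(self, z_dim=32, action_dim=3, hidden_dim=256):
        super().__init__()
        self.cell = nn.LSTMCell(z_dim + action_dim, hidden_dim)
        self.mu_head = nn.Linear(hidden_dim, z_dim)
        self.logstd_head = nn.Linear(hidden_dim, z_dim)

    def forward(self, z, action, hidden):
        h, c = self.cell(torch.cat([z, action], dim=-1), hidden)
        return self.mu_head(h), self.logstd_head(h), (h, c)

def sample_next_latent(mu, logstd, temperature=1.0):
    # Higher temperature -> wider distribution -> more varied imagined futures.
    std = torch.exp(logstd) * temperature
    return mu + std * torch.randn_like(mu)
```

Training this model amounts to minimising the negative log-likelihood of the actually observed next latent code under the predicted distribution, using the (z, action) sequences collected from the random rollouts.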
And it simply is a linear function of the two, gives you the action probabilities, or I guess the logits of the actions. So it's a really, really simple controller over these things. And they do this in order to show that the main part of the work is being done by this environment model. And given the environment model, you only need very few parameters basically to then learn a policy. Here is what I said in a diagram. So the observation goes into the compression of the VAE, the latent representation of that goes into the RNN together with the hidden state from the last step. And this will output a new hidden state, which goes here into the controller, and we also directly take this z into the controller. And then from these two, we perform an action, which now we have a choice. It could go to the environment, right, give you the next observation, but also, or at the same time, since you kind of need to update your RNN, it can go here and update your RNN because it will need to predict the next hidden state. The thing is, we can also now leave away this path, which means we can simply take our RNN and kind of imagine the next latent representation, put it through the decoder part of the VAE and use that as an observation. I hope this makes sense. It's rather intuitive, right? You have a model of the environment. You can simply use this instead of the real environment. So, there's a bit of pseudo code here, and they do a bunch of experiments, right? So, we're primarily interested, so they say, they see here, okay, our compression works, and this is the real frame, and this is the reconstructed frame, kind of looks, you know, captures the essence of what's going on. And I actually want to go down here, the VizDoom experiment. So, what they do here in the car racing experiment is they kind of learn this entire thing, right? And then they learn a policy in the real world, in the environment, using this model up here, this procedure where they always go to the environment, and here is the exact experiment setup. So, first they collect, again, rollouts for a random policy, they train the VAE, they train the RNN, and then they learn the controller using the entire model, but in kind of the real world. So, they always interact with the environment, but because they also have their kind of latent representation of the observation, and not directly the observation, they get a higher score. And also, the policy that they use in the real environment transfers to the environment model. So, the policy they learn in the true environment, it transfers to the imagined, so if they use the imagined model as an environment, it also performs well. In the next experiment, they're going to try to do this the other way around. They're going to try to learn only using their model of the environment, and then see whether or not the policy transfers to the true environment. So, that's what they do here. They collect, again, a sample from the environment, they train the VAE, they train the RNN, and then they simply use this virtual environment, what they call it, in order to learn a policy, and at the end, they try to transfer, use the learned policy on the actual environment. And given the results, you see here, there we go. So, you see the kind of best it does, I would say, is about here, where the actual score is, you can see in this, and also in this setting, is higher than the kind of previous best algorithm in the OpenAI Gym, when you go from virtual to actual.
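A minimal sketch of such a linear controller follows; the dimensions are illustrative rather than the paper's exact sizes. Because the controller has only a few hundred parameters, the paper can search for good weights with an evolution strategy (CMA-ES) instead of backpropagating through the whole pipeline.

```python
import numpy as np

def linear_controller(z, h, W, b):
    """Sketch: a single linear map from the concatenated [z, h] to an action.
    W and b are the only parameters the policy search has to optimise."""
    x = np.concatenate([z, h])
    return np.tanh(W @ x + b)  # e.g. steering / gas / brake in the car-racing task

# Illustrative sizes (not necessarily the paper's exact numbers):
z_dim, h_dim, action_dim = 32, 256, 3
W = np.random.randn(action_dim, z_dim + h_dim) * 0.1
b = np.zeros(action_dim)
action = linear_controller(np.random.randn(z_dim), np.random.randn(h_dim), W, b)
```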
So, what this means is kind of, yeah, you can train using this imagined model, and then it will actually transfer, but there's a crucial thing, and that is this kind of temperature thing here. You can see a lot of times they actually don't manage to reach a good score, if this parameter is wrong. What does this parameter do? This parameter controls, as we discussed, the stochasticity of the model. So, basically, the environment model doesn't directly imagine a future state, but it imagines a distribution over future states. And the higher this parameter, the more stochastic this distribution is, basically the more uniform, I guess, the more entropy you have in these future states. We've seen this temperature parameter here. Which is important, because they go into length explaining why in this entire page here that we skipped. Here you see just text, there. Cheating the world model, which basically they say, okay, if you have a wrong model, if you have a model that's wrong of the environment, and you train a policy on it, necessarily, it's going to probably find a policy that exploits the wrongness of this model. So you might be able to walk through walls or fly or ignore the fireballs. Or basically, find that if you stand next to a wall, in your imagination, you'll never get hit. Something like this, which isn't true in the real world. So the policy will exploit that. And to counter this, they simply basically turn up this temperature parameter, giving them a more stochastic procedure. Meaning they imagine a lot of kind of different futures, and they train their policy on all of them, or in expectation over a sample of them. Which means that if the environment model is wrong, this kind of... I want to say if it's wrong, this corrects for it. It doesn't. But if it's wrong, you still sample different futures. So if it has one wrong future, you still have the other ones to kind of punish the policy, if it tries to exploit this one mistake. At least that's the reasoning behind it. So that's how they do this. You can interact with their trained environment models online somehow. They also give a kind of a look at what they would like to have. Instead of collecting the environment model from random rollout, they would try to train it, then to use it again to collect more data, to train more environment model, then use the environment, better environment model to train more the policy, and so on in a stepwise fashion. But they don't actually do it, they simply describe it. And the rest of the paper is a bit of related work and discussion. It's very prosaically written, kind of different from what you're used to if you read a lot of these papers. But yeah, I hope you can now you know what's going on and see you next time.
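For completeness, here is a hedged sketch of what a rollout entirely inside the learned model, the "dream", could look like. The per-step reward is a placeholder, and the temperature value is just an example of turning up the stochasticity as discussed above.

```python
import torch

def dream_rollout(rnn, controller, z0, steps=1000, temperature=1.15, hidden_dim=256):
    """Sketch: roll a policy forward entirely inside the learned world model ("dream").
    rnn maps (z, a, (h, c)) -> (mu, logstd, (h, c)); controller maps (z, h) -> action.
    The per-step reward is a placeholder -- in practice it comes from the task definition
    (e.g. surviving another step in the Doom experiment) or a learned reward/done head."""
    z = z0
    h = torch.zeros(1, hidden_dim)
    c = torch.zeros(1, hidden_dim)
    total_reward = 0.0
    for _ in range(steps):
        action = controller(z, h)                 # the policy never sees pixels in the dream
        mu, logstd, (h, c) = rnn(z, action, (h, c))
        # Temperature > 1 widens the predicted distribution, so the controller is evaluated
        # against many possible futures and cannot exploit one specific model error.
        std = torch.exp(logstd) * temperature
        z = mu + std * torch.randn_like(mu)       # imagined next latent state
        total_reward += 1.0                       # placeholder per-step reward (e.g. survival)
    return total_reward
```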
[ { "end": 6, "start": 0, "text": " Hi, today we're looking at World Models by David Ha and Jürgen Schmidhuber." }, { "end": 13, "start": 6, "text": " This is a paper that's concerned with reinforcement learning and especially with the problem of," }, { "end": 20, "start": 13, "text": " say, you have an environment that you interact with and you kind of need to learn to act in it," }, { "end": 26, "start": 20, "text": " but it could be, for example, very expensive to always query the environment." }, { "end": 33, "start": 26, "text": " So let's say you have a robot and it needs to do something in the world," }, { "end": 44, "start": 33, "text": " and you kind of, to have a robot execute something and then observe it, is quite expensive, costs electricity and so on." }, { "end": 50, "start": 44, "text": " So you would like to sort of minimize how many times this happens." }, { "end": 59, "start": 50, "text": " So here, searching for a good picture, they're concerned with problems, for example, like this." }, { "end": 66, "start": 59, "text": " This is a race car simulator. There's an OpenAI gym environment for that." }, { "end": 76, "start": 66, "text": " The other one that they use is a so-called like a doom experiment where, as you look at this," }, { "end": 83, "start": 76, "text": " there's a couple of monsters and they're shooting fireballs at you and the task is just to kind of avoid the fireballs." }, { "end": 91, "start": 83, "text": " So the entire point of the paper is that I don't actually need to interact with the environment in order to learn it." }, { "end": 98, "start": 91, "text": " I can simply kind of learn a model of the environment and then learn using that model." }, { "end": 105, "start": 98, "text": " So basically, I can learn how the environment works and then simply use my imagination of the environment," }, { "end": 114, "start": 105, "text": " my model, in order to learn from that so I don't have to interact with the real environment anymore." }, { "end": 119, "start": 114, "text": " So how do they do this? They do it in multiple stages." }, { "end": 128, "start": 119, "text": " Here, first thing they do is they collect a bunch of samples from the environment." }, { "end": 136, "start": 128, "text": " So they go to the environment, they simply do a random policy and then they collect a bunch of samples." }, { "end": 143, "start": 136, "text": " I think the process is outlined down here somewhere. We saw it before." }, { "end": 155, "start": 143, "text": " Here, collect 10,000 rollouts from a random policy. Next, they train a VAE here to kind of learn the environment." }, { "end": 161, "start": 155, "text": " So that's where that comes in. This is all done in stages, not end-to-end." }, { "end": 169, "start": 161, "text": " The VAE is simply a model that takes, in this case, a video frame here." }, { "end": 174, "start": 169, "text": " It sends it through an encoder neural network to obtain what's called a latent representation," }, { "end": 177, "start": 174, "text": " which is a much smaller dimensional representation." }, { "end": 189, "start": 177, "text": " So if the image is 64 by 64 pixels, then the latent code could be as little as 100 or even 10 dimensional." }, { "end": 193, "start": 189, "text": " So you see that there's quite a bit of compression going on." }, { "end": 202, "start": 193, "text": " This is a variational autoencoder. 
It's not really important here that it's variational since the difference is" }, { "end": 209, "start": 202, "text": " the variational autoencoder is kind of a stochastic process, whereas the regular autoencoder isn't." }, { "end": 216, "start": 209, "text": " But they introduce stochasticity later again. So it's not particularly important." }, { "end": 225, "start": 216, "text": " So it's a variational autoencoder, which means they obtain a latent representation that defines distribution over outputs." }, { "end": 235, "start": 225, "text": " So they send this sample from this latent distribution that they obtain, and then they feed this to the decoder." }, { "end": 243, "start": 235, "text": " And the decoder kind of gives back what it thinks the encoder encoded." }, { "end": 252, "start": 243, "text": " So the decoder tries to reconstruct as close as possible this original frame that was given to the encoder." }, { "end": 259, "start": 252, "text": " But of course it can't because we've compressed it so much to this lower dimensional representation here." }, { "end": 261, "start": 259, "text": " So it kind of does its best effort." }, { "end": 268, "start": 261, "text": " So what you hope to achieve with this is that kind of the decoder learns, for example, there's always here." }, { "end": 272, "start": 268, "text": " This is the ceiling right here. It's always gray." }, { "end": 278, "start": 272, "text": " So basically, you shouldn't actually need to encode this in your Z." }, { "end": 283, "start": 278, "text": " If it's always gray, the decoder should learn this by itself." }, { "end": 296, "start": 283, "text": " So your hope is that the Z, the latent representation, will simply end up containing just the information that's kind of different or between the individual frames," }, { "end": 305, "start": 296, "text": " which here I guess would be kind of the fireballs coming and your position relative to them." }, { "end": 308, "start": 305, "text": " That's what's changing if you think about this environment." }, { "end": 312, "start": 308, "text": " So your hope is that the latent representation captures only that," }, { "end": 323, "start": 312, "text": " whereas all the static parts that are irrelevant or never change are kind of captured by the encoder and the decoder architecture by itself." }, { "end": 329, "start": 323, "text": " So yeah, it's important to note the encoder and decoder are obviously always the same for all the frames," }, { "end": 336, "start": 329, "text": " whereas the Z representation, of course, is there is one per frame, so each frame will give you a different Z." }, { "end": 343, "start": 336, "text": " And that's so you can imagine how that works or how that's going to be useful." }, { "end": 355, "start": 343, "text": " So they train this on like a randomly collected sample of the environment until they're confident they now have a good model of the environment." }, { "end": 363, "start": 355, "text": " And then what they do next is they use this in order to train an RNN." }, { "end": 373, "start": 363, "text": " So again, they kind of have their compression model of the environment." }, { "end": 381, "start": 373, "text": " What they do now is they use these Z states you see here, here, here, here that they get from that." }, { "end": 386, "start": 381, "text": " And they train how these latent representations evolve over time." }, { "end": 390, "start": 386, "text": " So with an RNN here goes over time." 
}, { "end": 401, "start": 390, "text": " So the RNN will always kind of predict what's the next state of the environment going to be." }, { "end": 407, "start": 401, "text": " But importantly, maybe compared to environment models that we've discussed before in the, for example," }, { "end": 419, "start": 407, "text": " imagination augmented agent paper, there we always try to directly predict the future pixels, so to say, of the future frame." }, { "end": 424, "start": 419, "text": " Here, the environment model is over the latent representation." }, { "end": 429, "start": 424, "text": " Of course, this means that the this is a much smaller space." }, { "end": 440, "start": 429, "text": " So if your compression model is good, then this should be much easier to learn than, say, like a full end to end environment model." }, { "end": 449, "start": 440, "text": " So this model learns how your latent states evolve over time, given your actions." }, { "end": 455, "start": 449, "text": " So you can imagine the Z being an abstract representation of your state and then your action." }, { "end": 462, "start": 455, "text": " And then this goes into the RNN and the RNN will predict what's the next latent representation." }, { "end": 468, "start": 462, "text": " And there is what's called a temperature parameter to control the stochasticity." }, { "end": 476, "start": 468, "text": " I've already told you this, there is a stochasticity built into this." }, { "end": 484, "start": 476, "text": " So the RNN will simply output like some vector, what it thinks is the next thing going to be." }, { "end": 492, "start": 484, "text": " And they don't use this directly as the next step, but they parameterize a kind of a mixture of Gaussian distributions" }, { "end": 499, "start": 492, "text": " coupled with a decoder here in order to give a random distribution over the next state." }, { "end": 503, "start": 499, "text": " And they control the amount of randomness with the temperature parameter." }, { "end": 506, "start": 503, "text": " They argue that this comes in handy later." }, { "end": 508, "start": 506, "text": " So all right, so what do we have?" }, { "end": 517, "start": 508, "text": " We have a system that can compress the environment into what we would call an essential part." }, { "end": 521, "start": 517, "text": " Every frame we extract what's important in that frame." }, { "end": 535, "start": 521, "text": " Then next we have a model that can predict, given a state and an action, what's the next state going to be, the next latent state." }, { "end": 539, "start": 535, "text": " So technically we now have an environment model, right, given a state." }, { "end": 548, "start": 539, "text": " We can simply, given a state and a policy, we can simply use this model to roll forward." }, { "end": 552, "start": 548, "text": " So the last component is the actual policy." }, { "end": 560, "start": 552, "text": " And the actual policy here, as you can see, is in their case simply a linear model." }, { "end": 568, "start": 560, "text": " The linear model will take the z, which is the latent representation of the current state," }, { "end": 578, "start": 568, "text": " and the h, which is the current state of the RNN that models the environment over time." }, { "end": 589, "start": 578, "text": " And it simply is a linear function of the two, gives you the action probabilities, or I guess the log-its of the actions." 
}, { "end": 593, "start": 589, "text": " So it's a really, really simple controller over these things." }, { "end": 601, "start": 593, "text": " And they do this in order to show that the main part of the work is being done by this environment model." }, { "end": 608, "start": 601, "text": " And given the environment model, you only need very few parameters basically to then learn a policy." }, { "end": 613, "start": 608, "text": " Here is what I said in a diagram." }, { "end": 618, "start": 613, "text": " So the observation goes into the compression of the VAE," }, { "end": 625, "start": 618, "text": " the latent representation of that goes into the RNN together with the hidden state from the last step." }, { "end": 632, "start": 625, "text": " And this will output a new hidden state, which goes here into the controller," }, { "end": 636, "start": 632, "text": " and we also directly take this z into the controller." }, { "end": 643, "start": 636, "text": " And then from these two, we perform an action, which now we have a choice." }, { "end": 649, "start": 643, "text": " It could go to the environment, right, give you the next observation, but also," }, { "end": 656, "start": 649, "text": " or at the same time, since you kind of need to update your RNN, it can go here" }, { "end": 663, "start": 656, "text": " and update your RNN because it will need to predict the next hidden state." }, { "end": 667, "start": 663, "text": " The thing is, we can also now leave away this path," }, { "end": 679, "start": 667, "text": " which means we can simply take our RNN and our kind of imagine the next latent representation," }, { "end": 686, "start": 679, "text": " put it through the decoder part of the VAE and use that as an observation." }, { "end": 691, "start": 686, "text": " I hope this makes sense. It's rather intuitive, right? You have a model of the environment." }, { "end": 695, "start": 691, "text": " You can simply use this instead of the real environment." }, { "end": 702, "start": 695, "text": " So, there's a bit of pseudo code here, and they do a bunch of experiments, right?" }, { "end": 710, "start": 702, "text": " So, we're primarily interested, so they say, they see here, okay, our compression works," }, { "end": 715, "start": 710, "text": " and this is the real frame, and this is the reconstructed frame, kind of looks, you know," }, { "end": 719, "start": 715, "text": " captures the essence of what's going on." }, { "end": 729, "start": 719, "text": " And I actually want to go down here, the Visdome experiment." }, { "end": 737, "start": 729, "text": " So, what they do here in the car racing experiment is they kind of learn this entire thing, right?" }, { "end": 746, "start": 737, "text": " And then they learn a policy in the real world, in the environment, using this model up here," }, { "end": 752, "start": 746, "text": " this procedure where they always go to the environment, and here is the exact experiment set up." }, { "end": 761, "start": 752, "text": " So, first they collect, again, rollouts for a random policy, they train the VAE, they train the RNN," }, { "end": 775, "start": 761, "text": " and then they learn the controller using the entire model, but in kind of the real world." }, { "end": 782, "start": 775, "text": " So, they always interact with the environment, but because they also have their kind of latent representation" }, { "end": 788, "start": 782, "text": " of the observation, and not directly the observation, they get a higher score." 
}, { "end": 798, "start": 788, "text": " And also, the policy that they use in the real environment transfers to the environment model." }, { "end": 804, "start": 798, "text": " So, the policy they learn in the true environment, it transfers to the imagined," }, { "end": 809, "start": 804, "text": " so if they use the imagined model as an environment, it also performs well." }, { "end": 813, "start": 809, "text": " In the next experiment, they're going to try to do this the other way around." }, { "end": 819, "start": 813, "text": " They're going to try to learn only using their model of the environment," }, { "end": 825, "start": 819, "text": " and then see whether or not the policy transfers to the true environment." }, { "end": 832, "start": 825, "text": " So, that's what they do here. They collect, again, a sample from the environment," }, { "end": 843, "start": 832, "text": " they train the VAE, they train the RNN, and then they simply use this virtual environment," }, { "end": 849, "start": 843, "text": " what they call it, in order to learn a policy, and at the end, they try to transfer," }, { "end": 852, "start": 849, "text": " use the learn policy on the actual environment." }, { "end": 865, "start": 852, "text": " And given the results, you see here, there we go." }, { "end": 877, "start": 865, "text": " So, you see the kind of best it does, I would say, is about here," }, { "end": 884, "start": 877, "text": " where the actual score is, you can see in this, and also in this setting," }, { "end": 892, "start": 884, "text": " is higher than the kind of previous best algorithm in the OpenAI GIMP," }, { "end": 898, "start": 892, "text": " when you go from virtual to actual." }, { "end": 905, "start": 898, "text": " So, what this means is kind of, yeah, you can train using this imagined model," }, { "end": 910, "start": 905, "text": " and then it will actually transfer, but there's a crucial thing," }, { "end": 913, "start": 910, "text": " and that is this kind of temperature thing here." }, { "end": 919, "start": 913, "text": " You can see a lot of times they actually don't manage to reach a good score," }, { "end": 922, "start": 919, "text": " if this parameter is wrong. What does this parameter do?" }, { "end": 927, "start": 922, "text": " This parameter controls, as we discussed, the stochasticity of the model." }, { "end": 935, "start": 927, "text": " So, basically, the environment model doesn't directly imagine a future state," }, { "end": 939, "start": 935, "text": " but it imagines a distribution over future states." }, { "end": 944, "start": 939, "text": " And the higher this parameter, the more stochastic this distribution is," }, { "end": 951, "start": 944, "text": " basically the more uniform, I guess, the more entropy you have in these future states." }, { "end": 955, "start": 951, "text": " We've seen this temperature parameter here." }, { "end": 966, "start": 955, "text": " Which is important, because they go into length explaining why in this entire page here that we skipped." }, { "end": 971, "start": 966, "text": " Here you see just text, there." }, { "end": 975, "start": 971, "text": " Cheating the world model, which basically they say, okay, if you have a wrong model," }, { "end": 980, "start": 975, "text": " if you have a model that's wrong of the environment, and you train a policy on it, necessarily," }, { "end": 987, "start": 980, "text": " it's going to probably find a policy that exploits the wrongness of this model." 
}, { "end": 995, "start": 987, "text": " So you might be able to walk through walls or fly or ignore the fireballs." }, { "end": 1003, "start": 995, "text": " Or basically, find that if you stand next to a wall, in your imagination, you'll never get hit." }, { "end": 1006, "start": 1003, "text": " Something like this, which isn't true in the real world." }, { "end": 1011, "start": 1006, "text": " So the policy will exploit that." }, { "end": 1016, "start": 1011, "text": " And to counter this, they simply basically turn up this temperature parameter," }, { "end": 1020, "start": 1016, "text": " giving them a more stochastic procedure." }, { "end": 1024, "start": 1020, "text": " Meaning they imagine a lot of kind of different futures," }, { "end": 1029, "start": 1024, "text": " and they train their policy on all of them, or in expectation over a sample of them." }, { "end": 1038, "start": 1029, "text": " Which means that if the environment model is wrong, this kind of..." }, { "end": 1042, "start": 1038, "text": " I want to say if it's wrong, this corrects for it. It doesn't." }, { "end": 1049, "start": 1042, "text": " But if it's wrong, you still sample different futures." }, { "end": 1056, "start": 1049, "text": " So if it has one wrong future, you still have the other ones to kind of punish the policy," }, { "end": 1063, "start": 1056, "text": " if it tries to exploit this one mistake. At least that's the reasoning behind it." }, { "end": 1067, "start": 1063, "text": " So that's how they do this." }, { "end": 1071, "start": 1067, "text": " You can interact with their trained environment models online somehow." }, { "end": 1076, "start": 1071, "text": " They also give a kind of a look at what they would like to have." }, { "end": 1082, "start": 1076, "text": " Instead of collecting the environment model from random rollout," }, { "end": 1086, "start": 1082, "text": " they would try to train it, then to use it again to collect more data," }, { "end": 1089, "start": 1086, "text": " to train more environment model, then use the environment," }, { "end": 1094, "start": 1089, "text": " better environment model to train more the policy, and so on in a stepwise fashion." }, { "end": 1100, "start": 1094, "text": " But they don't actually do it, they simply describe it." }, { "end": 1105, "start": 1100, "text": " And the rest of the paper is a bit of related work and discussion." }, { "end": 1115, "start": 1105, "text": " It's very prosaically written, kind of different from what you're used to if you read a lot of these papers." }, { "end": 1136, "start": 1115, "text": " But yeah, I hope you can now you know what's going on and see you next time." } ]
_Z9ZP1eiKsI
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Curiosity-driven Exploration by Self-supervised Prediction
[ "Science & Technology" ]
[]
https://arxiv.org/abs/1705.05363 Authors: Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, Trevor Darrell Abstract: In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch.
Hi there! Today we're going to look at this paper, Curiosity-Driven Exploration by Self-Supervised Prediction. It's a relatively short idea, so it shouldn't take too long. So the fundamental idea of the paper is to tackle the reward sparseness problem reinforcement learning. For example, if you have a Super Mario game like here, and there's a number of ways you can think of the reward, but one way you could formulate it is that you simply get kind of a plus one reward when you finish the game, or the level. Let's say you finish the level, you get plus one. If you die or don't make it in time, you get negative one. I think there's no way to not make it in... Oh yeah, there's actually a time limit. So the... The problem here is that your algorithm kind of needs to learn to make things now such that it gets to the end of the level, but the reward is only at the end of the level. So basically step by step it has no signal to go on because the reward is always zero, and it kind of needs to learn these long range dependencies. And that's notoriously hard in reinforcement learning to step by step learn actions that kind of maximize some very long term goal. So you can also think of a game of chess where your reward is going to be whether you win or lose at the end, but step by step it's kind of this... The reward is 50ish steps away. So you have no way of kind of step by step optimizing your actions in a meaningful manner. So there are many ways to get around this. One way that people have done is what's called reward shaping. And reward shaping is you're trying to introduce additional rewards kind of as a designer of the algorithm that you know are kind of good or helping to solve the problem or at least correlated with the reward you're going to get at the end. So in Mario this could be like the further right you go, the more reward you get. You get kind of an additional reward if you go right. Coincidentally I think in real Mario this also gives you points, but our situation is that the reward is just going to be at the end. You could also say like if you kill the... Or if you stomp the goombas, one goomba you stomp, that actually gives you also a bit of reward. In chess you could say like the more pieces you have, that gives you a bit of reward if you have more pieces than your opponent, if your opponent loses pieces. You don't and you also get a bit of reward if you get more territory on the board and so on. So these are all things that we know kind of correlate with the end reward. Like because in Mario for example the end of the level is actually on the right. But of course it's not perfect because sometimes there are situations where you kind of have to go back, go around something or go over something and not immediately go to the right. As well as in chess there are good sacrifices that you can make. So these kind of additional rewards they help, but they're not perfect. And the biggest problem with them is they're very domain specific. So a developer of the algorithm you basically have to know the domain like Super Mario and you have to know the goal is on the right. So you have to construct your reward in order to kind of reflect this. And this is very domain specific. Basically you have to do it for every domain again and again and again. In chess you have to know something about chess to play and so on. 
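To make the reward-shaping idea concrete, here is a small hypothetical sketch. The quantities (horizontal position, stomped goombas) and the coefficients are invented for illustration; they are not an actual Mario API.

```python
def sparse_reward(finished_level, died_or_timed_out):
    """The hard setting: feedback only at the very end of an episode."""
    if finished_level:
        return 1.0
    if died_or_timed_out:
        return -1.0
    return 0.0

def shaped_reward(prev_x, new_x, goombas_stomped, finished_level):
    """Hand-designed, domain-specific shaping: reward progress to the right and stomps.
    Names and coefficients are made up for illustration only."""
    reward = 0.01 * (new_x - prev_x)    # moving right correlates with finishing the level
    reward += 0.1 * goombas_stomped     # small bonus for stomping enemies
    if finished_level:
        reward += 1.0                   # the true objective still pays out at the end
    return reward
```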
So one way around this, and this paper proposes one method to do this, is to introduce an additional reward not based on the domain specifically, but based on what they call this curiosity. And it's specifically curiosity by self supervised prediction. So what does that mean? The idea is not new in that people have kind of done this before. If we go for example down here. So here is this kind of doom environment and what you could say is in my agent I have kind of a little module that's going to predict the future. So like if I'm here then I will basically choose an action, my agent will choose an action, like move forward, like press the forward key and then I will predict how that's going to look. And of course we know this is kind of a 3D environment so this is probably going to be this part of the screen is going to be the full screen because you're now closer and so on the perspective changes a little bit. But basically this should be a learned neural network that predicts the future from the state now and the action now. And basically you can train this in a supervised fashion because you will perform some actions, you will collect some data about this so you can learn a network that is going to predict one step into the future basically, how the environment will look. And then, and this is by no means kind of a new idea to introduce rewards based on this type of learning how the environment acts. We've seen this in like the A3C paper, the original one where the additional reward is something like pixel control where they consider like okay this pixel here, how much can I control it by my action, like how does my action influence it, can I predict this and so on. And to learn how to control the pixels on the screen by your actions and to give a reward based on that so that's been around this idea. And what this paper here does specifically is they say well I'm going to predict the future and if I am wrong about the prediction then that gives me a reward and that's the curiosity part. Basically it means like if I have a good model of what's going to happen in the future and then I predict the future and then I'm wrong it means something new has happened, something special, something that I hadn't expected. And therefore if the goal is to get the algorithm to explore by itself which is what you need to do when you don't have a reward, right? When you don't have a reward what you want your algorithm to do is simply to go around and explore. And in a sense they're saying okay the way to do this is to go by curiosity which means is to go to actively seek out environments that you wouldn't expect basically. So whenever you don't expect something that means it's something new, that means you haven't had this experience before, right? And that means that it's kind of a new state to explore. That you have not seen this before so kind of in absence of any reward you might as well go where you haven't been before and that's kind of the essence. So they outline a number of problems that you might have with this approach. They give the example, let's first actually go to what the model actually looks like. So that's here. You can see this is kind of what they call an intrinsic curiosity module. So you have a state here, you're in a state, you have your policy and your policy gives you an action. And the action goes to the environment and the environment gives you the next state and also what's called the reward. They call here E is the extrinsic reward that you get from the environment. 
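As a rough sketch, the naive pixel-space version of this idea could look as follows; `forward_model` here stands for any learned network that maps the current frame and action to a predicted next frame, and the prediction error becomes the curiosity bonus.

```python
import torch.nn.functional as F

def pixel_curiosity_reward(forward_model, frame, action, next_frame):
    """Naive sketch: predict the next frame directly in pixel space and use the
    prediction error as the intrinsic ("curiosity") reward.  forward_model is assumed
    to be any learned network mapping (frame, action) -> predicted next frame."""
    predicted = forward_model(frame, action)
    # The worse the prediction, the more surprising the transition, the larger the bonus.
    return F.mse_loss(predicted, next_frame).item()
```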
But they also combine this with what's called an intrinsic reward that you get from here that you get from the curiosity module. And that's what we've discussed. It kind of tries to assess how new is the state that I'm going to be in. How surprising it is for me. So the thing is that I'm going to first describe the model how you would build it and how that gets you into problems and then how to fix it. So how you would build this is to have this what's called this forward model. So the forward model takes the action and the current state and it kind of predicts the next state that's in here. Don't worry about the phi hat right now. It predicts the next state and then you compare this to the actual next state. You subtract, you just subtract the next state and then you get the next state. You subtract, you just look at the difference between what you predict the next state is going to be and what the next state really is. And that gives you the intrinsic reward. The more different these are, the higher the reward. That's what we've discussed. How much different is it from what I've expected. So how does that get you into problems? And the authors give a very good illustrative example of say you are in an environment. Let's actually go over here. You are in an environment and you have your screen. And here is kind of a road that you need to maybe walk after. And here are some leaves in the wind. I'm very bad at drawing leaves so imagine these are leaves and there's wind right? Like winds coming from here and kind of shaking up these leaves and so on. So if you simply try to predict this entire screen as your forward model, what's going to happen is you will never be able to predict how these leaves are going to move because there basically you can't influence them. You can predict a bit from the current state but the action you take has no influence on how these leaves are going to move because they are influenced by the wind. And the wind is kind of this random-ish process that you can't control. So the authors say because of this your algorithm is always going to find these leaves basically interesting, curious, be curious about it because it can't predict them. And we've seen that the reward that they model to give an addition is based on how well you cannot predict a certain state. And they say okay if we do like this then these random things that we can't influence will always be surprising and therefore we will always be curious about them and therefore we will always kind of look at the leaves and be amazed and get reward after reward because we can't predict them. That's not the goal. So what they're arguing is that why are these leaves not important for curiosity? Because we can't influence them with our actions. Like we can influence where we go on this road because we can kind of move and the road is kind of static, not governed by these random processes. But the leaves we would like to discard them. We can't influence them. And therefore what they say is what we need is an encoder that takes a state and I'm going to try to delete this annotation. So we need an encoder here features that takes a state and it outputs features of the state. And then our forward model isn't fed with the state, it's fed with the features of the state and is not going to output the next state. So we need an encoder that takes a state and is fed with the features of the state and is not going to output the next state as such but the features of the next state. 
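Here is a hedged sketch of the forward model operating on features phi(s) instead of raw pixels, with the prediction error used as the intrinsic reward. The feature size, the one-hot action encoding and the scaling factor eta are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ForwardModel(nn.Module):
    """Sketch: predict the features of the next state, phi(s_{t+1}), from phi(s_t) and a_t."""
    def __init__(self, feature_dim=288, num_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim + num_actions, 256), nn.ReLU(),
            nn.Linear(256, feature_dim),
        )

    def forward(self, phi_state, action_onehot):
        return self.net(torch.cat([phi_state, action_onehot], dim=-1))

def intrinsic_reward(forward_model, phi_state, action_onehot, phi_next_state, eta=0.5):
    """Curiosity bonus: error between predicted and actual next-state features.
    Things the agent cannot influence (the leaves) should already be absent from phi,
    so they no longer generate endless 'surprise'."""
    phi_pred = forward_model(phi_state, action_onehot)
    return eta * 0.5 * F.mse_loss(phi_pred, phi_next_state, reduction="sum").item()
```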
It predicts the features and then we're going to compare that with the features of the true next state and that's what we compare. So how does this encoder, these features need to look? And they're saying well these features should kind of only consider things about the state that are actually dependent on our actions. And they have a very interesting way of achieving to train such an encoder, such a feature producing function in that they say it's going to be a neural network that we train by training this so called inverse model. So we take this encoder and we train this inverse model on top of it and the inverse model takes the features of the last state and the new state and is trying to predict this action, this action right here. So this is this action, the action we took to get from the old state to the new state. So this inverse model is trained to predict what action was taken to get from the old state to the new state. And by training the encoder with this inverse model, like training this end to end, you will make the encoder such that it only considers things that are actually relevant to predicting this action. So in the leaves example it would discard the leaves. It will discard anything that you can't influence with your action and therefore it will only retain features that are dependent on your action. I think that's quite an interesting way to get rid of the irrelevant information that they don't want. And then they can use this encoder to train this forward model and to essentially get information from the old model and to essentially get this intrinsic reward. So I find this idea quite interesting and as I said the idea of intrinsic reward and curiosity to go for exploration is not new, but I think this kind of approach and I'm sure it's been around in some variants, but I've just stumbled across this and this is quite interesting. So we're going to take a look, and you can go about the math yourself, but they do these kind of experiments and they corrupt, as you can see, part of the screen with noise here and they of course show like, okay, since the noise is not dependent on our action, our features do actually discard this noise, only focus on the part that we can actually influence by our actions. So that's, I think, all in all pretty interesting. They show, of course, that their algorithm then outperforms the kind of baseline of A3C on these sparse reward tasks and the sparser here you can see like the left is like dense reward and then sparse reward and then very sparse reward and at some point you see the A3C simply doesn't do it anymore. But what's also interesting is here you have the ICM in pixels, which kind of means pixel-based curiosity, so where we don't have this encoder, where we simply try to predict the pixels of the environment and that works if you have like this kind of sparse reward thing, but if you want to, if you have the very sparse reward, that also fails and you actually need this encoder that discards what's not relevant for predicting the actions. Yeah, so you can take a look at the rest of the paper yourself. I find it quite interesting. They analyze how their agent explore these mazes and things and they have more experiments on like benchmark tasks. So have a look at it and I'll see you next time.
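And here is a sketch of the feature encoder together with the inverse model that shapes it. The architecture and sizes are illustrative; the key point is that the cross-entropy loss on the predicted action backpropagates into the encoder, so the features keep only what matters for explaining the agent's own actions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureEncoder(nn.Module):
    """Sketch of phi: raw frame -> feature vector (architecture and sizes are illustrative)."""
    def __init__(self, feature_dim=288):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ELU(),
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ELU(),
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ELU(),
            nn.Flatten(),
        )
        self.fc = nn.LazyLinear(feature_dim)  # infers the flattened size on first use

    def forward(self, frame):
        return self.fc(self.conv(frame))

class InverseModel(nn.Module):
    """Sketch: predict which (discrete) action was taken, from phi(s_t) and phi(s_{t+1})."""
    def __init__(self, feature_dim=288, num_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feature_dim, 256), nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, phi_state, phi_next_state):
        return self.net(torch.cat([phi_state, phi_next_state], dim=-1))

def inverse_loss(encoder, inverse_model, frame, next_frame, action_taken):
    """Cross-entropy between predicted and actually taken action. Because the gradient
    flows back into the encoder, phi keeps only what is useful for explaining the action."""
    phi_s, phi_s_next = encoder(frame), encoder(next_frame)
    logits = inverse_model(phi_s, phi_s_next)
    return F.cross_entropy(logits, action_taken)
```

In the paper, the inverse-model loss, the forward-model loss and the policy objective are optimised jointly as a weighted sum, so the encoder, the curiosity module and the agent are all trained together.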
[ { "end": 8, "start": 0, "text": " Hi there! Today we're going to look at this paper, Curiosity-Driven Exploration by Self-Supervised" }, { "end": 14.84, "start": 8, "text": " Prediction. It's a relatively short idea, so it shouldn't take too long. So the fundamental" }, { "end": 21.36, "start": 14.84, "text": " idea of the paper is to tackle the reward sparseness problem reinforcement learning." }, { "end": 27.52, "start": 21.36, "text": " For example, if you have a Super Mario game like here, and there's a number of ways you" }, { "end": 33.6, "start": 27.52, "text": " can think of the reward, but one way you could formulate it is that you simply get kind of" }, { "end": 40.56, "start": 33.6, "text": " a plus one reward when you finish the game, or the level. Let's say you finish the level," }, { "end": 49.2, "start": 40.56, "text": " you get plus one. If you die or don't make it in time, you get negative one. I think" }, { "end": 55.92, "start": 49.2, "text": " there's no way to not make it in... Oh yeah, there's actually a time limit. So the..." }, { "end": 63.92, "start": 55.92, "text": " The problem here is that your algorithm kind of needs to learn to make things now such" }, { "end": 68, "start": 63.92, "text": " that it gets to the end of the level, but the reward is only at the end of the level." }, { "end": 74.04, "start": 68, "text": " So basically step by step it has no signal to go on because the reward is always zero," }, { "end": 78.16, "start": 74.04, "text": " and it kind of needs to learn these long range dependencies. And that's notoriously hard" }, { "end": 83.2, "start": 78.16, "text": " in reinforcement learning to step by step learn actions that kind of maximize some very" }, { "end": 89.28, "start": 83.2, "text": " long term goal. So you can also think of a game of chess where your reward is going to" }, { "end": 94.16, "start": 89.28, "text": " be whether you win or lose at the end, but step by step it's kind of this... The reward" }, { "end": 103.48, "start": 94.16, "text": " is 50ish steps away. So you have no way of kind of step by step optimizing your actions" }, { "end": 112.44, "start": 103.48, "text": " in a meaningful manner. So there are many ways to get around this. One way that people" }, { "end": 118.03999999999999, "start": 112.44, "text": " have done is what's called reward shaping. And reward shaping is you're trying to introduce" }, { "end": 125.32, "start": 118.03999999999999, "text": " additional rewards kind of as a designer of the algorithm that you know are kind of good" }, { "end": 132.76, "start": 125.32, "text": " or helping to solve the problem or at least correlated with the reward you're going to" }, { "end": 138.28, "start": 132.76, "text": " get at the end. So in Mario this could be like the further right you go, the more reward" }, { "end": 143.8, "start": 138.28, "text": " you get. You get kind of an additional reward if you go right. Coincidentally I think in" }, { "end": 149.52, "start": 143.8, "text": " real Mario this also gives you points, but our situation is that the reward is just going" }, { "end": 156.2, "start": 149.52, "text": " to be at the end. You could also say like if you kill the... Or if you stomp the goombas," }, { "end": 162.8, "start": 156.2, "text": " one goomba you stomp, that actually gives you also a bit of reward. 
In chess you could" }, { "end": 167.36, "start": 162.8, "text": " say like the more pieces you have, that gives you a bit of reward if you have more pieces" }, { "end": 173.08, "start": 167.36, "text": " than your opponent, if your opponent loses pieces. You don't and you also get a bit of" }, { "end": 177.52, "start": 173.08, "text": " reward if you get more territory on the board and so on. So these are all things that we" }, { "end": 183.56, "start": 177.52, "text": " know kind of correlate with the end reward. Like because in Mario for example the end" }, { "end": 187.72000000000003, "start": 183.56, "text": " of the level is actually on the right. But of course it's not perfect because sometimes" }, { "end": 192.96, "start": 187.72000000000003, "text": " there are situations where you kind of have to go back, go around something or go over" }, { "end": 198.92000000000002, "start": 192.96, "text": " something and not immediately go to the right. As well as in chess there are good sacrifices" }, { "end": 205.92000000000002, "start": 198.92000000000002, "text": " that you can make. So these kind of additional rewards they help, but they're not perfect." }, { "end": 212.36, "start": 206.92000000000002, "text": " And the biggest problem with them is they're very domain specific. So a developer of the" }, { "end": 217.08, "start": 212.36, "text": " algorithm you basically have to know the domain like Super Mario and you have to know the" }, { "end": 224.08, "start": 217.08, "text": " goal is on the right. So you have to construct your reward in order to kind of reflect this." }, { "end": 231.48000000000002, "start": 224.60000000000002, "text": " And this is very domain specific. Basically you have to do it for every domain again and" }, { "end": 238.60000000000002, "start": 231.48000000000002, "text": " again and again. In chess you have to know something about chess to play and so on. So" }, { "end": 245.28, "start": 238.60000000000002, "text": " one way around this, and this paper proposes one method to do this, is to introduce an" }, { "end": 250.16, "start": 245.28, "text": " additional reward not based on the domain specifically, but based on what they call" }, { "end": 257.16, "start": 250.16, "text": " this curiosity. And it's specifically curiosity by self supervised prediction. So what does" }, { "end": 268.36, "start": 257.36, "text": " that mean? The idea is not new in that people have kind of done this before. If we go for" }, { "end": 278.36, "start": 268.36, "text": " example down here. So here is this kind of doom environment and what you could say is" }, { "end": 292.36, "start": 281.36, "text": " in my agent I have kind of a little module that's going to predict the future. So like" }, { "end": 299.36, "start": 292.36, "text": " if I'm here then I will basically choose an action, my agent will choose an action, like" }, { "end": 308.24, "start": 301.24, "text": " move forward, like press the forward key and then I will predict how that's going to look." }, { "end": 314.40000000000003, "start": 309.88, "text": " And of course we know this is kind of a 3D environment so this is probably going to be" }, { "end": 318.76, "start": 314.40000000000003, "text": " this part of the screen is going to be the full screen because you're now closer and" }, { "end": 324.92, "start": 318.76, "text": " so on the perspective changes a little bit. 
But basically this should be a learned neural" }, { "end": 330.48, "start": 324.92, "text": " network that predicts the future from the state now and the action now. And basically" }, { "end": 336.88, "start": 330.48, "text": " you can train this in a supervised fashion because you will perform some actions, you" }, { "end": 343.03999999999996, "start": 336.88, "text": " will collect some data about this so you can learn a network that is going to predict one" }, { "end": 349.32, "start": 343.04, "text": " step into the future basically, how the environment will look. And then, and this is by no means" }, { "end": 356.32000000000005, "start": 349.32, "text": " kind of a new idea to introduce rewards based on this type of learning how the environment" }, { "end": 364, "start": 357, "text": " acts. We've seen this in like the A3C paper, the original one where the additional reward" }, { "end": 369.12, "start": 364.24, "text": " is something like pixel control where they consider like okay this pixel here, how much" }, { "end": 374.62, "start": 369.12, "text": " can I control it by my action, like how does my action influence it, can I predict this" }, { "end": 381.62, "start": 374.62, "text": " and so on. And to learn how to control the pixels on the screen by your actions and to" }, { "end": 388.88, "start": 382.48, "text": " give a reward based on that so that's been around this idea. And what this paper here" }, { "end": 395.88, "start": 388.88, "text": " does specifically is they say well I'm going to predict the future and if I am wrong about" }, { "end": 402.88, "start": 395.88, "text": " the prediction then that gives me a reward and that's the curiosity part. Basically it" }, { "end": 410.68, "start": 403.68, "text": " means like if I have a good model of what's going to happen in the future and then I predict" }, { "end": 417.15999999999997, "start": 411.15999999999997, "text": " the future and then I'm wrong it means something new has happened, something special, something" }, { "end": 424.16, "start": 417.16, "text": " that I hadn't expected. And therefore if the goal is to get the algorithm to explore by" }, { "end": 430.76000000000005, "start": 427.32000000000005, "text": " itself which is what you need to do when you don't have a reward, right? When you don't" }, { "end": 437.76000000000005, "start": 430.76000000000005, "text": " have a reward what you want your algorithm to do is simply to go around and explore." }, { "end": 443.8, "start": 438.8, "text": " And in a sense they're saying okay the way to do this is to go by curiosity which means" }, { "end": 450.8, "start": 443.8, "text": " is to go to actively seek out environments that you wouldn't expect basically. So whenever" }, { "end": 458.16, "start": 453.56, "text": " you don't expect something that means it's something new, that means you haven't had" }, { "end": 465.16, "start": 458.16, "text": " this experience before, right? And that means that it's kind of a new state to explore." }, { "end": 472.16, "start": 465.16, "text": " That you have not seen this before so kind of in absence of any reward you might as well" }, { "end": 479.16, "start": 472.16, "text": " go where you haven't been before and that's kind of the essence. So they outline a number" }, { "end": 487.6, "start": 480.6, "text": " of problems that you might have with this approach. They give the example, let's first" }, { "end": 494.6, "start": 487.6, "text": " actually go to what the model actually looks like. So that's here. 
You can see this is" }, { "end": 502.12, "start": 495.12, "text": " kind of what they call an intrinsic curiosity module. So you have a state here, you're in" }, { "end": 509.12, "start": 502.12, "text": " a state, you have your policy and your policy gives you an action. And the action goes to" }, { "end": 516.12, "start": 509.12, "text": " the environment and the environment gives you the next state and also what's called" }, { "end": 524.68, "start": 517.68, "text": " the reward. They call here E is the extrinsic reward that you get from the environment." }, { "end": 529.68, "start": 524.68, "text": " But they also combine this with what's called an intrinsic reward that you get from here" }, { "end": 535.6800000000001, "start": 529.68, "text": " that you get from the curiosity module. And that's what we've discussed. It kind of tries" }, { "end": 542.68, "start": 535.68, "text": " to assess how new is the state that I'm going to be in. How surprising it is for me. So" }, { "end": 549.68, "start": 542.68, "text": " the thing is that I'm going to first describe the model how you would build it and how that" }, { "end": 559.1999999999999, "start": 553.1999999999999, "text": " gets you into problems and then how to fix it. So how you would build this is to have" }, { "end": 566.2, "start": 559.2, "text": " this what's called this forward model. So the forward model takes the action and the" }, { "end": 570.2, "start": 566.2, "text": " current state and it kind of predicts the next state that's in here. Don't worry about" }, { "end": 577.2, "start": 570.2, "text": " the phi hat right now. It predicts the next state and then you compare this to the actual" }, { "end": 587.2, "start": 580.2, "text": " next state. You subtract, you just subtract the next state and then you get the next state." }, { "end": 592.44, "start": 587.2, "text": " You subtract, you just look at the difference between what you predict the next state is" }, { "end": 597.24, "start": 592.44, "text": " going to be and what the next state really is. And that gives you the intrinsic reward." }, { "end": 602.72, "start": 597.24, "text": " The more different these are, the higher the reward. That's what we've discussed. How much" }, { "end": 609.72, "start": 602.72, "text": " different is it from what I've expected. So how does that get you into problems? And the" }, { "end": 616.72, "start": 609.72, "text": " authors give a very good illustrative example of say you are in an environment. Let's actually" }, { "end": 625.96, "start": 619.28, "text": " go over here. You are in an environment and you have your screen. And here is kind of" }, { "end": 631.6800000000001, "start": 625.96, "text": " a road that you need to maybe walk after. And here are some leaves in the wind. I'm" }, { "end": 638, "start": 631.6800000000001, "text": " very bad at drawing leaves so imagine these are leaves and there's wind right? Like winds" }, { "end": 644.2, "start": 638, "text": " coming from here and kind of shaking up these leaves and so on. So if you simply try to" }, { "end": 651.2, "start": 644.2, "text": " predict this entire screen as your forward model, what's going to happen is you will" }, { "end": 658.26, "start": 652.44, "text": " never be able to predict how these leaves are going to move because there basically" }, { "end": 665.26, "start": 658.26, "text": " you can't influence them. 
You can predict a bit from the current state but the action" }, { "end": 671.26, "start": 665.26, "text": " you take has no influence on how these leaves are going to move because they are influenced" }, { "end": 678.26, "start": 671.26, "text": " by the wind. And the wind is kind of this random-ish process that you can't control." }, { "end": 689.26, "start": 682.26, "text": " So the authors say because of this your algorithm is always going to find these leaves basically" }, { "end": 694.26, "start": 689.26, "text": " interesting, curious, be curious about it because it can't predict them. And we've" }, { "end": 701.26, "start": 694.26, "text": " seen that the reward that they model to give an addition is based on how well you cannot" }, { "end": 708.26, "start": 701.26, "text": " predict a certain state. And they say okay if we do like this then these random things" }, { "end": 715.74, "start": 708.74, "text": " that we can't influence will always be surprising and therefore we will always be curious about" }, { "end": 720.74, "start": 715.74, "text": " them and therefore we will always kind of look at the leaves and be amazed and get reward" }, { "end": 725.74, "start": 720.74, "text": " after reward because we can't predict them. That's not the goal. So what they're arguing" }, { "end": 732.74, "start": 725.74, "text": " is that why are these leaves not important for curiosity? Because we can't influence" }, { "end": 739.26, "start": 733.26, "text": " them with our actions. Like we can influence where we go on this road because we can kind" }, { "end": 746.26, "start": 739.26, "text": " of move and the road is kind of static, not governed by these random processes. But the" }, { "end": 753.26, "start": 746.26, "text": " leaves we would like to discard them. We can't influence them. And therefore what they say" }, { "end": 760.26, "start": 753.26, "text": " is what we need is an encoder that takes a state and I'm going to try to delete this" }, { "end": 767.26, "start": 760.26, "text": " annotation. So we need an encoder here features that takes a state and it outputs features" }, { "end": 778.26, "start": 771.26, "text": " of the state. And then our forward model isn't fed with the state, it's fed with the features" }, { "end": 785.26, "start": 778.26, "text": " of the state and is not going to output the next state. So we need an encoder that takes" }, { "end": 790.26, "start": 785.26, "text": " a state and is fed with the features of the state and is not going to output the next" }, { "end": 796.26, "start": 790.26, "text": " state as such but the features of the next state. It predicts the features and then we're" }, { "end": 801.26, "start": 796.26, "text": " going to compare that with the features of the true next state and that's what we compare." }, { "end": 808.26, "start": 801.26, "text": " So how does this encoder, these features need to look? And they're saying well these features" }, { "end": 814.26, "start": 808.76, "text": " should kind of only consider things about the state that are actually dependent on our" }, { "end": 821.26, "start": 814.26, "text": " actions. And they have a very interesting way of achieving to train such an encoder," }, { "end": 828.26, "start": 821.76, "text": " such a feature producing function in that they say it's going to be a neural network" }, { "end": 835.26, "start": 828.26, "text": " that we train by training this so called inverse model. 
So we take this encoder and we train" }, { "end": 842.26, "start": 835.26, "text": " this inverse model on top of it and the inverse model takes the features of the last state" }, { "end": 850.26, "start": 843.26, "text": " and the new state and is trying to predict this action, this action right here. So this" }, { "end": 857.26, "start": 850.26, "text": " is this action, the action we took to get from the old state to the new state. So this" }, { "end": 864.26, "start": 857.26, "text": " inverse model is trained to predict what action was taken to get from the old state to the" }, { "end": 871.26, "start": 864.26, "text": " new state. And by training the encoder with this inverse model, like training this end" }, { "end": 878.26, "start": 871.26, "text": " to end, you will make the encoder such that it only considers things that are actually" }, { "end": 883.26, "start": 878.26, "text": " relevant to predicting this action. So in the leaves example it would discard the leaves." }, { "end": 890.26, "start": 883.26, "text": " It will discard anything that you can't influence with your action and therefore it will only" }, { "end": 896.26, "start": 890.26, "text": " retain features that are dependent on your action. I think that's quite an interesting" }, { "end": 902.26, "start": 896.26, "text": " way to get rid of the irrelevant information that they don't want. And then they can use" }, { "end": 909.26, "start": 902.26, "text": " this encoder to train this forward model and to essentially get information from the old" }, { "end": 916.26, "start": 909.26, "text": " model and to essentially get this intrinsic reward. So I find this idea quite interesting" }, { "end": 924.26, "start": 918.26, "text": " and as I said the idea of intrinsic reward and curiosity to go for exploration is not" }, { "end": 930.26, "start": 924.26, "text": " new, but I think this kind of approach and I'm sure it's been around in some variants," }, { "end": 944.26, "start": 930.26, "text": " but I've just stumbled across this and this is quite interesting. So we're going to take" }, { "end": 951.26, "start": 944.26, "text": " a look, and you can go about the math yourself, but they do these kind of experiments and" }, { "end": 958.26, "start": 951.26, "text": " they corrupt, as you can see, part of the screen with noise here and they of course" }, { "end": 964.26, "start": 958.26, "text": " show like, okay, since the noise is not dependent on our action, our features do actually discard" }, { "end": 969.26, "start": 964.26, "text": " this noise, only focus on the part that we can actually influence by our actions. So" }, { "end": 976.26, "start": 969.26, "text": " that's, I think, all in all pretty interesting. They show, of course, that their algorithm" }, { "end": 984.26, "start": 976.26, "text": " then outperforms the kind of baseline of A3C on these sparse reward tasks and the sparser" }, { "end": 992.26, "start": 984.26, "text": " here you can see like the left is like dense reward and then sparse reward and then very" }, { "end": 999.26, "start": 992.26, "text": " sparse reward and at some point you see the A3C simply doesn't do it anymore. 
But what's" }, { "end": 1007.26, "start": 999.26, "text": " also interesting is here you have the ICM in pixels, which kind of means pixel-based" }, { "end": 1013.26, "start": 1007.26, "text": " curiosity, so where we don't have this encoder, where we simply try to predict the pixels" }, { "end": 1018.26, "start": 1013.26, "text": " of the environment and that works if you have like this kind of sparse reward thing, but" }, { "end": 1023.26, "start": 1018.26, "text": " if you want to, if you have the very sparse reward, that also fails and you actually need" }, { "end": 1033.26, "start": 1023.26, "text": " this encoder that discards what's not relevant for predicting the actions. Yeah, so you can" }, { "end": 1038.26, "start": 1033.26, "text": " take a look at the rest of the paper yourself. I find it quite interesting. They analyze" }, { "end": 1048.26, "start": 1038.26, "text": " how their agent explore these mazes and things and they have more experiments on like benchmark" }, { "end": 1068.26, "start": 1048.26, "text": " tasks. So have a look at it and I'll see you next time." } ]
BBp0tHcirtQ
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
git for research basics: fundamentals, commits, branches, merging
[ "Science & Technology" ]
[ "git", "research", "commit", "merge", "conflict" ]
Don't watch this if you already know how to solve a merge conflict :)
Hi there. Today we're taking a look at Git, especially Git as it is used maybe in research collaborations. So Git is like a tool to collaborate, but when you do research, like when you work on a paper together with other people, you won't use a lot of the features that Git offers and that are usually described. So in this series I want to talk about what's kind of the simplest way to collaborate with people on a research project using Git. And today we're going to go over just the fundamentals, which makes everything else a lot easier. So what you need to understand about Git is that fundamentally Git is a graph, and it's a graph of commits. What do I mean by this? So let's say you have your paper, you write some things, and then this is kind of version one. And then you have another paper, or the same paper, and you kind of change this line here. That's version two, and so on. You have kind of this chain of versions that you would like to keep in store. So this is the classic example of version control, where you would like to save these versions, and do it in a way that you can at any point in time go back to any previous version. And this is exactly what Git does, without you having to kind of rename things. People usually copy the file and then rename it to version two, version three, final version, really final version, really final version corrected, blah blah blah. Alright, so Git fundamentally is a graph, and a graph of an object we call a commit. So a commit, which I'm going to represent as a bubble here, is simply kind of an image of your hard drive, or one folder of your hard drive, at a particular point in time. So this will contain all kinds of files. Let's call this file A, file B. Oops, well, I meant to make a square here. So all the files that are in your folder, which is called the Git repository (that's not quite the correct term, but bear with me): you have this folder, and all the files in this folder, when you make a commit, all these files are kind of saved as they are into one of these bubbles. And they're saved forever, basically, in the state that they are in. So what you can do now is you can go ahead and make a second commit. So you change a bunch of files. Let's say the file B is still the same, but the file A has changed and is now A'. You make a second commit, and the second commit references the first commit. So part of a commit, except for the very first commit, is always a pointer to its parent commit. And if you look at the commits, they all have names. And the name of a commit is always its hash. And the hash includes basically the hash of all the files that are in there. So a hash could be something like F5C259, and so on. And for the next commit, the hash also includes the reference to the parent. That's why an integral part of a commit is which parent it belongs to. This ultimately is what makes the graph a graph. Every commit references its parent. So you can address every commit by its name, as I said, which is the hash of the commit. The hash is really long, but you can also simply reference it by the first couple of letters. As long as that's unique, Git will let you do this whenever you need to reference some commit. So we've discussed that basically a commit is a bunch of files, as they are, and it's saved in this state. Git is of course smart. It will only save the diff from one commit to the other. But you can just imagine that a commit is simply the status of a folder at a particular point in time.
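If you want to see that parent pointer with your own eyes, you can ask Git to print a commit object. Just as a sketch; the hashes below are made up, only the commands are real:

git cat-file -p HEAD
# prints the raw commit object, something along the lines of:
#   tree 9b2f6e...
#   parent f5c259...      <- the pointer to the parent commit
#   author ...
#   committer ...
#   (commit message)

git log --oneline
# one line per commit: the abbreviated hash plus the commit message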
So let me just take away these files here. There are a bunch of other things in Git. So one concept that Git has is called a tag. A tag is a name for a commit that you give yourself. And the tag is like a little flag that you stick into a commit. And you may call it, say, v1, version 1. This is simply a tag, and as you make new commits, the tag simply stays there. And at any time, if you don't want to remember this big long hash, you can simply refer to this commit as v1. Because that's the tag. It's kind of simple. The next form of a little flag that you can attach to a commit is called a branch. So what is the difference between a tag and a branch? A branch is also such a flag, and we'll call it, I don't know, blah. The difference is what happens when you are on this commit here, right here, and you make a commit on top of this commit while you have, as it's called, checked out the blah branch. So right now you're looking at blah, which is this commit, and you make a commit on top of this commit. What Git will do automatically for you is it will erase this flag and move it to this next commit. So you might know branches from Subversion or other version control technologies. It's very similar, but in Git, a branch is simply like a tag. It's simply a name for a commit. But with the additional property that when you make a commit on top of that commit, so when it has the commit as its parent, then Git will move the branch, the little flag, to the new commit. So basically, you always have that one branch, which is called master. Git creates this automatically for you, so you just have this little flag, master. And you make a commit on top of master, which would cause master to go here. So people usually say they work on the master branch, which means they're simply making commits on top of the commit that currently has the master flag. Git also allows you to move both tags and branches around, basically to any commit. So I could forcefully go erase this here and simply stick the master flag here. And sometimes, if we kind of decide these two commits are no good, we would simply do this. We would simply take the master flag, put it here, and then when we make a new commit on top of master now, the new commit points here, then Git would move the master flag, because it's a branch, and then we simply continue working here, working here, and Git will happily move master along. So in Git, there is no need to actually delete commits or something like this. What we can simply do is kind of move the branch that we're working on to the commit we like, and garbage collection ultimately will at some point go and delete these two commits. This is a bit more difficult once you collaborate with other people, because they might actually have made commits that reference the commits that you just kind of deleted. So it's a bit tricky, but ultimately this is something you can do. So the next thing we're going to talk about is multiple branches. Having multiple branches basically boils down to this: you have a few commits, you have your graph, and let's say this is your master branch. So here we have master, or actually let's put it on the one before, otherwise I don't have space: master. So what someone else might want to do is say, hey, I want to try out this new feature in the code. It will probably change the code base and so on, but I want to try it out. Maybe it'll introduce some bugs and so on. And then what you can do is you can make a new branch, F1, let's call it F1 for feature one.
And then I can make a commit on top of feature one, which would then move the feature one flag to here, and so on. I can make a second and a third commit and so on. Meanwhile, the other people working on the project, or maybe even you yourself, can work on top of this commit, on top of the master branch. So in software engineering, this is typically used when one part of the team wants to implement a new feature, but the other part of the team kind of continues to do bug fixes or things like this, development on the version of the software that doesn't yet have the new feature. But they kind of need to fix bugs, and since the new feature is not complete yet, they can't both work on the same code base. So each works on their own branch, so to speak. And at the end, when feature one is ready, people say, okay, we've implemented it, it's all good, there are no bugs. We would like to integrate feature one into the main software, basically. What you have to do then is a so-called merge. A merge is a process that generates a merge commit, and a merge commit is this thing here. As you notice, it has more than one parent. It has, in this case, two parents that it kind of combines. So both branches are based off of this commit here. And then changes were made, individual changes, in this branch and in this branch. So there's the possibility that people changed different things. And what the merge commit needs to do somehow is to bring together these changes. So actually, both branches might have changed the same file, but in a different way. And now the question is how do we merge these different files? And that's kind of the last topic we'll go into today. How does Git do a merge? So when we talk about merging, Git has a bunch of built-in algorithms that it helps you with. Most of the time, merging is automatic. So if you have files here, A and B, and in one branch A is changed, and in the other branch B is changed, Git simply assumes, well, this one branch has changed A, the other one hasn't, so the changes mean something, I'll take them. So basically, whenever something has changed in one branch and not changed in the other, it will assume that the changes are the thing that needs to continue to live. It assumes that the changes were made for a reason, and that reason should continue. So one might be a bug fix, the other one might be the new feature. The same goes within the same file. So when you have the same file, and in one branch something at the top is changed, and in the other branch something kind of at the bottom is changed, Git simply assumes both changes are wanted and takes both. The only kind of time when Git doesn't know what to do is when both branches change the same line. So when both branches change the same line in the same file, or lines close by, there are these algorithms by which Git determines that there's a so-called merge conflict. That's the only time where Git doesn't know what to do. And so as a preliminary, it's a good idea to structure files in a line-based fashion, especially if you write LaTeX. A good practice is to put every sentence on a new line and not have giant lines of multiple sentences, because if you put every sentence on a new line, then you immediately kind of see where something was changed. Whereas if you have one big paragraph on a single line, Git will simply tell you this line has changed, which is an entire paragraph, and you don't see what's happening.
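Just so you can picture the workflow we've been drawing as actual commands, here is a rough sketch; the branch name, the commit message and the file edits are placeholders, and we'll type the real thing out in the demonstration in a second:

git tag v1                # a tag is just a name for the current commit; it never moves
git checkout -b f1        # create the feature branch f1 and switch to it
# ... edit some files, then commit on f1; the f1 flag moves forward with each commit
git commit -a -m "work on feature one"
git checkout master       # go back to the commit the master flag points to
git merge f1              # create the merge commit; this may stop with a merge conflict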
So when you have a merge conflict, Git asks you what to do, and it does this in a very simple way. We're just going to kind of take a look here as a final demonstration. So I have a Git repository here. Let me do that. So as you can see, there's simply this test file in here. And I've just made one commit, the initial commit. And let's look at this test file. It simply says, hello. So what I can do is, for example, I can say, hi. When I want to make a new commit, first of all, git status will always tell you kind of what you can do, what's happening in your Git repository. Here it says, changes not staged for commit, modified test.txt. And it also tells you what you can do. So it tells me, for example, use git checkout dash dash with the file name to discard changes. Or use git add to update what will be committed. I'll use git add with this. So it tells me changes to be committed. Now it's green, as you can see. So when I now type git commit, it should commit these changes. And this is a common occurrence in Git. Whenever you see a text editor opening, Git expects you to type a text message, like a commit message in this case, like a log message, basically. The lines starting with hashtags are comments, which will not go into the message. This is all described right here, actually, in these comments. The thing about this is, when you type an empty message, then Git will abort the commit. So if you notice you've done something wrong, you can simply save this file while it's empty, being nothing but comments, basically, and Git will abort. So it's super useful. I'll just say, added hi, and then save this file. So this is not a special thing. This is just a text editor that edits a text file. You simply need to save the file and close the editor, and Git will be like, OK, cool, I'll continue. So with git log, now you can see we have two commits. We have my initial commit, and we have the commit called added hi. If you look at the test file, you see hi. So what we'll do now is, finally, we'll make two branches, as we've discussed before. So this is my initial commit. I've made one more commit. And we're on branch master right now, which git status will tell you. See? On branch master. So this is now master. What we'll do is we'll make a new branch called F1. We'll make a commit on F1, meaning we'll move this. F1. Then we'll make a commit on top of master, like this, which means we'll move this. Master. And then we will merge F1 back into master, such that this master is here. And at the end, we can even remove the F1 branch. And we'll do this in a way that produces a merge conflict, so that you see the whole process. So, okay. So what I want to do is, first I want to make a branch F1. For this, we can use checkout minus B for making a new branch F1. If the branch already exists, you simply need to checkout, which means I simply go to where this branch is, to the commit that the branch references. We also say we point HEAD to this commit. HEAD is always the thing you're looking at, basically. The thing you've currently checked out. So, make a new branch F1, and we'll immediately switch to F1. If I type status, it says on branch F1. It's still the same commit, but we're just on a different branch. So we'll make kind of a change to this file here. I'm gonna say hello. Cool. Save the file. Status. It says it's modified. I want to add and commit it. And there's a shortcut: commit minus A minus M. So the A simply says all the files that have changed, add them.
So I don't need to git add all the changed files separately. Though this only counts for changed files. If you have completely new files that Git isn't tracking yet, you need to add them yourself. So here with the minus A, I skip the need to first add the files, and with the minus M I can give the commit message directly. More O. Cool. So now what we've done is we have made this commit here and moved the F1 flag to this commit. What we'll do now is we'll go back to this commit, which is currently the master branch, and we'll make this commit. So first what we need to do is go back to that commit, which is a checkout. Checkout master. Since master is still referring to that commit. As you can see, when I open the test file, the change from F1 is not there. It's the status from before. Hello. I can now change the file in some other manner. In this case I say hello, because I want many E's. And I can commit this, because I'm now on the branch master. It will make this new commit here and move the master branch to that. More E. If you look at git log, you see all these commits on this branch. You don't see the commit on the F1 branch. For that I would have to go back to the F1 branch. There I log, and you see it's a different story. After the added hi commit, there's the More O commit. Whereas up here, after the added hi commit, there's the More E commit. Merging also happens when you have different branches. When you collaborate with other people, and these people make commits, and you make commits independent of each other, and you try to synchronize your work, often you need to do a merge. And then merge conflicts can also happen. What we can do now is we can go back to master. Oops. Git checkout master. There are shortcuts for all of these. We're on this branch right here. What we want to do is we want to make the merge commit. We want to merge F1 into master. While I am on master, I can say git merge F1. It will try to merge, but it will tell me: conflict, automatic merge failed, fix conflicts and then commit the result. I can say git status. It will tell me you're currently merging, you have unmerged paths, and for this test.txt file it says both modified. I'll go into the test file. This is very strange if you see it for the first time, but it's actually very intuitive. What Git will do is, wherever the line is that both branches have changed, or wherever the block of lines is that both branches have changed, Git will basically indicate this by writing directly into the file. It will put a bunch of smaller-than signs. Then it says HEAD, which means the thing you're currently looking at, which we know is master, has changed this first line to this. Hello. Then there will be a line of equals signs. Then it will say down here, the F1 branch has changed this line, the same line, to hello. It will denote the end of this with a bunch of greater-than signs. What you need to do in order to merge is simply make this file look the way you wish it to be in the merged state. First of all, you can always start by removing these equals lines, that's maybe good practice. Then within these delimiters, change how you want the file to look. In essence, I simply want to have these O's here at the end. I just want too many. Like this. Or like this. I like that. I'm going to call that the merged state. Then I delete these lines. This is the file that I would like the merged commit to have.
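Just to spell out what that looks like, here is roughly what test.txt contains during the conflict; the exact words are made up here, what matters is only the shape of the markers:

<<<<<<< HEAD
heeeello              (the line as master, the thing you have checked out, changed it)
=======
hellooooo             (the same line as the F1 branch changed it)
>>>>>>> F1
hi                    (untouched by both branches, so no markers around it)

And after you edit it down to the merged state you want, the file is simply, for example:

hellooo
hi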
What I can do is save this file. Again, I say git status. It still tells me it's unmerged, but it tells me what to do. It says use git add to mark resolution. I've resolved it. git add test.txt. git status. It says all conflicts fixed, but you are still merging. Use git commit to conclude merge. git commit. Bam. I still have to enter a commit message, which is already predefined here. I'm saying, I merged the branch F1 and there were conflicts, but that's fine. I like this message, so I'm simply going to save the file right here. When I look at git log, it now gives me the full story. First I have this added hi commit, then I have the More O commit and the More E commit, which were in parallel to each other. Then I merged both branches into one. We're now right here. What I can do now is delete the F1 flag, because I don't need it anymore. I do that with git branch minus d F1, which says: delete the branch F1. No commits are actually deleted when you delete the branch. It's simply the little flag that is deleted. The only danger is when you delete the little flag, the name, and you're then unable to reach the commit from anywhere else. Here of course we have this master, and by following this edge here, we can reach this commit just fine. Git won't delete it or garbage collect it. But Git will also tell you when you're about to do something dangerous. So don't worry. With this I think you should already have many tools or many insights into Git. In another video we're going to look at how to collaborate online with people, which isn't much harder than this. It's simply two more steps to push and pull your work from a server together with other people. Alright, so that was it. Take care.
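Oh, and just so you have the whole demonstration in one place, here is roughly the sequence of commands we went through. Take it as a sketch rather than an exact replay; the commit messages and file contents are whatever you typed along the way:

git checkout -b F1                # create the feature branch F1 and switch to it
# edit test.txt on F1
git commit -a -m "More O"         # commit on F1; the F1 flag moves forward
git checkout master               # back to the commit master points to
# edit the same line of test.txt on master
git commit -a -m "More E"         # commit on master; the master flag moves forward
git merge F1                      # try to merge F1 into master; stops with a conflict
# open test.txt, remove the <<<<<<< ======= >>>>>>> markers, keep the content you want
git add test.txt                  # mark the conflict as resolved
git commit                        # concludes the merge; the message is pre-filled
git log                           # shows both lines of history plus the merge commit
git branch -d F1                  # delete the F1 flag; the commits themselves stay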
[ { "end": 9, "start": 0, "text": " Hi there. Today we're taking a look at Git, especially Git as it is used maybe in research collaborations." }, { "end": 19, "start": 9, "text": " So Git is like a tool to collaborate, but when you research, like when you work on a paper together with other people," }, { "end": 24, "start": 19, "text": " you won't use a lot of the features that Git offers and that are usually described by Git." }, { "end": 33, "start": 24, "text": " So in this series I want to talk about what's kind of the most simple way to collaborate with people on a research project using Git." }, { "end": 40, "start": 33, "text": " And today we're going to go over just the fundamentals, which makes everything else a lot easier." }, { "end": 51, "start": 40, "text": " So what you need to understand about Git is that fundamentally Git is a graph, and it's a graph of commits." }, { "end": 61, "start": 51, "text": " What I mean by this. So let's say you have your paper, you write some things, and then this is kind of version one." }, { "end": 70, "start": 61, "text": " And then you have another paper, or the same paper, and you kind of change this line here. That's version two, and so on." }, { "end": 76, "start": 70, "text": " You have kind of this chain of versions that you would like to keep in store." }, { "end": 82, "start": 76, "text": " So this is the classic example of version control, where you would like to save these versions," }, { "end": 88, "start": 82, "text": " and do it in a way that you can at any point in time go back to any version previously." }, { "end": 92, "start": 88, "text": " And this is exactly what Git does, without you having to kind of rename." }, { "end": 102, "start": 92, "text": " Like people usually copy the file and then rename like this version two, version three, final version, really final version, really final version corrected, blah blah blah." }, { "end": 108, "start": 102, "text": " Alright, so Git fundamentally is a graph, and a graph of an object we call a commit." }, { "end": 116, "start": 108, "text": " So a commit, which I'm going to represent as a bubble here, is simply a kind of an image of your hard drive," }, { "end": 120, "start": 116, "text": " or one folder of your hard drive at a particular point in time." }, { "end": 127, "start": 120, "text": " So this will contain all kind of files. Let's call this file A, file B." }, { "end": 134, "start": 127, "text": " Oops, well, I meant to make a square here. But all the files that are in your folder," }, { "end": 140, "start": 134, "text": " which is called the Git repository, or it's not correct, but bear with me." }, { "end": 146, "start": 140, "text": " You have this folder, and all the files in this folder, when you make a commit," }, { "end": 152, "start": 146, "text": " all these files are kind of saved as they are into one of these bubbles." }, { "end": 159, "start": 152, "text": " And they're saved forever basically in this status that they are." }, { "end": 165, "start": 159, "text": " So what you can do now is you can go ahead and make a second commit." }, { "end": 175, "start": 165, "text": " So you change a bunch of files. Let's say the file B is still the same, but the file A has changed, is now A'." }, { "end": 179, "start": 175, "text": " You make a second commit, and the second commit references the first commit." }, { "end": 188, "start": 179, "text": " So part of a commit, except the very first commit, part of a commit is always a pointer to its parent commit." 
}, { "end": 192, "start": 188, "text": " And especially if you look at the commits, they all have names." }, { "end": 196, "start": 192, "text": " And the name of a commit is always its hash." }, { "end": 201, "start": 196, "text": " And the hash includes basically the hash of all the files that are in there." }, { "end": 209, "start": 201, "text": " So a hash could be something like F5C259, and so on." }, { "end": 215, "start": 209, "text": " And for the next commit, the hash also includes the reference to the parent." }, { "end": 222, "start": 215, "text": " That's why the integral part of a commit is to which parent it belongs." }, { "end": 228, "start": 222, "text": " This ultimately is what makes the graph kind of the graph." }, { "end": 233, "start": 228, "text": " Every commit references its parent." }, { "end": 238, "start": 233, "text": " So you can address every commit by its name, as I said, which is the hash of the commit." }, { "end": 246, "start": 238, "text": " So the hash is really long, but you can also simply reference it by the first couple of letters." }, { "end": 252, "start": 246, "text": " As long as that's unique, Git will let you do this whenever you need to reference some commit." }, { "end": 262, "start": 252, "text": " So we've discussed that basically a commit is a bunch of files, as they are, and it's saved in this state." }, { "end": 267, "start": 262, "text": " So Git is of course smart. It will only save the diff from one to the other commit." }, { "end": 274, "start": 267, "text": " But you can just imagine that a commit is simply the status of a folder at a particular point in time." }, { "end": 280, "start": 274, "text": " So let me just take away these files here." }, { "end": 285, "start": 280, "text": " There are a bunch of other things in Git." }, { "end": 292, "start": 285, "text": " So one concept that Git has is called a tag." }, { "end": 297, "start": 292, "text": " A tag is a name for a commit that you give yourself." }, { "end": 301, "start": 297, "text": " And the tag is like a little flag that sticks in a commit." }, { "end": 305, "start": 301, "text": " And you may say this, v1, version 1." }, { "end": 310, "start": 305, "text": " This is simply a tag, and as you make new commits, the tag simply stays there." }, { "end": 316, "start": 310, "text": " And at any time, if you don't want to remember this big long hash, you can simply refer to this commit as v1." }, { "end": 321, "start": 316, "text": " Because that's the tag. It's kind of simple." }, { "end": 327, "start": 321, "text": " The next form of a little flag that you can append to a commit is called a branch." }, { "end": 331, "start": 327, "text": " And a branch, the difference between a tag and a branch." }, { "end": 338, "start": 331, "text": " So a branch is also this flag, and we'll call it, I don't know, blah." }, { "end": 345, "start": 338, "text": " The difference is that when you are on this commit here, right here," }, { "end": 350, "start": 345, "text": " and you make a commit on top of this commit," }, { "end": 353, "start": 350, "text": " while what's called you've checked out the blah branch." }, { "end": 359, "start": 353, "text": " So right now you're looking at blah, which is this commit, and you make a commit on top of this commit." }, { "end": 368, "start": 359, "text": " What Git will do automatically for you is it will erase this flag and move it to this next commit." 
}, { "end": 378, "start": 368, "text": " So you might know branches from subversion or other version control technologies." }, { "end": 382, "start": 378, "text": " It's very similar, but in Git, a branch is simply like a tag." }, { "end": 385, "start": 382, "text": " It's simply a name for a commit." }, { "end": 390, "start": 385, "text": " But with the additional property that when you make a commit on top of that commit," }, { "end": 399, "start": 390, "text": " so when it has the commit as its parent, then Git will move the branch, the little flag, to the new commit." }, { "end": 406, "start": 399, "text": " So basically, you always have that one branch, which is called master." }, { "end": 413, "start": 406, "text": " Git creates this automatically for you if you just have this little flag, master." }, { "end": 421, "start": 413, "text": " And you make a commit on top of master, which would cause master to go here." }, { "end": 427, "start": 421, "text": " So people usually say they work on the master branch," }, { "end": 433, "start": 427, "text": " which means they're simply making commits on top of the commit that currently has the master flag." }, { "end": 441, "start": 433, "text": " Git also allows you to move around both tags and the branches basically to any commit." }, { "end": 449, "start": 441, "text": " So I could forcefully go erase this here and simply stick the master flag here." }, { "end": 456, "start": 449, "text": " And sometimes if we kind of decide these two commits are no good, we would simply do this." }, { "end": 463, "start": 456, "text": " We would simply take the master flag, put it here, and then when we make a new commit on top of the master now," }, { "end": 466, "start": 463, "text": " what we would make is we make a new commit point here," }, { "end": 472, "start": 466, "text": " then Git would move the master flag because it's a branch, master," }, { "end": 482, "start": 472, "text": " and then we simply continue working here, working here, and Git will happily move along this master." }, { "end": 486, "start": 482, "text": " So in Git, there is no need to actually delete commits or something like this." }, { "end": 496, "start": 486, "text": " What we can simply do is kind of move the branch that we're working on to the commit we like," }, { "end": 501, "start": 496, "text": " and garbage collection ultimately will at some point go and delete these two commits." }, { "end": 505, "start": 501, "text": " This is a bit more difficult once you collaborate with other people," }, { "end": 514, "start": 505, "text": " because they might actually have made commits that reference the commits that you just kind of deleted or so." }, { "end": 521, "start": 514, "text": " So it's a bit tricky, but ultimately this is something you can do." }, { "end": 525, "start": 521, "text": " So the next thing we're going to talk about is multiple branches." }, { "end": 535, "start": 525, "text": " Having multiple branches basically boils down to you have few commits, you have your graph," }, { "end": 539, "start": 535, "text": " and let's say this is your master branch." }, { "end": 551, "start": 539, "text": " So here we have master, but also, or let's make the one before, otherwise I don't have space, master." }, { "end": 565, "start": 551, "text": " So what someone else would like to do is say, hey, I want to try out this new feature in code." }, { "end": 569, "start": 565, "text": " It will probably change the code base and so on, but I want to try it out." 
}, { "end": 572, "start": 569, "text": " Maybe it'll introduce some bugs and so on." }, { "end": 580, "start": 572, "text": " And then what you can do is you can make a new branch, F1, let's call it F1 for feature one." }, { "end": 592, "start": 580, "text": " And then I can make a commit on top of feature one, which would then move the feature one flag to here, and so on." }, { "end": 594, "start": 592, "text": " I can make second and third commit and so on." }, { "end": 603, "start": 594, "text": " Meanwhile, the other people working on the project, or maybe even you yourself, can work on top of this commit," }, { "end": 606, "start": 603, "text": " on top of the master branch." }, { "end": 614, "start": 606, "text": " So in kind of software engineering, this is typically used when one part of the team wants to implement a new feature," }, { "end": 619, "start": 614, "text": " but the other part of the team kind of continues to do bug fixes or things like this," }, { "end": 625, "start": 619, "text": " development on the version of the software that doesn't yet have the new feature." }, { "end": 632, "start": 625, "text": " But they kind of need to fix bugs, and since the new feature is not complete yet, they can't both work on the same code base." }, { "end": 639, "start": 632, "text": " So each work on their own branch, so to say." }, { "end": 652, "start": 639, "text": " And at the end, when feature one is ready, people say, okay, we've implemented it, it's all good, there's no bugs." }, { "end": 658, "start": 652, "text": " We would like to integrate the feature one into the main software, basically." }, { "end": 666, "start": 658, "text": " What you have to do is you would have to do a so-called merge." }, { "end": 675, "start": 666, "text": " A merge is a process that generates a merge commit, and a merge commit is this thing here." }, { "end": 678, "start": 675, "text": " As you notice, it has more than one parent." }, { "end": 686, "start": 678, "text": " It has, in this case, two parents where it kind of combines." }, { "end": 693, "start": 686, "text": " So from this commit here, both branches are based off of this commit." }, { "end": 699, "start": 693, "text": " And then changes were made, individual changes, in this branch and in this branch." }, { "end": 704, "start": 699, "text": " So there's the possibility that people change different things." }, { "end": 712, "start": 704, "text": " And what the merge commit needs to do somehow is to bring together these changes." }, { "end": 718, "start": 712, "text": " So actually, both branches might have changed the same file, but in a different way." }, { "end": 723, "start": 718, "text": " And now the question is how do we merge these different files?" }, { "end": 729, "start": 723, "text": " And that's kind of the last topic we'll go into today." }, { "end": 733, "start": 729, "text": " How does Git do a merge?" }, { "end": 744, "start": 733, "text": " So when we talk about merging, Git has a bunch of built-in algorithms that it helps you with." }, { "end": 747, "start": 744, "text": " Most of the time, merging is automatic." }, { "end": 760, "start": 747, "text": " So if you have files here, A and B, and in one branch, A is changed, some here, and in one branch, B is changed." }, { "end": 766, "start": 760, "text": " Git simply assumes, well, this one branch has changed A, the other one hasn't." }, { "end": 770, "start": 766, "text": " So the changes mean something. I'll take them." 
}, { "end": 778, "start": 770, "text": " So basically, whenever something has changed in one branch and not changed in the other," }, { "end": 787, "start": 778, "text": " it will assume that the changes are the thing that needs to continue to live." }, { "end": 794, "start": 787, "text": " It assumes that the changes were made for a reason, and that reason should continue." }, { "end": 798, "start": 794, "text": " So one might be a bug fix, the other one might be the new feature." }, { "end": 800, "start": 798, "text": " The same goes in the same file." }, { "end": 807, "start": 800, "text": " So when you have the same file and in one branch something on top is changed," }, { "end": 810, "start": 807, "text": " and the other branch something kind of on the bottom is changed," }, { "end": 819, "start": 810, "text": " Git simply assumes both changes are wanted and takes both." }, { "end": 828, "start": 819, "text": " The only kind of time when Git doesn't know what to do is when both branches change the same line." }, { "end": 837, "start": 828, "text": " So I'm going to represent this with, I don't know, but when both branches change the same line in the same file," }, { "end": 847, "start": 837, "text": " or close by, so there are these algorithms that Git determines when there's a so-called merge conflict." }, { "end": 850, "start": 847, "text": " That's the only time where Git doesn't know what to do." }, { "end": 856, "start": 850, "text": " And so as preliminary, it's a good idea to structure files in a line-based fashion," }, { "end": 860, "start": 856, "text": " especially if you write kind of LaTeX." }, { "end": 869, "start": 860, "text": " A good practice is to put every sentence on a new line and not have like giant lines of multiple sentences," }, { "end": 878, "start": 869, "text": " because if you put every sentence on a new line, then you immediately kind of see where something was changed." }, { "end": 883, "start": 878, "text": " Whereas if you have this big paragraph and Git will simply tell you this line has changed," }, { "end": 888, "start": 883, "text": " which is an entire paragraph, and you don't see what's happening." }, { "end": 895, "start": 888, "text": " So when you have a merge conflict, Git asks you what to do, and it does this in a very simple way." }, { "end": 900, "start": 895, "text": " We're just going to kind of take a look here as a final demonstration." }, { "end": 905, "start": 900, "text": " So I have a Git repository here." }, { "end": 907, "start": 905, "text": " Let me do that." }, { "end": 911, "start": 907, "text": " So as you can see, there's simply this test file in here." }, { "end": 914, "start": 911, "text": " And I've just made one commit, the initial commit." }, { "end": 918, "start": 914, "text": " And let's look at this test file." }, { "end": 920, "start": 918, "text": " It simply says, hello." }, { "end": 924, "start": 920, "text": " So what I can do is, for example, I can say, hi." }, { "end": 931, "start": 924, "text": " When I want to make a new commit, first of all, Git status will always tell you kind of what you can do," }, { "end": 934, "start": 931, "text": " what's happening in your Git repository." }, { "end": 940, "start": 934, "text": " Here it says, changes not staged for commit, modified test.txt." }, { "end": 942, "start": 940, "text": " And it also tells you what you can do." }, { "end": 949, "start": 942, "text": " So it tells me, for example, use git checkout dash dash with the file name to discard changes." 
}, { "end": 954, "start": 949, "text": " Or use git add to update what will be committed." }, { "end": 957, "start": 954, "text": " There's a..." }, { "end": 961, "start": 957, "text": " I'll use git add with this." }, { "end": 966, "start": 961, "text": " So it tells me changes to be committed." }, { "end": 968, "start": 966, "text": " Now it's green, as you can see." }, { "end": 974, "start": 968, "text": " So when I now type git commit, it should commit these changes." }, { "end": 977, "start": 974, "text": " And this is a common occurrence in Git." }, { "end": 983, "start": 977, "text": " Whenever you see a text editor opening, Git expects you to type a text message," }, { "end": 988, "start": 983, "text": " like a commit message in this case, like a log message, basically." }, { "end": 992, "start": 988, "text": " The hashtags are comments, which will not go in here." }, { "end": 997, "start": 992, "text": " This is all described right here, actually, in these comments." }, { "end": 1004, "start": 997, "text": " The thing about these things is, when you type an empty message, then Git will abort the commit." }, { "end": 1009, "start": 1004, "text": " Notice you've done something wrong, you can simply save this file with being empty," }, { "end": 1014, "start": 1009, "text": " being nothing but comments, basically." }, { "end": 1016, "start": 1014, "text": " Git will abort. So it's super useful." }, { "end": 1022, "start": 1016, "text": " I'll just say, added hi, and then save this file." }, { "end": 1025, "start": 1022, "text": " So this is not a special thing. All you need to do..." }, { "end": 1029, "start": 1025, "text": " This is an editor, a text editor, that edits a text file." }, { "end": 1033, "start": 1029, "text": " You simply need to save the file and close the editor, and Git will be like," }, { "end": 1037, "start": 1033, "text": " OK, cool, I'll continue." }, { "end": 1040, "start": 1037, "text": " So with git log, now you can see we have two commits." }, { "end": 1044, "start": 1040, "text": " We have my initial commit, and we have the commit called added hi." }, { "end": 1048, "start": 1044, "text": " If you look at the test file, you see hi." }, { "end": 1056, "start": 1048, "text": " So what we'll do now is, finally, we'll make two branches, as we've discussed before." }, { "end": 1060, "start": 1056, "text": " So this is my initial commit. I've made one more commit." }, { "end": 1065, "start": 1060, "text": " And we're on branch master right now, which Git status will tell you." }, { "end": 1070, "start": 1065, "text": " See? On branch master. So this is now master." }, { "end": 1074, "start": 1070, "text": " What we'll do is we'll make a new branch called F1." }, { "end": 1081, "start": 1074, "text": " We'll make a commit on F1, meaning we'll move this. F1." }, { "end": 1087, "start": 1081, "text": " Then we'll make a commit on top of master, like this, which means we'll move this." }, { "end": 1099, "start": 1087, "text": " Master. And then we will merge F1 back into master, such that this master is here." }, { "end": 1104, "start": 1099, "text": " And at the end, we can even remove the F1 branch." }, { "end": 1111, "start": 1104, "text": " And we'll do this while we're having a merge conflict, so that you see the whole process." }, { "end": 1117, "start": 1111, "text": " So, okay. So what I want to do is, first I want to make a branch F1." }, { "end": 1122, "start": 1117, "text": " For this, we can use checkout minus B for making a new branch F1." 
}, { "end": 1130, "start": 1122, "text": " If the branch already exists, you simply need to checkout, which means I simply go to where this branch is," }, { "end": 1133, "start": 1130, "text": " to the commit that the branch references to." }, { "end": 1138, "start": 1133, "text": " We also say we put head to this commit." }, { "end": 1144, "start": 1138, "text": " Head is always the thing you're looking at, basically. The thing you've currently checked out." }, { "end": 1150, "start": 1144, "text": " So, make a new branch F1, and we'll immediately switch to F1 if I type status." }, { "end": 1157, "start": 1150, "text": " It says on branch F1. It's still the same commit, but we're just in a different branch." }, { "end": 1166, "start": 1157, "text": " So we'll make kind of a change to this file here. I'm gonna say hello." }, { "end": 1174, "start": 1166, "text": " Cool. Save the file. Status. It says it's modified. I want to add and commit it." }, { "end": 1180, "start": 1174, "text": " And there's a shortcut. Commit minus A minus M." }, { "end": 1186, "start": 1180, "text": " So the A simply says all the files that have changed, add them." }, { "end": 1190, "start": 1186, "text": " So I don't need to add, git add all the changed files separately." }, { "end": 1193, "start": 1190, "text": " Though this only counts for kind of changed files." }, { "end": 1198, "start": 1193, "text": " If you have completely new files that git isn't tracking yet, you need to add them yourself." }, { "end": 1204, "start": 1198, "text": " So here with a minus A, I skip the need to first add the files," }, { "end": 1210, "start": 1204, "text": " and with the minus M I can give directly the commit message. More O. Cool." }, { "end": 1219, "start": 1210, "text": " So now what we've done is we have made this commit here and moved the F1 flag to this commit." }, { "end": 1229, "start": 1219, "text": " What we'll do now is we'll go back to this commit, which is currently master branch, and we'll make this commit." }, { "end": 1234, "start": 1229, "text": " So first what we need to do is we'll go back to some commit, which is a checkout." }, { "end": 1240, "start": 1234, "text": " Checkout master. Since master is still referring to that commit." }, { "end": 1248, "start": 1240, "text": " As you can see, when I open the test file, there's no hello. It's the status from before. Hello." }, { "end": 1256, "start": 1248, "text": " I can now change the file in some other manner. In this case I say hello, because I want many Es." }, { "end": 1263, "start": 1256, "text": " And I can say I can commit this, because I'm now on the branch master." }, { "end": 1271, "start": 1263, "text": " It will make this new commit here and move the master branch to that. More E." }, { "end": 1277, "start": 1271, "text": " If you look at git log, you see all these commits on this kind of branch." }, { "end": 1286, "start": 1277, "text": " You don't see the commit on the F1 branch. For that I would have to go back to the F1 branch." }, { "end": 1293, "start": 1286, "text": " I log, and you see here it's a different story. After the added high commit, there's the more O commit." }, { "end": 1299, "start": 1293, "text": " Whereas up here, after the added high commit, there's the more E commit." }, { "end": 1307, "start": 1299, "text": " Merging also happens when you have different branches." 
}, { "end": 1313, "start": 1307, "text": " When you collaborate with other people, and these people make commits, and you make commits independent of each other," }, { "end": 1319, "start": 1313, "text": " and you try to synchronize your work, often you need to do a merge." }, { "end": 1324, "start": 1319, "text": " And then merge conflicts can also happen." }, { "end": 1329, "start": 1324, "text": " What we can do now is we can go back to master." }, { "end": 1334, "start": 1329, "text": " Because we've... Oops. Git checkout master." }, { "end": 1337, "start": 1334, "text": " There are shortcuts for all of these." }, { "end": 1340, "start": 1337, "text": " We're on this branch right here." }, { "end": 1345, "start": 1340, "text": " What we want to do is we want to make the merge commit." }, { "end": 1354, "start": 1345, "text": " We want to merge F1 into master. While I am on master, I can say git merge F1." }, { "end": 1362, "start": 1354, "text": " It will try to merge, but it will tell me conflict, automatic merge failed, fixed conflicts, and then commit the result." }, { "end": 1364, "start": 1362, "text": " I can say git status." }, { "end": 1369, "start": 1364, "text": " It will tell me you're currently merging. You have unmerged paths." }, { "end": 1375, "start": 1369, "text": " And this test.txt file is both branches modified." }, { "end": 1378, "start": 1375, "text": " I'll go into the test." }, { "end": 1383, "start": 1378, "text": " This is very strange if you see it for the first time, but it's actually very intuitive." }, { "end": 1390, "start": 1383, "text": " What git will do is wherever the line is that both branches have changed," }, { "end": 1395, "start": 1390, "text": " or wherever the block of lines is that both branches have changed," }, { "end": 1400, "start": 1395, "text": " git will basically indicate this by writing directly into the file." }, { "end": 1405, "start": 1400, "text": " It will make these smaller, smaller, smaller, smaller, smaller than sign." }, { "end": 1409, "start": 1405, "text": " Then it says head, which means this is the thing you're currently looking at," }, { "end": 1413, "start": 1409, "text": " which we know is master, has changed this first line to this." }, { "end": 1418, "start": 1413, "text": " Hello. Then it will be like equal, equal, equal, equal." }, { "end": 1423, "start": 1418, "text": " Then it will say down here, it will say the F1 branch has changed this line," }, { "end": 1426, "start": 1423, "text": " the same line to hello." }, { "end": 1434, "start": 1426, "text": " It will denote the end of this with larger, larger, larger, larger, greater than signs." }, { "end": 1444, "start": 1434, "text": " What you need to do in order to merge is simply make this file as you wish it is in the merged state." }, { "end": 1452, "start": 1444, "text": " First of all, you can always start by removing, actually, good practice maybe to remove these equal lines." }, { "end": 1459, "start": 1452, "text": " Then within these delimiters change how you want the file to look." }, { "end": 1469, "start": 1459, "text": " In essence, I simply want to have these O's here at the end." }, { "end": 1474, "start": 1469, "text": " I just want too many. Like this." }, { "end": 1478, "start": 1474, "text": " Or like this. I like that. I'm going to call that the merged state." }, { "end": 1488, "start": 1478, "text": " Then I delete these lines. This is the file that I would like the merged commit to have." 
}, { "end": 1491, "start": 1488, "text": " What I can do is save this file." }, { "end": 1495, "start": 1491, "text": " Again, I say git status. It still tells me it's unmerged, but it tells me what to do." }, { "end": 1499, "start": 1495, "text": " It says use git add to mark resolution." }, { "end": 1508, "start": 1499, "text": " I've resolved it. git add test txt. git status." }, { "end": 1516, "start": 1508, "text": " It says all conflicts fixed, but you are still merging. Use git commit to conclude merge." }, { "end": 1519, "start": 1516, "text": " git commit. Bam." }, { "end": 1527, "start": 1519, "text": " I still have to enter a commit message, which is already predefined here." }, { "end": 1533, "start": 1527, "text": " I'm saying I merged the branch F1 and there were conflicts, but that's fine." }, { "end": 1539, "start": 1533, "text": " I like this message, so I'm simply going to save the file right here." }, { "end": 1545, "start": 1539, "text": " When I look into git log, it now gives me the full story." }, { "end": 1550, "start": 1545, "text": " First I have this added high commit, then I have the more O commit and the more E commit," }, { "end": 1552, "start": 1550, "text": " which were in parallel to each other." }, { "end": 1562, "start": 1552, "text": " Then I merged both branches into one. We're now right here." }, { "end": 1570, "start": 1562, "text": " What I can do now is delete the F1 flag, because I don't need it anymore." }, { "end": 1576, "start": 1570, "text": " I do that by git branch minus d F1." }, { "end": 1582, "start": 1576, "text": " It says delete the branch F1. No commits are actually deleted when you delete the branch." }, { "end": 1585, "start": 1582, "text": " It's simply the little flag that is deleted." }, { "end": 1590, "start": 1585, "text": " The only danger is when you delete the little flag and the name," }, { "end": 1594, "start": 1590, "text": " and you're unable to reach the commit from any other end." }, { "end": 1600, "start": 1594, "text": " Here of course we have this master, and by following this edge here, we can reach this commit just fine." }, { "end": 1605, "start": 1600, "text": " git won't delete it or garbage collect it." }, { "end": 1610, "start": 1605, "text": " But git will also tell you when you're about to do something dangerous." }, { "end": 1613, "start": 1610, "text": " So don't worry." }, { "end": 1621, "start": 1613, "text": " With this I think you should already have many tools or many insights into git." }, { "end": 1626, "start": 1621, "text": " In another video we're going to look at how to collaborate online with people," }, { "end": 1628, "start": 1626, "text": " which isn't much harder than this." }, { "end": 1638, "start": 1628, "text": " It's simply two more steps to push and pull your work from a server together with other people." }, { "end": 1660, "start": 1638, "text": " Alright, so that was it. Take care." } ]
iDulhoQ2pro
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Attention Is All You Need
[ "Science & Technology" ]
[ "deep learning", "machine learning", "nlp", "natural language processing", "machine translation", "arxiv", "google", "attention mechanism", "attention", "transformer", "tensor2tensor", "rnn", "recurrent", "seq2seq" ]
https://arxiv.org/abs/1706.03762 Abstract: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data. Authors: Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
Hi there. Today we're looking at Attention is All You Need by Google. Just to declare, I don't work for Google, it's just that we've been looking at Google papers lately. But it's just an interesting paper and we're going to see what's the deal with it. So basically what the authors are saying is we should kind of get away from RNNs. These authors in particular are interested in NLP, Natural Language Processing. So traditionally, when you have a language task like "the cat eats the mouse" and you'd like to translate this to any other language, let's say German or whatever, what you would do is you would try to encode this sentence into a representation and then decode it again. So somehow this sentence needs to all go into, say, one vector, and then this one vector needs to somehow be transformed into the target language. These are traditionally called seq2seq (sequence-to-sequence) tasks and they have been solved so far using recurrent neural networks. You might know the LSTM networks that are very popular for these tasks. What basically happens in an RNN is that you go over the source sentence here one by one. Here you take the word "the", you kind of encode it, maybe with a word vector if you know what that is. So you turn it into like a vector, a word vector, and then you use a neural network to turn this vector into what we call a hidden state. So this h0 is a hidden state. You then take the second token here, "cat". You again take its word vector, because you need to represent it with numbers somehow, so you use word vectors for that. You put it through the same function, so here is what's like a little e for encoder. You put it through the same function, but this time this hidden state also gets plugged in here. For the first word you can actually think of having like a start hidden state here, h start. Usually people either learn this or just initialize it with zeros; that kind of goes into the encoder function, so it's always really the same function. And from the previous hidden state and the current word vector the encoder again predicts another hidden state h1, and so on. So you take the next token, you turn it into a word vector, you put it through this little e encoder function, and of course this is a lot more complicated in an actual, say, LSTM, but it's the basic principle behind it. So you end up with h2, and here you'd have h3, h4, and the last hidden state h4 here you would use in kind of exactly the same fashion. You would plug it into like a decoder, a little d decoder, which would output you a word, "die", and it would also output you a next hidden state, so h5. Let's just go on with the listing of the states, and this h5 would again go into the decoder, which would output "Katze". So that's how you would decode. Basically these RNNs, what they do is, if you look on top here, they take a current input and they take the last hidden state and they compute a new hidden state. In the case of the decoder they take the hidden state and they take kind of the previous word that you output, and they feed this back into the decoder, and they will output the next word. It kind of makes sense. So you would guess that the hidden state kind of encodes what the sentence means, and the last word that you output you need because, maybe for grammar, right, you know what you've just output, so kind of the next word should be based on that.
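To make the recurrent encoder described above concrete, here is a minimal numpy sketch. It is only an illustration of the idea, not the LSTM the video refers to; the dimensions, the single tanh layer per step, and the random word-vector table are assumptions made purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and word-vector table (purely illustrative values).
src_tokens = ["the", "cat", "eats", "the", "mouse"]
vocab = {w: i for i, w in enumerate(sorted(set(src_tokens)))}
d_emb, d_hid = 8, 16
emb = rng.normal(size=(len(vocab), d_emb))

# One shared encoder step: h_new = tanh(W_e [x; h_prev] + b_e).
W_e = 0.1 * rng.normal(size=(d_hid, d_emb + d_hid))
b_e = np.zeros(d_hid)

def encode_step(x, h):
    """Combine the current word vector x with the previous hidden state h."""
    return np.tanh(W_e @ np.concatenate([x, h]) + b_e)

h = np.zeros(d_hid)          # h_start, here simply initialized with zeros
hidden_states = []
for tok in src_tokens:       # go over the source sentence one token at a time
    h = encode_step(emb[vocab[tok]], h)
    hidden_states.append(h)

# Without attention, this last hidden state is all the decoder gets:
# it has to summarize the entire source sentence on its own.
print("number of hidden states:", len(hidden_states))
print("final hidden state shape:", hidden_states[-1].shape)
```

The important point is in the last comment: in the plain setup, the final hidden state is the only channel through which the decoder can see the source sentence.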
Of course you don't have to do it exactly this way, but that's kind of what these RNNs did. So attention is a mechanism here to basically increase the performance of the RNNs. What attention would do is, in this particular case, if we look at the decoder here, if it's trying to predict this word for cat, or the next word here, say here it wants the next word, then in essence the only information it really has is what the last word was, the German word for cat, and what the hidden state is. So if we look at what word it actually should output, in the input sentence it's this here, eats. And if we look at kind of the information flow that this word has to travel: first it needs to be encoded into a word vector, it needs to go through this encoder, which is the same function for all the words, so nothing specific can be learned for the word eats here, right. It needs to go through this hidden state, traverse again into another step, this hidden state, because we have two more tokens, and then the next hidden state, and then it goes all the way to the decoder, where the first two words are decoded, and still this h6, this hidden state, somehow still needs to retain the information that now the word eats is kind of the word to be translated, and that the decoder should find the German word for that. So that's of course a very long path, there's a lot of transformations involved over all these hidden states, and the hidden states not only need to remember this particular word but all of the words and the order and so on, and the grammar, ok, the grammar you can actually learn with the decoders themselves, but kind of the meaning and the structure of the sentence. So it's very hard for an RNN to learn all of this, what we call long range dependencies. And so naturally you actually think, well, why can't we just decode the first word to the first word, second word to the second word? It actually works pretty well in this example, right, like "the" to "die", "cat" to "Katze", "eats" to "isst", we could just decode one by one. Of course that's not how translation works: in translations the sentences can become rearranged in the target language, like one word can become many words, or it could even be an entirely different expression. So attention is a mechanism by which this decoder here, in this step that we're looking at, can actually decide to go back and look at particular parts of the input. Especially, what it would do in popular attention mechanisms is that this decoder here can decide to attend to the hidden states of the input sentence. What that means is, in this particular case, we would like to teach the decoder somehow that, aha, look here, I need to pay close attention to this step here, because that was the step when the word eats was just encoded, so it probably has a lot of information about what I would like to do right now, namely translate this word eats. So this mechanism allows, if you look at the information flow, it simply goes through this word vector, goes through one encoding step, and then is at the hidden state, and then the decoder can look directly at that, so the path length of information is much shorter than going through all of the hidden states in a traditional way.
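The attention idea sketched above fits in a few lines: the decoder scores every encoder hidden state, turns the scores into a distribution with a softmax, and takes a weighted sum, so information from the relevant source position reaches the decoder in one step. The dot-product scoring and the toy shapes below are assumptions for illustration; classic seq2seq attention variants (for example additive attention) compute the alignment score differently.

```python
import numpy as np

def softmax(z):
    z = z - z.max()              # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def attend(decoder_state, encoder_states):
    """Let the decoder look back over all encoder hidden states.

    decoder_state:  shape (d,), the current decoder hidden state (the query)
    encoder_states: shape (T, d), one hidden state per source token
    """
    scores = encoder_states @ decoder_state   # one alignment score per source position
    weights = softmax(scores)                 # a distribution over source positions
    context = weights @ encoder_states        # weighted sum of the hidden states
    return weights, context

# Toy example: 5 source positions, hidden size 16. The decoder state is made
# to point roughly in the direction of position 2 (the "eats" step).
rng = np.random.default_rng(1)
encoder_states = rng.normal(size=(5, 16))
decoder_state = encoder_states[2] + 0.1 * rng.normal(size=16)

weights, context = attend(decoder_state, encoder_states)
print(np.round(weights, 3))   # the mass should concentrate on index 2
```

With the weights concentrated on one position, the context vector is essentially that position's hidden state, which is exactly the short information path argued for above.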
So that's where attention helps and the way that the decoder decides what to look at is like a kind of an addressing scheme you may know it from neural Turing machines or kind of other kind of neural algorithms things so what the decoder would do is in each step it would output a bunch of keys. Sorry about that. That's my hand being drippy. So what it would output is a bunch of keys so k1 through kn and what these keys would do is they would index these hidden kind of hidden states via a kind of a softmax architecture and we're gonna look at this I think in the actual paper we're discussing because it's gonna become more clear but just kind of notice that the decoder here can decide to attend to the input sentence and kind of draw information directly from there instead of having to go just to the hidden state it's provided with. So if we go to the paper here what do these authors propose and the thing is they ditch the RNNs they basically say attention is all you need you don't need the entire recurrent things basically in every step of this decode of this basically of the decoding so you want to produce the target sentence so in this step in this step in this step you can basically you don't need the recurrence you can just kind of do attention over everything and it will be fine namely what they do is they propose this transformer architecture so what does it do it has two parts what's called an encoder and a decoder but don't kind of be confused because this all happens at once so this is not an RNN it all happens at once every all the source sentence so if we again have the cat oops that doesn't work as easy let's just do this this is a source sentence and then we also have a target sentence that maybe we've produced two words and we want to produce this third word here I want to produce this so we would feed the entire source sentence and also the target sentence we've produced so far to this network namely the source sentence would go into this part and the target that we've produced so far would go into this part and this is then all the time we would feed and this is then all combined and at the end we get an output here at the output probabilities that kind of tells us the probabilities for the next word so we can choose the top probability and then repeat the entire process so basically every step in production is one training sample every step in producing a sentence here before with the RNNs the entire sentence to sentence translation is one sample because we need to back propagate through all of these RNN steps because they all happen kind of in sequence here basically output of one single token is one sample and then the computation is finished the back prop happens through everything only for this one step so there is no multi-step kind of back propagation as an RNN and this is kind of a paradigm shift in sequence processing because people were always convinced that you kind of need these recurrent things in order to make good to learn these dependencies but here they basically say no no no we can just do attention over everything and it will actually be fine if we just do one step predictions so let's go one by one so here we have an input embedding and say an output embedding these are symmetrical so basically the tokens just get embedded with say word vectors again then there is a positional encoding this is kind of a special thing where because you now lose this kind of sequence nature of your algorithm you kind of need to encode where the words are that you push 
through the network, so the network kind of goes, ah, this is a word at the beginning of the sentence, or, ah, this is a word towards the end of the sentence, or so that it can compare two words, like which one comes first, which one comes second. And you do this, it's pretty easy for the networks, if you do it with kind of these trigonometric function embeddings. So if I draw you a sine wave, and I draw you a sine wave that is twice as fast, and I draw you a sine wave that is even faster, maybe this one actually syncs, one two three four five, no, it doesn't matter, you know what I mean. So I can encode the first position with all down, and then the second position is kind of down down up, and the third position is kind of up down up, and so on. So this is kind of a continuous way of binary encoding of position. So if I want to compare two words I can just look at all the scales of these things, and I know, aha, one word has a high here and the other word is low here, so they must be pretty far away, like one must be at the beginning and one must be at the end. If they happen to match in this long wave, and they also are both kind of low on this wave, then I can look in this way for, like, oh, maybe they're close together, but here I really get the information which one's first, which one's second. So these are kind of position encodings. They're not the critical part of this algorithm, but they are important to the algorithm, they just encode where the words are, which of course is important, and it gives the network a significant boost in performance, but it's not the meat of the thing. The meat of the thing is that now that these encodings go into the networks, they simply do what they call attention here, attention here, and attention here. So there's kind of three kinds of attention. So basically the first attention, on the bottom left, is simply attention, as you can see, over the input sentence. So I told you before, you need to take this input sentence, if you look over here, and you somehow need to encode it into a hidden representation, and this now looks much more like the picture I drew here, and the picture I drew right at the beginning, in that all at once you kind of put together this hidden representation, and all you do is you use attention over the input sequence, which basically means you kind of pick and choose which words you look at more or less. With the bottom right, so in the output sentence that you've produced so far, you simply encode it into kind of a hidden state. And then the third, on the top right, that's the, I think the, sorry, I got interrupted, so as I was saying, the top right is the most interesting part of the attention mechanism here, where basically it unites the kind of encoder part with the kind of decoder part, or let's not call it that, it combines the source sentence with the target sentence that you've produced so far. So as you can see, maybe here, it's just slightly annoying, but I'm just going to remove these kind of circles here, so if you can see here, there's an output going from the part that encodes the source sentence, and it goes into this multi-head attention, there's two connections, and there's also one connection coming from the encoded output so far here. So there's three connections going into this, and we're going to take a look at what these three connections are. So the three connections here basically are the keys, values and queries. If you see here, the values and the keys are what is output by the encoding part of the source sentence, and the query is output by the encoding part of the target sentence. And these are not only one value, key and query, so there are many in this kind of multi-head attention fashion, so there are just many of them instead of one, but you can think of them as just kind of sets. So the attention computed here, what does it do? So first of all it calculates a dot product of the keys and the queries, and then it does a softmax over this, and then it multiplies it by the values. So what does this do? If you dot product the keys and the queries, what you would get is, so as you know, if you have two vectors, the dot product basically gives you the angle between the vectors, and especially in high dimensions most vectors are going to be kind of at 90 degrees, kind of, oh, I know the Americans do the little square, so most vectors are going to be not aligned very well, so their dot product will kind of be zero-ish. But if a key and the query actually align with each other, like if they point into the same direction, their dot product will actually be large. So what you can think of this as: the keys are just a bunch of vectors in space, and each key has an associated value, so for each key there is kind of a table, value one, value two, value three, this is really annoying if I do this over text, right, so again here, so we have a bunch of keys, right, in space, and we have a table with values, and each key here corresponds to a value, value one, value two, value three, value four, and so each key is associated with one of these values. And then when we introduce a query, what can it do? So a query will be a vector like this, and we simply compute, so this is Q, this is the query, we compute the dot product with each of the keys, and then we compute a softmax over this, which means that one key will basically be selected. So in this case it will probably be this blue key here that has the biggest dot product with the query, so this is key two in this case. And softmax, so if you don't know what a softmax is, you have like x1 to xn, some numbers, then you simply map each one of them to the exponential function, but also each one of them you divide by the sum over i of e to the xi. So basically this is a renormalization: you do the exponential function of the numbers, which of course makes the big numbers even bigger, so basically what you end up with is one of these numbers x1 to xn will become very big compared to the others, and then you renormalize, so basically one of them will be almost one and the other ones will be almost zero. It's simply the maximum function you can think of in a differentiable way, and you just want to select the biggest entry. In this case here we kind of select the key that aligns most with the query, which in this case would be key two. And then when we multiply this softmax thing with the values, so this query, this inner product, if we multiply Q with k2 as an inner product and we take the softmax over it, what we'll do is, I'm going to draw it upwards here, we're going to induce a distribution like this, and if we multiply this by the values it will basically select value two. So this is kind of an indexing scheme into this memory of values, and this is what then the network uses to compute further things, using, so you see, the output here goes into kind of more layers of the neural network upwards. So basically, what does this mean? You can think of, here's the, whoopsie, I want to delete this, you can think of this as basically the encoder of the source sentence right here discovers interesting things, that's, that looks ugly, it discovers interesting things about the source sentence, and it builds key-value pairs, and then the encoder of the target sentence builds the queries, and together they give you kind of the next signal. So it means that the network basically says, here's a bunch of things about the source sentence that you might find interesting, that's the values, and the keys are ways to index the values. So it says, here's a bunch of things that are interesting, which are the values, and here is how you would address these things, which is the keys, and then the other part of the network builds the queries, it says, I would like to know certain things. So think of the values like attributes, like here is the name and the kind of tallness and the weight of a person, right, and the keys are like the actual indexes, like name, height, weight, and then the other part of the network can decide, what do I want? I actually want the name, so my query is the name, it will be aligned with the key name, and the corresponding value would be the name of the person you would like to describe. So that's how kind of these networks work together, and I think it's pretty ingenious. It's not entirely new, because it has been done of course before with all the differentiable Turing machines and whatnot, but it's pretty cool that this actually works, and actually works kind of better than RNNs if you simply do this. So they describe a bunch of other things here, I don't think they're too important. Basically the point that they make about this attention is that it reduces path lengths, and kind of that's the main reason why it should work better. With this entire attention mechanism you reduce the amount of computation steps that information has to flow from one point in the network to another, and that's what brings the major improvement, because all the computation steps can make you lose information, and you don't want that, you want short path lengths. So that's what this method achieves, and they claim that's why it's better and it works so well. So they have experiments, you can look at them, they're really good at everything, of course, of course you always have state of the art. And I think I will conclude here. If you want to check it out yourself, they have extensive code on GitHub where you can build your own transformer networks, and with that, have a nice day, and see ya.
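To tie the pieces of the transcript together, here is a small numpy sketch of the two ingredients it walks through: the sinusoidal position encodings and the dot-product attention that uses queries and keys to softly index a table of values. The single head, the identity projections, and the toy sizes are simplifications for illustration; the actual Transformer uses learned projections and multiple heads, and it scales the dot products by the square root of the key dimension, which is included below.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoids of different frequencies, one row per position (d_model assumed even)."""
    pos = np.arange(seq_len)[:, None]                    # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]                 # (1, d_model/2)
    angles = pos / np.power(10000.0, 2.0 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                         # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)                         # odd dimensions: cosine
    return pe

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V: a soft, differentiable table lookup."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # how well each query aligns with each key
    weights = softmax(scores, axis=-1)  # near one-hot when one key clearly wins
    return weights @ V                  # (softly) pick out the corresponding values

# Toy cross-attention: keys and values come from the source side,
# queries from the target tokens produced so far.
rng = np.random.default_rng(2)
d_model = 8
src = rng.normal(size=(5, d_model)) + positional_encoding(5, d_model)
tgt = rng.normal(size=(3, d_model)) + positional_encoding(3, d_model)

# The paper learns separate projections for Q, K and V; identity is used here for brevity.
Q, K, V = tgt, src, src
out = attention(Q, K, V)
print(out.shape)                        # (3, 8): one context vector per target position
```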
[ { "end": 7, "start": 0, "text": " Hi there. Today we're looking at Attention is All You Need by Google. Just to declare," }, { "end": 12.56, "start": 7.44, "text": " I don't work for Google just because we've been looking at Google papers lately. But" }, { "end": 19.12, "start": 12.56, "text": " it's just an interesting paper and we're going to see what's the deal with it. So basically" }, { "end": 26.12, "start": 19.12, "text": " what the authors are saying is we should kind of get away from basically RNNs. So traditionally" }, { "end": 33.120000000000005, "start": 26.12, "text": " what you would do, and these authors in particular are interested in NLP, Natural Language Processing." }, { "end": 40.120000000000005, "start": 33.120000000000005, "text": " So traditionally when you have a language task like the cat eats the mouse and you'd" }, { "end": 59.12, "start": 40.12, "text": " like to translate this to say any other language like let's say German or whatever. What you" }, { "end": 66.12, "start": 59.12, "text": " would do is you would try to encode this sentence into a representation and then decode it again." }, { "end": 73.12, "start": 66.12, "text": " So somehow, somehow this sentence needs to all go into say one vector and then this one" }, { "end": 81.32000000000001, "start": 74.32000000000001, "text": " vector needs to somehow be transformed into the target language. So these are traditionally" }, { "end": 88.92, "start": 81.92, "text": " called sec to sec tasks and they have been solved so far using recurrent neural networks." }, { "end": 95.92, "start": 88.92, "text": " You might know the LSTM networks that are very popular for these tasks. What basically" }, { "end": 103.92, "start": 96.92, "text": " happens in an RNN is that you go over the say source sentence here one by one. Here" }, { "end": 110, "start": 104, "text": " you take the word the, you kind of encode it maybe with a word vector if you know what" }, { "end": 117, "start": 110, "text": " that is. So you turn it into like a vector, a word vector and then you use a neural network" }, { "end": 124, "start": 117, "text": " to turn this vector into what we call a hidden state. So this h0 is a hidden state. You then" }, { "end": 136.28, "start": 129.28, "text": " take the second token here cat. You again take its word vector because you need to represent" }, { "end": 143.8, "start": 136.8, "text": " it with numbers somehow so use word vectors for that. You turn this into, you put it through" }, { "end": 149.8, "start": 143.8, "text": " the same function so here is what's like a little e for encoder. You turn it into the" }, { "end": 155.8, "start": 149.8, "text": " same function but this time this hidden state also gets plugged in here. So the word vector" }, { "end": 162.8, "start": 155.8, "text": " is instead, you can actually think of having like a start hidden state here, h start. Usually" }, { "end": 169.24, "start": 163.52, "text": " people either learn this or just initialize with zeros that kind of goes into the encoder" }, { "end": 176.24, "start": 169.24, "text": " function so it's always really the same function. And from the previous hidden state and the" }, { "end": 183.28, "start": 176.28, "text": " current word vector the encoder again predicts another hidden state h1 and so on. 
So you" }, { "end": 191.76000000000002, "start": 184.76000000000002, "text": " take the next token, you turn it into a word vector, you put it through this little e encoder" }, { "end": 198.24, "start": 191.88, "text": " function and of course this is a lot more complicated in actual like say an LSTM but" }, { "end": 205.24, "start": 198.24, "text": " it's the basic principle behind it. So you end up with h2 and here you'd have h3, h4" }, { "end": 212.20000000000002, "start": 207.28, "text": " and the last hidden state h4 here you would use this in kind of exactly the same fashion." }, { "end": 219.20000000000002, "start": 212.20000000000002, "text": " You would plug it into like a decoder, a little e decoder which would output you a word d" }, { "end": 226.2, "start": 219.2, "text": " and it would also output you a next hidden state so h5. Let's just go on with the listing" }, { "end": 241.44, "start": 234.44, "text": " of the states and this h5 would again go into the decoder which would output kotsa. So that's" }, { "end": 248.44, "start": 241.44, "text": " how you would decode you basically these RNNs what they do is they kind of take, if you" }, { "end": 255.44, "start": 248.44, "text": " look on top here they take an input, a current input and they take the last hidden state" }, { "end": 262.48, "start": 255.48, "text": " and they compute a new hidden state. In the case of the decoder they take the hidden state" }, { "end": 269.84, "start": 262.84, "text": " and they take kind of the previous, usually the previous word that you output and they" }, { "end": 276.84, "start": 269.84, "text": " feed this back into the decoder and they will output the next word. It kind of makes sense." }, { "end": 283.52, "start": 277.32, "text": " So you would guess that the hidden state kind of encodes what the sentence means and the" }, { "end": 290.15999999999997, "start": 283.52, "text": " last word that you output you need this because maybe for grammar right you know what you've" }, { "end": 297.15999999999997, "start": 290.15999999999997, "text": " just output so kind of the next word should be based on that. Of course you don't have" }, { "end": 304.16, "start": 297.16, "text": " to do it exactly this way but that's kind of what these RNNs did. So attention is a" }, { "end": 313.16, "start": 306.16, "text": " mechanism here to basically increase the performance of the RNNs. So what attention would do is" }, { "end": 322.36, "start": 315.36, "text": " in this particular case if we look at the decoder here if it's trying to predict this" }, { "end": 329.36, "start": 322.36, "text": " word for cat then or the next word here, say here it wants the next word and in essence" }, { "end": 343.12, "start": 336.12, "text": " the only information it really has is what the last word was, the German word for cat," }, { "end": 350.12, "start": 343.12, "text": " and what the hidden state is. So if we look at what word it actually should output in" }, { "end": 357.12, "start": 350.12, "text": " the input sentence it's this here, eats. 
And if we look at kind of the information flow" }, { "end": 364.56, "start": 358.56, "text": " that this word has to travel so first it needs to encode into a word vector it needs to go" }, { "end": 369.56, "start": 364.56, "text": " through this encoder that's the same function for all the words so now we have to look at" }, { "end": 374.56, "start": 369.56, "text": " this encoder that's the same function for all the words so nothing specific can be learned" }, { "end": 379.72, "start": 374.56, "text": " to the word eats here right. It needs to go through this hidden state, traverse again" }, { "end": 384.8, "start": 379.72, "text": " into another step, this hidden state because we have two more tokens and then the next" }, { "end": 391.52, "start": 384.8, "text": " hidden state and then it goes all the way to the decoder where the first two words are" }, { "end": 398.52, "start": 391.52, "text": " decoded and still so this H6, this hidden state somehow still needs to retain the information" }, { "end": 405.52, "start": 398.52, "text": " that now the word eats somehow is kind of the word to be translated and that the decoder" }, { "end": 415.84, "start": 408.84, "text": " should find the German word for that. So that's of course a very long path, there's a lot" }, { "end": 424.12, "start": 418.24, "text": " of transformations involved over all these hidden states and the hidden states not only" }, { "end": 429.2, "start": 424.12, "text": " do they need to remember this particular word but all of the words and the order and so" }, { "end": 435.72, "start": 429.2, "text": " on and the grammar, ok the grammar you can actually learn with the decoders themselves" }, { "end": 442.32, "start": 435.72, "text": " but kind of the meaning and the structure of the sentence so it's very hard for an RNN" }, { "end": 449.32, "start": 442.32, "text": " to learn all of this what we call long range dependencies and so naturally you actually" }, { "end": 454.56, "start": 449.32, "text": " think well why can't we just decode the first word to the first word, second word to the" }, { "end": 460.28, "start": 454.56, "text": " second word it actually works pretty well in this example right like the de cat cuts" }, { "end": 465.68, "start": 460.28, "text": " it eats we could just decode one by one of course that's not how translation works in" }, { "end": 471.65999999999997, "start": 465.68, "text": " translations the sentences can become rearranged in the target language like one word can become" }, { "end": 478.65999999999997, "start": 471.65999999999997, "text": " many words or it could even be an entirely different expression. So attention is a mechanism" }, { "end": 484.70000000000005, "start": 478.66, "text": " by which this decoder here in this step that we're looking at can actually decide to go" }, { "end": 491.70000000000005, "start": 484.70000000000005, "text": " back and look at particular parts of the input especially what it would do in like popular" }, { "end": 501.70000000000005, "start": 491.70000000000005, "text": " attention mechanisms is that this decoder here can decide to attend to the hidden states" }, { "end": 507.78000000000003, "start": 502.02000000000004, "text": " of the input sentence. 
What that means is in this particular case we would like to teach" }, { "end": 514.78, "start": 507.78, "text": " the decoder somehow that aha look here I need to pay close attention to this step here because" }, { "end": 523.06, "start": 516.3399999999999, "text": " that was the step when the word eats was just encoded so it probably has a lot of information" }, { "end": 533.06, "start": 523.06, "text": " about what I would like to do right now namely translate this word eats. So this mechanism" }, { "end": 539.06, "start": 533.06, "text": " allows if you look at the information flow it simply goes through this word vector goes" }, { "end": 544.4599999999999, "start": 539.06, "text": " through one encoding step and then is at the hidden state and then the decoder can look" }, { "end": 550.9, "start": 544.4599999999999, "text": " directly at that so the path length of information is much shorter than going through all of" }, { "end": 557.9, "start": 550.9, "text": " the hidden states in a traditional way. So that's where attention helps and the way that" }, { "end": 563.9, "start": 557.9, "text": " the decoder decides what to look at is like a kind of an addressing scheme you may know" }, { "end": 574.9, "start": 563.9, "text": " it from neural Turing machines or kind of other kind of neural algorithms things so" }, { "end": 581.98, "start": 574.98, "text": " what the decoder would do is in each step it would output a bunch of keys. Sorry about" }, { "end": 591.98, "start": 581.98, "text": " that. That's my hand being drippy. So what it would output is a bunch of keys so k1 through" }, { "end": 606.98, "start": 591.98, "text": " kn and what these keys would do is they would index these hidden kind of hidden states via" }, { "end": 613.98, "start": 606.98, "text": " a kind of a softmax architecture and we're gonna look at this I think in the actual paper" }, { "end": 619.98, "start": 614.9, "text": " we're discussing because it's gonna become more clear but just kind of notice that the" }, { "end": 626.86, "start": 619.98, "text": " decoder here can decide to attend to the input sentence and kind of draw information directly" }, { "end": 633.86, "start": 626.86, "text": " from there instead of having to go just to the hidden state it's provided with. 
So if" }, { "end": 640.86, "start": 633.86, "text": " we go to the paper here what do these authors propose and the thing is they ditch the RNNs" }, { "end": 645.86, "start": 641.22, "text": " they basically say attention is all you need you don't need the entire recurrent things" }, { "end": 651.7, "start": 645.86, "text": " basically in every step of this decode of this basically of the decoding so you want" }, { "end": 658.7, "start": 651.7, "text": " to produce the target sentence so in this step in this step in this step you can basically" }, { "end": 665.7, "start": 658.7, "text": " you don't need the recurrence you can just kind of do attention over everything and it" }, { "end": 673.9000000000001, "start": 666.9000000000001, "text": " will be fine namely what they do is they propose this transformer architecture so what does" }, { "end": 682.1400000000001, "start": 675.1400000000001, "text": " it do it has two parts what's called an encoder and a decoder but don't kind of be confused" }, { "end": 689.14, "start": 682.14, "text": " because this all happens at once so this is not an RNN it all happens at once every all" }, { "end": 696.14, "start": 689.14, "text": " the source sentence so if we again have the cat oops that doesn't work as easy let's" }, { "end": 704.58, "start": 697.58, "text": " just do this this is a source sentence and then we also have a target sentence that maybe" }, { "end": 711.58, "start": 704.58, "text": " we've produced two words and we want to produce this third word here I want to produce this" }, { "end": 719.1, "start": 712.1, "text": " so we would feed the entire source sentence and also the target sentence we've produced" }, { "end": 726.1800000000001, "start": 719.1800000000001, "text": " so far to this network namely the source sentence would go into this part and the target that" }, { "end": 733.1800000000001, "start": 726.1800000000001, "text": " we've produced so far would go into this part and this is then all the time we would feed" }, { "end": 740.18, "start": 733.18, "text": " and this is then all combined and at the end we get an output here at the output probabilities" }, { "end": 749.5, "start": 742.5, "text": " that kind of tells us the probabilities for the next word so we can choose the top probability" }, { "end": 756.9799999999999, "start": 749.9799999999999, "text": " and then repeat the entire process so basically every step in production is one training sample" }, { "end": 762.62, "start": 757.8199999999999, "text": " every step in producing a sentence here before with the RNNs the entire sentence to sentence" }, { "end": 767.66, "start": 762.62, "text": " translation is one sample because we need to back propagate through all of these RNN" }, { "end": 774.66, "start": 767.66, "text": " steps because they all happen kind of in sequence here basically output of one single token" }, { "end": 781.38, "start": 775.78, "text": " is one sample and then the computation is finished the back prop happens through everything" }, { "end": 788.38, "start": 781.38, "text": " only for this one step so there is no multi-step kind of back propagation as an RNN and this" }, { "end": 795.38, "start": 788.38, "text": " is kind of a paradigm shift in sequence processing because people were always convinced that" }, { "end": 803.88, "start": 796.88, "text": " you kind of need these recurrent things in order to make good to learn these dependencies" }, { "end": 809.72, "start": 804.2, "text": " but here they basically say no no no we can just do 
attention over everything and it will" }, { "end": 816.72, "start": 809.72, "text": " actually be fine if we just do one step predictions so let's go one by one so here we have an" }, { "end": 823.72, "start": 816.72, "text": " input embedding and say an output embedding these are symmetrical so basically the tokens" }, { "end": 828.72, "start": 823.72, "text": " just get embedded with say word vectors again then there is a positional encoding this is" }, { "end": 835.72, "start": 828.72, "text": " kind of a special thing where because you now lose this kind of sequence nature of your" }, { "end": 840.88, "start": 835.88, "text": " algorithm you kind of need to encode where the words are that you push through the network" }, { "end": 844.88, "start": 840.88, "text": " so the network kind of goes ah this is a word at the beginning of the sentence or ah this" }, { "end": 850.04, "start": 844.88, "text": " is a word towards the end of the sentence or that it can compare two words like which" }, { "end": 856.54, "start": 850.04, "text": " one comes first which one comes second and you do this it's pretty easy for the networks" }, { "end": 862.72, "start": 856.54, "text": " if you do it with kind of these trigonometric functioning embeddings so if I draw you a" }, { "end": 869.72, "start": 862.72, "text": " sine wave and I draw you a sine wave of that is double as fast and I draw you a sine wave" }, { "end": 876.72, "start": 869.72, "text": " that is even faster maybe this one actually sink one two three four five no it doesn't" }, { "end": 883.72, "start": 876.72, "text": " matter you know what I mean so I can encode the first word I can encode the first position" }, { "end": 890.96, "start": 883.96, "text": " with all down and then the second position is kind of down down up and the third position" }, { "end": 897.96, "start": 890.96, "text": " is kind of up down up and so on so this is kind of a continuous way of binary encoding" }, { "end": 905.36, "start": 898.36, "text": " of position so if I want to compare two words I can just look at all the scales of these" }, { "end": 909.72, "start": 904.72, "text": " things and I know aha this word one word has a high here and the other word is low here" }, { "end": 914.72, "start": 909.72, "text": " so they must be pretty far away like one must be at the beginning and one must be at the" }, { "end": 921.72, "start": 914.72, "text": " end if they happen to match in this long wave and they also are both kind of low on this" }, { "end": 930.08, "start": 924.08, "text": " wave and then I can look in this way for like oh maybe they're close together but here I" }, { "end": 935.08, "start": 930.08, "text": " really get the information which one's first which one's second so these are kind of position" }, { "end": 942.08, "start": 935.08, "text": " encodings they're not critical to this algorithm but they are critical to the algorithm and" }, { "end": 949.08, "start": 942.08, "text": " algorithm but they just encode where the words are which of course that is important and" }, { "end": 956.72, "start": 949.72, "text": " it gives the network a significant boost in performance but it's not like it's not the" }, { "end": 963.88, "start": 957.2, "text": " meat of the thing the meat of the thing is that now that these encodings go into the" }, { "end": 970.88, "start": 963.88, "text": " networks they simply do what they call attention here attention here and attention here so" }, { "end": 979.04, "start": 973.32, "text": " there's kind of three kinds of 
attention so basically the first attention on the bottom" }, { "end": 986.04, "start": 979.04, "text": " left is simply attention as you can see over the input sentence so I told you before you" }, { "end": 991.64, "start": 986.74, "text": " need to take this input sentence if you look over here and you somehow need to encode it" }, { "end": 998.64, "start": 991.64, "text": " into a hidden representation and this now looks much more like the picture I drew here" }, { "end": 1005.4399999999999, "start": 1000.04, "text": " and the picture I drew right at the beginning is that all at once you kind of put together" }, { "end": 1010.6, "start": 1005.4399999999999, "text": " this hidden representation and all you do is you use attention over the input sequence" }, { "end": 1016.88, "start": 1010.6, "text": " which basically means you kind of pick and choose which words you look at more or less" }, { "end": 1021.16, "start": 1016.88, "text": " so with the bottom right so in the output sentence that you've produced so far you simply" }, { "end": 1028.1599999999999, "start": 1021.16, "text": " encode it into kind of a hidden state and then the third on the top right that's the" }, { "end": 1035.24, "start": 1028.24, "text": " I think the sorry I got interrupted so as I was saying the top right is the most interesting" }, { "end": 1043.04, "start": 1036.04, "text": " part of the attention mechanism here where basically it unites the kind of encoder part" }, { "end": 1050.6, "start": 1043.6, "text": " with the kind of de let's not it combines the source sentence with the target sentence" }, { "end": 1057.6, "start": 1050.6, "text": " that you've produced so far so as you can see maybe here I can just slightly annoying" }, { "end": 1070, "start": 1063, "text": " but I'm just going to remove these kind of circles here so if you can see here there's" }, { "end": 1078.12, "start": 1071.12, "text": " an output going from the part that encodes the source sentence and it goes into this" }, { "end": 1085.12, "start": 1078.12, "text": " multi-headed tension there's two connections and there's also one connection coming from" }, { "end": 1092.4799999999998, "start": 1085.4799999999998, "text": " the encoded output so far here and so there's three connections going into this and we're" }, { "end": 1103.7199999999998, "start": 1096.7199999999998, "text": " going to take a look at what these three connections are so the three connections here basically" }, { "end": 1110.72, "start": 1103.72, "text": " are the keys values and queries if you see here the values and the keys are what is output" }, { "end": 1122.56, "start": 1116.04, "text": " by the encoding part of the source sentence and the query is output by the encoding part" }, { "end": 1129.56, "start": 1122.56, "text": " of the target sentence and these are not only one value key and query so there are many" }, { "end": 1135.48, "start": 1129.56, "text": " in this kind of multi-headed tension fashion so there are just many of them instead of" }, { "end": 1142.48, "start": 1135.48, "text": " one but you can think of them as just kind of sets so the attention computed here is" }, { "end": 1150.56, "start": 1143.56, "text": " what does it do so first of all it calculates an adult product of the keys and the queries" }, { "end": 1157.6, "start": 1152.36, "text": " and then it does a soft max over this and then it multiplies it by the values so what" }, { "end": 1164.6, "start": 1157.6, "text": " does this do if you dot product the keys and the queries what you 
would get is so as you" }, { "end": 1173.76, "start": 1166.76, "text": " know if you have two vectors and the dot product basically gives you the angle between the" }, { "end": 1181.28, "start": 1174.28, "text": " vectors with especially in high dimensions most vectors are going to be of kind of a" }, { "end": 1188.28, "start": 1181.28, "text": " 90 degree kind of oh I know the Americans do the little square so most vectors are going" }, { "end": 1197.08, "start": 1190.8, "text": " to be not aligned very well so their dot product will kind of be zero-ish but if a key and" }, { "end": 1204.08, "start": 1197.08, "text": " the query actually align with each other like if they point into the same directions their" }, { "end": 1211.08, "start": 1204.08, "text": " dot product will actually be large so what you can think of this as the keys are kind" }, { "end": 1218.1599999999999, "start": 1211.1599999999999, "text": " of here the keys are just a bunch of vectors in space and each key has an associated value" }, { "end": 1227.9199999999998, "start": 1220.9199999999998, "text": " so each key there is kind of a table value one value two value three this is really annoying" }, { "end": 1234.92, "start": 1227.92, "text": " if I do this over text right so again here so we have a bunch of keys right in space" }, { "end": 1242.96, "start": 1236.96, "text": " and we have a table with values and each key here corresponds to value value one value" }, { "end": 1249.96, "start": 1242.96, "text": " two value three value four and so each key is associated with one of these values and" }, { "end": 1256.96, "start": 1249.96, "text": " then when we introduce a query what can it do so query will be a vector like this and" }, { "end": 1262.96, "start": 1257.96, "text": " we simply compute the so this is Q this is the query we compute the dot product with" }, { "end": 1269.96, "start": 1262.96, "text": " each of the keys and then we compute a softmax over this which means that one key will basically" }, { "end": 1276.96, "start": 1269.96, "text": " be selected so in this case it will be probably this blue key here that has the biggest dot" }, { "end": 1283.48, "start": 1276.48, "text": " product with the query so this is key two in this case and softmax so if you don't know" }, { "end": 1292.6000000000001, "start": 1285.6000000000001, "text": " what a softmax is you have you have like x1 to xnb like some numbers then you simply do" }, { "end": 1299.6, "start": 1292.6, "text": " you map them to the exponential function each one of them and but also each one of them" }, { "end": 1308.36, "start": 1301.36, "text": " you divide by the sum of over i of e to the xi so basically this is a renormalization" }, { "end": 1314.32, "start": 1309.08, "text": " basically you do the exponential function of the numbers which of course this makes" }, { "end": 1315.48, "start": 1314.32, "text": " the kind of the" }, { "end": 1322.48, "start": 1315.48, "text": " big numbers even bigger so basically what you end up with is one of these numbers x1" }, { "end": 1329.8, "start": 1322.8, "text": " to xn will become very big compared to the others and then you renormalize so basically" }, { "end": 1334.8, "start": 1329.8, "text": " one of them will be almost one and the other ones will be almost zero simply the maximum" }, { "end": 1341.8, "start": 1334.8, "text": " function you can think of in a differentiable way so this is a renormalization so basically" }, { "end": 1347.24, "start": 1341.8, "text": " maximum function you can think of in 
a differentiable way and you just want to select the biggest" }, { "end": 1352.9199999999998, "start": 1347.24, "text": " entry in this case here we kind of select the key that aligns most with the query which" }, { "end": 1358.56, "start": 1352.9199999999998, "text": " in this case would be key two and then we when we multiply this softmax thing with the" }, { "end": 1365.56, "start": 1358.56, "text": " values so this query this inner product if we multiply q with k2 as an inner product" }, { "end": 1372.56, "start": 1365.56, "text": " and we take the softmax over it what we'll do is i'm going to draw it upwards here we're" }, { "end": 1381.6, "start": 1374.6, "text": " going to induce a distribution like this and if we multiply this by the value it will basically" }, { "end": 1388.6, "start": 1381.6, "text": " select value two so this is this is kind of an indexing scheme into this matrix and we" }, { "end": 1395.6, "start": 1388.6, "text": " select value two so this is this is kind of an indexing scheme into this memory of values" }, { "end": 1404.36, "start": 1397.36, "text": " and this is what then the network uses to compute further things using so you see the" }, { "end": 1411.9199999999998, "start": 1405.1599999999999, "text": " output here goes into kind of more layers of the neural network upwards so basically" }, { "end": 1418.92, "start": 1411.92, "text": " what what you can think what does this mean you can think of here's the whoopsie i want" }, { "end": 1426.6000000000001, "start": 1419.6000000000001, "text": " to delete this you can think of this as basically the encoder of the source sentence right here" }, { "end": 1439.16, "start": 1432.76, "text": " discovers interesting things that's that looks ugly it discovers interesting things about" }, { "end": 1446.16, "start": 1439.16, "text": " the about the the source sentence and it builds key value pairs and then the encoder of the" }, { "end": 1454.96, "start": 1447.96, "text": " target sentence builds the queries and together they give you kind of the next to next signal" }, { "end": 1462.3200000000002, "start": 1456.28, "text": " so it means that the network basically says here's a bunch of things here's a here's a" }, { "end": 1469.32, "start": 1462.32, "text": " bunch of things about the source sentence that you might find interesting that's the" }, { "end": 1476.48, "start": 1469.48, "text": " values and the keys are ways to index the values so it says here's a bunch of things" }, { "end": 1484.4399999999998, "start": 1479.24, "text": " that are interesting which are the values and here is how you would address these things" }, { "end": 1491.28, "start": 1484.4399999999998, "text": " which is the keys and then the other part of the network builds the queries it says" }, { "end": 1498.28, "start": 1491.28, "text": " i would like to know certain things so think of the values like attributes like here is" }, { "end": 1505.8799999999999, "start": 1498.8799999999999, "text": " the name and the the the kind of tallness and the weight of a person right and the keys" }, { "end": 1512.92, "start": 1505.92, "text": " are like the the actual indexes like name height weight and then the the other part of the" }, { "end": 1518.6399999999999, "start": 1513.28, "text": " network can decide what do i want i actually want the name so my query is the name it will" }, { "end": 1524.3600000000001, "start": 1518.64, "text": " be aligned with the key name and the corresponding value would be the name of the person you" }, { "end": 1529.68, 
"start": 1524.3600000000001, "text": " would like to describe so that's how kind of these networks work together and i think" }, { "end": 1535.2800000000002, "start": 1529.68, "text": " it's a it's a pretty ingenious it's not entirely new because it has been done of course before" }, { "end": 1540.72, "start": 1535.2800000000002, "text": " with all the differentiable turing machines and whatnot but it's pretty cool that this" }, { "end": 1547.72, "start": 1540.72, "text": " actually works and actually works kind of better than rnns if you simply do this so" }, { "end": 1556.96, "start": 1549.96, "text": " they describe a bunch of other things here i i don't think they're too important basically" }, { "end": 1562.68, "start": 1557.16, "text": " that the point that they make about this attention is that it reduces path lengths and kind of" }, { "end": 1569.68, "start": 1562.68, "text": " that's the the main reason why it should work better with this entire attention mechanism" }, { "end": 1576.52, "start": 1570.88, "text": " you reduce the amount of computation steps that information has to flow from one point" }, { "end": 1582.44, "start": 1576.52, "text": " in the network to another and that's what brings the major improvement because all the" }, { "end": 1588.4, "start": 1582.44, "text": " computation steps can make you lose information and you don't want that you want short path" }, { "end": 1595.4, "start": 1588.4, "text": " lengths and so that's that's what this method achieves and they claim that's why it's better" }, { "end": 1602.2800000000002, "start": 1595.92, "text": " and it works so well so they have experiments you can look at them they're really good at" }, { "end": 1609.2800000000002, "start": 1602.2800000000002, "text": " everything of course of course you always have state of the art and i think i will conclude" }, { "end": 1616.28, "start": 1609.28, "text": " here if you want to check it out yourself they have extensive code on github where you" }, { "end": 1639.28, "start": 1616.28, "text": " can build your own transformer networks and with that have a nice day and see ya" } ]
-YiMVR3HEuY
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Reinforcement Learning with Unsupervised Auxiliary Tasks
[ "Science & Technology" ]
[ "machine learning", "artificial intelligence", "ai", "deep learning", "unsupervised learning", "research", "academia", "paper", "review", "agents", "tasks" ]
https://arxiv.org/abs/1611.05397 Abstract: Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880% expert human performance, and a challenging suite of first-person, three-dimensional Labyrinth tasks leading to a mean speedup in learning of 10× and averaging 87% expert human performance on Labyrinth. Authors: Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
Hi there, today we're looking at reinforcement learning with unsupervised auxiliary tasks by Google. So in this paper the authors consider a reinforcement learning task and I can show you what it looks like. It looks like this kind of a maze, or this is an example that they give, where you have to navigate the maze. It's 3D and you have to navigate from pixel inputs, you have to collect apples and reach the goal, and this gives you rewards. So on the left you can see what the agent is actually seeing, on the right you can see it from a top-down view. The problem is of course that the input is very, or rather the reward is very sparse, meaning that you have to navigate a lot of maze before you even get a single point. So reinforcement learning has big trouble with this because it relies on constant reward to notice what actions are good and what actions are bad. So what the authors propose is that in addition to the regular loss that you would have, so your reward, which is this thing, you would also have an additional set of auxiliary tasks, and here C goes over the auxiliary control tasks that you specify. Each of those has a reward and you're also trying to maximize these, each with some kind of a weight here. And the thing is that the parameters that you maximize over control all of the different tasks, so they are partly shared between the tasks. So what you're hoping is that by kind of learning to do one thing you also learn to do another thing. So what's the difference between this and, let's say, the kind of work we've seen before, where you do it more like an autoencoder setting? So for example, the agent sees the input on the left here and it kind of tries to predict what the next input will be, what the next frame will be. The thought behind this is that if you can accurately predict what the next frame will be, maybe you learn something useful about the environment. In this work it's different because now we couple a reward to these tasks, and I can show you here what the authors propose as additional rewards.
So with this kind of self-introspection, you also hope that it kind of leads to a network that does more sophisticated tasks, or that by nature of trying to get the most pixel changes and the most network feature activations you also learn something useful for the actual task. So these are the two tasks they propose. In addition, they also do, and they have a drawing of this over here, they also do a lot of other things. Namely, on the top left you can kind of see here we have the base agent. This is an A3C agent, meaning that it's an actor-critic. So you learn a policy and you learn a value network. We might go over this in a future video. So just consider this a standard reinforcement learning agent. You feed its experience into a replay buffer. And out of the replay buffer, you do many things. So for one, you try to learn these auxiliary tasks. Note that these are shared parameters between all of these networks. That's why the auxiliary tasks actually help. But you also try to better learn your value function. They call this off-policy learning because you kind of pause the base agent training for a while and then you train the value function some more, just because that helps. You also try reward prediction from here. And the way they do it, as they explain, is kind of in a skewed sampling way. So out of all the situations you can be in, the agent will have a reward very, very few times. So what they do is they simply sample out of the replay buffer, out of all the experiences they've had so far, and they sample more frequently the experiences where they have actually gotten a reward. That way the hope is, of course, that if you look at the experience here where you actually get an apple, then the agent might learn a lot faster: oh, there's some kind of apple there and I move towards it to get a reward. So that's the hope, that you instantly recognize high-reward situations and are not so interested in non-reward situations. Of course, it does introduce a bias in your sampling and you might decide for yourself if that's good or bad. But here it seems to work. So they have a lot of experiments in this task, this labyrinth task, and of course, as it is with research, they reach state of the art, they're much better than anything else. No, I mean, they don't boast that much. So these are actually fair comparisons.
So their paper is about the auxiliary tasks, but they also then do this skewed sampling and the off-policy value learning and so on. And of course, you can kind of argue, yeah, this is all done in other reinforcement learning work. That's why it's a fair comparison. I guess it's a philosophical question. If you want to reach state of the art, of course, you have to, first of all, get a better method here. This will be the auxiliary tasks. This is the new idea. And then implement all the tricks that the other people have discovered, which is good because you kind of reach the highest performance you can get. But the problem is also that you make it harder to compare, you make it harder to see where the improvement is coming from. Have you simply chosen better hyperparameters for the reward prediction and these kinds of things? Is there maybe an interaction between the auxiliary tasks and the skewed sampling part? All of these kinds of things wash out and it's not really clear where the improvement is coming from. On the other hand, if you simply take a basic, basic, basic algorithm, like just A3C here on the top left, and you augment it with nothing but these auxiliary tasks on the bottom left, and then you see an improvement, you can be relatively sure it's due to your new idea. But of course, you won't reach any state-of-the-art numbers because everyone that does A3C also does these tricks. No question here. I'm standing more on the side of not doing the tricks, or maybe doing both. Yeah, but decide for yourself and have a nice day.
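To make the two ideas from the transcript above concrete — the base loss combined with weighted auxiliary losses, and the reward-skewed sampling from the replay buffer — here is a small illustrative Python sketch. The function names, the transition format, and the 50/50 skew are assumptions made for this example, not details taken from the paper.

```python
import random

def total_loss(base_loss, aux_losses, aux_weights):
    # Base RL objective plus weighted auxiliary objectives (pixel-change
    # control, feature-activation control, reward prediction, ...). All of
    # them share parameters, which is why the auxiliary tasks can help.
    return base_loss + sum(w * l for w, l in zip(aux_weights, aux_losses))

def skewed_sample(replay_buffer, batch_size, rewarding_fraction=0.5):
    # Rewarding transitions are rare, so for the reward-prediction task we
    # oversample them (here, half the batch). This introduces a bias in the
    # sampling, as discussed above.
    rewarding = [t for t in replay_buffer if t["reward"] != 0]
    boring = [t for t in replay_buffer if t["reward"] == 0]
    n_rew = min(int(batch_size * rewarding_fraction), len(rewarding))
    batch = random.sample(rewarding, n_rew)
    batch += random.sample(boring, min(batch_size - n_rew, len(boring)))
    random.shuffle(batch)
    return batch

# Made-up usage: a buffer where only every 37th step carries a reward.
buffer = [{"state": i, "reward": 1 if i % 37 == 0 else 0} for i in range(1000)]
batch = skewed_sample(buffer, batch_size=32)
```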
[ { "end": 6.48, "start": 0, "text": " Hi there, today we're looking at reinforcement learning with unsupervised auxiliary tasks" }, { "end": 9.64, "start": 6.48, "text": " by Google." }, { "end": 14.6, "start": 9.64, "text": " So in this paper the authors consider a reinforcement learning task and I can show you what it looks" }, { "end": 16.92, "start": 14.6, "text": " like." }, { "end": 22.64, "start": 16.92, "text": " It looks like this kind of a maze or this is an example that they give where you have" }, { "end": 27.64, "start": 22.64, "text": " to navigate the maze, it's 3D and you have to navigate from pixel inputs, you have to" }, { "end": 31.52, "start": 27.64, "text": " collect apples and reach the goal and this gives you rewards." }, { "end": 36, "start": 31.52, "text": " So on the left you can see what the agent is actually seeing, on the right you can see" }, { "end": 38.68, "start": 36, "text": " it from a top down view." }, { "end": 45.72, "start": 38.68, "text": " The problem is of course that the input is very, or the reward is very sparse, meaning" }, { "end": 52.78, "start": 45.72, "text": " that you have to navigate a lot of maze before you even get a single point." }, { "end": 58.96, "start": 52.78, "text": " So reinforcement learning has a big trouble with this because it relies on constant reward" }, { "end": 62.5, "start": 58.96, "text": " to notice what actions are good and what actions are bad." }, { "end": 71.2, "start": 62.5, "text": " So what the authors propose is in addition to the regular loss that you would have, so" }, { "end": 79.72, "start": 71.2, "text": " your reward which is this thing, you would also have an additional set of auxiliary tasks" }, { "end": 86.4, "start": 79.72, "text": " and here C goes over the auxiliary control tasks that you specify." }, { "end": 92.44, "start": 86.4, "text": " Each of those has a reward and you're also trying to maximize these each with some kind" }, { "end": 94.4, "start": 92.44, "text": " of a weight here." }, { "end": 99.84, "start": 94.4, "text": " And the thing is that the parameters that you maximize over control all of the different" }, { "end": 104.22, "start": 99.84, "text": " tasks so they are partly shared between the tasks." }, { "end": 109.08, "start": 104.22, "text": " So what you're hoping is that by kind of learning to do one thing you also learn to do another" }, { "end": 111.12, "start": 109.08, "text": " thing." }, { "end": 118.72, "start": 111.12, "text": " So the difference between this and let's say, you might have, so we've seen kind of work" }, { "end": 125, "start": 118.72, "text": " of it like this before where you do it more like an autoencoder setting." }, { "end": 130.88, "start": 125, "text": " So for example you can't, the agent sees the input on the left here and it kind of tries" }, { "end": 135.2, "start": 130.88, "text": " to predict what the next input will be, what the next frame will be." }, { "end": 139.32, "start": 135.2, "text": " The thought behind this is if you can accurately predict what the next frame will be maybe" }, { "end": 142.64, "start": 139.32, "text": " learn something useful about the environment." }, { "end": 150.79999999999998, "start": 142.64, "text": " In this work it's different because now we couple a reward to these tasks and I can show" }, { "end": 155.67999999999998, "start": 150.79999999999998, "text": " you here what the authors propose as additional rewards." 
}, { "end": 158.72, "start": 155.67999999999998, "text": " Sorry, they're further on top." }, { "end": 161.67999999999998, "start": 158.72, "text": " Let me go there." }, { "end": 167.04000000000002, "start": 161.68, "text": " Basically they consider here these two auxiliary control tasks." }, { "end": 176.72, "start": 167.04000000000002, "text": " So pixel changes which means that the agent actually tries to actively change pixels." }, { "end": 181.56, "start": 176.72, "text": " So it gets a reward for changing the pixels in the input." }, { "end": 183.8, "start": 181.56, "text": " So it tries to maximize this." }, { "end": 189.44, "start": 183.8, "text": " It needs to learn what do I need to do to maximize my pixel changes and probably that" }, { "end": 191.24, "start": 189.44, "text": " will be moving around." }, { "end": 195.64000000000001, "start": 191.24, "text": " So it will learn to kind of move around, not move against the wall because if it moves" }, { "end": 199.08, "start": 195.64000000000001, "text": " against the wall the pixels won't change." }, { "end": 208.60000000000002, "start": 199.08, "text": " So it will kind of learn to move along the, like how a regular human agent would also" }, { "end": 214.56, "start": 208.60000000000002, "text": " move not into a wall, not like into a dead end or something such that the pixels always" }, { "end": 215.56, "start": 214.56, "text": " change." }, { "end": 217.32000000000002, "start": 215.56, "text": " Of course it's not perfect." }, { "end": 223.51999999999998, "start": 217.32, "text": " You can also change your pixels quite a bit by simply spinning around in a circle." }, { "end": 227.6, "start": 223.51999999999998, "text": " But this is one auxiliary tasks that they augment the agent with." }, { "end": 229.68, "start": 227.6, "text": " The other one is network features." }, { "end": 233.12, "start": 229.68, "text": " So it's kind of a meta learning here." }, { "end": 244.76, "start": 233.12, "text": " You actually reward the agent for changing its own internal activations." }, { "end": 249.79999999999998, "start": 244.76, "text": " So the hope is that it kind of learns about something by itself." }, { "end": 256.12, "start": 249.79999999999998, "text": " How can I activate my internal neural network units?" }, { "end": 257.48, "start": 256.12, "text": " And it gets rewarded for that." }, { "end": 261.92, "start": 257.48, "text": " So it might want to activate a lot of them and want to learn how they're activated." }, { "end": 268.84, "start": 261.92, "text": " So this kind of self-interspection, you also hope that it kind of leads to a network that" }, { "end": 278.47999999999996, "start": 268.84, "text": " does more sophisticated tasks or that by nature of trying to get most pixel changes and the" }, { "end": 284.35999999999996, "start": 278.47999999999996, "text": " most network feature activations that you also learn something useful for the actual" }, { "end": 286.88, "start": 284.35999999999996, "text": " task." }, { "end": 290.32, "start": 286.88, "text": " So these are the two tasks they propose." }, { "end": 296.84, "start": 290.32, "text": " In addition, they also do, and they have a drawing of this over here." }, { "end": 303.84, "start": 296.84, "text": " They also do a lot of other things, namely on the top left, you can kind of see here" }, { "end": 307.23999999999995, "start": 303.84, "text": " we have a database agent." 
}, { "end": 313.2, "start": 307.23999999999995, "text": " This is an A3C agent, meaning that it's an actor critic." }, { "end": 316.23999999999995, "start": 313.2, "text": " So you learn a policy and you learn a value network." }, { "end": 318.88, "start": 316.23999999999995, "text": " We might go over this in a future video." }, { "end": 322.96, "start": 318.88, "text": " So just consider this a standard reinforcement learning agent." }, { "end": 326.59999999999997, "start": 322.96, "text": " You feed its experience into a replay buffer." }, { "end": 329.96000000000004, "start": 326.6, "text": " And out of the replay buffer, you do many things." }, { "end": 335.96000000000004, "start": 329.96000000000004, "text": " So for one, you try to learn these auxiliary tasks." }, { "end": 340.24, "start": 335.96000000000004, "text": " Note that these are shared parameters between all of these networks." }, { "end": 343.6, "start": 340.24, "text": " That's why the auxiliary tasks actually help." }, { "end": 347.28000000000003, "start": 343.6, "text": " But you also try to better learn your value function." }, { "end": 356.12, "start": 347.28000000000003, "text": " They call this off policy learning because you kind of pause the base agent training" }, { "end": 362.28000000000003, "start": 356.12, "text": " for a while and then you train the value function some more, just because that helps." }, { "end": 366.4, "start": 362.28000000000003, "text": " You also try a reward prediction from here." }, { "end": 371.48, "start": 366.4, "text": " And the way they do it, as they explain, is kind of in a skewed sampling way." }, { "end": 380.04, "start": 371.48, "text": " So out of all the situations you can be in, the agent will have a reward very, very few" }, { "end": 381.28000000000003, "start": 380.04, "text": " times." }, { "end": 386.64, "start": 381.28, "text": " So what they do is they simply sample out of the replay buffer, out of all the experiences" }, { "end": 393.76, "start": 386.64, "text": " they've had so far, they sample more frequently the experiences where they have actually gotten" }, { "end": 395.14, "start": 393.76, "text": " a reward." }, { "end": 405.91999999999996, "start": 395.14, "text": " That way the hope is, of course, the agent, if you look at the experience here where you" }, { "end": 412.32, "start": 405.92, "text": " actually get an apple, then the agent might learn a lot faster, oh, there's some kind" }, { "end": 416.68, "start": 412.32, "text": " of apple there and I move towards it to get a reward." }, { "end": 424.04, "start": 416.68, "text": " So that's the hope that you instantly recognize high reward situations and kind of are not" }, { "end": 426.44, "start": 424.04, "text": " so interested in non-reward situations." }, { "end": 432.44, "start": 426.44, "text": " Of course, it does introduce biased near sampling and you might decide for yourself if that's" }, { "end": 433.44, "start": 432.44, "text": " good or bad." }, { "end": 436.6, "start": 433.44, "text": " But here it seems to work." }, { "end": 446.04, "start": 436.6, "text": " So they have a lot of experiments in this task, this labyrinth task, and they, of course," }, { "end": 451.08, "start": 446.04, "text": " as is with research, they reach state of the art, they're much better than anything else." }, { "end": 453.64, "start": 451.08, "text": " No, I mean they don't boast this much." }, { "end": 457.84, "start": 453.64, "text": " So it's actually fair comparisons." 
}, { "end": 464.47999999999996, "start": 457.84, "text": " The criticisms, so they also evaluate on Atari games, the criticisms that I have are twofold." }, { "end": 472.84, "start": 464.47999999999996, "text": " First of all, the choice of auxiliary tasks is, of course, completely up to the implementer," }, { "end": 479.59999999999997, "start": 472.84, "text": " which means that I have to decide as an implementer of this algorithm what my auxiliary task will" }, { "end": 480.59999999999997, "start": 479.59999999999997, "text": " be." }, { "end": 485.15999999999997, "start": 480.59999999999997, "text": " And here, pixel changes and network features, they seem like fairly general tasks that you" }, { "end": 491.08000000000004, "start": 485.16, "text": " could apply to a lot of these kind of problems, but it always kind of comes down to how much" }, { "end": 497.48, "start": 491.08000000000004, "text": " knowledge about the task would you like to code into the actor." }, { "end": 504.40000000000003, "start": 497.48, "text": " And here, I mean, you can see it makes sense to get at least the pixel changes as an auxiliary" }, { "end": 511.40000000000003, "start": 504.40000000000003, "text": " task, but it's questionable how much of kind of domain knowledge this already encodes." }, { "end": 519.68, "start": 511.4, "text": " So the fact, the choice of these are certainly something that you have to decide as a human." }, { "end": 521.9599999999999, "start": 519.68, "text": " And I think these are good choices." }, { "end": 528.64, "start": 521.9599999999999, "text": " So they're not too domain specific, but also they do correspond to like some kind of visual" }, { "end": 532.68, "start": 528.64, "text": " moving around game task." }, { "end": 540.88, "start": 532.68, "text": " And the other kind of criticisms, not really criticisms, it's just a remark, is that they" }, { "end": 542.84, "start": 540.88, "text": " do a lot of things." }, { "end": 549.4, "start": 542.84, "text": " So their paper is about the auxiliary tasks, but they also then do these skewed sampling" }, { "end": 552.56, "start": 549.4, "text": " and the off-policy value learning and so on." }, { "end": 559.52, "start": 552.56, "text": " And of course, you can kind of argue, yeah, this is all done in other reinforcement learning" }, { "end": 560.52, "start": 559.52, "text": " tasks." }, { "end": 562.72, "start": 560.52, "text": " That's why it's a fair comparison." }, { "end": 566.16, "start": 562.72, "text": " I guess it's a philosophical question." }, { "end": 572.3199999999999, "start": 566.16, "text": " If you want to reach state of the art, of course, you have to first of all, get a better" }, { "end": 573.6, "start": 572.3199999999999, "text": " method here." }, { "end": 575.04, "start": 573.6, "text": " This will be the auxiliary tasks." }, { "end": 576.48, "start": 575.04, "text": " This is the new idea." }, { "end": 585.04, "start": 576.48, "text": " And then implement all the tricks that the other people have discovered, which is good" }, { "end": 588.04, "start": 585.04, "text": " because you kind of reach the highest performance you can get." }, { "end": 596.48, "start": 588.04, "text": " But also the problem is you make it harder to compare, you make it harder to see where" }, { "end": 598.1999999999999, "start": 596.48, "text": " the improvement is coming from." }, { "end": 605.76, "start": 598.1999999999999, "text": " Have you simply chosen better hyperparameters for the reward predictions of things?" 
}, { "end": 611.4, "start": 605.76, "text": " Is there an interaction maybe between the auxiliary tasks and the skewed sampling part?" }, { "end": 615.16, "start": 611.4, "text": " All of these kind of things wash out and it's not really clear where the improvement is" }, { "end": 616.16, "start": 615.16, "text": " coming from." }, { "end": 623, "start": 616.16, "text": " On the other hand, if you simply take a basic, basic, basic algorithm, like just A3C here" }, { "end": 630.9599999999999, "start": 623, "text": " on the top left, and you augment it with nothing but these auxiliary tasks on the bottom left," }, { "end": 635.52, "start": 630.9599999999999, "text": " and then you see an improvement, you can be relatively sure it's due to your new idea." }, { "end": 640.48, "start": 635.52, "text": " But of course, you won't reach any state of the art numbers because everyone that does" }, { "end": 645.16, "start": 640.48, "text": " A3C also does these tricks." }, { "end": 647.12, "start": 645.16, "text": " No question here." }, { "end": 653.12, "start": 647.12, "text": " I'm standing more on the side of not doing the tricks or maybe doing both." }, { "end": 676.08, "start": 653.12, "text": " Yeah, but decide for yourself and have a nice day." } ]
56GW1IlWgMg
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Learning model-based planning from scratch
[ "Science & Technology" ]
[ "machine learning", "artificial intelligence", "ai", "deep learning", "reinforcement learning", "deep mind", "research", "academia", "paper", "review", "imagination", "planning", "agents" ]
https://arxiv.org/abs/1707.06170 Abstract: Conventional wisdom holds that model-based planning is a powerful approach to sequential decision-making. It is often very challenging in practice, however, because while a model can be used to evaluate a plan, it does not prescribe how to construct a plan. Here we introduce the "Imagination-based Planner", the first model-based, sequential decision-making agent that can learn to construct, evaluate, and execute plans. Before any action, it can perform a variable number of imagination steps, which involve proposing an imagined action and evaluating it with its model-based imagination. All imagined actions and outcomes are aggregated, iteratively, into a "plan context" which conditions future real and imagined actions. The agent can even decide how to imagine: testing out alternative imagined actions, chaining sequences of actions together, or building a more complex "imagination tree" by navigating flexibly among the previously imagined states using a learned policy. And our agent can learn to plan economically, jointly optimizing for external rewards and computational costs associated with using its imagination. We show that our architecture can learn to solve a challenging continuous control problem, and also learn elaborate planning strategies in a discrete maze-solving task. Our work opens a new direction toward learning the components of a model-based planning system and how to use them. Authors: Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, Lars Buesing, Sebastien Racanière, David Reichert, Théophane Weber, Daan Wierstra, Peter Battaglia
Hi there, today we're taking a look at learning model-based planning from scratch by DeepMind. So as a recap, what is model-based planning? Basically a model, also called an environment model, is just kind of a black-box thing, you can imagine, where you have a state of your current environment, you put it in there, and you have an action that you want to take, you put it in there as well. And the environment model tells you what the new state, S' here, and possibly also the new reward for taking that action, is going to be. So of course it's always good to have such an environment model, because you can use it to plan ahead, but the authors here ask how you plan, and they propose a new algorithm to learn this planning. So far, people have mostly used heuristics to plan, things like A-star search, where you have a maze and you want to go here, and you kind of have a heuristic, say the distance between the two points, but there are kind of walls in between, so you try to go there, but then there's a wall and you kind of explore around it. So these are kind of the techniques that have existed so far. Also we've seen stuff like Monte Carlo tree search for AlphaGo and other things like this that are not really learned. So this paper proposes mechanisms to learn how to plan using such a model. So basically they devise an algorithm, or a framework you can say, where they have this, what you see here, this schematic. This schematic tells you that you have this thing called a manager. Let me just quickly bring up my comment thingy thing. You can see here there's this kind of manager, and this manager can decide to imagine or act. If it acts, then it simply takes kind of the current state and all the things that happened so far and decides on an action to do in the world. And then it kind of trains on the action like classic reinforcement learning. But if it decides to imagine, it can use its model of the world, its imagination model, to perform an action and see what would happen if it did that action. And it can then also append that to the memory and use it to further learn. Even though it didn't do the action, it can imagine what happens. So how can it imagine? The authors in particular propose different methods of imagining. In this graph you see their proposed methods. So here, every row is a method of imagining, and the first two methods are basically fixed strategies. The first method, the one-step imagining, simply means you have the current state of the world, which is the grey blob here. And what you do is you always go from the current state of the world and imagine one step ahead. So basically you select the state to imagine from, and you imagine one step. And if you decide to not take an action after that, but imagine again, because maybe you're not sure yet what you want to do, so you want to imagine another action, you would again go from this initial state, so this in the horizontal direction is time, internal time basically. You would again go from this state, imagine another action based on it, and so on, imagine another action. Until you're satisfied that you've imagined enough, so you can actually take a real-world step. In contrast, the n-step strategy, so these are hard-coded strategies as you can see. The learned part is: which action should I take? The hard-coded part is: where do I base this action off? The n-step strategy also selects the first state at first, imagines one action on top of it, but then always selects that new imagined action.
So you can see here it selects this one to propose this action, and then it selects that imagined action to propose yet another action. So you can see it kind of imagines one path into the future instead of many paths that are just one step ahead. And then lastly, this imagination tree strategy is basically the only one that's actually kind of a learned strategy, where the manager can now propose any previously imagined or real-world state to imagine from. So you always have the current world state, which is the first node in the graph. You select it, of course; at the beginning you have no choice. You imagine an action on top of it, but then you can select either of these two nodes to imagine from, and here again the first is selected and an action is imagined. Then you have three nodes. You can choose any of those where you want to imagine the next step. Here in this example, the manager selects this state right here and decides to imagine another action on top of it, until it is satisfied and can then actually go over to perform an action in the real world. So if you then decide to do an action in the real world, what you can do is you can take all of the things you've imagined and use that. So you see in this pathway here, this flows back to the manager. At some point it decides, okay, I've imagined enough, and we can use all of these imagined steps in order to take a real-world step. And after the real-world step, the entire thing starts again. So that's how it learns to plan. Really interesting of course is this imagination tree strategy, where it actually learns to plan ahead. So the model is described in detail in a formal manner and then it already goes over to experiments, and there's this spaceship task where you have to get the spaceship to move around stuff and around these asteroids and get a reward. So you can see different imagined trajectories here in the top row. You see the red ones are kind of the executed actions, the blue ones are imagined ones, and you see the tree it has constructed. So first it takes an action right here, just without imagining. Then it imagines one step but then decides to take another action. It imagines two actions but decides on a third one. So you see to the left in this picture you see the first action. Then it imagines one action and decides to take an action. Then it imagines two actions, and based on these imaginations, I'm going to guess it's fairly satisfied with the one that's very close to the target and it can then take an action. So it's pretty smart in that it sees that the second imagined action is fairly close to where it wants to go and it doesn't need to imagine yet another action. That then actually hits the target. It can go over to performing the action right away because the imagination gives enough information. So these kinds of things are pretty cool to look at, and check out the further experiments if you want to know more. There are even more experiments in discrete mazes. They feature multiple goals. They feature the system optimizing not only for its reward but also for kind of internal costs, so having a budget for imagining and optimizing to not do too many imagination steps. On this experiment, the kind of thing that bugs me here is the fact that they didn't actually use the full imagination tree algorithm, but the manager only selected from what you can see here. So: do an actual action, then SJ0, which is the first imagined state, and SJK, which is the last imagined state.
So basically the manager can only choose between actually acting, doing this one-step strategy, or doing kind of this n-step strategy in each step. So it kind of limits the way it can plan, but I'm going to guess they did this because otherwise they couldn't have trained the model, and it seems a pretty reasonable simplification to make in order to get this to work. Also check out the paper if you want to see how all of these different parts are implemented. Of course you can guess most of them are neural networks, and it's pretty standard so far. And check out the additional experiments. They're pretty cool. See you next time.
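As a rough illustration of the one-step and n-step imagination strategies described in the transcript above, here is a small Python sketch. The environment model, the policy, and their interfaces are stand-in assumptions, not the actual learned networks from the paper; the learned imagination-tree strategy would additionally let a manager pick any previously imagined state to branch from.

```python
def imagine_rollout(state, env_model, policy, n_steps, strategy="n_step"):
    """Imagine a short future without acting in the real world.

    Assumed interfaces:
      env_model(state, action) -> (next_state, reward)  # learned, possibly imperfect
      policy(state)            -> action                # model-free action proposer
    """
    base = state      # the state we branch the next imagined action from
    rollout = []
    for _ in range(n_steps):
        action = policy(base)
        next_state, reward = env_model(base, action)
        rollout.append((base, action, next_state, reward))
        if strategy == "n_step":
            base = next_state   # chain: imagine on top of the last imagined state
        # with "one_step", base stays at the original state every time
    return rollout
```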
[ { "end": 8.040000000000001, "start": 0, "text": " Hi there, today we're taking a look at learning model-based planning from scratch by DeepMind." }, { "end": 12.32, "start": 8.040000000000001, "text": " So as a recap, what is model-based planning?" }, { "end": 20.32, "start": 12.32, "text": " Basically a model, also called an environment model, is just kind of a black box thing," }, { "end": 26.28, "start": 20.32, "text": " you can imagine, where you have a state of your current environment, you put it in there" }, { "end": 30.720000000000002, "start": 26.28, "text": " and you have an action that you want to take, you put it in there as well." }, { "end": 36.52, "start": 30.720000000000002, "text": " And the environment model tells you what the new state, S' here, and possibly also the" }, { "end": 41.36, "start": 36.52, "text": " new reward for taking that action is going to be." }, { "end": 49.120000000000005, "start": 41.36, "text": " So this, of course it's always good to have such an environment model, because you can" }, { "end": 57.04, "start": 49.12, "text": " use it to plan ahead, but the authors here question how do you plan and propose a new" }, { "end": 59.239999999999995, "start": 57.04, "text": " algorithm to learn this planning." }, { "end": 66.6, "start": 59.239999999999995, "text": " For now, people have mostly used heuristics to plan either things like A star search," }, { "end": 72.08, "start": 66.6, "text": " where you have a maze and you want to go here, and you kind of have a heuristic, say the" }, { "end": 77.75999999999999, "start": 72.08, "text": " distance between the two points, but there's kind of walls in between, so you try to go" }, { "end": 83.52000000000001, "start": 77.76, "text": " there but then there's a wall and you kind of explore around it." }, { "end": 87.12, "start": 83.52000000000001, "text": " So these are kind of the techniques that have existed so far." }, { "end": 95.64, "start": 87.12, "text": " Also we've seen stuff like Monte Carlo tree search for AlphaGo and other things like this" }, { "end": 98.5, "start": 95.64, "text": " that are not really learned." }, { "end": 108.9, "start": 98.5, "text": " So this kind of paper pros and mechanisms to learn how to plan using such a model." }, { "end": 117.44, "start": 108.9, "text": " So basically they devise an algorithm or a framework, you can say, where they have this," }, { "end": 120.28, "start": 117.44, "text": " what you see here, this schematic." }, { "end": 124.6, "start": 120.28, "text": " This schematic tells you that you have this thing called a manager." }, { "end": 137.35999999999999, "start": 124.6, "text": " Let me just quickly bring up my comment thingy thing." }, { "end": 143.35999999999999, "start": 137.35999999999999, "text": " You can see here there's this kind of manager and this manager can decide to imagine or" }, { "end": 147.4, "start": 143.35999999999999, "text": " act." }, { "end": 154.28, "start": 147.4, "text": " If it acts, then it simply takes kind of the current state and all the things that happened" }, { "end": 159.52, "start": 154.28, "text": " so far and decides on an action to do in the world." }, { "end": 164.36, "start": 159.52, "text": " And then it kind of trains on the action like classic reinforcement learning." 
}, { "end": 171.68, "start": 164.36, "text": " But if it decides to imagine, it can use its model of the world, its imagination model" }, { "end": 177, "start": 171.68, "text": " to perform an action and see what would happen if it did that action." }, { "end": 187.32, "start": 177, "text": " And it can then also append that to the memory and use it to further learn." }, { "end": 190.68, "start": 187.32, "text": " Even though it didn't do the action, it can imagine what happens." }, { "end": 192.16, "start": 190.68, "text": " So how can it imagine?" }, { "end": 201.56, "start": 192.16, "text": " The authors in particular propose different methods of imagining." }, { "end": 205.32, "start": 201.56, "text": " This graph you see there are proposed methods." }, { "end": 214, "start": 205.32, "text": " The first two methods basically, so here every row is a method of imagining." }, { "end": 218.72, "start": 214, "text": " The first method, the one step imagining, simply means you have the current state of" }, { "end": 221.79999999999998, "start": 218.72, "text": " the world, which is the grey blob here." }, { "end": 226.4, "start": 221.79999999999998, "text": " And what you do is you always go from the current state of the world, imagine one step" }, { "end": 227.76, "start": 226.4, "text": " ahead." }, { "end": 234.32, "start": 227.76, "text": " So basically you select the state to imagine from, you imagine one step." }, { "end": 241.28, "start": 234.32, "text": " And if you decide to not take an action after that, but imagine again, because maybe you're" }, { "end": 246.16, "start": 241.28, "text": " not sure yet what you want to do, so you want to imagine another action, you would again" }, { "end": 255.84, "start": 246.16, "text": " go from this initial state, so this in the horizontal direction is time, time, internal" }, { "end": 258.4, "start": 255.84, "text": " time basically." }, { "end": 263.15999999999997, "start": 258.4, "text": " You would again go from this state, imagine another action based on it, and so on, imagine" }, { "end": 265.76000000000005, "start": 263.16, "text": " another action." }, { "end": 271.84000000000003, "start": 265.76000000000005, "text": " Until you're satisfied, you've imagined enough so you can actually take a real world step." }, { "end": 282.86, "start": 271.84000000000003, "text": " In contrast, the end step strategy, so these are hard coded strategies as you can see." }, { "end": 286.08000000000004, "start": 282.86, "text": " The learned part is which action should I take?" }, { "end": 291.40000000000003, "start": 286.08000000000004, "text": " The hard coded part is where do I base this action off?" }, { "end": 297, "start": 291.4, "text": " The end step strategy also selects the first state at first, imagines one action on top" }, { "end": 302.15999999999997, "start": 297, "text": " of it, but then always selects that new imagined action." }, { "end": 308.56, "start": 302.15999999999997, "text": " So you can see here it selects this one to propose this action, and then it selects that" }, { "end": 312.71999999999997, "start": 308.56, "text": " imagined action to propose yet another action." }, { "end": 319.59999999999997, "start": 312.71999999999997, "text": " So you can see it kind of imagines one path into the future instead of many paths, just" }, { "end": 321.72, "start": 319.6, "text": " one step ahead." 
}, { "end": 329.48, "start": 321.72, "text": " And then lastly, this imagination tree strategy is basically the only one that's actually" }, { "end": 339.32000000000005, "start": 329.48, "text": " kind of a learned strategy where the manager can now propose any previously imagined or" }, { "end": 342.06, "start": 339.32000000000005, "text": " real world states in order to imagine from." }, { "end": 347.08000000000004, "start": 342.06, "text": " So you always have the current world state, which is the first node in the graph." }, { "end": 350.12, "start": 347.08, "text": " You select it, of course, at the beginning you have no choice." }, { "end": 355.44, "start": 350.12, "text": " You imagine an action on top of it, but then you can select any of these two nodes to imagine" }, { "end": 361.28, "start": 355.44, "text": " from and here again the first is selected and action is imagined." }, { "end": 363, "start": 361.28, "text": " Then you have three nodes." }, { "end": 367.76, "start": 363, "text": " You can choose any of those where you want to imagine the next step." }, { "end": 375.78, "start": 367.76, "text": " Here in this example, the manager selects this state right here and decides to imagine" }, { "end": 382.91999999999996, "start": 375.78, "text": " another action on top of it until it is satisfied and can then actually go over to plan to actually" }, { "end": 384.44, "start": 382.91999999999996, "text": " perform an action in the real world." }, { "end": 395.32, "start": 384.44, "text": " So if you then decide to do an action in the real world, what you can do is you can take" }, { "end": 402.03999999999996, "start": 395.32, "text": " all of the things you've imagined and use that." }, { "end": 407.20000000000005, "start": 402.04, "text": " So you see in this pathway here, this flows back to the manager." }, { "end": 412.44, "start": 407.20000000000005, "text": " At some point it decides, okay, I've imagined enough and we can use all of these imagined" }, { "end": 416.16, "start": 412.44, "text": " steps in order to take a real world step." }, { "end": 423.8, "start": 416.16, "text": " And after the real world step, the entire thing starts again." }, { "end": 426.88, "start": 423.8, "text": " So that's how it learns to plan." }, { "end": 438.32, "start": 426.88, "text": " Really interesting of course is this imagination tree strategy where it actually learns to" }, { "end": 442.78, "start": 438.32, "text": " plan ahead." }, { "end": 449.92, "start": 442.78, "text": " So the model is described in detail in a formal manner and then it already goes over to experiments" }, { "end": 462.04, "start": 449.92, "text": " and there's this spaceship task where you have to get the spaceship to move around stuff" }, { "end": 468.44, "start": 462.04, "text": " and around these asteroids and get a reward." }, { "end": 475.40000000000003, "start": 468.44, "text": " So you can see different imagination projectives here in the top row." }, { "end": 481.64, "start": 475.4, "text": " You see the red ones is the kind of executed actions, the blue ones are imagined ones and" }, { "end": 483.84, "start": 481.64, "text": " you see the tree it's constructed." }, { "end": 488.47999999999996, "start": 483.84, "text": " So first it takes an action right here, just without imagining." }, { "end": 493.15999999999997, "start": 488.47999999999996, "text": " Then it imagines one step but then decides to take another action." 
}, { "end": 500.46, "start": 493.15999999999997, "text": " It imagines two actions but decides on a third one." }, { "end": 506.2, "start": 500.46, "text": " So you see to the left in this picture you see the first action." }, { "end": 511.44, "start": 506.2, "text": " Then it imagines one action and decides to take an action." }, { "end": 516.12, "start": 511.44, "text": " Then it imagines two actions and based on these imaginations, I'm going to guess it's" }, { "end": 523.4399999999999, "start": 516.12, "text": " fairly satisfied with the one that's very close to the target and it can then take an" }, { "end": 524.4399999999999, "start": 523.4399999999999, "text": " action." }, { "end": 531.2800000000001, "start": 524.44, "text": " So it's pretty smart in that it sees that the second imagined action is fairly close" }, { "end": 537.32, "start": 531.2800000000001, "text": " to where it wants to go and it doesn't need to imagine yet another action." }, { "end": 539.24, "start": 537.32, "text": " That then actually hits the target." }, { "end": 546.36, "start": 539.24, "text": " It can go over to performing the action right away because the imagination gives enough" }, { "end": 549.84, "start": 546.36, "text": " information." }, { "end": 558.1600000000001, "start": 549.84, "text": " So these kind of things are pretty cool to look at and check out the more experiments" }, { "end": 559.2800000000001, "start": 558.1600000000001, "text": " if you want to know." }, { "end": 563.2800000000001, "start": 559.2800000000001, "text": " Here is even more experiments in discrete mazes." }, { "end": 565, "start": 563.2800000000001, "text": " They feature multiple goals." }, { "end": 573.0400000000001, "start": 565, "text": " They feature the system optimizing not only for its reward but also for kind of internal" }, { "end": 580.16, "start": 573.04, "text": " costs, so having a budget for imagining and optimizing not doing too many imagination" }, { "end": 582, "start": 580.16, "text": " steps." }, { "end": 588.3199999999999, "start": 582, "text": " On this experiment the kind of thing that bugs me here is the fact that they didn't" }, { "end": 596.0799999999999, "start": 588.3199999999999, "text": " actually use the full imagination tree algorithm but the manager only selected from what you" }, { "end": 597.12, "start": 596.0799999999999, "text": " can see here." }, { "end": 608.64, "start": 597.12, "text": " So do an actual action, then SJ0 is the first imagined state and SJK is the last imagined" }, { "end": 612.04, "start": 608.64, "text": " state." }, { "end": 622.88, "start": 612.04, "text": " So basically the manager can only choose between actually acting, then doing this one step" }, { "end": 628.4, "start": 622.88, "text": " strategy and then doing kind of this end step strategy in each step." }, { "end": 635.88, "start": 628.4, "text": " So it kind of limits the way it can plan but I'm going to guess they did this because otherwise" }, { "end": 641.56, "start": 635.88, "text": " they couldn't have trained the model and it seems a pretty reasonable simplification to" }, { "end": 645.28, "start": 641.56, "text": " make in order to get this to work." }, { "end": 650.56, "start": 645.28, "text": " Also check out the paper if you want to see how all of these different parts are implemented." 
}, { "end": 656.7199999999999, "start": 650.56, "text": " Of course you can guess most of them are neural networks and it's pretty standard so far and" }, { "end": 659.1199999999999, "start": 656.7199999999999, "text": " check out for the additional experiments." }, { "end": 660.1199999999999, "start": 659.1199999999999, "text": " They're pretty cool." }, { "end": 681.16, "start": 660.12, "text": " See you next time." } ]
agXIYMCICcc
Yannic Kilcher
UCZHmQk67mSJgfCCTn7xBfew
Imagination-Augmented Agents for Deep Reinforcement Learning
[ "Science & Technology" ]
[ "deep learning", "reinforcement learning", "deep mind", "academic", "paper", "research" ]
Commentary of https://arxiv.org/abs/1707.06203 Abstract We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines. Authors Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, David Silver, Daan Wierstra
Hi, today we're taking a look at Imagination-Augmented Agents for deep reinforcement learning. This is a paper by DeepMind and it has been in the news a bit recently, so we're going to have a look at what it's all about. Basically they claim that agents who have a model of the world usually perform better than agents who don't. But of course usually we don't have a model of the world, so they make the agent learn a model of the world, which you can then use to plan. Now this learning of the model can of course be imperfect, because it's learned, and so they provide a way to work with imperfect environment models and combine them with a model-free approach. So what do we mean by model-based and model-free? Basically, what you can say is if you have a model of the world, you have kind of a machine, say a box, and into this box you feed a state S and you feed an action, and the model of the world will tell you what S', the new state, is going to be. So this is the case where you know exactly how your environment works. Now in a model-free approach, what you would do is basically you would have a state and you would put that through some kind of a layered neural network, and out would come what action should I take right now. So in the model-based approach you try out all these actions and the model tells you, look, which one gives me kind of a desired final state. And in the model-free approach you simply use the rewards to go directly and say: here's my state, what should my action be? So this paper is a combination of both. The basic architecture is here, so let's start from the very right. We have two paths divided along this line. The final policy, so which actions you're going to take and what kind of values you can expect, is going to be the result of two different models that are combined. There's a model-free path, which means this is what we talked about. Simply, here is the state and you simply feed it through this neural network thing, blah, blah, blah, blah, blah, blah, out comes a policy or an action you should take. But then there's also this other path, and this is the imagination path. It basically consists of a bunch of these rollout encoders, and these rollout encoders are just the agent imagining the future. So the agent doing some actions and looking at how they will perform. So as this is done, there's this imagination core thingy. What this consists of is a policy network and an environment model. This environment model is really the core of the entire thing. So this environment model you basically learn from what you've seen so far. So far you've taken certain actions here in certain states. You use this to learn the environment model that gives you, from one state, the next state and the next reward. So that's what you learn. Of course also using neural networks and whatnot. You use that environment model to imagine the future. So here in this imagination core, basically you put in your state, you get out some new state and some reward. You feed in the new state and you imagine another action. Of course the actions aren't random. The actions you also take via this thing. And this is where it loops all back. This is now a model-free policy network that works with the environment model. So basically in your imagination you only use, if you look at the very right here, you only use this right path.
Because your imagination doesn't need to be super exact or super well planned, you can use the model-free approach that we know kind of works for some problems. You use this to generate the actions that you imagine, and you use the environment model to see how these actions will play out. That's how you imagine one step of the future, and you simply repeat this for a couple of steps. Then you have an entire so-called rollout, which consists of these pairs of states and rewards. What you do then is encode this rollout via this encoder, which in this case is an LSTM or something like this, I think. You encode all these states into one vector, one embedding for this rollout, and this embedding describes the imagined future path. Of course, what you're hoping is that this encoding somehow captures how you will do in the future and how good this will be, so these states and rewards. Once you have a couple of these rollouts, so once you've imagined a couple of different futures, you aggregate them in this aggregator; I think in their case they just concatenate the rollout encodings. Then you feed this, too, to the big aggregator on top. The big aggregator on top can now combine the model-free path and the imagined futures. If the big aggregator thinks the imagination isn't correct, it can resort to the model-free path, but if it's sure the imagination is correct, it can fully trust these rollouts and act according to them. All of this is of course trained end to end. There's a tiny piece we haven't looked at yet, namely how this policy network on the left is learned, and this is simply learned by copying (I have to pay attention that I get this right). You take this big thing here, your final policy network, and you learn to copy its actions simply from the input. So from this model-free input over here, you take the input, you take the output of your big policy network, and you simply train a neural network that copies those outputs given these inputs. That's your small policy network in here, which is simply model-free (a rough sketch of these pieces follows below). So the loop closes in the sense that you use your learned model to then again imagine the future; but of course, within imagining the future you can't have another instance of this full network, because that would be infinite recursion, so you can only have a model-free network there. All right, that's it for the model; of course there are a couple of tricks in how to encode these things. They perform experiments, and this is maybe what you've seen in the media so far: this game, where you have to push the brown boxes onto the red squares using the green avatar that you have. This game is difficult because, first of all, the levels are generated randomly, so there's no way you can hard-code anything. And second of all, if you push a box, say this box here, to the right into the corner, you have no way of getting it out again. That's why you have to plan ahead and avoid such mistakes, because they're not fixable: once you make such a mistake, you can't go back, and that's where planning comes in so handy. If you imagine this future, and if your model is correct or approximately correct, then you can avoid such mistakes.
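Stepping back to the architecture for a moment, here is a hedged continuation of the sketch above, covering the rollout encoder, the aggregator, and the copying of the final policy into the small internal one. It reuses the imports and definitions from the previous sketch; the feature sizes and the default unroll depth are again assumptions, not values lifted from the paper.

```python
# Hedged continuation of the sketch: imagined rollouts are encoded with an LSTM,
# the rollout embeddings are concatenated with the model-free features, and the
# internal policy is trained to imitate the final I2A policy.
class RolloutEncoder(nn.Module):
    def __init__(self, feature_dim=128):
        super().__init__()
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(STATE_SHAPE[0], 16, 8, stride=4), nn.ReLU(), nn.Flatten(),
            nn.Linear(16 * 19 * 19, feature_dim - 1),   # one slot is reserved for the reward
        )
        self.lstm = nn.LSTM(feature_dim, feature_dim, batch_first=True)

    def forward(self, imagined_states, imagined_rewards):
        # imagined_states: (B, T, C, H, W), imagined_rewards: (B, T, 1)
        b, t = imagined_states.shape[:2]
        feats = self.frame_encoder(imagined_states.flatten(0, 1)).view(b, t, -1)
        feats = torch.cat([feats, imagined_rewards], dim=-1)
        _, (h_n, _) = self.lstm(feats)
        return h_n[-1]                                   # one embedding per rollout

def imagine_rollout(env_model, internal_policy, state, depth=5):
    """Repeat the imagination-core step a few times to get one rollout."""
    states, rewards = [], []
    for _ in range(depth):
        state, reward = imagination_core_step(env_model, internal_policy, state)
        states.append(state)
        rewards.append(reward)
    return torch.stack(states, dim=1), torch.stack(rewards, dim=1)

class I2AHead(nn.Module):
    """Aggregator: concatenates model-free features with all rollout embeddings."""
    def __init__(self, model_free_dim=256, rollout_dim=128, num_rollouts=NUM_ACTIONS):
        super().__init__()
        in_dim = model_free_dim + num_rollouts * rollout_dim
        self.policy = nn.Linear(in_dim, NUM_ACTIONS)
        self.value = nn.Linear(in_dim, 1)

    def forward(self, model_free_features, rollout_embeddings):
        x = torch.cat([model_free_features] + rollout_embeddings, dim=-1)
        return self.policy(x), self.value(x)

def copy_loss(internal_logits, final_logits):
    """Train the internal policy to copy the full I2A policy (targets detached)."""
    target = F.softmax(final_logits.detach(), dim=-1)
    return -(target * F.log_softmax(internal_logits, dim=-1)).sum(dim=-1).mean()
```

One rollout per possible first action is assumed here (num_rollouts=NUM_ACTIONS); that is one common way to seed the rollouts, but the commentary above does not pin this detail down.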
Of course, that's the difficulty in this game, and that's where the planning helps. Note that they don't code in how the game works: all these models get is the pixel input of the game, and they have to imagine the pixel output they're going to get, which adds difficulty. So technically the method is model-free in the sense that there's no hand-coded model of the world, just the pixels. They have performance comparisons, and I find the plot on the right here interesting: you can see performance according to the unrolled depth, so how many steps into the future you imagine, and it kind of flattens out after only about five steps, whereas the game usually lasts for about 50 steps, they say. So imagining only five steps is already really helpful. What I don't like here is the comparison to what they call the copy model, because this here is a standard model-free comparison, so it's just a model-free agent, and of course (or not of course) it performs worse right here, because it has no imagination, but it also has fewer parameters. So they try to compare to something with the same number of parameters and say: oh, we have this copy-model agent here. What the copy-model agent does is use the same architecture, but the environment model simply predicts the output to be the same as the input. So it says: you do this action, and the environment is going to be exactly the same as it is now (a minimal sketch of this baseline follows at the end). I don't like it, because this entire branch then becomes rather useless, and even though you have parameters in there, they're not useful. So to say that this is a comparison against a model with the same number of parameters is, I don't know, only technically true. Another thing they do is pre-train the environment model with a model-free agent: first they train a model-free agent, then they pre-train the environment model, which is then used with this agent. So it's not fully learned end to end, and I can imagine they tried that, it didn't work, and this is how you get it to work. They also experiment with imperfect models, where they train the environment model only imperfectly, and as you can see here, this is the kind of output you can get: you have duplicates, you have errors, you have your character twice, you have boxes inside the wall, all kinds of things. They basically show that if you try to classically plan using these bad models, you get nowhere: this is a Monte Carlo planner using the poor model, and its performance degrades significantly compared to when it uses the good model, which is right here. The imagination-augmented agent, on the other hand, is not much affected by the bad model, except that it takes somewhat longer to reach its high accuracy. All right, there are a couple of other experiments, including Pac-Man experiments where they show you can learn one model and transfer it to playing different games in this Pac-Man world, and that works the better the sparser the rewards are, which you can imagine: if you need to plan, then that's what you get, the ability to earn sparse rewards, because you can look ahead. All right, so I think I'll conclude here with the discussion of this paper. I quite liked it; it's a cool method that combines many things, and I'll see you next time.
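As a small appendix to the copy-model criticism above, here is roughly what that baseline amounts to in the same sketch. The zero reward is my assumption; the commentary only says that the next state is predicted to equal the input.

```python
# Hedged sketch of the "copy model" baseline: same overall architecture, but the
# environment model just hands back its input, so an imagined rollout carries no
# information beyond the current frame.
class CopyEnvironmentModel(nn.Module):
    def forward(self, state, action):
        next_state = state                           # "nothing ever changes"
        reward = state.new_zeros(state.shape[0], 1)  # assumption: constant zero reward
        return next_state, reward
```

Plugged into imagine_rollout above, every rollout just repeats the current frame, which is exactly why the parameters in that branch end up not being useful.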
[ { "end": 10.8, "start": 0, "text": " Hi, today we're taking a look at Imagination Augmented Agents for deep reinforcement learning." }, { "end": 16.2, "start": 10.8, "text": " This is a paper by DeepMind and has been in the news a bit recently, so we're going to" }, { "end": 21.080000000000002, "start": 16.2, "text": " have a look at what it's all about." }, { "end": 28.64, "start": 21.080000000000002, "text": " Basically they claim that agents who have a model of the world perform better usually" }, { "end": 30.44, "start": 28.64, "text": " than agents who don't." }, { "end": 37.28, "start": 30.44, "text": " But of course usually we don't have a model of the world, so they make the agent learn" }, { "end": 41.68, "start": 37.28, "text": " a model of the world which you can then use to plan." }, { "end": 49.8, "start": 41.68, "text": " Now this learning of the model can of course be imperfect because it's learned and so they" }, { "end": 57.08, "start": 49.8, "text": " provide a way to work with imperfect environment models and combine them with a model-free" }, { "end": 58.68, "start": 57.08, "text": " approach." }, { "end": 62.519999999999996, "start": 58.68, "text": " So what do we mean by models and model-free?" }, { "end": 69.28, "start": 62.519999999999996, "text": " Basically what you can say is if you have a model of the world, you have kind of a machine," }, { "end": 80.28, "start": 69.28, "text": " say a box, and in this box you have a state S and you feed the state to the machine and" }, { "end": 87.72, "start": 80.28, "text": " you feed an action and the model of the world will tell you what did S' the new state is" }, { "end": 91, "start": 87.72, "text": " going to be." }, { "end": 97.16, "start": 91, "text": " So this is in the case where you exactly know how your environment works." }, { "end": 107.36, "start": 97.16, "text": " Now in a model-free approach what you would do is you would plan basically you would have" }, { "end": 114.24, "start": 107.36, "text": " a state and you would put that through some kind of a layered neural network and out would" }, { "end": 119.76, "start": 114.24, "text": " come what action should I take right now." }, { "end": 126.36, "start": 119.76, "text": " So in the model-based approach you're trying to try out all these actions and tell you" }, { "end": 131.48, "start": 126.36, "text": " look which one gives me kind of a desired final state." }, { "end": 136.07999999999998, "start": 131.48, "text": " And in the model-free approach you simply use the rewards to go directly and say here's" }, { "end": 139.36, "start": 136.08, "text": " my state, what should my action be?" }, { "end": 145.64000000000001, "start": 139.36, "text": " So this paper is a combination of both." }, { "end": 150.48000000000002, "start": 145.64000000000001, "text": " The basic architecture is here, so let's start from the very right." }, { "end": 154.76000000000002, "start": 150.48000000000002, "text": " We have two paths divided along this line." }, { "end": 159.48000000000002, "start": 154.76000000000002, "text": " The final policy, so which actions you're going to take and what kind of values you" }, { "end": 166.84, "start": 159.48, "text": " can expect is going to be a result of two different models that are combined." }, { "end": 171.16, "start": 166.84, "text": " There's a model-free path which means this is what we talked about." 
}, { "end": 176.76, "start": 171.16, "text": " Simply here is the state and you simply feed it through this neural network thing, blah," }, { "end": 183.32, "start": 176.76, "text": " blah, blah, blah, blah, blah, out comes a policy or an action you should take." }, { "end": 189.79999999999998, "start": 183.32, "text": " But then there's also this other path and this is the imagination path." }, { "end": 195.51999999999998, "start": 189.79999999999998, "text": " Basically consists a bunch of these rollout encoders and these rollout encoders is just" }, { "end": 198.5, "start": 195.51999999999998, "text": " the agent imagining the future." }, { "end": 205.64, "start": 198.5, "text": " So the agent doing some actions and looking at how they will perform." }, { "end": 213.67999999999998, "start": 205.64, "text": " So as this is done, there's this imagination core thingy." }, { "end": 219.48, "start": 213.67999999999998, "text": " What this consists of is a policy network and an environment model." }, { "end": 223.56, "start": 219.48, "text": " This environment model is really the core of the entire thing." }, { "end": 230.27999999999997, "start": 223.56, "text": " So this environment model you basically learn from what you've seen so far." }, { "end": 233.16, "start": 230.27999999999997, "text": " So far you've taken certain actions here in certain states." }, { "end": 242.32, "start": 233.16, "text": " You use this to learn the environment model that gives you from one state the next state" }, { "end": 244.56, "start": 242.32, "text": " and the next reward." }, { "end": 248.24, "start": 244.56, "text": " So that's what you learn." }, { "end": 252.64, "start": 248.24, "text": " Of course also using neural networks and whatnot." }, { "end": 260.56, "start": 252.64, "text": " You use that environment model to imagine the future." }, { "end": 268.16, "start": 260.56, "text": " So here in this imagination core, basically you put in your state, you get out some new" }, { "end": 270.08, "start": 268.16, "text": " state and some reward." }, { "end": 273.48, "start": 270.08, "text": " You feed the new state and you imagine another action." }, { "end": 275.52, "start": 273.48, "text": " Of course the actions aren't random." }, { "end": 279.8, "start": 275.52, "text": " The actions you also take via this thing." }, { "end": 281.8, "start": 279.8, "text": " And this is where it loops all back." }, { "end": 287.66, "start": 281.8, "text": " This is now a model free policy network that works with the environment model." }, { "end": 292.16, "start": 287.66, "text": " So basically in your imagination you only use, if you look at the very right here, you" }, { "end": 296.08000000000004, "start": 292.16, "text": " only use this right path." }, { "end": 301.16, "start": 296.08000000000004, "text": " Because your imagination doesn't need to be super exact or super well planned, you can" }, { "end": 307.36, "start": 301.16, "text": " use the model free approach that we kind of know kind of works for some problems." }, { "end": 312.24, "start": 307.36, "text": " You use this to generate your actions that you imagine." }, { "end": 317.72, "start": 312.24, "text": " And you use an environment model in order to look how these actions will play out." }, { "end": 322.08, "start": 317.72, "text": " And that's how you imagine one step of the future." }, { "end": 328.40000000000003, "start": 322.08, "text": " And you simply repeat this a couple of steps." 
}, { "end": 333.56, "start": 328.40000000000003, "text": " And then you have an entire what's called a rollout, which consists of these pairs of" }, { "end": 336.8, "start": 333.56, "text": " states and rewards." }, { "end": 342.84000000000003, "start": 336.8, "text": " And what you do then is you encode this rollout via this encoder, which is in this case an" }, { "end": 348.2, "start": 342.84000000000003, "text": " LSTM or something like this I think." }, { "end": 356.2, "start": 348.2, "text": " You encode all these states into one vector, into one embedding basically for this rollout." }, { "end": 364.28000000000003, "start": 356.2, "text": " And this embedding describes kind of this future imagined path." }, { "end": 372.08, "start": 364.28, "text": " Of course, what you're going to hope is that somehow this encoding captures how you will" }, { "end": 374.35999999999996, "start": 372.08, "text": " do in the future and how good this will be." }, { "end": 377.23999999999995, "start": 374.35999999999996, "text": " So these states and rewards." }, { "end": 381.84, "start": 377.23999999999995, "text": " Once you have a couple of these rollouts, so once you've imagined a couple of different" }, { "end": 388.28, "start": 381.84, "text": " futures, you then aggregate them in this aggregator." }, { "end": 395.32, "start": 388.28, "text": " I think in their case, they just concatenate these rollout encodings." }, { "end": 401.23999999999995, "start": 395.32, "text": " And then you feed this too to the big aggregator on top." }, { "end": 408.71999999999997, "start": 401.23999999999995, "text": " So the big aggregator on top can now combine the model free path and the imagined futures." }, { "end": 417.84, "start": 408.71999999999997, "text": " So if the big aggregator thinks that the imagination isn't correct, it can resort to the model" }, { "end": 425.11999999999995, "start": 417.84, "text": " free path, but it can also think that maybe it's correct, or it can be kind of if it's" }, { "end": 431, "start": 425.11999999999995, "text": " sure it's correct, it can fully trust these rollouts and perform actions according to" }, { "end": 432, "start": 431, "text": " that." }, { "end": 435.47999999999996, "start": 432, "text": " All of this is of course trained end to end." }, { "end": 441.08, "start": 435.47999999999996, "text": " There's a tiny piece we haven't looked at yet, namely how this here, this policy network" }, { "end": 445.28, "start": 441.08, "text": " on the left is learned." }, { "end": 451.32, "start": 445.28, "text": " And this is simply learned by, and I have to pay attention that I'm doing the right" }, { "end": 452.32, "start": 451.32, "text": " thing here." }, { "end": 460.26, "start": 452.32, "text": " So you take this big thing here, your final policy network, and you perform, you kind" }, { "end": 466.23999999999995, "start": 460.26, "text": " of learn to copy its actions simply from the input." }, { "end": 475.08, "start": 466.23999999999995, "text": " So from this model free input over here, you take this input and you take, excuse me, and" }, { "end": 485.03999999999996, "start": 475.08, "text": " you take the output of your big policy network and you try to simply make a neural network" }, { "end": 489.56, "start": 485.03999999999996, "text": " that copies the outputs given these inputs." }, { "end": 494.96, "start": 489.56, "text": " And that's kind of your small policy network in here that's simply model free." 
}, { "end": 507.2, "start": 494.96, "text": " So the loop closes in a way that you use your learned model to then again imagine the future." }, { "end": 512.88, "start": 507.2, "text": " But of course for imagining the future, within imagining the future, you can't have another" }, { "end": 516.52, "start": 512.88, "text": " instance of this network because it would be infinite recursion." }, { "end": 519.36, "start": 516.52, "text": " So you can only have a model free network." }, { "end": 521.62, "start": 519.36, "text": " All right." }, { "end": 525.24, "start": 521.62, "text": " That's it for the model." }, { "end": 534.52, "start": 525.24, "text": " Of course, yeah, there's a couple of tricks and how to encode these things." }, { "end": 541.66, "start": 534.52, "text": " Basically they perform experiments and this is maybe what you've seen in the media so" }, { "end": 545.14, "start": 541.66, "text": " far of this game." }, { "end": 552.4399999999999, "start": 545.14, "text": " And this game is a game where you have to push around the brown boxes onto the red squares" }, { "end": 558.36, "start": 552.4399999999999, "text": " using the green avatar that you have." }, { "end": 566.04, "start": 558.36, "text": " So this game is difficult because first of all, the levels are generated randomly." }, { "end": 570.48, "start": 566.04, "text": " So there's no way you can like hard code anything." }, { "end": 578.48, "start": 570.48, "text": " And second of all, if you push a box, say this box here, if you were to push it to the" }, { "end": 590, "start": 578.48, "text": " right into the corner, you would have no way of getting it out again." }, { "end": 597.26, "start": 590, "text": " That's why I have to plan ahead and avoid such mistakes because they're not fixable." }, { "end": 601.88, "start": 597.26, "text": " So once you make the mistakes, you can't go back and that's where planning comes in so" }, { "end": 602.88, "start": 601.88, "text": " handy." }, { "end": 608.4399999999999, "start": 602.88, "text": " If you imagine this future and if your model is correct or approximately correct, then" }, { "end": 611.12, "start": 608.4399999999999, "text": " you can avoid such mistakes." }, { "end": 621.4, "start": 611.12, "text": " Of course, that's the difficulty in this game and that's where the planning helps." }, { "end": 624.9, "start": 621.4, "text": " Note that they don't code in how the game works." }, { "end": 631, "start": 624.9, "text": " So all these models get is pixel input of the game and they have to kind of imagine" }, { "end": 634.34, "start": 631, "text": " the pixel output they're going to get." }, { "end": 637.56, "start": 634.34, "text": " So that's increased difficulty." }, { "end": 645.52, "start": 637.56, "text": " So technically the method is model free in the sense that there's really no coded model" }, { "end": 649.36, "start": 645.52, "text": " of the world, just the pixels." }, { "end": 663.0600000000001, "start": 649.36, "text": " So they have performance comparisons where if you and I find this on the right here interesting," }, { "end": 670.8000000000001, "start": 663.0600000000001, "text": " you can see according to the unrolled depth, so how much steps into the future you imagine." }, { "end": 676.64, "start": 670.8000000000001, "text": " You can see it kind of flattens out after only about five steps." }, { "end": 682.6, "start": 676.64, "text": " Whereas the game usually lasts for about 50 steps, they say." 
}, { "end": 688.52, "start": 682.6, "text": " So only imagining five steps is already really helpful." }, { "end": 696.4399999999999, "start": 688.52, "text": " What I don't like here is that they compare to what they say this copy model because this" }, { "end": 699.98, "start": 696.4399999999999, "text": " here is a standard model free comparison." }, { "end": 707.48, "start": 699.98, "text": " So it's just a model free agent and of course, or not of course, but it performs worse right" }, { "end": 713.36, "start": 707.48, "text": " here." }, { "end": 715.96, "start": 713.36, "text": " Because it has no imagination, but it also has less parameters." }, { "end": 719.6, "start": 715.96, "text": " So they're trying to compare it to something with the same amount of parameters and say," }, { "end": 722.12, "start": 719.6, "text": " oh, we have this copy model agent here." }, { "end": 732, "start": 722.12, "text": " And what the copy model agent is doing is simply, for the environment model, it's the" }, { "end": 737.52, "start": 732, "text": " same architecture, but for the environment model, it simply predicts the output as the" }, { "end": 738.84, "start": 737.52, "text": " input." }, { "end": 743.28, "start": 738.84, "text": " So it simply says, oh, you do this action, the environment is going to be exactly the" }, { "end": 745.72, "start": 743.28, "text": " same as it is now." }, { "end": 754.8000000000001, "start": 745.72, "text": " And I don't like it because basically this entire branch here becomes rather useless." }, { "end": 761.36, "start": 754.8000000000001, "text": " And so even though you have parameters in here, they're not useful." }, { "end": 768.64, "start": 761.36, "text": " So to say that this is a comparison with the model of the same amount of parameters, I" }, { "end": 771.88, "start": 768.64, "text": " don't know, technically true." }, { "end": 781.76, "start": 771.88, "text": " Another thing that they do is they pre-train the environment model with a model free agent." }, { "end": 786.96, "start": 781.76, "text": " So first they code a model free agent, then they pre-train the environment model to then" }, { "end": 789.18, "start": 786.96, "text": " use with this agent." }, { "end": 794.56, "start": 789.18, "text": " So it's not fully learned and I can imagine they tried and it didn't work." }, { "end": 799.32, "start": 794.56, "text": " And this is how you get it to work." }, { "end": 810.12, "start": 799.32, "text": " So they also experiment with imperfect models." }, { "end": 814.48, "start": 810.12, "text": " So they train the environment model only imperfectly." }, { "end": 817.0400000000001, "start": 814.48, "text": " And as you can see here, this is kind of the output you can get." }, { "end": 824.5200000000001, "start": 817.0400000000001, "text": " Say you have duplicates, you have kind of errors, you have twice your character here," }, { "end": 831.68, "start": 824.52, "text": " you have like boxes within the wall or all kinds of things." }, { "end": 838.16, "start": 831.68, "text": " And they basically show that if you try to classically plan using these models, these" }, { "end": 841.84, "start": 838.16, "text": " bad models, you get nowhere." }, { "end": 852.72, "start": 841.84, "text": " Basically this is a Monte Carlo sampler planner using a poor model and its performance degrades" }, { "end": 857.1600000000001, "start": 852.72, "text": " significantly from when you use the good model, which is right here." 
}, { "end": 867.12, "start": 857.1600000000001, "text": " And the imagination agent is not affected by kind of the bad model, except that it takes" }, { "end": 873.0400000000001, "start": 867.12, "text": " kind of longer to reach its high inaccuracy." }, { "end": 880.08, "start": 873.0400000000001, "text": " All right, so there's a couple of other experiments and a couple of Pac-Man experiments where" }, { "end": 887.8000000000001, "start": 880.08, "text": " they show you can learn one model to transfer kind of to play different games in this Pac-Man" }, { "end": 888.8000000000001, "start": 887.8000000000001, "text": " world." }, { "end": 898.88, "start": 888.8000000000001, "text": " And that just works the more if you have very sparse rewards, which you can imagine, yes," }, { "end": 903, "start": 898.88, "text": " if you need to plan then that's what you get." }, { "end": 907.6400000000001, "start": 903, "text": " You get the ability to earn more sparse rewards because you can kind of look ahead." }, { "end": 912.64, "start": 907.64, "text": " All right, so I think I'll conclude here with the discussion of this paper." }, { "end": 939.64, "start": 912.64, "text": " I quite liked it and it's a cool method, combines many things and I'll see you next time." } ]