CHANNEL_NAME
stringclasses
1 value
URL
stringlengths
43
43
TITLE
stringlengths
61
100
DESCRIPTION
stringclasses
6 values
TRANSCRIPTION
stringlengths
2.07k
14.5k
SEGMENTS
stringlengths
3.72k
25k
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=faTk41hUGec
8.1 Unsupervised Learning, Recommenders, Reinforcement Learning|Welcome! -Machine Learning Andrew Ng
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Welcome to this third and final course of this specialization on unsupervised learning, recommender systems, and reinforcement learning. Whereas in the first two courses, we spent a lot of time on supervised learning, in this third and final course, we'll talk about a new set of techniques that goes beyond supervised learning and will give you an extra set of powerful tools that I hope you enjoy adding to your tool set. And by the time you finish this course and finish this specialization, I think you'll be well on your way to being an expert in machine learning. Let's take a look. This week will start with unsupervised learning, and in particular, you learn about clustering algorithms, which is a way of grouping data into clusters, as well as anomaly detection. Both of these are techniques used by many companies today in important commercial applications. And by the end of this week, you know how these algorithms work and be able to get them to work for yourself as well. In the second week, you will learn about recommender systems. When you go to an online shopping website or a video streaming website, how does it recommend products or movies to you? Recommender systems is one of the most commercially important machine learning technologies is moving many billions of dollars worth of value or products or other things around. It's one of the technologies that receives surprisingly little attention from academia, despite how important it is. But in the second week, I hope you learn how these systems work and be able to implement one for yourself. And if you are curious about how online ad systems work, the description of recommender systems will also give you a sense for how those large online ad tech companies decide what ads to show you. In the third and final week of this course, you learn about reinforcement learning. You may have read in the news about reinforcement learning being great at playing a variety of video games, even outperforming humans. I've also used reinforcement learning many times myself to control a variety of different robots. Even though reinforcement learning is a new and emerging technology, that is the number of commercial applications of reinforcement learning is not nearly as large as the other techniques covered in this week or in previous weeks, is a technology that is exciting and is opening up a new frontier to what you can get learning algorithms to do. And so in the final week, you implement a reinforcement learning yourself and use it to land a simulated moon lander. And when you see that working for yourself with your own code later in this course, I think you'd be impressed by what you can get reinforcement learning to do. So I'm really excited to be here with you to talk about unsupervised learning, recommender systems and reinforcement learning. So let's go on to the next video where you learn about an important unsupervised learning algorithm called a clustering algorithm.
[{"start": 0.0, "end": 8.2, "text": " Welcome to this third and final course of this specialization on unsupervised learning,"}, {"start": 8.2, "end": 11.120000000000001, "text": " recommender systems, and reinforcement learning."}, {"start": 11.120000000000001, "end": 16.32, "text": " Whereas in the first two courses, we spent a lot of time on supervised learning, in this"}, {"start": 16.32, "end": 21.84, "text": " third and final course, we'll talk about a new set of techniques that goes beyond supervised"}, {"start": 21.84, "end": 27.32, "text": " learning and will give you an extra set of powerful tools that I hope you enjoy adding"}, {"start": 27.32, "end": 28.94, "text": " to your tool set."}, {"start": 28.94, "end": 33.2, "text": " And by the time you finish this course and finish this specialization, I think you'll"}, {"start": 33.2, "end": 36.84, "text": " be well on your way to being an expert in machine learning."}, {"start": 36.84, "end": 38.36, "text": " Let's take a look."}, {"start": 38.36, "end": 43.64, "text": " This week will start with unsupervised learning, and in particular, you learn about clustering"}, {"start": 43.64, "end": 50.72, "text": " algorithms, which is a way of grouping data into clusters, as well as anomaly detection."}, {"start": 50.72, "end": 57.72, "text": " Both of these are techniques used by many companies today in important commercial applications."}, {"start": 57.72, "end": 62.4, "text": " And by the end of this week, you know how these algorithms work and be able to get them"}, {"start": 62.4, "end": 64.84, "text": " to work for yourself as well."}, {"start": 64.84, "end": 69.9, "text": " In the second week, you will learn about recommender systems."}, {"start": 69.9, "end": 75.6, "text": " When you go to an online shopping website or a video streaming website, how does it"}, {"start": 75.6, "end": 79.68, "text": " recommend products or movies to you?"}, {"start": 79.68, "end": 84.78, "text": " Recommender systems is one of the most commercially important machine learning technologies is"}, {"start": 84.78, "end": 91.12, "text": " moving many billions of dollars worth of value or products or other things around."}, {"start": 91.12, "end": 96.24000000000001, "text": " It's one of the technologies that receives surprisingly little attention from academia,"}, {"start": 96.24000000000001, "end": 98.24000000000001, "text": " despite how important it is."}, {"start": 98.24000000000001, "end": 103.28, "text": " But in the second week, I hope you learn how these systems work and be able to implement"}, {"start": 103.28, "end": 105.6, "text": " one for yourself."}, {"start": 105.6, "end": 111.12, "text": " And if you are curious about how online ad systems work, the description of recommender"}, {"start": 111.12, "end": 117.16000000000001, "text": " systems will also give you a sense for how those large online ad tech companies decide"}, {"start": 117.16000000000001, "end": 120.12, "text": " what ads to show you."}, {"start": 120.12, "end": 126.52000000000001, "text": " In the third and final week of this course, you learn about reinforcement learning."}, {"start": 126.52000000000001, "end": 131.76, "text": " You may have read in the news about reinforcement learning being great at playing a variety"}, {"start": 131.76, "end": 134.64000000000001, "text": " of video games, even outperforming humans."}, {"start": 134.64000000000001, "end": 140.12, "text": " I've also used reinforcement learning many times myself to control a variety of different"}, {"start": 140.12, "end": 141.72, "text": " robots."}, {"start": 141.72, "end": 147.8, "text": " Even though reinforcement learning is a new and emerging technology, that is the number"}, {"start": 147.8, "end": 152.74, "text": " of commercial applications of reinforcement learning is not nearly as large as the other"}, {"start": 152.74, "end": 158.88, "text": " techniques covered in this week or in previous weeks, is a technology that is exciting and"}, {"start": 158.88, "end": 164.48000000000002, "text": " is opening up a new frontier to what you can get learning algorithms to do."}, {"start": 164.48, "end": 171.04, "text": " And so in the final week, you implement a reinforcement learning yourself and use it"}, {"start": 171.04, "end": 175.67999999999998, "text": " to land a simulated moon lander."}, {"start": 175.67999999999998, "end": 180.56, "text": " And when you see that working for yourself with your own code later in this course, I"}, {"start": 180.56, "end": 186.44, "text": " think you'd be impressed by what you can get reinforcement learning to do."}, {"start": 186.44, "end": 190.89999999999998, "text": " So I'm really excited to be here with you to talk about unsupervised learning, recommender"}, {"start": 190.89999999999998, "end": 193.79999999999998, "text": " systems and reinforcement learning."}, {"start": 193.8, "end": 198.86, "text": " So let's go on to the next video where you learn about an important unsupervised learning"}, {"start": 198.86, "end": 225.88000000000002, "text": " algorithm called a clustering algorithm."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=P9ma8vycu3Y
8.2 Clustering | What is clustering? -- [Machine Learning | Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
What is clustering? A clustering algorithm looks at a number of data points and automatically finds data points that are related or similar to each other. Let's take a look at what that means. Let me contrast clustering, which is an unsupervised learning algorithm, with what you had previously seen with supervised learning for binary classification. Given a data set like this with features x1 and x2, with supervised learning we had a training set with both the input features x as well as the labels y, and we could plot a data set like this and fit, say, a logistic regression algorithm or a neural network to learn a decision boundary like that. And in supervised learning, the data set included both the inputs x as well as the target outputs y. In contrast, in unsupervised learning, you're given a data set like this with just x, but not the labels or the target labels y. And that's why when I plot the data set, it looks like this, with just dots rather than two classes denoted by the x's and the o's. Since we don't have target labels y, we're not able to tell the algorithm what is the quote, right answer, why that we wanted to predict. Instead, we're going to ask the algorithm to find something interesting about the data, that is to find some interesting structure about this data. But the first unsupervised learning algorithm that you learn about is called a clustering algorithm, which looks for one particular type of structure in the data. Namely, it'll look at the data set like this and try to see if it can be grouped into clusters, meaning groups of points that are similar to each other. So a clustering algorithm in this case might find that this data set comprises of data from two clusters shown here. Here are some applications of clustering. In the first week of the first course, you heard me talk about grouping similar news articles together, like the story about pandas or market segmentation, where at deeplearning.ai we discovered that there are many learners that come here because you may want to grow your skills or develop your careers or stay updated with AI and understand how it affects your field of work. And we want to help everyone with any of these goals to learn about machine learning. Or if you don't fall into one of these clusters, that's totally fine too. And I hope deeplearning.ai and Stanford Online's materials will be useful to you as well. Clustering has also been used to analyze DNA data, where you would look at the genetic expression data from different individuals and try to group them into people that exhibit similar traits. I find astronomy and space and space exploration fascinating. And so one application that I thought was very exciting was astronomers using clustering for astronomical data analysis to group bodies in space together for their own analysis of what's going on in space. And so one of the applications I found fascinating was astronomers using clustering to group bodies together to figure out which ones form one galaxy or which one form coherent structures in space. So clustering today is used for all of these applications and many, many more. In the next video, let's take a look at the most commonly used clustering algorithm called the Key Means algorithm. And let's take a look at how it works.
[{"start": 0.0, "end": 3.7600000000000002, "text": " What is clustering?"}, {"start": 3.7600000000000002, "end": 9.68, "text": " A clustering algorithm looks at a number of data points and automatically finds data points"}, {"start": 9.68, "end": 12.68, "text": " that are related or similar to each other."}, {"start": 12.68, "end": 15.44, "text": " Let's take a look at what that means."}, {"start": 15.44, "end": 21.400000000000002, "text": " Let me contrast clustering, which is an unsupervised learning algorithm, with what you had previously"}, {"start": 21.400000000000002, "end": 25.240000000000002, "text": " seen with supervised learning for binary classification."}, {"start": 25.24, "end": 35.2, "text": " Given a data set like this with features x1 and x2, with supervised learning we had a"}, {"start": 35.2, "end": 42.32, "text": " training set with both the input features x as well as the labels y, and we could plot"}, {"start": 42.32, "end": 48.16, "text": " a data set like this and fit, say, a logistic regression algorithm or a neural network to"}, {"start": 48.16, "end": 51.32, "text": " learn a decision boundary like that."}, {"start": 51.32, "end": 57.04, "text": " And in supervised learning, the data set included both the inputs x as well as the target outputs"}, {"start": 57.04, "end": 58.6, "text": " y."}, {"start": 58.6, "end": 65.44, "text": " In contrast, in unsupervised learning, you're given a data set like this with just x, but"}, {"start": 65.44, "end": 69.03999999999999, "text": " not the labels or the target labels y."}, {"start": 69.03999999999999, "end": 74.68, "text": " And that's why when I plot the data set, it looks like this, with just dots rather than"}, {"start": 74.68, "end": 79.72, "text": " two classes denoted by the x's and the o's."}, {"start": 79.72, "end": 85.52, "text": " Since we don't have target labels y, we're not able to tell the algorithm what is the"}, {"start": 85.52, "end": 89.8, "text": " quote, right answer, why that we wanted to predict."}, {"start": 89.8, "end": 95.4, "text": " Instead, we're going to ask the algorithm to find something interesting about the data,"}, {"start": 95.4, "end": 99.44, "text": " that is to find some interesting structure about this data."}, {"start": 99.44, "end": 105.44, "text": " But the first unsupervised learning algorithm that you learn about is called a clustering"}, {"start": 105.44, "end": 110.28, "text": " algorithm, which looks for one particular type of structure in the data."}, {"start": 110.28, "end": 118.2, "text": " Namely, it'll look at the data set like this and try to see if it can be grouped into clusters,"}, {"start": 118.2, "end": 122.03999999999999, "text": " meaning groups of points that are similar to each other."}, {"start": 122.03999999999999, "end": 127.68, "text": " So a clustering algorithm in this case might find that this data set comprises of data"}, {"start": 127.68, "end": 130.24, "text": " from two clusters shown here."}, {"start": 130.24, "end": 133.07999999999998, "text": " Here are some applications of clustering."}, {"start": 133.08, "end": 138.56, "text": " In the first week of the first course, you heard me talk about grouping similar news"}, {"start": 138.56, "end": 147.0, "text": " articles together, like the story about pandas or market segmentation, where at deeplearning.ai"}, {"start": 147.0, "end": 152.68, "text": " we discovered that there are many learners that come here because you may want to grow"}, {"start": 152.68, "end": 160.56, "text": " your skills or develop your careers or stay updated with AI and understand how it affects"}, {"start": 160.56, "end": 162.32000000000002, "text": " your field of work."}, {"start": 162.32, "end": 169.68, "text": " And we want to help everyone with any of these goals to learn about machine learning."}, {"start": 169.68, "end": 173.56, "text": " Or if you don't fall into one of these clusters, that's totally fine too."}, {"start": 173.56, "end": 180.28, "text": " And I hope deeplearning.ai and Stanford Online's materials will be useful to you as well."}, {"start": 180.28, "end": 186.88, "text": " Clustering has also been used to analyze DNA data, where you would look at the genetic"}, {"start": 186.88, "end": 193.56, "text": " expression data from different individuals and try to group them into people that exhibit"}, {"start": 193.56, "end": 195.96, "text": " similar traits."}, {"start": 195.96, "end": 201.79999999999998, "text": " I find astronomy and space and space exploration fascinating."}, {"start": 201.79999999999998, "end": 208.46, "text": " And so one application that I thought was very exciting was astronomers using clustering"}, {"start": 208.46, "end": 215.24, "text": " for astronomical data analysis to group bodies in space together for their own analysis of"}, {"start": 215.24, "end": 218.16, "text": " what's going on in space."}, {"start": 218.16, "end": 225.70000000000002, "text": " And so one of the applications I found fascinating was astronomers using clustering to group"}, {"start": 225.70000000000002, "end": 234.16000000000003, "text": " bodies together to figure out which ones form one galaxy or which one form coherent structures"}, {"start": 234.16000000000003, "end": 236.16000000000003, "text": " in space."}, {"start": 236.16000000000003, "end": 242.74, "text": " So clustering today is used for all of these applications and many, many more."}, {"start": 242.74, "end": 248.16, "text": " In the next video, let's take a look at the most commonly used clustering algorithm called"}, {"start": 248.16, "end": 249.88, "text": " the Key Means algorithm."}, {"start": 249.88, "end": 272.84, "text": " And let's take a look at how it works."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=Q9tKUSAO2LY
8.3 Clustering | K-means intuition-- [Machine Learning | Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Let's take a look at what the k-means clustering algorithm does. Let me start with an example. Here I've plotted a dataset with 30 unlabeled training examples. So there are 30 points. And what we'd like to do is run k-means on this dataset. The first thing that the k-means algorithm does is it will take a random guess at where might be the centers of the two clusters that you might ask it to find. In this example, I'm going to ask it to try to find two clusters. Later in this week, we'll talk about how you might decide how many clusters to find. But the very first step is it will randomly pick two points, which I've shown here as a red cross and a blue cross at where might be the centers of two different clusters. This is just a random initial guess and they're not particularly good guesses, but it's a start. One thing I hope you take away from this video is that k-means will repeatedly do two different things. The first is assign points to cluster centroids and the second is move cluster centroids. Let's take a look at what this means. The first of the two steps is it will go through each of these points and look at whether it is closer to the red cross or to the blue cross. The very first thing that k-means does is it will take a random guess at where are the centers of the cluster. And the centers of the cluster are called cluster centroids. After it's made an initial guess at where are the cluster centroids, it will go through all of these examples, x1 through x30 by 30 data points, and for each of them it will check if it is closer to the red cluster centroid, shown by the red cross, or if it's closer to the blue cluster centroid, shown by the blue cross, and it will assign each of these points to whichever of the cluster centroids it is closer to. I'm going to illustrate that by painting each of these examples, each of these little round dots, either red or blue, depending on whether that example is closer to the red or to the blue cluster centroid. So this point up here is closer to the red centroid, which is why it's painted red, whereas this point down there is closer to the blue cluster centroid, which is why I've now painted it blue. So that was the first of the two things that k-means does over and over, which is assign points to cluster centroids. And all that means is it will associate, which I'm illustrating here with the color, every point of one of the cluster centroids. The second of the two steps that k-means does is it will look at all of the red points and take an average of them, and it will move the red cross to whatever is the average location of the red dots, which turns out to be here. And so the red cross, that is the red cluster centroid, will move here. And then we do the same thing for all the blue dots. Look at all the blue dots and take an average of them and move the blue cross over there, so you now have a new location for the blue cluster centroid as well. In the next video, we'll look at the mathematical formulas for how to do both of these steps. But now that you have these new and hopefully slightly improved guesses for the locations of the two cluster centroids, we'll look through all of the 30 training examples again and check for every one of them, whether it's closer to the red or the blue cluster centroid for the new locations, and then we will associate them, which are indicated by the color again, every point to the closer cluster centroid. And if you do that, you see that a few of the points change color. So for example, this point is colored red because it was closer to the red cluster centroid previously, but if we now look again, it's now actually closer to the blue cluster centroid because the blue and red cluster centroids have moved. So if we go through and associate each point with the closer cluster centroid, you end up with this. And then we just repeat the second part of k-means again, which is look at all of the red dots and compute the average, and also look at all of the blue dots and compute the average location of all of the blue dots. And it turns out that you end up moving the red cross over there and the blue cross over here. And we repeat, let's look at all of the points again and we color them either red or blue, depending on which cluster centroid it is closer to. So you end up with this. And then again, look at all of the red dots and take the average location and look at all the blue dots and take the average location and move the clusters to the new locations. And it turns out that if you were to keep on repeating these two steps, that is, look at each point and assign it to the nearest cluster centroid, and then also move each cluster centroid to the mean of all the points with the same color, if you keep on doing those two steps, you find that there are no more changes to the colors of the points or to the locations of the cluster centroids. And so this means that at this point, the k-means clustering algorithm has converged because applying those two steps over and over results in no further changes to either the assignment of points to the centroids or to the location of the cluster centroids. In this example, it looks like k-means has done a pretty good job. It has found that these points up here correspond to one cluster and these points down here correspond to a second cluster. So now you've seen an illustration of how k-means works. The two key steps are assign every point to cluster centroid, depending on what cluster centroid is nearest to, and second, move each cluster centroid to the average or the mean of all the points that were assigned to it. In the next video, we'll look at how to formalize this and write out the algorithm that does what you just saw in this video. Let's go on to the next video.
[{"start": 0.0, "end": 5.8, "text": " Let's take a look at what the k-means clustering algorithm does."}, {"start": 5.8, "end": 8.6, "text": " Let me start with an example."}, {"start": 8.6, "end": 13.58, "text": " Here I've plotted a dataset with 30 unlabeled training examples."}, {"start": 13.58, "end": 15.56, "text": " So there are 30 points."}, {"start": 15.56, "end": 20.12, "text": " And what we'd like to do is run k-means on this dataset."}, {"start": 20.12, "end": 25.84, "text": " The first thing that the k-means algorithm does is it will take a random guess at where"}, {"start": 25.84, "end": 31.56, "text": " might be the centers of the two clusters that you might ask it to find."}, {"start": 31.56, "end": 36.480000000000004, "text": " In this example, I'm going to ask it to try to find two clusters."}, {"start": 36.480000000000004, "end": 41.84, "text": " Later in this week, we'll talk about how you might decide how many clusters to find."}, {"start": 41.84, "end": 47.92, "text": " But the very first step is it will randomly pick two points, which I've shown here as"}, {"start": 47.92, "end": 56.64, "text": " a red cross and a blue cross at where might be the centers of two different clusters."}, {"start": 56.64, "end": 61.96, "text": " This is just a random initial guess and they're not particularly good guesses, but it's a"}, {"start": 61.96, "end": 62.96, "text": " start."}, {"start": 62.96, "end": 69.36, "text": " One thing I hope you take away from this video is that k-means will repeatedly do two different"}, {"start": 69.36, "end": 70.36, "text": " things."}, {"start": 70.36, "end": 76.36, "text": " The first is assign points to cluster centroids and the second is move cluster centroids."}, {"start": 76.36, "end": 78.36, "text": " Let's take a look at what this means."}, {"start": 78.36, "end": 84.6, "text": " The first of the two steps is it will go through each of these points and look at whether it"}, {"start": 84.6, "end": 92.08, "text": " is closer to the red cross or to the blue cross."}, {"start": 92.08, "end": 97.2, "text": " The very first thing that k-means does is it will take a random guess at where are the"}, {"start": 97.2, "end": 100.12, "text": " centers of the cluster."}, {"start": 100.12, "end": 106.08, "text": " And the centers of the cluster are called cluster centroids."}, {"start": 106.08, "end": 110.14, "text": " After it's made an initial guess at where are the cluster centroids, it will go through"}, {"start": 110.14, "end": 119.08, "text": " all of these examples, x1 through x30 by 30 data points, and for each of them it will"}, {"start": 119.08, "end": 124.48, "text": " check if it is closer to the red cluster centroid, shown by the red cross, or if it's closer"}, {"start": 124.48, "end": 130.24, "text": " to the blue cluster centroid, shown by the blue cross, and it will assign each of these"}, {"start": 130.24, "end": 135.28, "text": " points to whichever of the cluster centroids it is closer to."}, {"start": 135.28, "end": 140.64000000000001, "text": " I'm going to illustrate that by painting each of these examples, each of these little round"}, {"start": 140.64000000000001, "end": 148.32, "text": " dots, either red or blue, depending on whether that example is closer to the red or to the"}, {"start": 148.32, "end": 150.56, "text": " blue cluster centroid."}, {"start": 150.56, "end": 155.24, "text": " So this point up here is closer to the red centroid, which is why it's painted red, whereas"}, {"start": 155.24, "end": 160.0, "text": " this point down there is closer to the blue cluster centroid, which is why I've now painted"}, {"start": 160.0, "end": 162.1, "text": " it blue."}, {"start": 162.1, "end": 168.51999999999998, "text": " So that was the first of the two things that k-means does over and over, which is assign"}, {"start": 168.51999999999998, "end": 170.38, "text": " points to cluster centroids."}, {"start": 170.38, "end": 176.28, "text": " And all that means is it will associate, which I'm illustrating here with the color, every"}, {"start": 176.28, "end": 178.88, "text": " point of one of the cluster centroids."}, {"start": 178.88, "end": 186.24, "text": " The second of the two steps that k-means does is it will look at all of the red points and"}, {"start": 186.24, "end": 194.8, "text": " take an average of them, and it will move the red cross to whatever is the average location"}, {"start": 194.8, "end": 198.48000000000002, "text": " of the red dots, which turns out to be here."}, {"start": 198.48000000000002, "end": 203.60000000000002, "text": " And so the red cross, that is the red cluster centroid, will move here."}, {"start": 203.60000000000002, "end": 205.86, "text": " And then we do the same thing for all the blue dots."}, {"start": 205.86, "end": 212.28, "text": " Look at all the blue dots and take an average of them and move the blue cross over there,"}, {"start": 212.28, "end": 218.24, "text": " so you now have a new location for the blue cluster centroid as well."}, {"start": 218.24, "end": 223.84, "text": " In the next video, we'll look at the mathematical formulas for how to do both of these steps."}, {"start": 223.84, "end": 228.52, "text": " But now that you have these new and hopefully slightly improved guesses for the locations"}, {"start": 228.52, "end": 236.16, "text": " of the two cluster centroids, we'll look through all of the 30 training examples again and"}, {"start": 236.16, "end": 241.44, "text": " check for every one of them, whether it's closer to the red or the blue cluster centroid"}, {"start": 241.44, "end": 247.92, "text": " for the new locations, and then we will associate them, which are indicated by the color again,"}, {"start": 247.92, "end": 251.64, "text": " every point to the closer cluster centroid."}, {"start": 251.64, "end": 255.9, "text": " And if you do that, you see that a few of the points change color."}, {"start": 255.9, "end": 262.0, "text": " So for example, this point is colored red because it was closer to the red cluster centroid"}, {"start": 262.0, "end": 266.84, "text": " previously, but if we now look again, it's now actually closer to the blue cluster centroid"}, {"start": 266.84, "end": 270.92, "text": " because the blue and red cluster centroids have moved."}, {"start": 270.92, "end": 276.8, "text": " So if we go through and associate each point with the closer cluster centroid, you end"}, {"start": 276.8, "end": 278.88, "text": " up with this."}, {"start": 278.88, "end": 285.28000000000003, "text": " And then we just repeat the second part of k-means again, which is look at all of the"}, {"start": 285.28000000000003, "end": 292.0, "text": " red dots and compute the average, and also look at all of the blue dots and compute the"}, {"start": 292.0, "end": 295.36, "text": " average location of all of the blue dots."}, {"start": 295.36, "end": 301.8, "text": " And it turns out that you end up moving the red cross over there and the blue cross over"}, {"start": 301.8, "end": 302.96000000000004, "text": " here."}, {"start": 302.96000000000004, "end": 308.04, "text": " And we repeat, let's look at all of the points again and we color them either red or blue,"}, {"start": 308.04, "end": 311.44, "text": " depending on which cluster centroid it is closer to."}, {"start": 311.44, "end": 313.64, "text": " So you end up with this."}, {"start": 313.64, "end": 317.8, "text": " And then again, look at all of the red dots and take the average location and look at"}, {"start": 317.8, "end": 325.22, "text": " all the blue dots and take the average location and move the clusters to the new locations."}, {"start": 325.22, "end": 330.20000000000005, "text": " And it turns out that if you were to keep on repeating these two steps, that is, look"}, {"start": 330.20000000000005, "end": 334.76000000000005, "text": " at each point and assign it to the nearest cluster centroid, and then also move each"}, {"start": 334.76000000000005, "end": 340.32000000000005, "text": " cluster centroid to the mean of all the points with the same color, if you keep on doing"}, {"start": 340.32000000000005, "end": 345.48, "text": " those two steps, you find that there are no more changes to the colors of the points or"}, {"start": 345.48, "end": 348.48, "text": " to the locations of the cluster centroids."}, {"start": 348.48, "end": 354.02000000000004, "text": " And so this means that at this point, the k-means clustering algorithm has converged"}, {"start": 354.02, "end": 360.44, "text": " because applying those two steps over and over results in no further changes to either"}, {"start": 360.44, "end": 365.68, "text": " the assignment of points to the centroids or to the location of the cluster centroids."}, {"start": 365.68, "end": 369.47999999999996, "text": " In this example, it looks like k-means has done a pretty good job."}, {"start": 369.47999999999996, "end": 376.84, "text": " It has found that these points up here correspond to one cluster and these points down here"}, {"start": 376.84, "end": 380.02, "text": " correspond to a second cluster."}, {"start": 380.02, "end": 384.44, "text": " So now you've seen an illustration of how k-means works."}, {"start": 384.44, "end": 389.64, "text": " The two key steps are assign every point to cluster centroid, depending on what cluster"}, {"start": 389.64, "end": 396.56, "text": " centroid is nearest to, and second, move each cluster centroid to the average or the mean"}, {"start": 396.56, "end": 399.91999999999996, "text": " of all the points that were assigned to it."}, {"start": 399.91999999999996, "end": 404.78, "text": " In the next video, we'll look at how to formalize this and write out the algorithm that does"}, {"start": 404.78, "end": 407.28, "text": " what you just saw in this video."}, {"start": 407.28, "end": 410.4, "text": " Let's go on to the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=wb5tJ4Hw27A
8.4 Clustering | K-means algorithm-- [Machine Learning | Andrew Ng]
None
In the last video you saw an illustration of the k-means algorithm running. Now let's write out the k-means algorithm in detail so that you'd be able to implement it for yourself. Here's the k-means algorithm. The first step is to randomly initialize k cluster centroid mu1, mu2 through mu k. In the example that we had, this corresponded to when we randomly chose a location for the red cross and for the blue cross corresponding to the two cluster centroids. In our example k was equal to 2, so if the red cross was cluster centroid 1 and the blue cross was cluster centroid 2, these are just two indices to denote the first and the second cluster, then the red cross would be the location of mu1 and the blue cross would be the location of mu2. And just to be clear, mu1 and mu2 are vectors which have the same dimension as your training examples x1 through say x30 in our example. So all of these are lists of two numbers or they are two dimensional vectors or whatever dimension the training data had. So we had n equals two features for each of the training examples, then mu1 and mu2 will also be two dimensional vectors, meaning vectors with two numbers in them. Having randomly initialized the k cluster centroids, k-means will then repeatedly carry out the two steps that you saw in the last video. The first step is to assign points to cluster centroids, meaning color each of the points either red or blue corresponding to assigning them to cluster centroids 1 or 2 when k is equal to 2. Rinse it out in math, that means that we're going to for i equals 1 through m, for our m training examples, we're going to set ci to be equal to the index, which can be anything from 1 to k of the cluster centroid closest to the training example xi. Mathematically, you can write this out as computing the distance between xi and mu k. And in math, the distance between two points is often written like this. It is also called the L2 norm, and what you want to find is the value of k that minimizes this because that corresponds to the cluster centroid mu k that is closest to the training example xi. And then the value of k that minimizes this is what gets set to ci. When you implement this algorithm, you find that it's actually a little bit more convenient to minimize the squared distance because the cluster centroid with the smallest square distance should be the same as the cluster centroid with the smallest distance. And when you look at this week's optional labs and practice labs, you see how to implement this in code for yourself. As a concrete example, this point up here is closer to the red or to cluster centroid 1. So if this was training example x1, we would set c1 to be equal to 1. Whereas this point over here, if this was the 12 training example, this is closer to the second cluster centroid, the blue one. And so we will set this, the corresponding cluster assignment variable to 2 because it's closer to cluster centroid 2. So that's the first step of the k-means algorithm, assign points to cluster centroids. The second step is to move the cluster centroids. And what that means is for lowercase k equals 1 to capital K, the number of clusters, we're going to set the cluster centroids location to be updated to be the average or the mean of the points assigned to that cluster k. Concretely, what that means is we'll look at all of these red points, say, and look at their position on the horizontal axis, look at the value of the first feature x1 and average that out and compute the average value on the vertical axis as well. And after computing those two averages, you find that the mean is here, which is why mu 1, that is the location of the red cluster centroid, gets updated as follows. Similarly, we would look at all of the points that were colored blue, that is, with ci equals 2 and compute the average of the value on the horizontal axis, the average of their feature x1, compute the average of the feature x2, and those two averages give you the new location of the blue cluster centroid, which therefore moves over here. Just to write those out in math, if the first cluster had assigned to it training examples 1, 5, 6, and 10, just as an example, then what that means is you would compute the average this way, notice that x1, x5, x6, and x10 are training examples, four training examples, so we divide by four, and this gives you the new location of mu 1, the new cluster centroid for cluster 1. To be clear, each of these x values are vectors with two numbers in them or n numbers in them if you have n features, and so mu will also have two numbers in it or n numbers in it if you have n features instead of two. Now there is one corner case to this algorithm, which is what happens if a cluster has zero training examples assigned to it? In that case, the second step mu k would be trying to compute the average of zero points, and that's not well defined. If that ever happens, the most common thing to do is to just eliminate that cluster so you end up with k minus 1 clusters, or if you really, really need k clusters, an alternative would be to just randomly reinitialize that cluster centroid and hope that it gets assigned at least some points next time around, but it's actually more common when running k means to just eliminate a cluster if no points are assigned to it. Even though I've mainly been describing k means for clusters that are well separated, so clusters that may look like this, where if you ask it to find three clusters, hopefully it will find these three distinct clusters, it turns out that k means is also frequently applied to datasets where the clusters are not that well separated. For example, if you are a designer and manufacturer of cool t-shirts and you want to decide if how do I size my small, medium, and large t-shirts? How small should a small be? How large should a large be? And what should a medium-sized t-shirt really be? And so one thing you might do is collect data of people likely to buy your t-shirts based on their heights and weights, and you find that the height and weight of people tend to vary continuously on the spectrum without very clear clusters. Nonetheless, if you were to run k means with, say, three cluster centroids, you might find that k means would group these points into one cluster, these points into a second cluster, and these points into a third cluster. And so if you're trying to decide exactly how to size your small, medium, and large t-shirts, you might then choose the dimensions of your small t-shirt to try to make it fit these individuals well, the medium-sized t-shirt to try to fit these individuals well, and the large t-shirt to try to fit these individuals well, with potentially the cluster centroids giving you a sense of what is the most representative height and weight that you would want your three t-shirt sizes to fit. So this is an example of k means working just fine and giving a useful result, even if the data does not lie in well-separated groups or clusters. So that was the k means clustering algorithm. Assign cluster centroids randomly and then repeatedly assign points to cluster centroids and move the cluster centroids. But what is this algorithm really doing? And do we think this algorithm will converge or might it just keep on running forever and never converge? To gain deeper intuition about the k means algorithm and also see why we might hope this algorithm does converge, let's go on to the next video where you see that k means is actually trying to optimize a specific cost function. Let's take a look at that in the next video.
[{"start": 0.0, "end": 7.32, "text": " In the last video you saw an illustration of the k-means algorithm running."}, {"start": 7.32, "end": 11.64, "text": " Now let's write out the k-means algorithm in detail so that you'd be able to implement"}, {"start": 11.64, "end": 12.64, "text": " it for yourself."}, {"start": 12.64, "end": 15.32, "text": " Here's the k-means algorithm."}, {"start": 15.32, "end": 24.5, "text": " The first step is to randomly initialize k cluster centroid mu1, mu2 through mu k."}, {"start": 24.5, "end": 31.12, "text": " In the example that we had, this corresponded to when we randomly chose a location for the"}, {"start": 31.12, "end": 39.36, "text": " red cross and for the blue cross corresponding to the two cluster centroids."}, {"start": 39.36, "end": 46.44, "text": " In our example k was equal to 2, so if the red cross was cluster centroid 1 and the blue"}, {"start": 46.44, "end": 53.08, "text": " cross was cluster centroid 2, these are just two indices to denote the first and the second"}, {"start": 53.08, "end": 62.519999999999996, "text": " cluster, then the red cross would be the location of mu1 and the blue cross would be the location"}, {"start": 62.519999999999996, "end": 64.92, "text": " of mu2."}, {"start": 64.92, "end": 72.64, "text": " And just to be clear, mu1 and mu2 are vectors which have the same dimension as your training"}, {"start": 72.64, "end": 77.44, "text": " examples x1 through say x30 in our example."}, {"start": 77.44, "end": 83.7, "text": " So all of these are lists of two numbers or they are two dimensional vectors or whatever"}, {"start": 83.7, "end": 86.92, "text": " dimension the training data had."}, {"start": 86.92, "end": 93.6, "text": " So we had n equals two features for each of the training examples, then mu1 and mu2 will"}, {"start": 93.6, "end": 99.2, "text": " also be two dimensional vectors, meaning vectors with two numbers in them."}, {"start": 99.2, "end": 105.86, "text": " Having randomly initialized the k cluster centroids, k-means will then repeatedly carry"}, {"start": 105.86, "end": 109.34, "text": " out the two steps that you saw in the last video."}, {"start": 109.34, "end": 114.76, "text": " The first step is to assign points to cluster centroids, meaning color each of the points"}, {"start": 114.76, "end": 124.1, "text": " either red or blue corresponding to assigning them to cluster centroids 1 or 2 when k is"}, {"start": 124.1, "end": 126.72, "text": " equal to 2."}, {"start": 126.72, "end": 132.96, "text": " Rinse it out in math, that means that we're going to for i equals 1 through m, for our"}, {"start": 132.96, "end": 139.28, "text": " m training examples, we're going to set ci to be equal to the index, which can be anything"}, {"start": 139.28, "end": 145.36, "text": " from 1 to k of the cluster centroid closest to the training example xi."}, {"start": 145.36, "end": 152.88, "text": " Mathematically, you can write this out as computing the distance between xi and mu k."}, {"start": 152.88, "end": 158.60000000000002, "text": " And in math, the distance between two points is often written like this."}, {"start": 158.6, "end": 167.0, "text": " It is also called the L2 norm, and what you want to find is the value of k that minimizes"}, {"start": 167.0, "end": 176.07999999999998, "text": " this because that corresponds to the cluster centroid mu k that is closest to the training"}, {"start": 176.07999999999998, "end": 179.28, "text": " example xi."}, {"start": 179.28, "end": 187.72, "text": " And then the value of k that minimizes this is what gets set to ci."}, {"start": 187.72, "end": 192.24, "text": " When you implement this algorithm, you find that it's actually a little bit more convenient"}, {"start": 192.24, "end": 198.68, "text": " to minimize the squared distance because the cluster centroid with the smallest square"}, {"start": 198.68, "end": 205.92, "text": " distance should be the same as the cluster centroid with the smallest distance."}, {"start": 205.92, "end": 211.4, "text": " And when you look at this week's optional labs and practice labs, you see how to implement"}, {"start": 211.4, "end": 214.07999999999998, "text": " this in code for yourself."}, {"start": 214.08, "end": 220.76000000000002, "text": " As a concrete example, this point up here is closer to the red or to cluster centroid"}, {"start": 220.76000000000002, "end": 221.76000000000002, "text": " 1."}, {"start": 221.76000000000002, "end": 229.72000000000003, "text": " So if this was training example x1, we would set c1 to be equal to 1."}, {"start": 229.72000000000003, "end": 234.44, "text": " Whereas this point over here, if this was the 12 training example, this is closer to"}, {"start": 234.44, "end": 237.24, "text": " the second cluster centroid, the blue one."}, {"start": 237.24, "end": 243.84, "text": " And so we will set this, the corresponding cluster assignment variable to 2 because it's"}, {"start": 243.84, "end": 246.52, "text": " closer to cluster centroid 2."}, {"start": 246.52, "end": 253.0, "text": " So that's the first step of the k-means algorithm, assign points to cluster centroids."}, {"start": 253.0, "end": 257.64, "text": " The second step is to move the cluster centroids."}, {"start": 257.64, "end": 264.72, "text": " And what that means is for lowercase k equals 1 to capital K, the number of clusters, we're"}, {"start": 264.72, "end": 272.74, "text": " going to set the cluster centroids location to be updated to be the average or the mean"}, {"start": 272.74, "end": 275.44, "text": " of the points assigned to that cluster k."}, {"start": 275.44, "end": 280.68, "text": " Concretely, what that means is we'll look at all of these red points, say, and look"}, {"start": 280.68, "end": 286.40000000000003, "text": " at their position on the horizontal axis, look at the value of the first feature x1"}, {"start": 286.40000000000003, "end": 292.66, "text": " and average that out and compute the average value on the vertical axis as well."}, {"start": 292.66, "end": 299.44, "text": " And after computing those two averages, you find that the mean is here, which is why mu"}, {"start": 299.44, "end": 306.04, "text": " 1, that is the location of the red cluster centroid, gets updated as follows."}, {"start": 306.04, "end": 313.76, "text": " Similarly, we would look at all of the points that were colored blue, that is, with ci equals"}, {"start": 313.76, "end": 320.84, "text": " 2 and compute the average of the value on the horizontal axis, the average of their"}, {"start": 320.84, "end": 327.36, "text": " feature x1, compute the average of the feature x2, and those two averages give you the new"}, {"start": 327.36, "end": 333.48, "text": " location of the blue cluster centroid, which therefore moves over here."}, {"start": 333.48, "end": 341.32, "text": " Just to write those out in math, if the first cluster had assigned to it training examples"}, {"start": 341.32, "end": 353.28000000000003, "text": " 1, 5, 6, and 10, just as an example, then what that means is you would compute the average"}, {"start": 353.28, "end": 362.52, "text": " this way, notice that x1, x5, x6, and x10 are training examples, four training examples,"}, {"start": 362.52, "end": 369.55999999999995, "text": " so we divide by four, and this gives you the new location of mu 1, the new cluster centroid"}, {"start": 369.55999999999995, "end": 372.15999999999997, "text": " for cluster 1."}, {"start": 372.15999999999997, "end": 379.88, "text": " To be clear, each of these x values are vectors with two numbers in them or n numbers in them"}, {"start": 379.88, "end": 386.48, "text": " if you have n features, and so mu will also have two numbers in it or n numbers in it"}, {"start": 386.48, "end": 389.96, "text": " if you have n features instead of two."}, {"start": 389.96, "end": 397.04, "text": " Now there is one corner case to this algorithm, which is what happens if a cluster has zero"}, {"start": 397.04, "end": 399.0, "text": " training examples assigned to it?"}, {"start": 399.0, "end": 405.24, "text": " In that case, the second step mu k would be trying to compute the average of zero points,"}, {"start": 405.24, "end": 407.46, "text": " and that's not well defined."}, {"start": 407.46, "end": 412.4, "text": " If that ever happens, the most common thing to do is to just eliminate that cluster so"}, {"start": 412.4, "end": 419.28, "text": " you end up with k minus 1 clusters, or if you really, really need k clusters, an alternative"}, {"start": 419.28, "end": 424.84, "text": " would be to just randomly reinitialize that cluster centroid and hope that it gets assigned"}, {"start": 424.84, "end": 429.67999999999995, "text": " at least some points next time around, but it's actually more common when running k means"}, {"start": 429.67999999999995, "end": 434.74, "text": " to just eliminate a cluster if no points are assigned to it."}, {"start": 434.74, "end": 439.52, "text": " Even though I've mainly been describing k means for clusters that are well separated,"}, {"start": 439.52, "end": 446.04, "text": " so clusters that may look like this, where if you ask it to find three clusters, hopefully"}, {"start": 446.04, "end": 451.88, "text": " it will find these three distinct clusters, it turns out that k means is also frequently"}, {"start": 451.88, "end": 456.88, "text": " applied to datasets where the clusters are not that well separated."}, {"start": 456.88, "end": 464.72, "text": " For example, if you are a designer and manufacturer of cool t-shirts and you want to decide if"}, {"start": 464.72, "end": 470.16, "text": " how do I size my small, medium, and large t-shirts?"}, {"start": 470.16, "end": 471.52000000000004, "text": " How small should a small be?"}, {"start": 471.52000000000004, "end": 472.76000000000005, "text": " How large should a large be?"}, {"start": 472.76000000000005, "end": 476.12, "text": " And what should a medium-sized t-shirt really be?"}, {"start": 476.12, "end": 481.28000000000003, "text": " And so one thing you might do is collect data of people likely to buy your t-shirts based"}, {"start": 481.28000000000003, "end": 487.52000000000004, "text": " on their heights and weights, and you find that the height and weight of people tend"}, {"start": 487.52000000000004, "end": 492.44000000000005, "text": " to vary continuously on the spectrum without very clear clusters."}, {"start": 492.44, "end": 500.2, "text": " Nonetheless, if you were to run k means with, say, three cluster centroids, you might find"}, {"start": 500.2, "end": 506.4, "text": " that k means would group these points into one cluster, these points into a second cluster,"}, {"start": 506.4, "end": 509.72, "text": " and these points into a third cluster."}, {"start": 509.72, "end": 515.68, "text": " And so if you're trying to decide exactly how to size your small, medium, and large"}, {"start": 515.68, "end": 522.36, "text": " t-shirts, you might then choose the dimensions of your small t-shirt to try to make it fit"}, {"start": 522.36, "end": 528.36, "text": " these individuals well, the medium-sized t-shirt to try to fit these individuals well, and"}, {"start": 528.36, "end": 535.36, "text": " the large t-shirt to try to fit these individuals well, with potentially the cluster centroids"}, {"start": 535.36, "end": 540.2, "text": " giving you a sense of what is the most representative height and weight that you would want your"}, {"start": 540.2, "end": 543.72, "text": " three t-shirt sizes to fit."}, {"start": 543.72, "end": 550.2, "text": " So this is an example of k means working just fine and giving a useful result, even if the"}, {"start": 550.2, "end": 555.08, "text": " data does not lie in well-separated groups or clusters."}, {"start": 555.08, "end": 558.6800000000001, "text": " So that was the k means clustering algorithm."}, {"start": 558.6800000000001, "end": 563.32, "text": " Assign cluster centroids randomly and then repeatedly assign points to cluster centroids"}, {"start": 563.32, "end": 566.0400000000001, "text": " and move the cluster centroids."}, {"start": 566.0400000000001, "end": 568.36, "text": " But what is this algorithm really doing?"}, {"start": 568.36, "end": 572.08, "text": " And do we think this algorithm will converge or might it just keep on running forever and"}, {"start": 572.08, "end": 573.44, "text": " never converge?"}, {"start": 573.44, "end": 578.76, "text": " To gain deeper intuition about the k means algorithm and also see why we might hope this"}, {"start": 578.76, "end": 584.36, "text": " algorithm does converge, let's go on to the next video where you see that k means is actually"}, {"start": 584.36, "end": 587.84, "text": " trying to optimize a specific cost function."}, {"start": 587.84, "end": 610.0, "text": " Let's take a look at that in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=SSoA7w8HvK8
8.5 Clustering | Optimization objective-- [Machine Learning | Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
In the earlier courses, courses one and two of the specialization, you saw a lot of supervised learning algorithms as ticket training set, posing a cost function, and then using gradient descent or some other algorithms to optimize that cost function. It turns out that the Q means algorithm that you saw in the last video is also optimizing a specific cost function. Although the optimization algorithm that it uses to optimize that is not gradient descent, it's actually the algorithm that you already saw in the last video. Let's take a look at what all this means. Let's take a look at what is the cost function for K means. To get started, as a reminder, this is the notation we've been using where Ci is the index of the cluster, so Ci is some number from 1 through K, of the index of the cluster to which training example Xi is currently assigned, and mu K is the location of cluster centroid K. Let me introduce one more piece of notation, which is when lowercase k equals Ci, so mu subscript Ci is the cluster centroid of the cluster to which example Xi has been assigned. So for example, if I were to look at some training example, say training example 10, and I were to ask, what's the location of the cluster centroid to which the 10th training example has been assigned? Well, I would then look up C10. This would give me a number from 1 to K that tells me was example 10 assigned to the red or the blue or some other cluster centroid, and then mu subscript C10 is the location of the cluster centroid to which X10 has been assigned. So armed with this notation, let me now write out the cost function that K means turns out to be minimizing. The cost function J, which is a function of C1 through CM, these are all the assignments of points to cluster centroids, as well as mu 1 through mu K. These are the locations of all the cluster centroids as defined as this expression on the right. It is the average, so 1 over M of sum from i equals 1 to M of the squared distance between every training example Xi as i goes from 1 through M. It is the square distance between Xi and mu subscript Ci. So this quantity up here. In other words, the cost function for K means is the average squared distance between every training example Xi and the location of the cluster centroid to which the training example Xi has been assigned. So for this example up here, we would be measuring the distance between X10 and mu subscript C10, the cluster centroid to which X10 has been assigned, and taking the square of that distance. And that would be one of the terms over here that we're averaging over. And it turns out that what the K means algorithm is doing is trying to find assignments of points to cluster centroids, as well as find locations of cluster centroids that minimizes the squared distance. Visually, here's what you saw part way into the run of K means in an earlier video. And at this step, the cost function, if you were to compute it, would be to look at every one of the blue points and measure these distances and compute the square. And then also similarly look at every one of the red points and compute these distances and compute the square. And then the average of the squares of all of these differences for the red and the blue points is the value of the cost function J at this particular configuration of the parameters for K means. And what it will do on every step is try to update the cluster assignments C1 through C30 in this example, or update the positions of the cluster centroids mu1 and mu2 in order to keep on reducing this cost function J. Oh, and by the way, this cost function J also has a name in the literature. It's called the distortion function. I don't know that this is a great name, but if you hear someone talk about the K means algorithm and the distortion or the distortion cost function, that's just what this formula J is computing. Let's now take a deeper look at the algorithm and why the algorithm is trying to minimize this cost function J or why it's trying to minimize the distortion. Here on top of copied over the cost function from the previous slide. It turns out that the first part of K means where you assign points to cluster centroids, that turns out to be trying to update C1 through CM to try to minimize the cost function J as much as possible while holding mu1 through muK fix. And the second step in contrast, where you move the cluster centroid, it turns out that that is trying to leave C1 through CM fix, but to update mu1 through muK to try to minimize the cost function or the distortion as much as possible. Let's take a look at why this is the case. During the first step, if you want to choose the values of C1 through CM or save a particular value of Ci to try to minimize this, well, what would make Xi minus mu Ci as small as possible? This is the distance or the square distance between a training example Xi and the location of the cluster centroid to which it's been assigned. So if you want to minimize this distance or the square distance, what you should do is assign Xi to the closest cluster centroid. So to take a simplified example, if you have two cluster centroids, say cluster centroids one and two, and just a single training example Xi, if you were to assign it to cluster centroid one, this square distance here would be this large distance, well, squared. And if you were to assign it to cluster centroid two, then this squared distance would be the square of this much smaller distance. So if you want to minimize this term, you would take Xi and assign it to the closer cluster centroid, which is exactly what the algorithm is doing up here. So that's why the step where you assign points to cluster centroids is choosing the values for Ci to try to minimize J without changing mu1 through muK for now, but just choosing the values of C1 through Cm to try to make these terms as small as possible. How about the second step of the K-means algorithm? That is to move the cluster centroids. It turns out that choosing muK to be average at the mean of the points assigned is the choice of these terms mu that will minimize this expression. To take a simplified example, say you have a cluster with just two points assigned to it, shown as follows. And so with the cluster centroid here, the average of the squared distances would be a distance of one here squared plus this distance here, which is nine squared. And then you take the average of these two numbers. And so that turns out to be one half of one plus 81, which turns out to be 41. But if you were to take the average of these two points, so one plus 11 over two, that's equal to six. And if you were to move the cluster centroid over here to the middle, then the average of these two squared distances turns out to be a distance of five and five here. So you end up with one half of five squared plus five squared, which is equal to 25. And this is a much smaller average squared distance than 41. And in fact, you can play around with the location of this cluster centroid and maybe convince yourself that taking this mean location, this average location in the middle of these two training samples, that is really the value that minimizes the squared distance. So the fact that the k-means algorithm is optimizing a cost function j means that it is guaranteed to converge. That is, on every single iteration, the distortion cost function should go down or stay the same. But if it ever fails to go down or stay the same in the worst case, if it ever goes up, that means there's a bug in the code. It should never go up because every single step of k-means is setting the value c i and mu k to try to reduce the cost function. Also if the cost function ever stops going down, that also gives you one way to test if k-means has converged. Once there's a single iteration where it stays the same, that usually means k-means has converged and you should just stop running the algorithm even further. Or in some rare cases, you will run k-means for a long time and the cost function or the distortion is just going down very, very slowly. And that's a bit like gradient descent where maybe running it even longer might help a bit. But if the rate at which the cost function is going down has become very, very slow, you might also just say, this is good enough, I'm just going to say it's close enough to convergence and not spend even more compute cycles running the algorithm for even longer. So these are some of the ways that computing the cost function is helpful. It helps you figure out if the algorithm has converged. It turns out that there's one other very useful way to take advantage of the cost function, which is to use multiple different random initializations of the cluster centroids. It turns out if you do this, you can often find much better clusters using k-means. Let's take a look at the next video of how to do that.
[{"start": 0.0, "end": 7.84, "text": " In the earlier courses, courses one and two of the specialization, you saw a lot of supervised"}, {"start": 7.84, "end": 14.52, "text": " learning algorithms as ticket training set, posing a cost function, and then using gradient"}, {"start": 14.52, "end": 18.56, "text": " descent or some other algorithms to optimize that cost function."}, {"start": 18.56, "end": 24.52, "text": " It turns out that the Q means algorithm that you saw in the last video is also optimizing"}, {"start": 24.52, "end": 26.68, "text": " a specific cost function."}, {"start": 26.68, "end": 31.8, "text": " Although the optimization algorithm that it uses to optimize that is not gradient descent,"}, {"start": 31.8, "end": 36.04, "text": " it's actually the algorithm that you already saw in the last video."}, {"start": 36.04, "end": 38.519999999999996, "text": " Let's take a look at what all this means."}, {"start": 38.519999999999996, "end": 43.0, "text": " Let's take a look at what is the cost function for K means."}, {"start": 43.0, "end": 48.96, "text": " To get started, as a reminder, this is the notation we've been using where Ci is the"}, {"start": 48.96, "end": 57.72, "text": " index of the cluster, so Ci is some number from 1 through K, of the index of the cluster"}, {"start": 57.72, "end": 64.72, "text": " to which training example Xi is currently assigned, and mu K is the location of cluster"}, {"start": 64.72, "end": 66.92, "text": " centroid K."}, {"start": 66.92, "end": 76.08, "text": " Let me introduce one more piece of notation, which is when lowercase k equals Ci, so mu"}, {"start": 76.08, "end": 84.84, "text": " subscript Ci is the cluster centroid of the cluster to which example Xi has been assigned."}, {"start": 84.84, "end": 91.28, "text": " So for example, if I were to look at some training example, say training example 10,"}, {"start": 91.28, "end": 97.16, "text": " and I were to ask, what's the location of the cluster centroid to which the 10th training"}, {"start": 97.16, "end": 98.75999999999999, "text": " example has been assigned?"}, {"start": 98.75999999999999, "end": 102.03999999999999, "text": " Well, I would then look up C10."}, {"start": 102.04, "end": 107.84, "text": " This would give me a number from 1 to K that tells me was example 10 assigned to the red"}, {"start": 107.84, "end": 117.0, "text": " or the blue or some other cluster centroid, and then mu subscript C10 is the location"}, {"start": 117.0, "end": 120.94000000000001, "text": " of the cluster centroid to which X10 has been assigned."}, {"start": 120.94000000000001, "end": 129.20000000000002, "text": " So armed with this notation, let me now write out the cost function that K means turns out"}, {"start": 129.20000000000002, "end": 131.04000000000002, "text": " to be minimizing."}, {"start": 131.04, "end": 140.6, "text": " The cost function J, which is a function of C1 through CM, these are all the assignments"}, {"start": 140.6, "end": 148.26, "text": " of points to cluster centroids, as well as mu 1 through mu K. These are the locations"}, {"start": 148.26, "end": 155.23999999999998, "text": " of all the cluster centroids as defined as this expression on the right."}, {"start": 155.24, "end": 165.20000000000002, "text": " It is the average, so 1 over M of sum from i equals 1 to M of the squared distance between"}, {"start": 165.20000000000002, "end": 171.64000000000001, "text": " every training example Xi as i goes from 1 through M. It is the square distance between"}, {"start": 171.64000000000001, "end": 175.8, "text": " Xi and mu subscript Ci."}, {"start": 175.8, "end": 178.72, "text": " So this quantity up here."}, {"start": 178.72, "end": 184.52, "text": " In other words, the cost function for K means is the average squared distance between every"}, {"start": 184.52, "end": 191.72, "text": " training example Xi and the location of the cluster centroid to which the training example"}, {"start": 191.72, "end": 193.24, "text": " Xi has been assigned."}, {"start": 193.24, "end": 200.08, "text": " So for this example up here, we would be measuring the distance between X10 and mu subscript"}, {"start": 200.08, "end": 205.48000000000002, "text": " C10, the cluster centroid to which X10 has been assigned, and taking the square of that"}, {"start": 205.48000000000002, "end": 206.48000000000002, "text": " distance."}, {"start": 206.48000000000002, "end": 210.36, "text": " And that would be one of the terms over here that we're averaging over."}, {"start": 210.36, "end": 217.68, "text": " And it turns out that what the K means algorithm is doing is trying to find assignments of"}, {"start": 217.68, "end": 223.54000000000002, "text": " points to cluster centroids, as well as find locations of cluster centroids that minimizes"}, {"start": 223.54000000000002, "end": 225.20000000000002, "text": " the squared distance."}, {"start": 225.20000000000002, "end": 233.48000000000002, "text": " Visually, here's what you saw part way into the run of K means in an earlier video."}, {"start": 233.48000000000002, "end": 238.60000000000002, "text": " And at this step, the cost function, if you were to compute it, would be to look at every"}, {"start": 238.6, "end": 243.6, "text": " one of the blue points and measure these distances and compute the square."}, {"start": 243.6, "end": 250.28, "text": " And then also similarly look at every one of the red points and compute these distances"}, {"start": 250.28, "end": 252.14, "text": " and compute the square."}, {"start": 252.14, "end": 256.8, "text": " And then the average of the squares of all of these differences for the red and the blue"}, {"start": 256.8, "end": 267.48, "text": " points is the value of the cost function J at this particular configuration of the parameters"}, {"start": 267.48, "end": 270.64000000000004, "text": " for K means."}, {"start": 270.64000000000004, "end": 275.6, "text": " And what it will do on every step is try to update the cluster assignments C1 through"}, {"start": 275.6, "end": 282.20000000000005, "text": " C30 in this example, or update the positions of the cluster centroids mu1 and mu2 in order"}, {"start": 282.20000000000005, "end": 284.76, "text": " to keep on reducing this cost function J."}, {"start": 284.76, "end": 290.72, "text": " Oh, and by the way, this cost function J also has a name in the literature."}, {"start": 290.72, "end": 294.48, "text": " It's called the distortion function."}, {"start": 294.48, "end": 298.54, "text": " I don't know that this is a great name, but if you hear someone talk about the K means"}, {"start": 298.54, "end": 304.32, "text": " algorithm and the distortion or the distortion cost function, that's just what this formula"}, {"start": 304.32, "end": 307.20000000000005, "text": " J is computing."}, {"start": 307.20000000000005, "end": 311.96000000000004, "text": " Let's now take a deeper look at the algorithm and why the algorithm is trying to minimize"}, {"start": 311.96000000000004, "end": 316.56, "text": " this cost function J or why it's trying to minimize the distortion."}, {"start": 316.56, "end": 321.94, "text": " Here on top of copied over the cost function from the previous slide."}, {"start": 321.94, "end": 328.28, "text": " It turns out that the first part of K means where you assign points to cluster centroids,"}, {"start": 328.28, "end": 335.64, "text": " that turns out to be trying to update C1 through CM to try to minimize the cost function J"}, {"start": 335.64, "end": 341.2, "text": " as much as possible while holding mu1 through muK fix."}, {"start": 341.2, "end": 346.15999999999997, "text": " And the second step in contrast, where you move the cluster centroid, it turns out that"}, {"start": 346.16, "end": 354.8, "text": " that is trying to leave C1 through CM fix, but to update mu1 through muK to try to minimize"}, {"start": 354.8, "end": 358.48, "text": " the cost function or the distortion as much as possible."}, {"start": 358.48, "end": 360.64000000000004, "text": " Let's take a look at why this is the case."}, {"start": 360.64000000000004, "end": 366.88, "text": " During the first step, if you want to choose the values of C1 through CM or save a particular"}, {"start": 366.88, "end": 378.84, "text": " value of Ci to try to minimize this, well, what would make Xi minus mu Ci as small as"}, {"start": 378.84, "end": 380.26, "text": " possible?"}, {"start": 380.26, "end": 387.32, "text": " This is the distance or the square distance between a training example Xi and the location"}, {"start": 387.32, "end": 391.4, "text": " of the cluster centroid to which it's been assigned."}, {"start": 391.4, "end": 396.7, "text": " So if you want to minimize this distance or the square distance, what you should do is"}, {"start": 396.7, "end": 402.52, "text": " assign Xi to the closest cluster centroid."}, {"start": 402.52, "end": 409.08, "text": " So to take a simplified example, if you have two cluster centroids, say cluster centroids"}, {"start": 409.08, "end": 415.96, "text": " one and two, and just a single training example Xi, if you were to assign it to cluster centroid"}, {"start": 415.96, "end": 424.76, "text": " one, this square distance here would be this large distance, well, squared."}, {"start": 424.76, "end": 429.36, "text": " And if you were to assign it to cluster centroid two, then this squared distance would be the"}, {"start": 429.36, "end": 432.12, "text": " square of this much smaller distance."}, {"start": 432.12, "end": 436.48, "text": " So if you want to minimize this term, you would take Xi and assign it to the closer"}, {"start": 436.48, "end": 442.02, "text": " cluster centroid, which is exactly what the algorithm is doing up here."}, {"start": 442.02, "end": 447.15999999999997, "text": " So that's why the step where you assign points to cluster centroids is choosing the values"}, {"start": 447.15999999999997, "end": 454.28, "text": " for Ci to try to minimize J without changing mu1 through muK for now, but just choosing"}, {"start": 454.28, "end": 460.71999999999997, "text": " the values of C1 through Cm to try to make these terms as small as possible."}, {"start": 460.71999999999997, "end": 463.64, "text": " How about the second step of the K-means algorithm?"}, {"start": 463.64, "end": 466.65999999999997, "text": " That is to move the cluster centroids."}, {"start": 466.65999999999997, "end": 474.2, "text": " It turns out that choosing muK to be average at the mean of the points assigned is the"}, {"start": 474.2, "end": 480.73999999999995, "text": " choice of these terms mu that will minimize this expression."}, {"start": 480.74, "end": 487.04, "text": " To take a simplified example, say you have a cluster with just two points assigned to"}, {"start": 487.04, "end": 489.44, "text": " it, shown as follows."}, {"start": 489.44, "end": 495.92, "text": " And so with the cluster centroid here, the average of the squared distances would be"}, {"start": 495.92, "end": 503.92, "text": " a distance of one here squared plus this distance here, which is nine squared."}, {"start": 503.92, "end": 506.72, "text": " And then you take the average of these two numbers."}, {"start": 506.72, "end": 515.94, "text": " And so that turns out to be one half of one plus 81, which turns out to be 41."}, {"start": 515.94, "end": 522.96, "text": " But if you were to take the average of these two points, so one plus 11 over two, that's"}, {"start": 522.96, "end": 524.4, "text": " equal to six."}, {"start": 524.4, "end": 529.48, "text": " And if you were to move the cluster centroid over here to the middle, then the average"}, {"start": 529.48, "end": 537.4, "text": " of these two squared distances turns out to be a distance of five and five here."}, {"start": 537.4, "end": 545.3000000000001, "text": " So you end up with one half of five squared plus five squared, which is equal to 25."}, {"start": 545.3000000000001, "end": 549.5600000000001, "text": " And this is a much smaller average squared distance than 41."}, {"start": 549.5600000000001, "end": 554.24, "text": " And in fact, you can play around with the location of this cluster centroid and maybe"}, {"start": 554.24, "end": 559.72, "text": " convince yourself that taking this mean location, this average location in the middle of these"}, {"start": 559.72, "end": 565.4, "text": " two training samples, that is really the value that minimizes the squared distance."}, {"start": 565.4, "end": 571.84, "text": " So the fact that the k-means algorithm is optimizing a cost function j means that it"}, {"start": 571.84, "end": 573.84, "text": " is guaranteed to converge."}, {"start": 573.84, "end": 580.34, "text": " That is, on every single iteration, the distortion cost function should go down or stay the same."}, {"start": 580.34, "end": 585.94, "text": " But if it ever fails to go down or stay the same in the worst case, if it ever goes up,"}, {"start": 585.94, "end": 587.12, "text": " that means there's a bug in the code."}, {"start": 587.12, "end": 594.36, "text": " It should never go up because every single step of k-means is setting the value c i and"}, {"start": 594.36, "end": 599.08, "text": " mu k to try to reduce the cost function."}, {"start": 599.08, "end": 604.36, "text": " Also if the cost function ever stops going down, that also gives you one way to test"}, {"start": 604.36, "end": 606.5600000000001, "text": " if k-means has converged."}, {"start": 606.56, "end": 612.2399999999999, "text": " Once there's a single iteration where it stays the same, that usually means k-means has converged"}, {"start": 612.2399999999999, "end": 616.3199999999999, "text": " and you should just stop running the algorithm even further."}, {"start": 616.3199999999999, "end": 621.54, "text": " Or in some rare cases, you will run k-means for a long time and the cost function or the"}, {"start": 621.54, "end": 625.1199999999999, "text": " distortion is just going down very, very slowly."}, {"start": 625.1199999999999, "end": 628.88, "text": " And that's a bit like gradient descent where maybe running it even longer might help a"}, {"start": 628.88, "end": 629.88, "text": " bit."}, {"start": 629.88, "end": 634.5799999999999, "text": " But if the rate at which the cost function is going down has become very, very slow,"}, {"start": 634.58, "end": 638.2, "text": " you might also just say, this is good enough, I'm just going to say it's close enough to"}, {"start": 638.2, "end": 644.76, "text": " convergence and not spend even more compute cycles running the algorithm for even longer."}, {"start": 644.76, "end": 648.6, "text": " So these are some of the ways that computing the cost function is helpful."}, {"start": 648.6, "end": 652.48, "text": " It helps you figure out if the algorithm has converged."}, {"start": 652.48, "end": 659.2, "text": " It turns out that there's one other very useful way to take advantage of the cost function,"}, {"start": 659.2, "end": 665.0400000000001, "text": " which is to use multiple different random initializations of the cluster centroids."}, {"start": 665.0400000000001, "end": 669.88, "text": " It turns out if you do this, you can often find much better clusters using k-means."}, {"start": 669.88, "end": 689.92, "text": " Let's take a look at the next video of how to do that."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=qCPYJL_tQK8
8.6 Clustering | Initializing K-means-- [Machine Learning | Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
The very first step of the k-means clustering algorithm was to choose random locations as the initial guesses for the cluster centroids mu1 through mu k. But how do you actually take that random guess? Let's take a look at that in this video, as well as how you can take multiple attempts at the initial guesses for mu1 through mu k that will result in you finding a better set of clusters. Let's take a look. Here again is the k-means algorithm and in this video, let's take a look at how you can implement this first step. When running k-means, you should pretty much always choose the number of cluster centroids k to be less than the training examples m. It doesn't really make sense to have k greater than m because then there won't even be enough training examples to have at least one training example per cluster centroid. So in our earlier example, we had k equals 2 and m equals 30. In order to choose the cluster centroids, the most common way is to randomly pick k training examples. So here is a training set where if I were to randomly pick two training examples, maybe I end up picking this one and this one and then we would set mu1 through mu k equal to these k training examples. So I might initialize my red cluster centroid here and initialize my blue cluster centroid over here in the example where k was equal to 2. And it turns out that if this was your random initialization and you were to run k-means, you probably end up with k-means deciding that these are the two clusters in the data set. Note that this method of initializing the cluster centroids is a little bit different than what I had used in the illustration in the earlier videos where I was initializing the cluster centroids mu1 and mu2 to be just random points rather than sitting on top of specific training examples. I've done that to make the illustrations clearer in the earlier videos, but what I'm showing in this slide is actually the much more commonly used way of initializing the cluster centroids. Now with this method, there is a chance that you will end up with an initialization of the cluster centroids where the red crosses here and maybe the blue crosses here. And depending on how you choose the random initial cluster centroids, k-means will end up picking a different set of clusters for your data set. Let's look at a slightly more complex example where we're going to look at this data set and try to find three clusters. So k equals 3 in this data. If you were to run k-means with one random initialization of the cluster centroids, you may get this result up here. And this looks like a pretty good choice, pretty good clustering of the data into three different clusters. But with a different initialization, say you had happened to initialize two of the cluster centroids within this group of points and one within this group of points, after running k-means, you might end up with this clustering, which doesn't look as good. And this turns out to be a local optima in which k-means is trying to minimize the distortion cost function, that cost function j of c1 through cm and mu1 through mu k that you saw in the last video. But with this less fortunate choice of random initialization, it had just happened to get stuck in a local minima. And here's another example of a local minima, where a different random initialization calls k-means to find this clustering of the data into three clusters, which again doesn't seem as good as the one that you saw up here on top. So if you want to give k-means multiple shots at finding the best local optima, if you want to try multiple random initializations to give it a better chance of finding this good clustering up on top, one other thing you could do with the k-means algorithm is to run it multiple times and then to try to find the best local optima. And it turns out that if you were to run k-means three times, say, and end up with these three distinct clusterings, then one way to choose between these three solutions is to compute the cost function j for all three of these solutions, all three of these choices of clusters found by k-means, and then to pick one of these three according to which one of them gives you the lowest value for the cost function j. And in fact, if you look at this grouping of clusters up here, this green cross has relatively small square distances, all the green dots. The red cross is relatively small distance in the red dots, and similarly the blue cross. And so the cost function j would be relatively small for this example on top. But here, the blue cross has larger distances to all of the blue dots. And here, the red cross has larger distances to all of the red dots, which is why the cost function j for these examples down below would be larger, which is why if you pick from these three options, the one with the smallest distortion, the smallest cost function j, you end up selecting this choice of the three cluster centroid. So let me write this out more formally into an algorithm in which you would run k-means multiple times using different random initializations. Here's the algorithm. If you want to use 100 random initializations for k-means, then you would run 100 times randomly initialized k-means using the methods that you saw earlier in this video. Pick k-training examples and let the cluster centroid initially be the locations of those k-training examples. Using that random initialization, run the k-means algorithm to convergence, and that will give you a choice of cluster assignments and cluster centroids. And then finally, you would compute the distortion, compute the cost function as follows. After doing this, say, 100 times, you would finally pick the set of clusters that gave the lowest cost. And it turns out that if you do this, it will often give you a much better set of clusters with a much lower distortion function than if you were to run k-means only a single time. I plugged in the number up here as 100. When I'm using this method, doing this somewhere between, say, 50 to 1,000 times would be pretty common where if you run this procedure a lot more than 1,000 times, it tends to get computationally expensive and you tend to have diminishing returns when you run it a lot of times. Whereas trying at least maybe 50 or 100 random initializations will often give you a much better result than if you only had one shot at picking a good random initialization. But with this technique, you are much more likely to end up with this good choice of clusters on top than these less superior local minima down at the bottom. So that's it. When I'm using the k-means algorithm myself, I will almost always use more than one random initialization because it just causes k-means to do a much better job minimizing the distortion cost function and finding a much better choice for the cluster centroids. Before we wrap up our discussion of k-means, there's just one more video in which I hope to discuss with you the question of how do you choose the number of cluster centroids? How do you choose the value of k? Let's go on to the next video to take a look at that.
[{"start": 0.0, "end": 7.44, "text": " The very first step of the k-means clustering algorithm was to choose random locations as"}, {"start": 7.44, "end": 12.280000000000001, "text": " the initial guesses for the cluster centroids mu1 through mu k."}, {"start": 12.280000000000001, "end": 15.72, "text": " But how do you actually take that random guess?"}, {"start": 15.72, "end": 21.080000000000002, "text": " Let's take a look at that in this video, as well as how you can take multiple attempts"}, {"start": 21.080000000000002, "end": 26.26, "text": " at the initial guesses for mu1 through mu k that will result in you finding a better"}, {"start": 26.26, "end": 27.26, "text": " set of clusters."}, {"start": 27.26, "end": 29.02, "text": " Let's take a look."}, {"start": 29.02, "end": 35.519999999999996, "text": " Here again is the k-means algorithm and in this video, let's take a look at how you can"}, {"start": 35.519999999999996, "end": 38.26, "text": " implement this first step."}, {"start": 38.26, "end": 43.42, "text": " When running k-means, you should pretty much always choose the number of cluster centroids"}, {"start": 43.42, "end": 47.08, "text": " k to be less than the training examples m."}, {"start": 47.08, "end": 53.28, "text": " It doesn't really make sense to have k greater than m because then there won't even be enough"}, {"start": 53.28, "end": 58.68, "text": " training examples to have at least one training example per cluster centroid."}, {"start": 58.68, "end": 65.12, "text": " So in our earlier example, we had k equals 2 and m equals 30."}, {"start": 65.12, "end": 75.16, "text": " In order to choose the cluster centroids, the most common way is to randomly pick k"}, {"start": 75.16, "end": 77.28, "text": " training examples."}, {"start": 77.28, "end": 83.88, "text": " So here is a training set where if I were to randomly pick two training examples, maybe"}, {"start": 83.88, "end": 92.52, "text": " I end up picking this one and this one and then we would set mu1 through mu k equal to"}, {"start": 92.52, "end": 95.44, "text": " these k training examples."}, {"start": 95.44, "end": 105.5, "text": " So I might initialize my red cluster centroid here and initialize my blue cluster centroid"}, {"start": 105.5, "end": 109.8, "text": " over here in the example where k was equal to 2."}, {"start": 109.8, "end": 115.52, "text": " And it turns out that if this was your random initialization and you were to run k-means,"}, {"start": 115.52, "end": 122.12, "text": " you probably end up with k-means deciding that these are the two clusters in the data"}, {"start": 122.12, "end": 123.12, "text": " set."}, {"start": 123.12, "end": 127.32, "text": " Note that this method of initializing the cluster centroids is a little bit different"}, {"start": 127.32, "end": 132.24, "text": " than what I had used in the illustration in the earlier videos where I was initializing"}, {"start": 132.24, "end": 137.51999999999998, "text": " the cluster centroids mu1 and mu2 to be just random points rather than sitting on top of"}, {"start": 137.51999999999998, "end": 139.64, "text": " specific training examples."}, {"start": 139.64, "end": 144.6, "text": " I've done that to make the illustrations clearer in the earlier videos, but what I'm showing"}, {"start": 144.6, "end": 151.35999999999999, "text": " in this slide is actually the much more commonly used way of initializing the cluster centroids."}, {"start": 151.35999999999999, "end": 159.27999999999997, "text": " Now with this method, there is a chance that you will end up with an initialization of"}, {"start": 159.27999999999997, "end": 165.51999999999998, "text": " the cluster centroids where the red crosses here and maybe the blue crosses here."}, {"start": 165.52, "end": 172.56, "text": " And depending on how you choose the random initial cluster centroids, k-means will end"}, {"start": 172.56, "end": 177.20000000000002, "text": " up picking a different set of clusters for your data set."}, {"start": 177.20000000000002, "end": 182.64000000000001, "text": " Let's look at a slightly more complex example where we're going to look at this data set"}, {"start": 182.64000000000001, "end": 185.28, "text": " and try to find three clusters."}, {"start": 185.28, "end": 188.42000000000002, "text": " So k equals 3 in this data."}, {"start": 188.42000000000002, "end": 195.24, "text": " If you were to run k-means with one random initialization of the cluster centroids, you"}, {"start": 195.24, "end": 197.92000000000002, "text": " may get this result up here."}, {"start": 197.92000000000002, "end": 202.44, "text": " And this looks like a pretty good choice, pretty good clustering of the data into three"}, {"start": 202.44, "end": 204.68, "text": " different clusters."}, {"start": 204.68, "end": 210.8, "text": " But with a different initialization, say you had happened to initialize two of the cluster"}, {"start": 210.8, "end": 216.32000000000002, "text": " centroids within this group of points and one within this group of points, after running"}, {"start": 216.32000000000002, "end": 222.54000000000002, "text": " k-means, you might end up with this clustering, which doesn't look as good."}, {"start": 222.54, "end": 229.42, "text": " And this turns out to be a local optima in which k-means is trying to minimize the distortion"}, {"start": 229.42, "end": 236.72, "text": " cost function, that cost function j of c1 through cm and mu1 through mu k that you saw"}, {"start": 236.72, "end": 238.66, "text": " in the last video."}, {"start": 238.66, "end": 246.23999999999998, "text": " But with this less fortunate choice of random initialization, it had just happened to get"}, {"start": 246.23999999999998, "end": 249.82, "text": " stuck in a local minima."}, {"start": 249.82, "end": 254.92, "text": " And here's another example of a local minima, where a different random initialization calls"}, {"start": 254.92, "end": 263.68, "text": " k-means to find this clustering of the data into three clusters, which again doesn't seem"}, {"start": 263.68, "end": 268.12, "text": " as good as the one that you saw up here on top."}, {"start": 268.12, "end": 275.5, "text": " So if you want to give k-means multiple shots at finding the best local optima, if you want"}, {"start": 275.5, "end": 281.12, "text": " to try multiple random initializations to give it a better chance of finding this good"}, {"start": 281.12, "end": 286.76, "text": " clustering up on top, one other thing you could do with the k-means algorithm is to"}, {"start": 286.76, "end": 292.44, "text": " run it multiple times and then to try to find the best local optima."}, {"start": 292.44, "end": 299.12, "text": " And it turns out that if you were to run k-means three times, say, and end up with these three"}, {"start": 299.12, "end": 305.72, "text": " distinct clusterings, then one way to choose between these three solutions is to compute"}, {"start": 305.72, "end": 313.2, "text": " the cost function j for all three of these solutions, all three of these choices of clusters"}, {"start": 313.2, "end": 319.44, "text": " found by k-means, and then to pick one of these three according to which one of them"}, {"start": 319.44, "end": 324.24, "text": " gives you the lowest value for the cost function j."}, {"start": 324.24, "end": 330.68, "text": " And in fact, if you look at this grouping of clusters up here, this green cross has"}, {"start": 330.68, "end": 334.2, "text": " relatively small square distances, all the green dots."}, {"start": 334.2, "end": 339.3, "text": " The red cross is relatively small distance in the red dots, and similarly the blue cross."}, {"start": 339.3, "end": 345.24, "text": " And so the cost function j would be relatively small for this example on top."}, {"start": 345.24, "end": 351.88, "text": " But here, the blue cross has larger distances to all of the blue dots."}, {"start": 351.88, "end": 358.32, "text": " And here, the red cross has larger distances to all of the red dots, which is why the cost"}, {"start": 358.32, "end": 365.36, "text": " function j for these examples down below would be larger, which is why if you pick from these"}, {"start": 365.36, "end": 372.32, "text": " three options, the one with the smallest distortion, the smallest cost function j, you end up selecting"}, {"start": 372.32, "end": 375.9, "text": " this choice of the three cluster centroid."}, {"start": 375.9, "end": 381.91999999999996, "text": " So let me write this out more formally into an algorithm in which you would run k-means"}, {"start": 381.91999999999996, "end": 386.88, "text": " multiple times using different random initializations."}, {"start": 386.88, "end": 388.65999999999997, "text": " Here's the algorithm."}, {"start": 388.65999999999997, "end": 398.4, "text": " If you want to use 100 random initializations for k-means, then you would run 100 times"}, {"start": 398.4, "end": 405.15999999999997, "text": " randomly initialized k-means using the methods that you saw earlier in this video."}, {"start": 405.16, "end": 411.52000000000004, "text": " Pick k-training examples and let the cluster centroid initially be the locations of those"}, {"start": 411.52000000000004, "end": 414.24, "text": " k-training examples."}, {"start": 414.24, "end": 419.96000000000004, "text": " Using that random initialization, run the k-means algorithm to convergence, and that"}, {"start": 419.96000000000004, "end": 425.8, "text": " will give you a choice of cluster assignments and cluster centroids."}, {"start": 425.8, "end": 431.96000000000004, "text": " And then finally, you would compute the distortion, compute the cost function as follows."}, {"start": 431.96, "end": 438.91999999999996, "text": " After doing this, say, 100 times, you would finally pick the set of clusters that gave"}, {"start": 438.91999999999996, "end": 442.0, "text": " the lowest cost."}, {"start": 442.0, "end": 448.56, "text": " And it turns out that if you do this, it will often give you a much better set of clusters"}, {"start": 448.56, "end": 454.91999999999996, "text": " with a much lower distortion function than if you were to run k-means only a single time."}, {"start": 454.91999999999996, "end": 458.56, "text": " I plugged in the number up here as 100."}, {"start": 458.56, "end": 465.08, "text": " When I'm using this method, doing this somewhere between, say, 50 to 1,000 times would be pretty"}, {"start": 465.08, "end": 472.0, "text": " common where if you run this procedure a lot more than 1,000 times, it tends to get computationally"}, {"start": 472.0, "end": 478.2, "text": " expensive and you tend to have diminishing returns when you run it a lot of times."}, {"start": 478.2, "end": 484.48, "text": " Whereas trying at least maybe 50 or 100 random initializations will often give you a much"}, {"start": 484.48, "end": 490.6, "text": " better result than if you only had one shot at picking a good random initialization."}, {"start": 490.6, "end": 495.08000000000004, "text": " But with this technique, you are much more likely to end up with this good choice of"}, {"start": 495.08000000000004, "end": 500.48, "text": " clusters on top than these less superior local minima down at the bottom."}, {"start": 500.48, "end": 501.48, "text": " So that's it."}, {"start": 501.48, "end": 506.12, "text": " When I'm using the k-means algorithm myself, I will almost always use more than one random"}, {"start": 506.12, "end": 512.24, "text": " initialization because it just causes k-means to do a much better job minimizing the distortion"}, {"start": 512.24, "end": 518.16, "text": " cost function and finding a much better choice for the cluster centroids."}, {"start": 518.16, "end": 522.88, "text": " Before we wrap up our discussion of k-means, there's just one more video in which I hope"}, {"start": 522.88, "end": 527.8, "text": " to discuss with you the question of how do you choose the number of cluster centroids?"}, {"start": 527.8, "end": 530.5600000000001, "text": " How do you choose the value of k?"}, {"start": 530.56, "end": 543.4399999999999, "text": " Let's go on to the next video to take a look at that."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=3OaUVZbeYgA
8.7 Clustering | Choosing the number of clusters -- [Machine Learning | Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
The k-means algorithm requires as one of its inputs k, the number of clusters you want it to find. But how do you decide how many clusters to use? Do you want two clusters or three clusters or five clusters or ten clusters? Let's take a look. For a lot of clustering problems, the right value of k is truly ambiguous. If I were to show different people the same data set and ask how many clusters do you see, there will definitely be people that will say, oh, looks like there are two distinct clusters and they would be right. And there would also be others that will see actually four distinct clusters and they would also be right. Because clustering is an unsupervised learning algorithm, you're not given the quote right answers in the form of specific labels to try to replicate. And so there are a lot of applications where the data itself does not give a clear indicator for how many clusters there are in it. And I think it truly is ambiguous if this data has two or four or maybe three clusters. If you take, say, the red one here and the two blue ones here, say. If you look at the academic literature on k-means, there are a few techniques to try to automatically choose the number of clusters to use for a certain application. I'll briefly mention one here that you may see others refer to, although I have to say I personally do not use this method myself. But one way to try to choose the value of k is called the elbow method. And what that does is you would run k-means with a variety of values of k and plot the cost function or the distortion function j as a function of the number of clusters. What you find is that when you have very few clusters, say one cluster, the distortion function of the cost function j will be high. And as you increase the number of clusters, it will go down, maybe as follows. And one method for trying to choose the number of clusters is called the elbow method. And what that does is look at the cost function as a function of the number of clusters and see if there's a bend in the curve. And we call that an elbow. And if the curve looks like this, you say, well, it looks like the cost function is decreasing rapidly until we get to three clusters, but it decreases more slowly after that. So let's choose k equals three. And this is called an elbow, by the way, because think of it as analogous to that's your hand and that's your elbow over here. So plotting the cost function as a function of k could help. It could help you gain some insight. I personally hardly ever use the elbow method myself to choose the right number of clusters because I think for a lot of applications, the right number of clusters is truly ambiguous. And you find that a lot of cost functions look like this, where it just decreases smoothly and it doesn't have a clear elbow by which you could use to pick the value of k. By the way, one technique that does not work is to choose k so as to minimize the cost function j because doing so would cause you to almost always just choose the largest possible value of k because having more clusters will pretty much always reduce the cost function j. So choosing k to minimize the cost function j is not a good technique. So how do you choose the value of k in practice? Then you're running k-means in order to get clusters to use for some later or some downstream purpose. That is, you're going to take the clusters and do something with those clusters. So what I usually do and what I recommend you do is to evaluate k-means based on how well it performs for that later downstream purpose. Let me illustrate the example of t-shirt sizing. One thing you could do is run k-means on this data set to find three clusters, in which case you may find clusters like that. And this would be how you size your small, medium, and large t-shirts. But how many t-shirt sizes should there be? Well it's ambiguous. If you were to also run k-means with five clusters, you might get clusters that look like this, and this would let you size t-shirts according to extra small, small, medium, large, and extra large. And so both of these are completely valid and completely fine groupings of the data into clusters. But whether you want to use three clusters or five clusters can now be decided based on what makes sense for your t-shirt business. There's a trade-off between how well the t-shirts will fit depending on whether you have three sizes or five sizes, but there will be extra costs as well associated with manufacturing and shipping five types of t-shirts instead of three different types of t-shirts. So what I would do in this case is to run k-means with k equals three and k equals five, and then look at these two solutions to see based on the trade-off between fit of t-shirts with more sizes results in better fit versus the extra cost of making more t-shirts where making fewer t-shirts is simpler and less expensive to try to decide what makes sense for the t-shirt business. When you get to the programming exercise, you also see there an application of k-means to image compression. This is actually one of the most fun visual examples of k-means. And there you see that there'll be a trade-off between the quality of the compressed image, that is how good the image looks versus how much you can compress the image, that is the size of the image. And there you see that there will be a trade-off between the quality of the compressed image, that is how good the image looks versus how much you can compress the image to save this space. And in that programming exercise, you see that you can use that trade-off to maybe manually decide what's the best value of k based on how good you want the image to look versus how large you want the compressed image size to be. So that's it for the k-means clustering algorithm. Congrats on learning your first unsupervised learning algorithm. You now know not just how to do supervised learning, but also unsupervised learning. And I hope you also have fun with the practice lab. It's actually one of the most fun exercises I know of for k-means. And with that, we're ready to move on to our second unsupervised learning algorithm, which is anomaly detection. How do you look at a data set and find unusual or anomalous things in it? This turns out to be another one of the most commercially important applications of unsupervised learning. I've used this myself many times in many different applications. Let's go on to the next video to talk about anomaly detection.
[{"start": 0.0, "end": 7.96, "text": " The k-means algorithm requires as one of its inputs k, the number of clusters you want it"}, {"start": 7.96, "end": 8.96, "text": " to find."}, {"start": 8.96, "end": 11.96, "text": " But how do you decide how many clusters to use?"}, {"start": 11.96, "end": 15.88, "text": " Do you want two clusters or three clusters or five clusters or ten clusters?"}, {"start": 15.88, "end": 17.38, "text": " Let's take a look."}, {"start": 17.38, "end": 24.2, "text": " For a lot of clustering problems, the right value of k is truly ambiguous."}, {"start": 24.2, "end": 29.52, "text": " If I were to show different people the same data set and ask how many clusters do you"}, {"start": 29.52, "end": 35.68, "text": " see, there will definitely be people that will say, oh, looks like there are two distinct"}, {"start": 35.68, "end": 39.2, "text": " clusters and they would be right."}, {"start": 39.2, "end": 48.120000000000005, "text": " And there would also be others that will see actually four distinct clusters and they would"}, {"start": 48.120000000000005, "end": 50.8, "text": " also be right."}, {"start": 50.8, "end": 56.92, "text": " Because clustering is an unsupervised learning algorithm, you're not given the quote right"}, {"start": 56.92, "end": 61.6, "text": " answers in the form of specific labels to try to replicate."}, {"start": 61.6, "end": 68.16, "text": " And so there are a lot of applications where the data itself does not give a clear indicator"}, {"start": 68.16, "end": 70.96000000000001, "text": " for how many clusters there are in it."}, {"start": 70.96000000000001, "end": 78.76, "text": " And I think it truly is ambiguous if this data has two or four or maybe three clusters."}, {"start": 78.76, "end": 83.0, "text": " If you take, say, the red one here and the two blue ones here, say."}, {"start": 83.0, "end": 88.36, "text": " If you look at the academic literature on k-means, there are a few techniques to try"}, {"start": 88.36, "end": 94.32, "text": " to automatically choose the number of clusters to use for a certain application."}, {"start": 94.32, "end": 100.16, "text": " I'll briefly mention one here that you may see others refer to, although I have to say"}, {"start": 100.16, "end": 105.8, "text": " I personally do not use this method myself."}, {"start": 105.8, "end": 112.32, "text": " But one way to try to choose the value of k is called the elbow method."}, {"start": 112.32, "end": 120.75999999999999, "text": " And what that does is you would run k-means with a variety of values of k and plot the"}, {"start": 120.75999999999999, "end": 126.96, "text": " cost function or the distortion function j as a function of the number of clusters."}, {"start": 126.96, "end": 132.44, "text": " What you find is that when you have very few clusters, say one cluster, the distortion"}, {"start": 132.44, "end": 135.4, "text": " function of the cost function j will be high."}, {"start": 135.4, "end": 143.52, "text": " And as you increase the number of clusters, it will go down, maybe as follows."}, {"start": 143.52, "end": 149.20000000000002, "text": " And one method for trying to choose the number of clusters is called the elbow method."}, {"start": 149.20000000000002, "end": 154.84, "text": " And what that does is look at the cost function as a function of the number of clusters and"}, {"start": 154.84, "end": 159.28, "text": " see if there's a bend in the curve."}, {"start": 159.28, "end": 162.20000000000002, "text": " And we call that an elbow."}, {"start": 162.2, "end": 168.11999999999998, "text": " And if the curve looks like this, you say, well, it looks like the cost function is decreasing"}, {"start": 168.11999999999998, "end": 173.35999999999999, "text": " rapidly until we get to three clusters, but it decreases more slowly after that."}, {"start": 173.35999999999999, "end": 176.44, "text": " So let's choose k equals three."}, {"start": 176.44, "end": 183.39999999999998, "text": " And this is called an elbow, by the way, because think of it as analogous to that's your hand"}, {"start": 183.39999999999998, "end": 187.6, "text": " and that's your elbow over here."}, {"start": 187.6, "end": 191.88, "text": " So plotting the cost function as a function of k could help."}, {"start": 191.88, "end": 194.68, "text": " It could help you gain some insight."}, {"start": 194.68, "end": 201.48, "text": " I personally hardly ever use the elbow method myself to choose the right number of clusters"}, {"start": 201.48, "end": 207.32, "text": " because I think for a lot of applications, the right number of clusters is truly ambiguous."}, {"start": 207.32, "end": 214.04, "text": " And you find that a lot of cost functions look like this, where it just decreases smoothly"}, {"start": 214.04, "end": 220.28, "text": " and it doesn't have a clear elbow by which you could use to pick the value of k."}, {"start": 220.28, "end": 226.32, "text": " By the way, one technique that does not work is to choose k so as to minimize the cost"}, {"start": 226.32, "end": 232.6, "text": " function j because doing so would cause you to almost always just choose the largest possible"}, {"start": 232.6, "end": 238.2, "text": " value of k because having more clusters will pretty much always reduce the cost function"}, {"start": 238.2, "end": 239.2, "text": " j."}, {"start": 239.2, "end": 244.48, "text": " So choosing k to minimize the cost function j is not a good technique."}, {"start": 244.48, "end": 249.04, "text": " So how do you choose the value of k in practice?"}, {"start": 249.04, "end": 254.67999999999998, "text": " Then you're running k-means in order to get clusters to use for some later or some downstream"}, {"start": 254.67999999999998, "end": 256.2, "text": " purpose."}, {"start": 256.2, "end": 260.96, "text": " That is, you're going to take the clusters and do something with those clusters."}, {"start": 260.96, "end": 266.92, "text": " So what I usually do and what I recommend you do is to evaluate k-means based on how"}, {"start": 266.92, "end": 272.4, "text": " well it performs for that later downstream purpose."}, {"start": 272.4, "end": 276.65999999999997, "text": " Let me illustrate the example of t-shirt sizing."}, {"start": 276.66, "end": 281.92, "text": " One thing you could do is run k-means on this data set to find three clusters, in which"}, {"start": 281.92, "end": 285.92, "text": " case you may find clusters like that."}, {"start": 285.92, "end": 290.88000000000005, "text": " And this would be how you size your small, medium, and large t-shirts."}, {"start": 290.88000000000005, "end": 293.92, "text": " But how many t-shirt sizes should there be?"}, {"start": 293.92, "end": 295.68, "text": " Well it's ambiguous."}, {"start": 295.68, "end": 303.38, "text": " If you were to also run k-means with five clusters, you might get clusters that look"}, {"start": 303.38, "end": 310.36, "text": " like this, and this would let you size t-shirts according to extra small, small, medium, large,"}, {"start": 310.36, "end": 312.24, "text": " and extra large."}, {"start": 312.24, "end": 317.6, "text": " And so both of these are completely valid and completely fine groupings of the data"}, {"start": 317.6, "end": 319.68, "text": " into clusters."}, {"start": 319.68, "end": 325.44, "text": " But whether you want to use three clusters or five clusters can now be decided based"}, {"start": 325.44, "end": 329.71999999999997, "text": " on what makes sense for your t-shirt business."}, {"start": 329.72, "end": 334.12, "text": " There's a trade-off between how well the t-shirts will fit depending on whether you have three"}, {"start": 334.12, "end": 341.52000000000004, "text": " sizes or five sizes, but there will be extra costs as well associated with manufacturing"}, {"start": 341.52000000000004, "end": 346.52000000000004, "text": " and shipping five types of t-shirts instead of three different types of t-shirts."}, {"start": 346.52000000000004, "end": 352.68, "text": " So what I would do in this case is to run k-means with k equals three and k equals five,"}, {"start": 352.68, "end": 360.48, "text": " and then look at these two solutions to see based on the trade-off between fit of t-shirts"}, {"start": 360.48, "end": 367.0, "text": " with more sizes results in better fit versus the extra cost of making more t-shirts where"}, {"start": 367.0, "end": 372.08, "text": " making fewer t-shirts is simpler and less expensive to try to decide what makes sense"}, {"start": 372.08, "end": 374.08, "text": " for the t-shirt business."}, {"start": 374.08, "end": 379.92, "text": " When you get to the programming exercise, you also see there an application of k-means"}, {"start": 379.92, "end": 387.28000000000003, "text": " to image compression. This is actually one of the most fun visual examples of k-means."}, {"start": 387.28000000000003, "end": 393.18, "text": " And there you see that there'll be a trade-off between the quality of the compressed image,"}, {"start": 393.18, "end": 399.48, "text": " that is how good the image looks versus how much you can compress the image, that is the"}, {"start": 399.48, "end": 401.44, "text": " size of the image."}, {"start": 401.44, "end": 405.84000000000003, "text": " And there you see that there will be a trade-off between the quality of the compressed image,"}, {"start": 405.84, "end": 412.11999999999995, "text": " that is how good the image looks versus how much you can compress the image to save this"}, {"start": 412.11999999999995, "end": 419.15999999999997, "text": " space. And in that programming exercise, you see that you can use that trade-off to maybe"}, {"start": 419.15999999999997, "end": 424.84, "text": " manually decide what's the best value of k based on how good you want the image to look"}, {"start": 424.84, "end": 429.64, "text": " versus how large you want the compressed image size to be."}, {"start": 429.64, "end": 435.59999999999997, "text": " So that's it for the k-means clustering algorithm. Congrats on learning your first"}, {"start": 435.6, "end": 440.40000000000003, "text": " unsupervised learning algorithm. You now know not just how to do supervised learning, but"}, {"start": 440.40000000000003, "end": 445.92, "text": " also unsupervised learning. And I hope you also have fun with the practice lab. It's"}, {"start": 445.92, "end": 451.24, "text": " actually one of the most fun exercises I know of for k-means."}, {"start": 451.24, "end": 456.64000000000004, "text": " And with that, we're ready to move on to our second unsupervised learning algorithm, which"}, {"start": 456.64000000000004, "end": 463.02000000000004, "text": " is anomaly detection. How do you look at a data set and find unusual or anomalous things"}, {"start": 463.02, "end": 468.47999999999996, "text": " in it? This turns out to be another one of the most commercially important applications"}, {"start": 468.47999999999996, "end": 473.96, "text": " of unsupervised learning. I've used this myself many times in many different applications."}, {"start": 473.96, "end": 493.96, "text": " Let's go on to the next video to talk about anomaly detection."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=e6oYV5MDXFk
8.8 Anomaly Detection | Finding unusual events -- [Machine Learning | Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Let's look at our second unsupervised learning algorithm. Anomaly detection algorithms look at an unlabeled data set of normal events and thereby learns to detect or to raise a red flag for if there is an unusual or an anomalous event. Let's look at an example. Some of my friends were working on using anomaly detection to detect possible problems with aircraft engines that were being manufactured. When a company makes an aircraft engine, you really want that aircraft engine to be reliable and function well because an aircraft engine failure has very negative consequences. So some of my friends were using anomaly detection to check if an aircraft engine after it was manufactured seemed anomalous or if there seemed to be anything wrong with it. Here's the idea. After an aircraft engine rolls off the assembly line, you can compute a number of different features of the aircraft engine. So say feature X1 measures the heat generated by the engine, feature X2 measures the vibration intensity, and so on and so forth for additional features as well. But to simplify this slide a bit, I'm going to use just two features, X1 and X2, corresponding to the heat and the vibrations of the engine. Now it turns out that aircraft engine manufacturers don't make that many bad engines. And so the easier type of data to collect would be if you have manufactured M aircraft engines to collect the features X1 and X2 about how these M engines behave. And probably most of them are just fine, they're normal engines rather than ones with a defect or a flaw in them. And the anomaly detection problem is after the learning algorithm has seen these M examples of how aircraft engines typically behave in terms of how much heat they generate and how much they vibrate, if a brand new aircraft engine were to roll off the assembly line and if it had a new feature vector given by X test, we'd like to know does this engine look similar to ones that have been manufactured before? So is this probably okay? Or is there something really weird about this engine which might cause this performance to be suspect, meaning that maybe we should inspect it even more carefully before we let it get shipped out and be installed in an airplane and then hopefully nothing will go wrong with it. Here's how an anomaly detection algorithm works. Let me plot the examples X1 through XM over here via these crosses where each cross, each data point in this plot corresponds to a specific engine with a specific amount of heat and a specific amount of vibrations. If this new aircraft engine X test rolls off the assembly line and if you were to plot these values of X1 and X2 and if it were here, you'd say, okay, that looks probably okay. It looks very similar to other aircraft engines. Maybe I don't need to worry about this one. But if this new aircraft engine has a heat and vibration signature that is say all the way down here, then this data point down here looks very different than the ones we saw up on top. And so we will probably say, boy, this looks like an anomaly. This doesn't look like the examples I've seen before. We better inspect this more carefully before we let this engine get installed on an airplane. How can you have an algorithm address this problem? The most common way to carry out anomaly detection is through a technique called density estimation. And what that means is when you're given your training set of these M examples, the first thing you do is build a model for the probability of X. In other words, the learning algorithm will try to figure out what are the values of the features X1 and X2 that have high probability and what are the values that are less likely or have lower chance or lower probability of being seen in the data set. In this example that we have here, I think it is quite likely to see examples in that little ellipse in the middle. So that region in the middle would have high probability. Maybe things in this ellipse have a little bit lower probability. Things in this ellipse or this oval have even lower probability and things outside have even lower probability. The details of how you decide from the training sets what regions are higher versus lower probability is something we'll see in the next few videos. And having modeled or having learned to model for P of X when you are given the new test example X test, what you will do is then compute the probability of X test and if it is small or more precisely if it is less than some small number that I'm going to call epsilon. This is a Greek alphabet epsilon, which you should think of as a small number, which means that P of X is very small or in other words, the specific value of X that you saw for a certain user was very unlikely relative to other users that you have seen. But if P of X test is less than some small threshold or some small number epsilon, we will raise a flag to say that this could be an anomaly. So for example, if X test was all the way down here, the probability of an example landing all the way out here is actually quite low. And so hopefully P of X test for this value of X test will be less than epsilon. And so we would flag this as an anomaly. Whereas in contrast, if P of X test is not less than epsilon, if P of X says is greater than equal to epsilon, then we will say that it looks okay, this doesn't look like an anomaly. And that corresponds to if you had an example in here, say, where our model P of X will say that examples near the middle here, they're actually quite high probability, it's a very high chance that the new airplane engine will have features close to these inner ellipses. And so P of X test will be large for those examples and we'll say it's okay and it's not an anomaly. Anomaly detection is used today in many applications. It is frequently used in fraud detection, where, for example, if you are running a website with many different features, if you compute Xi to be the features of user eyes activities. So for example, the features Xi might be the features of a particular user eyes activities. And examples of features might include how often does this user log in? And how many web pages do they visit? How many transactions are they making? Or how many posts on the discussion forum are they making? What is their typing speed? How many characters per second do they seem able to type? With data like this, you can then model P of X from data to model what is the typical behavior of a given user. In a common workflow of fraud detection, you wouldn't automatically turn off an account just because it seemed anomalous, but instead you may ask the security team to take a closer look or put in some additional security checks, such as ask the user to verify their identity with a cell phone number or ask them to pause a capture to prove that they're human and so on. But algorithms like this are routinely used today to try to find unusual or maybe slightly suspicious activity so they can more carefully screen those accounts to make sure there isn't something fraudulent. And this type of fraud detection is used both to find fake accounts and this type of algorithm is also used frequently to try to identify financial fraud, such as if there's a very unusual pattern of purchases, then that may be something well worth a security team taking a more careful look at. Anomaly detection is also frequently used in manufacturing. You saw an example on the previous slide with aircraft engine manufacturing, but many manufacturers in multiple continents in many, many factories will routinely use anomaly detection to see if whatever they just manufactured, anything from an airplane engine to a printed circuit board to a smartphone to a motor to many, many things to see if you've just manufactured a unit that somehow behaves strangely because that may indicate that there's something wrong with your airplane engine or printed circuit boards or what have you that might cause you to want to take a more careful look before you ship that object to a customer. It's also used to monitor computers in clusters and in data centers where if XI are the features of a certain machine I, such as if the features captured the memory usage, the number of disk accesses per second, CPU load, features can also be ratios such as the ratio of CPU load to network traffic. And if ever a specific computer behaves very differently than other computers, it might be worth taking a look at that computer to see if something is wrong with it, such as if it has had a hot disk failure or a network card failure or something's wrong with it, or if maybe it has been hacked into. Anomaly detection is one of those algorithms that is very widely used even though you don't seem to hear people talk about it that much. I remember the first time I worked on the commercial application of anomaly detection was when I was helping a telco company put in place anomaly detection to see when any one of the cell towers was behaving in an unusual way because that probably meant there was something wrong with the cell tower and so they want to get a technician to take a look. So hopefully that helped more people get good cell phone coverage. And I've also used anomaly detection to find fraudulent financial transactions. And these days I often use it to help manufacturing companies find anomalous parts that they may have manufactured but should inspect more often. So it is a very useful tool to have in your tool chest. And in the next few videos, we'll talk about how you can build and get these algorithms to work for yourself. In order to get anomalous detection algorithms to work, we'll need to use a Gaussian distribution to model the data, pfx. So let's go on to the next video to talk about Gaussian distributions.
[{"start": 0.0, "end": 6.96, "text": " Let's look at our second unsupervised learning algorithm."}, {"start": 6.96, "end": 13.8, "text": " Anomaly detection algorithms look at an unlabeled data set of normal events and thereby learns"}, {"start": 13.8, "end": 20.72, "text": " to detect or to raise a red flag for if there is an unusual or an anomalous event."}, {"start": 20.72, "end": 23.02, "text": " Let's look at an example."}, {"start": 23.02, "end": 29.36, "text": " Some of my friends were working on using anomaly detection to detect possible problems with"}, {"start": 29.36, "end": 33.76, "text": " aircraft engines that were being manufactured."}, {"start": 33.76, "end": 39.04, "text": " When a company makes an aircraft engine, you really want that aircraft engine to be reliable"}, {"start": 39.04, "end": 45.7, "text": " and function well because an aircraft engine failure has very negative consequences."}, {"start": 45.7, "end": 52.8, "text": " So some of my friends were using anomaly detection to check if an aircraft engine after it was"}, {"start": 52.8, "end": 57.92, "text": " manufactured seemed anomalous or if there seemed to be anything wrong with it."}, {"start": 57.92, "end": 61.04, "text": " Here's the idea."}, {"start": 61.04, "end": 66.4, "text": " After an aircraft engine rolls off the assembly line, you can compute a number of different"}, {"start": 66.4, "end": 68.32000000000001, "text": " features of the aircraft engine."}, {"start": 68.32000000000001, "end": 76.16, "text": " So say feature X1 measures the heat generated by the engine, feature X2 measures the vibration"}, {"start": 76.16, "end": 80.92, "text": " intensity, and so on and so forth for additional features as well."}, {"start": 80.92, "end": 86.78, "text": " But to simplify this slide a bit, I'm going to use just two features, X1 and X2, corresponding"}, {"start": 86.78, "end": 90.92, "text": " to the heat and the vibrations of the engine."}, {"start": 90.92, "end": 97.2, "text": " Now it turns out that aircraft engine manufacturers don't make that many bad engines."}, {"start": 97.2, "end": 104.48, "text": " And so the easier type of data to collect would be if you have manufactured M aircraft"}, {"start": 104.48, "end": 111.2, "text": " engines to collect the features X1 and X2 about how these M engines behave."}, {"start": 111.2, "end": 117.28, "text": " And probably most of them are just fine, they're normal engines rather than ones with a defect"}, {"start": 117.28, "end": 120.68, "text": " or a flaw in them."}, {"start": 120.68, "end": 128.84, "text": " And the anomaly detection problem is after the learning algorithm has seen these M examples"}, {"start": 128.84, "end": 133.64000000000001, "text": " of how aircraft engines typically behave in terms of how much heat they generate and how"}, {"start": 133.64000000000001, "end": 140.24, "text": " much they vibrate, if a brand new aircraft engine were to roll off the assembly line"}, {"start": 140.24, "end": 148.5, "text": " and if it had a new feature vector given by X test, we'd like to know does this engine"}, {"start": 148.5, "end": 153.12, "text": " look similar to ones that have been manufactured before?"}, {"start": 153.12, "end": 155.52, "text": " So is this probably okay?"}, {"start": 155.52, "end": 160.08, "text": " Or is there something really weird about this engine which might cause this performance"}, {"start": 160.08, "end": 166.0, "text": " to be suspect, meaning that maybe we should inspect it even more carefully before we let"}, {"start": 166.0, "end": 170.4, "text": " it get shipped out and be installed in an airplane and then hopefully nothing will go"}, {"start": 170.4, "end": 171.4, "text": " wrong with it."}, {"start": 171.4, "end": 174.76, "text": " Here's how an anomaly detection algorithm works."}, {"start": 174.76, "end": 182.8, "text": " Let me plot the examples X1 through XM over here via these crosses where each cross, each"}, {"start": 182.8, "end": 188.44, "text": " data point in this plot corresponds to a specific engine with a specific amount of heat and"}, {"start": 188.44, "end": 192.14, "text": " a specific amount of vibrations."}, {"start": 192.14, "end": 197.67999999999998, "text": " If this new aircraft engine X test rolls off the assembly line and if you were to plot"}, {"start": 197.67999999999998, "end": 205.88, "text": " these values of X1 and X2 and if it were here, you'd say, okay, that looks probably okay."}, {"start": 205.88, "end": 210.44, "text": " It looks very similar to other aircraft engines."}, {"start": 210.44, "end": 214.23999999999998, "text": " Maybe I don't need to worry about this one."}, {"start": 214.23999999999998, "end": 219.26, "text": " But if this new aircraft engine has a heat and vibration signature that is say all the"}, {"start": 219.26, "end": 225.23999999999998, "text": " way down here, then this data point down here looks very different than the ones we saw"}, {"start": 225.23999999999998, "end": 226.56, "text": " up on top."}, {"start": 226.56, "end": 231.48, "text": " And so we will probably say, boy, this looks like an anomaly."}, {"start": 231.48, "end": 234.23999999999998, "text": " This doesn't look like the examples I've seen before."}, {"start": 234.23999999999998, "end": 239.68, "text": " We better inspect this more carefully before we let this engine get installed on an airplane."}, {"start": 239.68, "end": 244.23999999999998, "text": " How can you have an algorithm address this problem?"}, {"start": 244.24, "end": 253.64000000000001, "text": " The most common way to carry out anomaly detection is through a technique called density estimation."}, {"start": 253.64000000000001, "end": 259.56, "text": " And what that means is when you're given your training set of these M examples, the first"}, {"start": 259.56, "end": 268.04, "text": " thing you do is build a model for the probability of X."}, {"start": 268.04, "end": 272.88, "text": " In other words, the learning algorithm will try to figure out what are the values of the"}, {"start": 272.88, "end": 279.2, "text": " features X1 and X2 that have high probability and what are the values that are less likely"}, {"start": 279.2, "end": 285.71999999999997, "text": " or have lower chance or lower probability of being seen in the data set."}, {"start": 285.71999999999997, "end": 292.15999999999997, "text": " In this example that we have here, I think it is quite likely to see examples in that"}, {"start": 292.15999999999997, "end": 293.68, "text": " little ellipse in the middle."}, {"start": 293.68, "end": 298.24, "text": " So that region in the middle would have high probability."}, {"start": 298.24, "end": 302.4, "text": " Maybe things in this ellipse have a little bit lower probability."}, {"start": 302.4, "end": 307.23999999999995, "text": " Things in this ellipse or this oval have even lower probability and things outside have"}, {"start": 307.23999999999995, "end": 311.44, "text": " even lower probability."}, {"start": 311.44, "end": 317.28, "text": " The details of how you decide from the training sets what regions are higher versus lower"}, {"start": 317.28, "end": 323.12, "text": " probability is something we'll see in the next few videos."}, {"start": 323.12, "end": 331.67999999999995, "text": " And having modeled or having learned to model for P of X when you are given the new test"}, {"start": 331.68, "end": 341.28000000000003, "text": " example X test, what you will do is then compute the probability of X test and if it is small"}, {"start": 341.28000000000003, "end": 348.28000000000003, "text": " or more precisely if it is less than some small number that I'm going to call epsilon."}, {"start": 348.28000000000003, "end": 353.86, "text": " This is a Greek alphabet epsilon, which you should think of as a small number, which means"}, {"start": 353.86, "end": 361.2, "text": " that P of X is very small or in other words, the specific value of X that you saw for a"}, {"start": 361.2, "end": 368.47999999999996, "text": " certain user was very unlikely relative to other users that you have seen."}, {"start": 368.47999999999996, "end": 374.52, "text": " But if P of X test is less than some small threshold or some small number epsilon, we"}, {"start": 374.52, "end": 379.28, "text": " will raise a flag to say that this could be an anomaly."}, {"start": 379.28, "end": 386.2, "text": " So for example, if X test was all the way down here, the probability of an example landing"}, {"start": 386.2, "end": 388.91999999999996, "text": " all the way out here is actually quite low."}, {"start": 388.92, "end": 393.68, "text": " And so hopefully P of X test for this value of X test will be less than epsilon."}, {"start": 393.68, "end": 397.8, "text": " And so we would flag this as an anomaly."}, {"start": 397.8, "end": 403.64000000000004, "text": " Whereas in contrast, if P of X test is not less than epsilon, if P of X says is greater"}, {"start": 403.64000000000004, "end": 410.48, "text": " than equal to epsilon, then we will say that it looks okay, this doesn't look like an anomaly."}, {"start": 410.48, "end": 416.0, "text": " And that corresponds to if you had an example in here, say, where our model P of X will"}, {"start": 416.0, "end": 420.8, "text": " say that examples near the middle here, they're actually quite high probability, it's a very"}, {"start": 420.8, "end": 427.46, "text": " high chance that the new airplane engine will have features close to these inner ellipses."}, {"start": 427.46, "end": 431.56, "text": " And so P of X test will be large for those examples and we'll say it's okay and it's"}, {"start": 431.56, "end": 433.88, "text": " not an anomaly."}, {"start": 433.88, "end": 437.4, "text": " Anomaly detection is used today in many applications."}, {"start": 437.4, "end": 443.6, "text": " It is frequently used in fraud detection, where, for example, if you are running a website"}, {"start": 443.6, "end": 451.40000000000003, "text": " with many different features, if you compute Xi to be the features of user eyes activities."}, {"start": 451.40000000000003, "end": 458.24, "text": " So for example, the features Xi might be the features of a particular user eyes activities."}, {"start": 458.24, "end": 463.48, "text": " And examples of features might include how often does this user log in?"}, {"start": 463.48, "end": 466.36, "text": " And how many web pages do they visit?"}, {"start": 466.36, "end": 469.12, "text": " How many transactions are they making?"}, {"start": 469.12, "end": 473.8, "text": " Or how many posts on the discussion forum are they making?"}, {"start": 473.8, "end": 476.0, "text": " What is their typing speed?"}, {"start": 476.0, "end": 480.36, "text": " How many characters per second do they seem able to type?"}, {"start": 480.36, "end": 486.0, "text": " With data like this, you can then model P of X from data to model what is the typical"}, {"start": 486.0, "end": 489.8, "text": " behavior of a given user."}, {"start": 489.8, "end": 494.84000000000003, "text": " In a common workflow of fraud detection, you wouldn't automatically turn off an account"}, {"start": 494.84, "end": 501.4, "text": " just because it seemed anomalous, but instead you may ask the security team to take a closer"}, {"start": 501.4, "end": 507.71999999999997, "text": " look or put in some additional security checks, such as ask the user to verify their identity"}, {"start": 507.71999999999997, "end": 513.0799999999999, "text": " with a cell phone number or ask them to pause a capture to prove that they're human and"}, {"start": 513.0799999999999, "end": 514.4, "text": " so on."}, {"start": 514.4, "end": 520.0799999999999, "text": " But algorithms like this are routinely used today to try to find unusual or maybe slightly"}, {"start": 520.08, "end": 526.24, "text": " suspicious activity so they can more carefully screen those accounts to make sure there isn't"}, {"start": 526.24, "end": 528.32, "text": " something fraudulent."}, {"start": 528.32, "end": 538.36, "text": " And this type of fraud detection is used both to find fake accounts and this type of algorithm"}, {"start": 538.36, "end": 545.96, "text": " is also used frequently to try to identify financial fraud, such as if there's a very"}, {"start": 545.96, "end": 552.2, "text": " unusual pattern of purchases, then that may be something well worth a security team taking"}, {"start": 552.2, "end": 554.6800000000001, "text": " a more careful look at."}, {"start": 554.6800000000001, "end": 558.0, "text": " Anomaly detection is also frequently used in manufacturing."}, {"start": 558.0, "end": 564.72, "text": " You saw an example on the previous slide with aircraft engine manufacturing, but many manufacturers"}, {"start": 564.72, "end": 571.24, "text": " in multiple continents in many, many factories will routinely use anomaly detection to see"}, {"start": 571.24, "end": 576.48, "text": " if whatever they just manufactured, anything from an airplane engine to a printed circuit"}, {"start": 576.48, "end": 582.2, "text": " board to a smartphone to a motor to many, many things to see if you've just manufactured"}, {"start": 582.2, "end": 588.5600000000001, "text": " a unit that somehow behaves strangely because that may indicate that there's something wrong"}, {"start": 588.5600000000001, "end": 593.36, "text": " with your airplane engine or printed circuit boards or what have you that might cause you"}, {"start": 593.36, "end": 598.88, "text": " to want to take a more careful look before you ship that object to a customer."}, {"start": 598.88, "end": 607.2, "text": " It's also used to monitor computers in clusters and in data centers where if XI are the features"}, {"start": 607.2, "end": 612.84, "text": " of a certain machine I, such as if the features captured the memory usage, the number of disk"}, {"start": 612.84, "end": 619.68, "text": " accesses per second, CPU load, features can also be ratios such as the ratio of CPU load"}, {"start": 619.68, "end": 622.52, "text": " to network traffic."}, {"start": 622.52, "end": 628.88, "text": " And if ever a specific computer behaves very differently than other computers, it might"}, {"start": 628.88, "end": 633.36, "text": " be worth taking a look at that computer to see if something is wrong with it, such as"}, {"start": 633.36, "end": 638.36, "text": " if it has had a hot disk failure or a network card failure or something's wrong with it,"}, {"start": 638.36, "end": 641.72, "text": " or if maybe it has been hacked into."}, {"start": 641.72, "end": 646.1999999999999, "text": " Anomaly detection is one of those algorithms that is very widely used even though you don't"}, {"start": 646.1999999999999, "end": 649.84, "text": " seem to hear people talk about it that much."}, {"start": 649.84, "end": 654.08, "text": " I remember the first time I worked on the commercial application of anomaly detection"}, {"start": 654.08, "end": 660.0400000000001, "text": " was when I was helping a telco company put in place anomaly detection to see when any"}, {"start": 660.0400000000001, "end": 664.96, "text": " one of the cell towers was behaving in an unusual way because that probably meant there"}, {"start": 664.96, "end": 669.4, "text": " was something wrong with the cell tower and so they want to get a technician to take a"}, {"start": 669.4, "end": 670.4, "text": " look."}, {"start": 670.4, "end": 674.9200000000001, "text": " So hopefully that helped more people get good cell phone coverage."}, {"start": 674.92, "end": 680.36, "text": " And I've also used anomaly detection to find fraudulent financial transactions."}, {"start": 680.36, "end": 685.5999999999999, "text": " And these days I often use it to help manufacturing companies find anomalous parts that they may"}, {"start": 685.5999999999999, "end": 689.12, "text": " have manufactured but should inspect more often."}, {"start": 689.12, "end": 693.68, "text": " So it is a very useful tool to have in your tool chest."}, {"start": 693.68, "end": 697.4, "text": " And in the next few videos, we'll talk about how you can build and get these algorithms"}, {"start": 697.4, "end": 699.64, "text": " to work for yourself."}, {"start": 699.64, "end": 706.68, "text": " In order to get anomalous detection algorithms to work, we'll need to use a Gaussian distribution"}, {"start": 706.68, "end": 709.28, "text": " to model the data, pfx."}, {"start": 709.28, "end": 730.3199999999999, "text": " So let's go on to the next video to talk about Gaussian distributions."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=kAVXR1qGjwU
8.9 Anomaly Detection | Gaussian (normal) distribution -- [Machine Learning | Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
In order to apply anomaly detection, we're going to need to use the Gaussian distribution, which is also called the normal distribution. So when you hear me say either Gaussian distribution or normal distribution, they mean exactly the same thing. And if you've heard of the bell-shaped distribution, that also refers to the same thing. But if you haven't heard of the bell-shaped distribution, that's fine too. But let's take a look at what is the Gaussian or the normal distribution. Say X is a number, and if X is a random number, sometimes called a random variable, but if X can take on random values, and if the probability of X is given by a Gaussian or a normal distribution with mean parameter mu and with variance sigma squared, what that means is that the probability of X looks like a curve that goes like this. The center, the middle of the curve, is given by the mean, mu, and the standard deviation or the width of this curve is given by that variance parameter sigma. Technically, sigma is called the standard deviation, and the square of sigma, or sigma squared, is called the variance of the distribution. And this curve here shows what is P of X or the probability of X. If you've heard of the bell-shaped curve, this is that bell-shaped curve because a lot of classic bells, say in towers, were kind of shaped like this with the bell clapper hanging down here. And so the shape of this curve is vaguely reminiscent of the shape of the large bells that you will still find in some old buildings today. Better looking than my hand-drawn one, there's a picture of the Liberty Bell, and indeed the Liberty Bell's shape on top is a vaguely bell-shaped curve. If you're wondering what this P of X really means, here's one way to interpret it. It means that if you were to get, say, 100 numbers drawn from this probability distribution, and you were to plot a histogram of these 100 numbers drawn from this distribution, you might get a histogram that looks like this. And so it looks vaguely bell-shaped. And what this curve on the left indicates is not if you have just 100 examples or a thousand or a million or a billion, but if you had a practically infinite number of examples and you were to draw a histogram of this practically infinite number of examples with a very, very fine histogram bin, then you end up with essentially this bell-shaped curve here on the left. The formula for P of X is given by this expression. P of X equals 1 over square root 2 pi. Pi here is that 3.14159, it was about 22 over 7, a ratio of a circle's diameter to circumference, times sigma times e to the negative x minus mu, the mean parameter squared, divided by 2 sigma squared. And for any given value of mu and sigma, if you were to plot this function as a function of x, you get this type of bell-shaped curve that is centered at mu and with the width of this bell-shaped curve being determined by the parameter sigma. Now let's look at a few examples of how changing mu and sigma will affect the Gaussian distribution. First, let me set mu equals to 0 and sigma equals 1. Here's my plot of a Gaussian distribution with mean 0, mu equals 0, and standard deviation sigma equals 1. You notice that this distribution is centered at 0 and that is the standard deviation sigma is equal to 1. Now let's reduce the standard deviation sigma to 0.5. If you plot the Gaussian distribution with mu equals 0 and sigma equals 0.5, it now looks like this. Notice that it's still centered at 0 because mu is 0, but it's become a much thinner curve because sigma is now 1 half. And you might recall that sigma is the standard deviation is 0.5, whereas sigma squared is also called the variance and so that's equal to 0.5 squared or 0.25. You may have heard that probabilities always have to sum up to 1, so that's why the area under the curve is always equal to 1, which is why when the Gaussian distribution becomes skinnier it has to become taller as well. Let's look at another value of mu and sigma. Now I'm going to increase sigma to 2, so the standard deviation is 2 and the variance is 4. This now creates a much wider distribution because sigma here is now much larger and because it's now a wider distribution it's become shorter as well because the area under the curve is still equal to 1. And finally let's try changing the mean parameter mu and I'll leave sigma equals 0.5. In this case the center of the distribution mu moves over here to the right, but the width of the distribution is the same as the one on top because the standard deviation is 0.5 in both of these cases on the right. So this is how different choices of mu and sigma affect the Gaussian distribution. When you're applying this to anomaly detection here's what you have to do. You're given a data set of m examples and here x is just a number and here I plotted the training sets with 11 examples and what we have to do is try to estimate what are good choices for the mean parameter mu as well as for the variance parameter sigma squared. And given a data set like this it would seem that a Gaussian distribution may be looking like that with a center here and a standard deviation kind of like that. This might be a pretty good fit to the data. The way you would compute mu and sigma squared mathematically is our estimate for mu will be just the average of all the returning examples. So it's 1 over m times sum from i equals 1 through m of the values of your training examples and the value we would use to estimate sigma squared will be 1 over m times will be the average of the squared difference between the examples and that mu that you just estimated here on the left. It turns out that if you implement these two formulas in code with this value for mu and this value for sigma squared then you pretty much get the Gaussian distribution that I hand drew on top and this will give you a choice of mu and sigma for a Gaussian distribution so that it kind of looks like the 11 training examples might have been drawn from this Gaussian distribution. If you've taken an advanced statistics class you may have heard that these formulas for mu and sigma squared are technically called the maximum likelihood estimates for mu and sigma and some statistics classes will tell you to use the formula 1 over m minus 1 instead of 1 over m. In practice using 1 over m or 1 over m minus 1 makes very little difference. I always use 1 over m but there are some other properties of dividing by m minus 1 that some statisticians prefer. But if you don't understand what I just said don't worry about it. All you need to know is that if you set mu according to this formula and sigma squared according to this formula you get a pretty good estimate of mu and sigma and in particular you get a Gaussian distribution that will be a plausible probability distribution in terms of what's the probability distribution that the training examples had come from. You can probably guess what comes next. If you were to get an example over here then p of x is pretty high whereas if you were to get an example way out here then p of x is pretty low which is why we would consider this example okay not really anomalous, not a lot like the other ones, whereas an example way out here to be pretty unusual compared to the examples we've seen and therefore more anomalous because p of x which is the height of this curve is much lower over here on the left compared to this point over here closer to the middle. Now we've done this only for when x is a number as if you had only a single feature for your anomaly detection problem. For practical anomaly detection applications you usually have a lot of different features so you've now seen how the Gaussian distribution works if x is a single number. This corresponds to if say you had just one feature for your anomaly detection problem but for practical anomaly detection applications you will have many features two or three or some even larger number n of features. Let's take what you saw for a single Gaussian and use it to build a more sophisticated anomaly detection algorithm that can handle multiple features. Let's go through that in the next video.
[{"start": 0.0, "end": 8.84, "text": " In order to apply anomaly detection, we're going to need to use the Gaussian distribution,"}, {"start": 8.84, "end": 12.46, "text": " which is also called the normal distribution."}, {"start": 12.46, "end": 17.900000000000002, "text": " So when you hear me say either Gaussian distribution or normal distribution, they mean exactly"}, {"start": 17.900000000000002, "end": 19.44, "text": " the same thing."}, {"start": 19.44, "end": 25.16, "text": " And if you've heard of the bell-shaped distribution, that also refers to the same thing."}, {"start": 25.16, "end": 29.04, "text": " But if you haven't heard of the bell-shaped distribution, that's fine too."}, {"start": 29.04, "end": 34.08, "text": " But let's take a look at what is the Gaussian or the normal distribution."}, {"start": 34.08, "end": 40.2, "text": " Say X is a number, and if X is a random number, sometimes called a random variable, but if"}, {"start": 40.2, "end": 48.92, "text": " X can take on random values, and if the probability of X is given by a Gaussian or a normal distribution"}, {"start": 48.92, "end": 58.92, "text": " with mean parameter mu and with variance sigma squared, what that means is that the probability"}, {"start": 58.92, "end": 65.1, "text": " of X looks like a curve that goes like this."}, {"start": 65.1, "end": 74.16, "text": " The center, the middle of the curve, is given by the mean, mu, and the standard deviation"}, {"start": 74.16, "end": 81.36, "text": " or the width of this curve is given by that variance parameter sigma."}, {"start": 81.36, "end": 87.08, "text": " Technically, sigma is called the standard deviation, and the square of sigma, or sigma"}, {"start": 87.08, "end": 90.8, "text": " squared, is called the variance of the distribution."}, {"start": 90.8, "end": 96.36, "text": " And this curve here shows what is P of X or the probability of X."}, {"start": 96.36, "end": 101.88, "text": " If you've heard of the bell-shaped curve, this is that bell-shaped curve because a lot"}, {"start": 101.88, "end": 110.16, "text": " of classic bells, say in towers, were kind of shaped like this with the bell clapper"}, {"start": 110.16, "end": 111.6, "text": " hanging down here."}, {"start": 111.6, "end": 117.08, "text": " And so the shape of this curve is vaguely reminiscent of the shape of the large bells"}, {"start": 117.08, "end": 121.96, "text": " that you will still find in some old buildings today."}, {"start": 121.96, "end": 127.03999999999999, "text": " Better looking than my hand-drawn one, there's a picture of the Liberty Bell, and indeed"}, {"start": 127.03999999999999, "end": 132.88, "text": " the Liberty Bell's shape on top is a vaguely bell-shaped curve."}, {"start": 132.88, "end": 140.4, "text": " If you're wondering what this P of X really means, here's one way to interpret it."}, {"start": 140.4, "end": 147.44, "text": " It means that if you were to get, say, 100 numbers drawn from this probability distribution,"}, {"start": 147.44, "end": 153.84, "text": " and you were to plot a histogram of these 100 numbers drawn from this distribution,"}, {"start": 153.84, "end": 156.04000000000002, "text": " you might get a histogram that looks like this."}, {"start": 156.04000000000002, "end": 159.04000000000002, "text": " And so it looks vaguely bell-shaped."}, {"start": 159.04000000000002, "end": 166.32, "text": " And what this curve on the left indicates is not if you have just 100 examples or a"}, {"start": 166.32, "end": 172.79999999999998, "text": " thousand or a million or a billion, but if you had a practically infinite number of examples"}, {"start": 172.79999999999998, "end": 178.68, "text": " and you were to draw a histogram of this practically infinite number of examples with a very, very"}, {"start": 178.68, "end": 187.16, "text": " fine histogram bin, then you end up with essentially this bell-shaped curve here on the left."}, {"start": 187.16, "end": 192.04, "text": " The formula for P of X is given by this expression."}, {"start": 192.04, "end": 197.04, "text": " P of X equals 1 over square root 2 pi."}, {"start": 197.04, "end": 206.23999999999998, "text": " Pi here is that 3.14159, it was about 22 over 7, a ratio of a circle's diameter to circumference,"}, {"start": 206.23999999999998, "end": 214.64, "text": " times sigma times e to the negative x minus mu, the mean parameter squared, divided by"}, {"start": 214.64, "end": 217.88, "text": " 2 sigma squared."}, {"start": 217.88, "end": 224.96, "text": " And for any given value of mu and sigma, if you were to plot this function as a function"}, {"start": 224.96, "end": 231.04, "text": " of x, you get this type of bell-shaped curve that is centered at mu and with the width"}, {"start": 231.04, "end": 237.56, "text": " of this bell-shaped curve being determined by the parameter sigma."}, {"start": 237.56, "end": 244.72, "text": " Now let's look at a few examples of how changing mu and sigma will affect the Gaussian distribution."}, {"start": 244.72, "end": 250.68, "text": " First, let me set mu equals to 0 and sigma equals 1."}, {"start": 250.68, "end": 257.28, "text": " Here's my plot of a Gaussian distribution with mean 0, mu equals 0, and standard deviation"}, {"start": 257.28, "end": 260.32, "text": " sigma equals 1."}, {"start": 260.32, "end": 267.8, "text": " You notice that this distribution is centered at 0 and that is the standard deviation sigma"}, {"start": 267.8, "end": 269.56, "text": " is equal to 1."}, {"start": 269.56, "end": 276.16, "text": " Now let's reduce the standard deviation sigma to 0.5."}, {"start": 276.16, "end": 283.04, "text": " If you plot the Gaussian distribution with mu equals 0 and sigma equals 0.5, it now looks"}, {"start": 283.04, "end": 284.8, "text": " like this."}, {"start": 284.8, "end": 291.4, "text": " Notice that it's still centered at 0 because mu is 0, but it's become a much thinner curve"}, {"start": 291.4, "end": 296.24, "text": " because sigma is now 1 half."}, {"start": 296.24, "end": 303.12, "text": " And you might recall that sigma is the standard deviation is 0.5, whereas sigma squared is"}, {"start": 303.12, "end": 310.2, "text": " also called the variance and so that's equal to 0.5 squared or 0.25."}, {"start": 310.2, "end": 315.12, "text": " You may have heard that probabilities always have to sum up to 1, so that's why the area"}, {"start": 315.12, "end": 321.44, "text": " under the curve is always equal to 1, which is why when the Gaussian distribution becomes"}, {"start": 321.44, "end": 325.2, "text": " skinnier it has to become taller as well."}, {"start": 325.2, "end": 328.03999999999996, "text": " Let's look at another value of mu and sigma."}, {"start": 328.03999999999996, "end": 334.76, "text": " Now I'm going to increase sigma to 2, so the standard deviation is 2 and the variance is"}, {"start": 334.76, "end": 336.76, "text": " 4."}, {"start": 336.76, "end": 344.68, "text": " This now creates a much wider distribution because sigma here is now much larger and"}, {"start": 344.68, "end": 349.12, "text": " because it's now a wider distribution it's become shorter as well because the area under"}, {"start": 349.12, "end": 352.56, "text": " the curve is still equal to 1."}, {"start": 352.56, "end": 362.04, "text": " And finally let's try changing the mean parameter mu and I'll leave sigma equals 0.5."}, {"start": 362.04, "end": 370.24, "text": " In this case the center of the distribution mu moves over here to the right, but the width"}, {"start": 370.24, "end": 375.6, "text": " of the distribution is the same as the one on top because the standard deviation is 0.5"}, {"start": 375.6, "end": 378.64, "text": " in both of these cases on the right."}, {"start": 378.64, "end": 385.76, "text": " So this is how different choices of mu and sigma affect the Gaussian distribution."}, {"start": 385.76, "end": 390.59999999999997, "text": " When you're applying this to anomaly detection here's what you have to do."}, {"start": 390.59999999999997, "end": 399.44, "text": " You're given a data set of m examples and here x is just a number and here I plotted"}, {"start": 399.44, "end": 406.2, "text": " the training sets with 11 examples and what we have to do is try to estimate what are"}, {"start": 406.2, "end": 415.8, "text": " good choices for the mean parameter mu as well as for the variance parameter sigma squared."}, {"start": 415.8, "end": 422.48, "text": " And given a data set like this it would seem that a Gaussian distribution may be looking"}, {"start": 422.48, "end": 428.59999999999997, "text": " like that with a center here and a standard deviation kind of like that."}, {"start": 428.59999999999997, "end": 432.64, "text": " This might be a pretty good fit to the data."}, {"start": 432.64, "end": 439.56, "text": " The way you would compute mu and sigma squared mathematically is our estimate for mu will"}, {"start": 439.56, "end": 442.91999999999996, "text": " be just the average of all the returning examples."}, {"start": 442.91999999999996, "end": 450.68, "text": " So it's 1 over m times sum from i equals 1 through m of the values of your training examples"}, {"start": 450.68, "end": 459.68, "text": " and the value we would use to estimate sigma squared will be 1 over m times will be the"}, {"start": 459.68, "end": 466.92, "text": " average of the squared difference between the examples and that mu that you just estimated"}, {"start": 466.92, "end": 469.02, "text": " here on the left."}, {"start": 469.02, "end": 474.16, "text": " It turns out that if you implement these two formulas in code with this value for mu and"}, {"start": 474.16, "end": 478.72, "text": " this value for sigma squared then you pretty much get the Gaussian distribution that I"}, {"start": 478.72, "end": 485.36, "text": " hand drew on top and this will give you a choice of mu and sigma for a Gaussian distribution"}, {"start": 485.36, "end": 490.16, "text": " so that it kind of looks like the 11 training examples might have been drawn from this Gaussian"}, {"start": 490.16, "end": 492.96000000000004, "text": " distribution."}, {"start": 492.96000000000004, "end": 498.36, "text": " If you've taken an advanced statistics class you may have heard that these formulas for"}, {"start": 498.36, "end": 503.72, "text": " mu and sigma squared are technically called the maximum likelihood estimates for mu and"}, {"start": 503.72, "end": 511.08000000000004, "text": " sigma and some statistics classes will tell you to use the formula 1 over m minus 1 instead"}, {"start": 511.08000000000004, "end": 513.38, "text": " of 1 over m."}, {"start": 513.38, "end": 518.92, "text": " In practice using 1 over m or 1 over m minus 1 makes very little difference."}, {"start": 518.92, "end": 526.96, "text": " I always use 1 over m but there are some other properties of dividing by m minus 1 that some"}, {"start": 526.96, "end": 529.52, "text": " statisticians prefer."}, {"start": 529.52, "end": 532.96, "text": " But if you don't understand what I just said don't worry about it."}, {"start": 532.96, "end": 539.26, "text": " All you need to know is that if you set mu according to this formula and sigma squared"}, {"start": 539.26, "end": 545.48, "text": " according to this formula you get a pretty good estimate of mu and sigma and in particular"}, {"start": 545.48, "end": 551.48, "text": " you get a Gaussian distribution that will be a plausible probability distribution in"}, {"start": 551.48, "end": 556.96, "text": " terms of what's the probability distribution that the training examples had come from."}, {"start": 556.96, "end": 560.4, "text": " You can probably guess what comes next."}, {"start": 560.4, "end": 569.36, "text": " If you were to get an example over here then p of x is pretty high whereas if you were"}, {"start": 569.36, "end": 576.56, "text": " to get an example way out here then p of x is pretty low which is why we would consider"}, {"start": 576.56, "end": 582.64, "text": " this example okay not really anomalous, not a lot like the other ones, whereas an example"}, {"start": 582.64, "end": 588.36, "text": " way out here to be pretty unusual compared to the examples we've seen and therefore more"}, {"start": 588.36, "end": 594.04, "text": " anomalous because p of x which is the height of this curve is much lower over here on the"}, {"start": 594.04, "end": 599.4, "text": " left compared to this point over here closer to the middle."}, {"start": 599.4, "end": 606.24, "text": " Now we've done this only for when x is a number as if you had only a single feature for your"}, {"start": 606.24, "end": 608.84, "text": " anomaly detection problem."}, {"start": 608.84, "end": 615.28, "text": " For practical anomaly detection applications you usually have a lot of different features"}, {"start": 615.28, "end": 621.8399999999999, "text": " so you've now seen how the Gaussian distribution works if x is a single number."}, {"start": 621.8399999999999, "end": 628.48, "text": " This corresponds to if say you had just one feature for your anomaly detection problem"}, {"start": 628.48, "end": 634.4399999999999, "text": " but for practical anomaly detection applications you will have many features two or three or"}, {"start": 634.4399999999999, "end": 638.1999999999999, "text": " some even larger number n of features."}, {"start": 638.1999999999999, "end": 643.4399999999999, "text": " Let's take what you saw for a single Gaussian and use it to build a more sophisticated anomaly"}, {"start": 643.44, "end": 647.12, "text": " detection algorithm that can handle multiple features."}, {"start": 647.12, "end": 674.12, "text": " Let's go through that in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=mAtCVwNeeQU
8.10 Anomaly Detection | Anomaly detection algorithm-- [Machine Learning | Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Now that you've seen how the Gaussian or the normal distribution works for a single number, we're ready to build our anomaly detection algorithm. Let's dive in. You have a training set X1 through Xm where here each example X has n features. So each example X is a vector with n numbers. In the case of the airplane engine example, we had two features corresponding to the heat and the vibrations. And so each of these XIs would be a two dimensional vector and n would be equal to two. But for many practical applications, n can be much larger and you might do this with dozens or even hundreds of features. Given this training set, what we would like to do is to carry out density estimation. And all that means is we will build a model or estimate the probability for P of X. What's the probability of any given feature vector? And our model for P of X is going to be as follows. X is a feature vector with values X1, X2, and so on down to Xn. And I'm going to model P of X as the probability of X1 times the probability of X2 times the probability of X3 times the probability of Xn for the n features in the feature vectors. If you've taken an advanced class in probability statistics before, you may recognize that this equation corresponds to assuming that the features X1, X2, and so on up to Xm are statistically independent. But it turns out this algorithm often works fine even if the features are not actually statistically independent. But if you don't understand what I just said, don't worry about it. Understanding statistical independence is not needed to fully complete this class and also be able to very effectively use the Nullary Detection algorithm. Now to fill in this equation a little bit more, we are saying that the probability of all the features of this vector features X is the product of P of X1 and P of X2 and so on up through P of Xn. And in order to model the probability of X1, say the heat feature in this example, we're going to have two parameters mu1 and sigma1 or sigma squared 1. And what that means is we're going to estimate the mean of the feature X1 and also the variance of feature X1 and that will be mu1 and sigma1. To model P of X2, X2 is a totally different feature measuring the vibrations of the airplane engine, we're going to have two different parameters which I'm going to write as mu2 sigma2 squared and it turns out this will correspond to the mean or the average of the vibration feature and the variance of the vibration feature and so on. If you have additional features mu3 sigma3 squared up through mu n and sigma n squared. In case you're wondering why we multiply probabilities, maybe here's one example that could build intuition. Suppose for an aircraft engine there's a one-tenth chance that it is really really hot, unusually hot and maybe there is a one in 20 chance that it vibrates really really hot. Then what is the chance that it runs really really hot and vibrates really really hot? We're saying that the chance of that is one-tenth times one over 20 which is one over 200 so it's really really unlikely to get an engine that both runs really hot and vibrates really hot. It's the product of these two probabilities. Or the chance of both of these things happening we're saying is the product of both of these probabilities. A somewhat more compact way to write this equation up here is to say that this is equal to the product from j equals one through n of p of xj with parameters mu j and sigma squared j. And this symbol here is a lot like the summation symbol except that whereas the summation symbol corresponds to addition, this symbol here corresponds to multiplying these terms over here for j equals one through n. So let's put it all together to see how you can build an anomaly detection system. The first step is to choose features xi that you think might be indicative of anomalous examples. Having come up with the features you want to use, you would then fit the parameters mu one through mu n and sigma squared one through sigma squared n for the n features in your data set. As you might guess, the parameter mu j will be just the average of xj of the feature j of all the examples in your training set and sigma squared j will be the average of the square difference between the j feature and the value mu j that you just computed up here on top. And by the way, if you have a vectorized implementation, you can also compute mu as the average of the training examples as follows where here x and mu are both vectors. And so this would be the vectorized way of computing mu one through mu n all at the same time. And by estimating these parameters on your unlabeled training set, you've now computed all the parameters of your model. Finally, when you are given a new example, x test, or I'm just going to write the new example as x here, what you would do is compute p of x and see if it's large or small. So p of x, as you saw on the last slide, is the product from j equals one through n of the probability of the individual features. So p of xj with parameters mu j and sigma squared j. And if you substitute in the formula for this probability, you end up with this expression, one over root two pi sigma j of e to this expression over here. And so xj are the features. This is a j feature of your new example. Mu j and sigma j are numbers or parameters you had computed in the previous step. And if you compute out this formula, you get some number for p of x. And the final step is to see if p of x is less than epsilon. And if it is, then you flag that it is an anomaly. One intuition behind what this algorithm is doing is that it will tend to flag an example as anomalous if one or more of the features are either very large or very small relative to what it has seen in the training set. So for each of the features xj, you're fitting a Gaussian distribution like this. And so if even one of the features of the new example was way out here, say, then p of xj would be very small. And if just one of the terms in this product is very small, then this overall product, we multiply together will tend to be very small and thus p of x will be small. And what anomaly detection is doing in this algorithm is a systematic way of quantifying whether or not this new example x has any features that are unusually large or unusually small. Now, let's take a look at what all this actually means on one example. Here's a data set with features x1 and x2. And you notice that the features x1 take on a much larger range of values than the features x2. If you were to compute the mean of the features x1, you end up with five, which is why mu one is equal to one. And it turns out that for this data set, if you compute sigma one, it will be equal to about two. And if you were to compute mu two, the average of the features on x2, the average is three. And similarly, its variance or standard deviation is much smaller, which is why sigma two is equal to one. So that corresponds to this Gaussian distribution for x1 and this Gaussian distribution for x2. If you were to actually multiply p of x1 and p of x2, then you end up with this 3D surface plot for p of x, where at any point, the height of this is the product of p of x1 times p of x2 for the corresponding values of x1 and x2. And this signifies that values where p of x is higher are more likely, so values near the middle, kind of here, are more likely, whereas values far out here, like values out here, are much less likely, and have much lower chance. Now let me pick two test examples. The first one here, I'm going to write as x test one, and the second one down here as x test two. And let's see which of these two examples the algorithm will flag as anomalous. I'm going to pick the parameter epsilon to be equal to 0.02. And if you were to compute p of x test one, it turns out to be about 0.04. And this is much bigger than epsilon, and so the algorithm will say, this looks okay, doesn't look like an anomaly. Whereas in contrast, if you were to compute p of x for this point down here, corresponding to x1 equals about 8, and x2 equals about 0.5, kind of down here, then p of x test two is 0.0021. So this is much smaller than epsilon, and so the algorithm will flag this as a likely anomaly. So pretty much as you might hope, it decides that x test one looks pretty normal, whereas x test two, which is much further away than anything you see in the training set, looks like it could be an anomaly. So you've seen the process of how to build an anomaly detection system, but how do you choose the parameter epsilon? And how do you know if your anomaly detection system is working well? In the next video, let's dive a little bit more deeply into the process of developing and evaluating the performance of an anomaly detection system. Let's go on to the next video.
[{"start": 0.0, "end": 7.82, "text": " Now that you've seen how the Gaussian or the normal distribution works for a single number,"}, {"start": 7.82, "end": 10.96, "text": " we're ready to build our anomaly detection algorithm."}, {"start": 10.96, "end": 12.56, "text": " Let's dive in."}, {"start": 12.56, "end": 20.96, "text": " You have a training set X1 through Xm where here each example X has n features."}, {"start": 20.96, "end": 24.92, "text": " So each example X is a vector with n numbers."}, {"start": 24.92, "end": 31.880000000000003, "text": " In the case of the airplane engine example, we had two features corresponding to the heat"}, {"start": 31.880000000000003, "end": 33.2, "text": " and the vibrations."}, {"start": 33.2, "end": 38.980000000000004, "text": " And so each of these XIs would be a two dimensional vector and n would be equal to two."}, {"start": 38.980000000000004, "end": 43.160000000000004, "text": " But for many practical applications, n can be much larger and you might do this with"}, {"start": 43.160000000000004, "end": 46.92, "text": " dozens or even hundreds of features."}, {"start": 46.92, "end": 53.120000000000005, "text": " Given this training set, what we would like to do is to carry out density estimation."}, {"start": 53.12, "end": 60.4, "text": " And all that means is we will build a model or estimate the probability for P of X."}, {"start": 60.4, "end": 66.12, "text": " What's the probability of any given feature vector?"}, {"start": 66.12, "end": 71.28, "text": " And our model for P of X is going to be as follows."}, {"start": 71.28, "end": 79.03999999999999, "text": " X is a feature vector with values X1, X2, and so on down to Xn."}, {"start": 79.04, "end": 87.76, "text": " And I'm going to model P of X as the probability of X1 times the probability of X2 times the"}, {"start": 87.76, "end": 98.2, "text": " probability of X3 times the probability of Xn for the n features in the feature vectors."}, {"start": 98.2, "end": 103.52000000000001, "text": " If you've taken an advanced class in probability statistics before, you may recognize that"}, {"start": 103.52, "end": 110.8, "text": " this equation corresponds to assuming that the features X1, X2, and so on up to Xm are"}, {"start": 110.8, "end": 113.52, "text": " statistically independent."}, {"start": 113.52, "end": 117.8, "text": " But it turns out this algorithm often works fine even if the features are not actually"}, {"start": 117.8, "end": 120.52, "text": " statistically independent."}, {"start": 120.52, "end": 124.19999999999999, "text": " But if you don't understand what I just said, don't worry about it."}, {"start": 124.19999999999999, "end": 129.42, "text": " Understanding statistical independence is not needed to fully complete this class and"}, {"start": 129.42, "end": 134.76, "text": " also be able to very effectively use the Nullary Detection algorithm."}, {"start": 134.76, "end": 140.27999999999997, "text": " Now to fill in this equation a little bit more, we are saying that the probability of"}, {"start": 140.27999999999997, "end": 146.16, "text": " all the features of this vector features X is the product of P of X1 and P of X2 and"}, {"start": 146.16, "end": 149.23999999999998, "text": " so on up through P of Xn."}, {"start": 149.23999999999998, "end": 156.92, "text": " And in order to model the probability of X1, say the heat feature in this example, we're"}, {"start": 156.92, "end": 164.67999999999998, "text": " going to have two parameters mu1 and sigma1 or sigma squared 1."}, {"start": 164.67999999999998, "end": 172.35999999999999, "text": " And what that means is we're going to estimate the mean of the feature X1 and also the variance"}, {"start": 172.35999999999999, "end": 178.0, "text": " of feature X1 and that will be mu1 and sigma1."}, {"start": 178.0, "end": 184.27999999999997, "text": " To model P of X2, X2 is a totally different feature measuring the vibrations of the airplane"}, {"start": 184.28, "end": 192.0, "text": " engine, we're going to have two different parameters which I'm going to write as mu2"}, {"start": 192.0, "end": 197.08, "text": " sigma2 squared and it turns out this will correspond to the mean or the average of the"}, {"start": 197.08, "end": 203.32, "text": " vibration feature and the variance of the vibration feature and so on."}, {"start": 203.32, "end": 213.6, "text": " If you have additional features mu3 sigma3 squared up through mu n and sigma n squared."}, {"start": 213.6, "end": 220.24, "text": " In case you're wondering why we multiply probabilities, maybe here's one example that could build"}, {"start": 220.24, "end": 222.51999999999998, "text": " intuition."}, {"start": 222.51999999999998, "end": 229.48, "text": " Suppose for an aircraft engine there's a one-tenth chance that it is really really hot, unusually"}, {"start": 229.48, "end": 238.35999999999999, "text": " hot and maybe there is a one in 20 chance that it vibrates really really hot."}, {"start": 238.35999999999999, "end": 242.92, "text": " Then what is the chance that it runs really really hot and vibrates really really hot?"}, {"start": 242.92, "end": 250.56, "text": " We're saying that the chance of that is one-tenth times one over 20 which is one over 200 so"}, {"start": 250.56, "end": 256.03999999999996, "text": " it's really really unlikely to get an engine that both runs really hot and vibrates really"}, {"start": 256.03999999999996, "end": 257.03999999999996, "text": " hot."}, {"start": 257.03999999999996, "end": 260.56, "text": " It's the product of these two probabilities."}, {"start": 260.56, "end": 264.8, "text": " Or the chance of both of these things happening we're saying is the product of both of these"}, {"start": 264.8, "end": 267.36, "text": " probabilities."}, {"start": 267.36, "end": 273.2, "text": " A somewhat more compact way to write this equation up here is to say that this is equal"}, {"start": 273.2, "end": 286.16, "text": " to the product from j equals one through n of p of xj with parameters mu j and sigma"}, {"start": 286.16, "end": 289.76, "text": " squared j."}, {"start": 289.76, "end": 296.52000000000004, "text": " And this symbol here is a lot like the summation symbol except that whereas the summation symbol"}, {"start": 296.52, "end": 303.2, "text": " corresponds to addition, this symbol here corresponds to multiplying these terms over"}, {"start": 303.2, "end": 306.88, "text": " here for j equals one through n."}, {"start": 306.88, "end": 313.84, "text": " So let's put it all together to see how you can build an anomaly detection system."}, {"start": 313.84, "end": 320.0, "text": " The first step is to choose features xi that you think might be indicative of anomalous"}, {"start": 320.0, "end": 322.91999999999996, "text": " examples."}, {"start": 322.92, "end": 327.64000000000004, "text": " Having come up with the features you want to use, you would then fit the parameters"}, {"start": 327.64000000000004, "end": 334.48, "text": " mu one through mu n and sigma squared one through sigma squared n for the n features"}, {"start": 334.48, "end": 337.64000000000004, "text": " in your data set."}, {"start": 337.64000000000004, "end": 345.56, "text": " As you might guess, the parameter mu j will be just the average of xj of the feature j"}, {"start": 345.56, "end": 351.64, "text": " of all the examples in your training set and sigma squared j will be the average of the"}, {"start": 351.64, "end": 360.2, "text": " square difference between the j feature and the value mu j that you just computed up here"}, {"start": 360.2, "end": 361.2, "text": " on top."}, {"start": 361.2, "end": 369.96, "text": " And by the way, if you have a vectorized implementation, you can also compute mu as the average of"}, {"start": 369.96, "end": 376.59999999999997, "text": " the training examples as follows where here x and mu are both vectors."}, {"start": 376.6, "end": 382.56, "text": " And so this would be the vectorized way of computing mu one through mu n all at the same"}, {"start": 382.56, "end": 383.56, "text": " time."}, {"start": 383.56, "end": 388.48, "text": " And by estimating these parameters on your unlabeled training set, you've now computed"}, {"start": 388.48, "end": 391.12, "text": " all the parameters of your model."}, {"start": 391.12, "end": 397.28000000000003, "text": " Finally, when you are given a new example, x test, or I'm just going to write the new"}, {"start": 397.28000000000003, "end": 405.20000000000005, "text": " example as x here, what you would do is compute p of x and see if it's large or small."}, {"start": 405.2, "end": 410.88, "text": " So p of x, as you saw on the last slide, is the product from j equals one through n of"}, {"start": 410.88, "end": 413.7, "text": " the probability of the individual features."}, {"start": 413.7, "end": 420.08, "text": " So p of xj with parameters mu j and sigma squared j."}, {"start": 420.08, "end": 427.84, "text": " And if you substitute in the formula for this probability, you end up with this expression,"}, {"start": 427.84, "end": 432.59999999999997, "text": " one over root two pi sigma j of e to this expression over here."}, {"start": 432.6, "end": 436.12, "text": " And so xj are the features."}, {"start": 436.12, "end": 439.24, "text": " This is a j feature of your new example."}, {"start": 439.24, "end": 445.68, "text": " Mu j and sigma j are numbers or parameters you had computed in the previous step."}, {"start": 445.68, "end": 453.52000000000004, "text": " And if you compute out this formula, you get some number for p of x."}, {"start": 453.52000000000004, "end": 458.8, "text": " And the final step is to see if p of x is less than epsilon."}, {"start": 458.8, "end": 463.32, "text": " And if it is, then you flag that it is an anomaly."}, {"start": 463.32, "end": 469.24, "text": " One intuition behind what this algorithm is doing is that it will tend to flag an example"}, {"start": 469.24, "end": 476.68, "text": " as anomalous if one or more of the features are either very large or very small relative"}, {"start": 476.68, "end": 479.90000000000003, "text": " to what it has seen in the training set."}, {"start": 479.90000000000003, "end": 485.24, "text": " So for each of the features xj, you're fitting a Gaussian distribution like this."}, {"start": 485.24, "end": 492.64, "text": " And so if even one of the features of the new example was way out here, say, then p"}, {"start": 492.64, "end": 495.68, "text": " of xj would be very small."}, {"start": 495.68, "end": 501.8, "text": " And if just one of the terms in this product is very small, then this overall product,"}, {"start": 501.8, "end": 508.7, "text": " we multiply together will tend to be very small and thus p of x will be small."}, {"start": 508.7, "end": 516.64, "text": " And what anomaly detection is doing in this algorithm is a systematic way of quantifying"}, {"start": 516.64, "end": 522.16, "text": " whether or not this new example x has any features that are unusually large or unusually"}, {"start": 522.16, "end": 523.16, "text": " small."}, {"start": 523.16, "end": 530.0, "text": " Now, let's take a look at what all this actually means on one example."}, {"start": 530.0, "end": 535.04, "text": " Here's a data set with features x1 and x2."}, {"start": 535.04, "end": 540.36, "text": " And you notice that the features x1 take on a much larger range of values than the features"}, {"start": 540.36, "end": 543.36, "text": " x2."}, {"start": 543.36, "end": 549.16, "text": " If you were to compute the mean of the features x1, you end up with five, which is why mu"}, {"start": 549.16, "end": 550.8399999999999, "text": " one is equal to one."}, {"start": 550.8399999999999, "end": 556.04, "text": " And it turns out that for this data set, if you compute sigma one, it will be equal to"}, {"start": 556.04, "end": 558.04, "text": " about two."}, {"start": 558.04, "end": 564.9599999999999, "text": " And if you were to compute mu two, the average of the features on x2, the average is three."}, {"start": 564.96, "end": 571.1600000000001, "text": " And similarly, its variance or standard deviation is much smaller, which is why sigma two is"}, {"start": 571.1600000000001, "end": 573.12, "text": " equal to one."}, {"start": 573.12, "end": 580.76, "text": " So that corresponds to this Gaussian distribution for x1 and this Gaussian distribution for"}, {"start": 580.76, "end": 583.4000000000001, "text": " x2."}, {"start": 583.4000000000001, "end": 590.32, "text": " If you were to actually multiply p of x1 and p of x2, then you end up with this 3D surface"}, {"start": 590.32, "end": 598.84, "text": " plot for p of x, where at any point, the height of this is the product of p of x1 times p"}, {"start": 598.84, "end": 603.8000000000001, "text": " of x2 for the corresponding values of x1 and x2."}, {"start": 603.8000000000001, "end": 611.48, "text": " And this signifies that values where p of x is higher are more likely, so values near"}, {"start": 611.48, "end": 617.1600000000001, "text": " the middle, kind of here, are more likely, whereas values far out here, like values out"}, {"start": 617.16, "end": 622.7199999999999, "text": " here, are much less likely, and have much lower chance."}, {"start": 622.7199999999999, "end": 625.92, "text": " Now let me pick two test examples."}, {"start": 625.92, "end": 633.0, "text": " The first one here, I'm going to write as x test one, and the second one down here as"}, {"start": 633.0, "end": 634.88, "text": " x test two."}, {"start": 634.88, "end": 640.16, "text": " And let's see which of these two examples the algorithm will flag as anomalous."}, {"start": 640.16, "end": 648.0, "text": " I'm going to pick the parameter epsilon to be equal to 0.02."}, {"start": 648.0, "end": 656.24, "text": " And if you were to compute p of x test one, it turns out to be about 0.04."}, {"start": 656.24, "end": 660.64, "text": " And this is much bigger than epsilon, and so the algorithm will say, this looks okay,"}, {"start": 660.64, "end": 662.8, "text": " doesn't look like an anomaly."}, {"start": 662.8, "end": 669.0799999999999, "text": " Whereas in contrast, if you were to compute p of x for this point down here, corresponding"}, {"start": 669.08, "end": 680.36, "text": " to x1 equals about 8, and x2 equals about 0.5, kind of down here, then p of x test two"}, {"start": 680.36, "end": 681.36, "text": " is 0.0021."}, {"start": 681.36, "end": 688.4000000000001, "text": " So this is much smaller than epsilon, and so the algorithm will flag this as a likely"}, {"start": 688.4000000000001, "end": 689.72, "text": " anomaly."}, {"start": 689.72, "end": 696.1600000000001, "text": " So pretty much as you might hope, it decides that x test one looks pretty normal, whereas"}, {"start": 696.16, "end": 701.28, "text": " x test two, which is much further away than anything you see in the training set, looks"}, {"start": 701.28, "end": 703.9, "text": " like it could be an anomaly."}, {"start": 703.9, "end": 710.12, "text": " So you've seen the process of how to build an anomaly detection system, but how do you"}, {"start": 710.12, "end": 712.16, "text": " choose the parameter epsilon?"}, {"start": 712.16, "end": 716.64, "text": " And how do you know if your anomaly detection system is working well?"}, {"start": 716.64, "end": 721.72, "text": " In the next video, let's dive a little bit more deeply into the process of developing"}, {"start": 721.72, "end": 725.9599999999999, "text": " and evaluating the performance of an anomaly detection system."}, {"start": 725.96, "end": 727.48, "text": " Let's go on to the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=LtgmlBfZAz0
8.11 Anomaly Detection | Developing and evaluating an anomaly detection system-- [ML | Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
I'd like to share with you some practical tips for developing an anomaly detection system. One of the key ideas will be that if you can have a way to evaluate a system even as it's being developed, you'll be able to make decisions and change a system and improve it much more quickly. Let's take a look at what that means. When you're developing a learning algorithm, say choosing different features or trying different values of the parameters like epsilon, making decisions about whether or not to change a feature in a certain way or to increase or decrease epsilon or other parameters, making those decisions is much easier if you have a way of evaluating the learning algorithm. This is sometimes called row number evaluation, meaning that if you can quickly change the algorithm in some way, such as change a feature or change a parameter, and have a way of computing a number that tells you if the algorithm got better or worse, then it makes it much easier to decide whether or not to stick with that change to the algorithm. This is how it's often done in anomaly detection, which is even though we've mainly been talking about unlabeled data, I'm going to change that assumption a bit and assume that we have some labeled data, including just a small number usually of previously observed anomalies. So maybe after making airplane engines for a few years, you've just seen a few airplane engines that were anomalous. And for examples that you know are anomalous, I'm going to associate a label y equals one to indicate this anomalous. And for examples that we think are normal, I'm going to associate the label y equals zero. And so the training set that the anomaly detection algorithm will learn from is still this unlabeled training set of x one through x m. And I'm going to think of all of these examples as ones that we'll just assume are normal and not anomalous. So y is equal to zero. In practice, if a few anomalous examples were to slip into this training set, your algorithm will still usually do okay. To evaluate your algorithm, to come up with a way for you to have a real number evaluation, it turns out to be very useful if you have a small number of anomalous examples so that you can create a cross validation set, which I'm going to denote x cv one, y cv one through x cv m cv and y cv m cv. This is similar notation as you had seen in the second course of the specialization and similarly have a test set of some number of examples where both the cross validation and the test sets hopefully include a few anomalous examples. In other words, the cross validation and test sets will have a few examples with y equals one, but also a lot of examples where y is equal to zero. And again, in practice, anomaly detection algorithm will work okay. If there are some examples, they're actually anomalous, but they were accidentally labeled with y equals zero. Let's illustrate this with the aircraft engine example. Let's say you have been manufacturing aircraft engines for years and so you've collected data from 10,000 goods or normal engines. But over the years, you had also collected data from 20 flawed or anomalous engines. Usually the number of anomalous engines that is y equals one will be much smaller. And so it would not be atypical to apply this type of algorithm with anywhere from say two to 50 known anomalies. We're going to take this data set and break it up into a training set, a cross validation set and a test set. Here's one example. I'm going to put 6,000 good engines into the training set. And again, if there are a couple of anomalous engines that got slipped into this set is actually okay. I wouldn't worry too much about that. And then let's put 2,000 good engines and 10 of the known anomalies into the cross validation set and a separate 2,000 good and 10 anomalous engines into the test set. What you can do then is train the algorithm on the training set, fit the Gaussian distributions to these 6,000 examples. And then on the cross validation set, you can see how many of the anomalous engines it correctly flags. And so for example, you could use the cross validation set to tune the parameter epsilon and set it higher or lower depending on whether the algorithm seems to be reliably detecting these 10 anomalies without taking too many of these 2,000 good engines and flagging them as anomalies. And after you have tuned the parameter epsilon and maybe also added or subtracted or tuned the features xj, you can then take the algorithm and evaluate it on your test set to see how many of these 10 anomalous engines it finds as well as how many mistakes it makes by flagging the good engines as anomalous ones. Notice that this is still primarily an unsupervised learning algorithm because the training set really has no labels or they all have labels that we're assuming to be y equals zero. And so we learn from the training set by fitting the Gaussian distributions as you saw in the previous video. But it turns out if you're building a practical anomaly detection system, having a small number of anomalies to use to evaluate the algorithm in your cross validation and test sets is very helpful for tuning the algorithm. Because the number of flawed engines is so small, there's one other alternative that I often see people use for anomaly detection, which is to not use a test set, but to have just a training set and a cross validation set. So in this example, you would still train on 6,000 good engines, but take the remainder of the data, the 4,000 remaining good engines as well as all the anomalies and put them in the cross validation set, and you would then tune the parameters epsilon and add or subtract features xj to try to get it to do as well as possible as evaluated on the cross validation set. If you have very, very few flawed engines, so if you had only two flawed engines, then this really makes sense to put all of that in the cross validation set and you just don't have enough data to create a totally separate test set that is distinct from your cross validation set. The downside of this alternative here is that after you've tuned your algorithm, you don't have a fair way to tell how well this will actually do on future examples because you don't have a test set. But when your data set is small, especially when the number of anomalies you have in your data set is small, this might be the best alternative you have. And so I see this done quite often as well when you just don't have enough data to create a separate test set. And if this is the case, just be aware that there's a higher risk that you would have overfit some of your decisions around epsilon and choice of features and so on to the cross validation set. And so its performance on real data in the future may not be as good as you were expecting. Now let's take a closer look at how to actually evaluate the algorithm on your cross validation sets or on the test set. Here's what you do. You would first fit the model p of x on the training set. So this was a 6,000 examples of good engines. Then on any cross validation or test set example x, you would compute p of x and you would predict y equals 1, that is anomalous if p of x is less than epsilon, and you predict y is zero if p of x is greater than or equal to epsilon. And so based on this, you can now look at how accurately this algorithm's predictions on the cross validation or test set matches the labels why you have the cross validation or the test sets. In the third week of the second course, we had had a couple optional videos on how to handle highly skewed data distributions where the number of positive examples, y equals 1, can be much smaller than the number of negative examples where y equals zero. And this is the case as well for many anomaly detection applications where the number of anomalies in your cross validation set is much smaller. In our previous example, we had maybe 10 positive examples and 2,000 negative examples because we had 10 anomalies and 2,000 normal examples. If you saw those optional videos, you may recall that we saw it could be useful to compute things like the true positive, false positive, false negative, and true negative rates, or to compute precision recall or F1 score, and that these are alternative metrics to classification accuracy that could work better when your data distribution is very skewed. So if you saw that video, you might consider applying those types of evaluation metrics as well to tell how well your learning algorithm is doing at finding that small handful of anomalies or positive examples amidst this much larger set of negative examples of normal plane engines. If you didn't watch that video, don't worry about it. It's okay. The intuition I hope you get is to use the cross validation set to just look at how many anomalies it's finding and also how many normal engines it is incorrectly flagging as an anomaly, and then to just use that to try to choose a good choice for the parameter epsilon. So you find that the practical process of building an anomaly detection system is much easier if you actually have just a small number of labeled examples of known anomalies. Now this does raise a question. If you have a few labeled examples, should you still be using an unsupervised learning algorithm? Why not take those labeled examples and use a supervised learning algorithm instead? In the next video, let's take a look at a comparison between anomaly detection and supervised learning and when you might prefer one over the other. Let's go on to the next video.
[{"start": 0.0, "end": 7.8, "text": " I'd like to share with you some practical tips for developing an anomaly detection system."}, {"start": 7.8, "end": 13.84, "text": " One of the key ideas will be that if you can have a way to evaluate a system even as it's"}, {"start": 13.84, "end": 19.16, "text": " being developed, you'll be able to make decisions and change a system and improve it much more"}, {"start": 19.16, "end": 20.16, "text": " quickly."}, {"start": 20.16, "end": 22.400000000000002, "text": " Let's take a look at what that means."}, {"start": 22.400000000000002, "end": 26.84, "text": " When you're developing a learning algorithm, say choosing different features or trying"}, {"start": 26.84, "end": 33.08, "text": " different values of the parameters like epsilon, making decisions about whether or not to change"}, {"start": 33.08, "end": 37.92, "text": " a feature in a certain way or to increase or decrease epsilon or other parameters, making"}, {"start": 37.92, "end": 44.2, "text": " those decisions is much easier if you have a way of evaluating the learning algorithm."}, {"start": 44.2, "end": 50.3, "text": " This is sometimes called row number evaluation, meaning that if you can quickly change the"}, {"start": 50.3, "end": 56.019999999999996, "text": " algorithm in some way, such as change a feature or change a parameter, and have a way of computing"}, {"start": 56.02, "end": 61.92, "text": " a number that tells you if the algorithm got better or worse, then it makes it much easier"}, {"start": 61.92, "end": 66.44, "text": " to decide whether or not to stick with that change to the algorithm."}, {"start": 66.44, "end": 72.68, "text": " This is how it's often done in anomaly detection, which is even though we've mainly been talking"}, {"start": 72.68, "end": 79.30000000000001, "text": " about unlabeled data, I'm going to change that assumption a bit and assume that we have"}, {"start": 79.3, "end": 87.08, "text": " some labeled data, including just a small number usually of previously observed anomalies."}, {"start": 87.08, "end": 93.32, "text": " So maybe after making airplane engines for a few years, you've just seen a few airplane"}, {"start": 93.32, "end": 96.1, "text": " engines that were anomalous."}, {"start": 96.1, "end": 103.8, "text": " And for examples that you know are anomalous, I'm going to associate a label y equals one"}, {"start": 103.8, "end": 107.16, "text": " to indicate this anomalous."}, {"start": 107.16, "end": 112.64, "text": " And for examples that we think are normal, I'm going to associate the label y equals"}, {"start": 112.64, "end": 113.64, "text": " zero."}, {"start": 113.64, "end": 121.16, "text": " And so the training set that the anomaly detection algorithm will learn from is still this unlabeled"}, {"start": 121.16, "end": 125.08, "text": " training set of x one through x m."}, {"start": 125.08, "end": 132.96, "text": " And I'm going to think of all of these examples as ones that we'll just assume are normal"}, {"start": 132.96, "end": 134.44, "text": " and not anomalous."}, {"start": 134.44, "end": 137.24, "text": " So y is equal to zero."}, {"start": 137.24, "end": 142.76, "text": " In practice, if a few anomalous examples were to slip into this training set, your algorithm"}, {"start": 142.76, "end": 145.24, "text": " will still usually do okay."}, {"start": 145.24, "end": 153.44, "text": " To evaluate your algorithm, to come up with a way for you to have a real number evaluation,"}, {"start": 153.44, "end": 161.4, "text": " it turns out to be very useful if you have a small number of anomalous examples so that"}, {"start": 161.4, "end": 168.24, "text": " you can create a cross validation set, which I'm going to denote x cv one, y cv one through"}, {"start": 168.24, "end": 170.64000000000001, "text": " x cv m cv and y cv m cv."}, {"start": 170.64000000000001, "end": 176.24, "text": " This is similar notation as you had seen in the second course of the specialization and"}, {"start": 176.24, "end": 185.28, "text": " similarly have a test set of some number of examples where both the cross validation and"}, {"start": 185.28, "end": 192.36, "text": " the test sets hopefully include a few anomalous examples."}, {"start": 192.36, "end": 197.16, "text": " In other words, the cross validation and test sets will have a few examples with y equals"}, {"start": 197.16, "end": 201.88, "text": " one, but also a lot of examples where y is equal to zero."}, {"start": 201.88, "end": 206.88, "text": " And again, in practice, anomaly detection algorithm will work okay."}, {"start": 206.88, "end": 212.44, "text": " If there are some examples, they're actually anomalous, but they were accidentally labeled"}, {"start": 212.44, "end": 214.16, "text": " with y equals zero."}, {"start": 214.16, "end": 218.92, "text": " Let's illustrate this with the aircraft engine example."}, {"start": 218.92, "end": 223.44, "text": " Let's say you have been manufacturing aircraft engines for years and so you've collected"}, {"start": 223.44, "end": 228.12, "text": " data from 10,000 goods or normal engines."}, {"start": 228.12, "end": 236.35999999999999, "text": " But over the years, you had also collected data from 20 flawed or anomalous engines."}, {"start": 236.35999999999999, "end": 242.48, "text": " Usually the number of anomalous engines that is y equals one will be much smaller."}, {"start": 242.48, "end": 250.56, "text": " And so it would not be atypical to apply this type of algorithm with anywhere from say two"}, {"start": 250.56, "end": 254.28, "text": " to 50 known anomalies."}, {"start": 254.28, "end": 258.88, "text": " We're going to take this data set and break it up into a training set, a cross validation"}, {"start": 258.88, "end": 260.59999999999997, "text": " set and a test set."}, {"start": 260.59999999999997, "end": 262.03999999999996, "text": " Here's one example."}, {"start": 262.03999999999996, "end": 267.4, "text": " I'm going to put 6,000 good engines into the training set."}, {"start": 267.4, "end": 272.47999999999996, "text": " And again, if there are a couple of anomalous engines that got slipped into this set is"}, {"start": 272.47999999999996, "end": 273.47999999999996, "text": " actually okay."}, {"start": 273.47999999999996, "end": 276.84, "text": " I wouldn't worry too much about that."}, {"start": 276.84, "end": 284.03999999999996, "text": " And then let's put 2,000 good engines and 10 of the known anomalies into the cross validation"}, {"start": 284.03999999999996, "end": 291.79999999999995, "text": " set and a separate 2,000 good and 10 anomalous engines into the test set."}, {"start": 291.8, "end": 299.2, "text": " What you can do then is train the algorithm on the training set, fit the Gaussian distributions"}, {"start": 299.2, "end": 301.76, "text": " to these 6,000 examples."}, {"start": 301.76, "end": 309.18, "text": " And then on the cross validation set, you can see how many of the anomalous engines"}, {"start": 309.18, "end": 312.08000000000004, "text": " it correctly flags."}, {"start": 312.08000000000004, "end": 318.44, "text": " And so for example, you could use the cross validation set to tune the parameter epsilon"}, {"start": 318.44, "end": 326.92, "text": " and set it higher or lower depending on whether the algorithm seems to be reliably detecting"}, {"start": 326.92, "end": 332.6, "text": " these 10 anomalies without taking too many of these 2,000 good engines and flagging them"}, {"start": 332.6, "end": 335.28, "text": " as anomalies."}, {"start": 335.28, "end": 340.64, "text": " And after you have tuned the parameter epsilon and maybe also added or subtracted or tuned"}, {"start": 340.64, "end": 348.84, "text": " the features xj, you can then take the algorithm and evaluate it on your test set to see how"}, {"start": 348.84, "end": 355.32, "text": " many of these 10 anomalous engines it finds as well as how many mistakes it makes by flagging"}, {"start": 355.32, "end": 358.76, "text": " the good engines as anomalous ones."}, {"start": 358.76, "end": 366.0, "text": " Notice that this is still primarily an unsupervised learning algorithm because the training set"}, {"start": 366.0, "end": 371.96, "text": " really has no labels or they all have labels that we're assuming to be y equals zero."}, {"start": 371.96, "end": 376.92, "text": " And so we learn from the training set by fitting the Gaussian distributions as you saw in the"}, {"start": 376.92, "end": 378.76, "text": " previous video."}, {"start": 378.76, "end": 384.64, "text": " But it turns out if you're building a practical anomaly detection system, having a small number"}, {"start": 384.64, "end": 390.84, "text": " of anomalies to use to evaluate the algorithm in your cross validation and test sets is"}, {"start": 390.84, "end": 394.2, "text": " very helpful for tuning the algorithm."}, {"start": 394.2, "end": 399.96, "text": " Because the number of flawed engines is so small, there's one other alternative that"}, {"start": 399.96, "end": 407.32, "text": " I often see people use for anomaly detection, which is to not use a test set, but to have"}, {"start": 407.32, "end": 410.84, "text": " just a training set and a cross validation set."}, {"start": 410.84, "end": 415.8, "text": " So in this example, you would still train on 6,000 good engines, but take the remainder"}, {"start": 415.8, "end": 421.4, "text": " of the data, the 4,000 remaining good engines as well as all the anomalies and put them"}, {"start": 421.4, "end": 426.52, "text": " in the cross validation set, and you would then tune the parameters epsilon and add or"}, {"start": 426.52, "end": 432.52, "text": " subtract features xj to try to get it to do as well as possible as evaluated on the cross"}, {"start": 432.52, "end": 434.59999999999997, "text": " validation set."}, {"start": 434.59999999999997, "end": 441.64, "text": " If you have very, very few flawed engines, so if you had only two flawed engines, then"}, {"start": 441.64, "end": 446.64, "text": " this really makes sense to put all of that in the cross validation set and you just don't"}, {"start": 446.64, "end": 451.76, "text": " have enough data to create a totally separate test set that is distinct from your cross"}, {"start": 451.76, "end": 453.32, "text": " validation set."}, {"start": 453.32, "end": 458.64, "text": " The downside of this alternative here is that after you've tuned your algorithm, you don't"}, {"start": 458.64, "end": 465.03999999999996, "text": " have a fair way to tell how well this will actually do on future examples because you"}, {"start": 465.03999999999996, "end": 466.8, "text": " don't have a test set."}, {"start": 466.8, "end": 471.03999999999996, "text": " But when your data set is small, especially when the number of anomalies you have in your"}, {"start": 471.03999999999996, "end": 475.0, "text": " data set is small, this might be the best alternative you have."}, {"start": 475.0, "end": 480.36, "text": " And so I see this done quite often as well when you just don't have enough data to create"}, {"start": 480.36, "end": 482.32, "text": " a separate test set."}, {"start": 482.32, "end": 487.28, "text": " And if this is the case, just be aware that there's a higher risk that you would have"}, {"start": 487.28, "end": 492.52, "text": " overfit some of your decisions around epsilon and choice of features and so on to the cross"}, {"start": 492.52, "end": 493.96, "text": " validation set."}, {"start": 493.96, "end": 501.4, "text": " And so its performance on real data in the future may not be as good as you were expecting."}, {"start": 501.4, "end": 507.2, "text": " Now let's take a closer look at how to actually evaluate the algorithm on your cross validation"}, {"start": 507.2, "end": 509.64, "text": " sets or on the test set."}, {"start": 509.64, "end": 511.08, "text": " Here's what you do."}, {"start": 511.08, "end": 515.28, "text": " You would first fit the model p of x on the training set."}, {"start": 515.28, "end": 519.12, "text": " So this was a 6,000 examples of good engines."}, {"start": 519.12, "end": 527.3199999999999, "text": " Then on any cross validation or test set example x, you would compute p of x and you would"}, {"start": 527.32, "end": 533.6400000000001, "text": " predict y equals 1, that is anomalous if p of x is less than epsilon, and you predict"}, {"start": 533.6400000000001, "end": 539.24, "text": " y is zero if p of x is greater than or equal to epsilon."}, {"start": 539.24, "end": 546.2, "text": " And so based on this, you can now look at how accurately this algorithm's predictions"}, {"start": 546.2, "end": 554.12, "text": " on the cross validation or test set matches the labels why you have the cross validation"}, {"start": 554.12, "end": 555.8800000000001, "text": " or the test sets."}, {"start": 555.88, "end": 561.68, "text": " In the third week of the second course, we had had a couple optional videos on how to"}, {"start": 561.68, "end": 569.32, "text": " handle highly skewed data distributions where the number of positive examples, y equals"}, {"start": 569.32, "end": 574.8, "text": " 1, can be much smaller than the number of negative examples where y equals zero."}, {"start": 574.8, "end": 579.8, "text": " And this is the case as well for many anomaly detection applications where the number of"}, {"start": 579.8, "end": 584.2, "text": " anomalies in your cross validation set is much smaller."}, {"start": 584.2, "end": 591.36, "text": " In our previous example, we had maybe 10 positive examples and 2,000 negative examples because"}, {"start": 591.36, "end": 595.84, "text": " we had 10 anomalies and 2,000 normal examples."}, {"start": 595.84, "end": 601.32, "text": " If you saw those optional videos, you may recall that we saw it could be useful to compute"}, {"start": 601.32, "end": 605.84, "text": " things like the true positive, false positive, false negative, and true negative rates, or"}, {"start": 605.84, "end": 611.6400000000001, "text": " to compute precision recall or F1 score, and that these are alternative metrics to classification"}, {"start": 611.64, "end": 617.76, "text": " accuracy that could work better when your data distribution is very skewed."}, {"start": 617.76, "end": 623.04, "text": " So if you saw that video, you might consider applying those types of evaluation metrics"}, {"start": 623.04, "end": 629.92, "text": " as well to tell how well your learning algorithm is doing at finding that small handful of"}, {"start": 629.92, "end": 636.36, "text": " anomalies or positive examples amidst this much larger set of negative examples of normal"}, {"start": 636.36, "end": 637.36, "text": " plane engines."}, {"start": 637.36, "end": 640.52, "text": " If you didn't watch that video, don't worry about it."}, {"start": 640.52, "end": 641.52, "text": " It's okay."}, {"start": 641.52, "end": 647.0, "text": " The intuition I hope you get is to use the cross validation set to just look at how many"}, {"start": 647.0, "end": 654.0, "text": " anomalies it's finding and also how many normal engines it is incorrectly flagging as an anomaly,"}, {"start": 654.0, "end": 660.1999999999999, "text": " and then to just use that to try to choose a good choice for the parameter epsilon."}, {"start": 660.1999999999999, "end": 666.48, "text": " So you find that the practical process of building an anomaly detection system is much"}, {"start": 666.48, "end": 673.88, "text": " easier if you actually have just a small number of labeled examples of known anomalies."}, {"start": 673.88, "end": 676.12, "text": " Now this does raise a question."}, {"start": 676.12, "end": 680.64, "text": " If you have a few labeled examples, should you still be using an unsupervised learning"}, {"start": 680.64, "end": 681.64, "text": " algorithm?"}, {"start": 681.64, "end": 686.72, "text": " Why not take those labeled examples and use a supervised learning algorithm instead?"}, {"start": 686.72, "end": 692.6, "text": " In the next video, let's take a look at a comparison between anomaly detection and supervised"}, {"start": 692.6, "end": 696.24, "text": " learning and when you might prefer one over the other."}, {"start": 696.24, "end": 697.84, "text": " Let's go on to the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=7dIVRduaHzQ
8.12 Anomaly Detection | Anomaly detection vs. supervised learning -- [Machine Learning| Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
When you have a few positive examples with y equals 1 and a large number of negative examples say y equals 0, when should you use anomaly detection and when should you use supervised learning? The decision is actually quite subtle in some applications, so let me share with you some thoughts and some suggestions for how to pick between these two types of algorithms. An anomaly detection algorithm will typically be the more appropriate choice when you have a very small number of positive examples, 0 to 20 positive examples is not uncommon and a relatively large number of negative examples with which to try to build a model for p of x. Where you recall that the parameters for p of x are learned only from the negative examples and this much smaller set of positive examples is only used in your cross-validation set and test set for parameter tuning and for evaluation. In contrast, if you have a larger number of positive and negative examples, then supervised learning might be more applicable. Now, even if you have only 20 positive training examples, it might be okay to apply a supervised learning algorithm, but it turns out that the way anomaly detection looks at the data set versus the way supervised learning looks at the data set are quite different. Here's the main difference, which is that if you think there are many different types of anomalies or many different types of positive examples, then anomaly detection might be more appropriate. When there are many different ways for an aircraft engine to go wrong and if tomorrow there may be a brand new way for an aircraft engine to have something wrong with it, then your 20, say, positive examples may not cover all of the ways that an aircraft engine could go wrong. That makes it hard for any algorithm to learn from the smallest of the positive examples what the anomalies, what the positive examples look like. Future anomalies may look nothing like any of the anomalous examples we've seen so far. If you believe this to be true for your problem, then I would gravitate to using an anomaly detection algorithm. Because what anomaly detection does is it looks at the normal examples, that is the y equals zero and negative examples, and just try to model what they look like. Something that deviates a lot from normal, it flags as an anomaly, including if there's a brand new way for an aircraft engine to fail that had never been seen before in your data set. In contrast, supervised learning has a different way of looking at the problem. When you apply supervised learning, ideally you would hope to have enough positive examples for the algorithm to get a sense of what the positive examples are like. And with supervised learning, we tend to assume that the future positive examples are likely to be similar to the ones in the training set. So let me illustrate this with one example. If you are using a system to find, say, financial fraud, there are many different ways, unfortunately, that some individuals are trying to commit financial fraud. And unfortunately, there are new types of financial fraud attempts every few months or every year. And what that means is that because they keep on popping up completely new and unique forms of financial fraud, anomaly detection is often used to just look for anything that's different than transactions we've seen in the past. In contrast, if you look at the problem of email spam detection, well, there are many different types of spam email, but even over many years, spam emails keep on trying to sell similar things or get you to go to similar websites and so on. Spam email that you will get in the next few days is much more likely to be similar to spam emails that you have seen in the past. So that's why supervised learning works well for spam because it's trying to detect more of the types of spam emails that you have probably seen in the past in your training set. Whereas if you're trying to detect brand new types of fraud that have never been seen before, then anomaly detection may be more applicable. Let's go through a few more examples. We have already seen fraud detection being one use case of anomaly detection. Although supervised learning is used to define previously observed forms of fraud, and we've seen emails spam classification typically being addressed using supervised learning. You've also seen the example of manufacturing where you may want to find new previously unseen defects, such as if there are brand new ways for an aircraft engine to fail in the future that you still want to detect, even if you don't have any positive example like that in your training set. It turns out that in manufacturing, supervised learning is also used to find defects, more for finding known and previously seen defects. For example, if you are a smartphone maker, you're making cell phones and you know that occasionally your machine for making the case of the smartphone will accidentally scratch the cover. So scratches are a common defect on smartphones. And so you can get enough training examples of scratched smartphones corresponding to a label Y equals one and just train the system to decide if a new smartphone that you just manufactured has any scratches in it. And the difference is if you just see scratched smartphones over and over, and you want to check if your phones are scratched, then supervised learning works well. Whereas if you suspect that there are going to be brand new ways for something to go wrong in the future, then anomaly detection will work well. Some other examples, you've heard me talk about monitoring machines in the data center, especially if a machine has been hacked, it can behave differently in a brand new way, unlike any previous way it has behaved. So that would feel more like an anomaly detection application. In fact, one theme is that many security related applications because hackers are often finding brand new ways to hack into systems, many security related applications will use anomaly detection. Whereas returning to supervised learning, if you want to learn to predict the weather, well, there's only a handful types of weather that you typically see, is it sunny, rainy, is it going to snow? And so because you see the same output labels over and over, weather prediction would tend to be a supervised learning task. Or if you want to use the symptoms of a patient to see if the patient has a specific disease that you've seen before, then that would also tend to be a supervised learning application. So I hope that gives you a framework for deciding when you have a small set of positive examples, as well as maybe a large set of negative examples, whether to use anomaly detection or supervised learning. Anomaly detection tries to find brand new positive examples that may be unlike anything you've seen before. Whereas supervised learning looks at your positive examples and tries to decide if a future example is similar to the positive examples that you've already seen. Now, it turns out that when building an anomaly detection algorithm, the choice of features is very important. And when building anomaly detection systems, I often spend a bit of time trying to tune the features I use for the system. In the next video, let me share some practical tips on how to tune the features you feed to anomaly detection algorithm.
[{"start": 0.0, "end": 8.44, "text": " When you have a few positive examples with y equals 1 and a large number of negative"}, {"start": 8.44, "end": 13.86, "text": " examples say y equals 0, when should you use anomaly detection and when should you use"}, {"start": 13.86, "end": 19.5, "text": " supervised learning? The decision is actually quite subtle in some applications, so let"}, {"start": 19.5, "end": 24.64, "text": " me share with you some thoughts and some suggestions for how to pick between these two types of"}, {"start": 24.64, "end": 30.8, "text": " algorithms. An anomaly detection algorithm will typically be the more appropriate choice"}, {"start": 30.8, "end": 38.92, "text": " when you have a very small number of positive examples, 0 to 20 positive examples is not"}, {"start": 38.92, "end": 46.58, "text": " uncommon and a relatively large number of negative examples with which to try to build"}, {"start": 46.58, "end": 53.38, "text": " a model for p of x. Where you recall that the parameters for p of x are learned only"}, {"start": 53.38, "end": 59.32, "text": " from the negative examples and this much smaller set of positive examples is only used in your"}, {"start": 59.32, "end": 65.2, "text": " cross-validation set and test set for parameter tuning and for evaluation. In contrast, if"}, {"start": 65.2, "end": 71.76, "text": " you have a larger number of positive and negative examples, then supervised learning might be"}, {"start": 71.76, "end": 79.96000000000001, "text": " more applicable. Now, even if you have only 20 positive training examples, it might be"}, {"start": 79.96, "end": 86.8, "text": " okay to apply a supervised learning algorithm, but it turns out that the way anomaly detection"}, {"start": 86.8, "end": 92.72, "text": " looks at the data set versus the way supervised learning looks at the data set are quite different."}, {"start": 92.72, "end": 98.67999999999999, "text": " Here's the main difference, which is that if you think there are many different types"}, {"start": 98.67999999999999, "end": 105.66, "text": " of anomalies or many different types of positive examples, then anomaly detection might be"}, {"start": 105.66, "end": 111.6, "text": " more appropriate. When there are many different ways for an aircraft engine to go wrong and"}, {"start": 111.6, "end": 117.08, "text": " if tomorrow there may be a brand new way for an aircraft engine to have something wrong"}, {"start": 117.08, "end": 124.88, "text": " with it, then your 20, say, positive examples may not cover all of the ways that an aircraft"}, {"start": 124.88, "end": 130.0, "text": " engine could go wrong. That makes it hard for any algorithm to learn from the smallest"}, {"start": 130.0, "end": 136.12, "text": " of the positive examples what the anomalies, what the positive examples look like. Future"}, {"start": 136.12, "end": 141.64, "text": " anomalies may look nothing like any of the anomalous examples we've seen so far. If you"}, {"start": 141.64, "end": 147.28, "text": " believe this to be true for your problem, then I would gravitate to using an anomaly"}, {"start": 147.28, "end": 153.52, "text": " detection algorithm. Because what anomaly detection does is it looks at the normal examples,"}, {"start": 153.52, "end": 159.64, "text": " that is the y equals zero and negative examples, and just try to model what they look like."}, {"start": 159.64, "end": 164.39999999999998, "text": " Something that deviates a lot from normal, it flags as an anomaly, including if there's"}, {"start": 164.39999999999998, "end": 170.16, "text": " a brand new way for an aircraft engine to fail that had never been seen before in your"}, {"start": 170.16, "end": 175.51999999999998, "text": " data set. In contrast, supervised learning has a different way of looking at the problem."}, {"start": 175.51999999999998, "end": 180.76, "text": " When you apply supervised learning, ideally you would hope to have enough positive examples"}, {"start": 180.76, "end": 185.72, "text": " for the algorithm to get a sense of what the positive examples are like. And with supervised"}, {"start": 185.72, "end": 191.72, "text": " learning, we tend to assume that the future positive examples are likely to be similar"}, {"start": 191.72, "end": 198.72, "text": " to the ones in the training set. So let me illustrate this with one example. If you are"}, {"start": 198.72, "end": 205.28, "text": " using a system to find, say, financial fraud, there are many different ways, unfortunately,"}, {"start": 205.28, "end": 211.28, "text": " that some individuals are trying to commit financial fraud. And unfortunately, there"}, {"start": 211.28, "end": 216.96, "text": " are new types of financial fraud attempts every few months or every year. And what that"}, {"start": 216.96, "end": 223.64, "text": " means is that because they keep on popping up completely new and unique forms of financial"}, {"start": 223.64, "end": 229.92000000000002, "text": " fraud, anomaly detection is often used to just look for anything that's different than"}, {"start": 229.92000000000002, "end": 236.88, "text": " transactions we've seen in the past. In contrast, if you look at the problem of email spam detection,"}, {"start": 236.88, "end": 243.4, "text": " well, there are many different types of spam email, but even over many years, spam emails"}, {"start": 243.4, "end": 250.44, "text": " keep on trying to sell similar things or get you to go to similar websites and so on. Spam"}, {"start": 250.44, "end": 255.72, "text": " email that you will get in the next few days is much more likely to be similar to spam"}, {"start": 255.72, "end": 262.4, "text": " emails that you have seen in the past. So that's why supervised learning works well"}, {"start": 262.4, "end": 267.91999999999996, "text": " for spam because it's trying to detect more of the types of spam emails that you have"}, {"start": 267.91999999999996, "end": 273.12, "text": " probably seen in the past in your training set. Whereas if you're trying to detect brand"}, {"start": 273.12, "end": 278.08, "text": " new types of fraud that have never been seen before, then anomaly detection may be more"}, {"start": 278.08, "end": 286.21999999999997, "text": " applicable. Let's go through a few more examples. We have already seen fraud detection being"}, {"start": 286.21999999999997, "end": 292.14, "text": " one use case of anomaly detection. Although supervised learning is used to define previously"}, {"start": 292.14, "end": 298.38, "text": " observed forms of fraud, and we've seen emails spam classification typically being addressed"}, {"start": 298.38, "end": 305.2, "text": " using supervised learning. You've also seen the example of manufacturing where you may"}, {"start": 305.2, "end": 312.0, "text": " want to find new previously unseen defects, such as if there are brand new ways for an"}, {"start": 312.0, "end": 316.0, "text": " aircraft engine to fail in the future that you still want to detect, even if you don't"}, {"start": 316.0, "end": 322.76, "text": " have any positive example like that in your training set. It turns out that in manufacturing,"}, {"start": 322.76, "end": 328.24, "text": " supervised learning is also used to find defects, more for finding known and previously seen"}, {"start": 328.24, "end": 334.64, "text": " defects. For example, if you are a smartphone maker, you're making cell phones and you know"}, {"start": 334.64, "end": 340.28, "text": " that occasionally your machine for making the case of the smartphone will accidentally"}, {"start": 340.28, "end": 347.2, "text": " scratch the cover. So scratches are a common defect on smartphones. And so you can get"}, {"start": 347.2, "end": 354.0, "text": " enough training examples of scratched smartphones corresponding to a label Y equals one and"}, {"start": 354.0, "end": 359.67999999999995, "text": " just train the system to decide if a new smartphone that you just manufactured has any scratches"}, {"start": 359.67999999999995, "end": 364.84, "text": " in it. And the difference is if you just see scratched smartphones over and over, and you"}, {"start": 364.84, "end": 370.2, "text": " want to check if your phones are scratched, then supervised learning works well. Whereas"}, {"start": 370.2, "end": 373.96, "text": " if you suspect that there are going to be brand new ways for something to go wrong in"}, {"start": 373.96, "end": 379.35999999999996, "text": " the future, then anomaly detection will work well. Some other examples, you've heard me"}, {"start": 379.35999999999996, "end": 384.88, "text": " talk about monitoring machines in the data center, especially if a machine has been hacked,"}, {"start": 384.88, "end": 389.84, "text": " it can behave differently in a brand new way, unlike any previous way it has behaved. So"}, {"start": 389.84, "end": 396.15999999999997, "text": " that would feel more like an anomaly detection application. In fact, one theme is that many"}, {"start": 396.15999999999997, "end": 402.64, "text": " security related applications because hackers are often finding brand new ways to hack into"}, {"start": 402.64, "end": 407.76, "text": " systems, many security related applications will use anomaly detection. Whereas returning"}, {"start": 407.76, "end": 413.4, "text": " to supervised learning, if you want to learn to predict the weather, well, there's only"}, {"start": 413.4, "end": 419.71999999999997, "text": " a handful types of weather that you typically see, is it sunny, rainy, is it going to snow?"}, {"start": 419.72, "end": 425.04, "text": " And so because you see the same output labels over and over, weather prediction would tend"}, {"start": 425.04, "end": 430.40000000000003, "text": " to be a supervised learning task. Or if you want to use the symptoms of a patient to see"}, {"start": 430.40000000000003, "end": 434.96000000000004, "text": " if the patient has a specific disease that you've seen before, then that would also tend"}, {"start": 434.96000000000004, "end": 441.0, "text": " to be a supervised learning application. So I hope that gives you a framework for deciding"}, {"start": 441.0, "end": 446.6, "text": " when you have a small set of positive examples, as well as maybe a large set of negative examples,"}, {"start": 446.6, "end": 452.40000000000003, "text": " whether to use anomaly detection or supervised learning. Anomaly detection tries to find"}, {"start": 452.40000000000003, "end": 457.8, "text": " brand new positive examples that may be unlike anything you've seen before. Whereas supervised"}, {"start": 457.8, "end": 463.16, "text": " learning looks at your positive examples and tries to decide if a future example is similar"}, {"start": 463.16, "end": 469.6, "text": " to the positive examples that you've already seen. Now, it turns out that when building"}, {"start": 469.6, "end": 475.52000000000004, "text": " an anomaly detection algorithm, the choice of features is very important. And when building"}, {"start": 475.52, "end": 480.15999999999997, "text": " anomaly detection systems, I often spend a bit of time trying to tune the features I"}, {"start": 480.15999999999997, "end": 485.4, "text": " use for the system. In the next video, let me share some practical tips on how to tune"}, {"start": 485.4, "end": 506.4, "text": " the features you feed to anomaly detection algorithm."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=hBiFuWa91wE
8.13 Anomaly Detection | Choosing what features to use -- [Machine Learning| Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
When building an anomaly detection algorithm, I found that choosing a good choice of features turns out to be really important. In supervised learning, if you don't have the features quite right, or if you have a few extra features that are not relevant to the problem, that often turns out to be okay because the algorithm has the supervised signal, that is enough labels y for the algorithm to figure out what features ignore or how to rescale a feature and to take the best advantage of the features you do give it. But for anomaly detection, which runs or learns just from unlabeled data, it's harder for the algorithm to figure out what features to ignore. So I've found that carefully choosing the features is even more important for anomaly detection than for supervised learning approaches. Let's take a look at this video at some practical tips for how to tune the features for anomaly detection to try to get you the best possible performance. One step that can help your anomaly detection algorithm is to try to make sure the features you give it are more or less Gaussian. And if your features are not Gaussian, sometimes you can change it to make it a little bit more Gaussian. Let me show you what I mean. If you have a feature x, I will often plot a histogram of the feature, which you can do using the Python command plt.his. You see this in the practice lab as well. In order to look at a histogram of the data. This distribution here looks pretty Gaussian. So this would be a good candidate feature if you think this is a feature that helps distinguish between anomalies and normal examples. But quite often when you plot a histogram of your features, you may find that a feature has a distribution like this. This does not at all look like that symmetric bell shaped curve. When that is the case, I would consider if you can take this feature x and transform it in order to make it more Gaussian. For example, maybe if you were to compute the log of x and plot a histogram of log of x, it'll look like this and this looks much more Gaussian. And so if this feature was feature x1, then instead of using the original feature x1, which looks like this on the left, you might instead replace that feature with log of x1 to get this distribution over here. Because when x1 is made more Gaussian, when anomaly detection models p of x1 using a Gaussian distribution like that is more likely to be a good fit to the data. Other than the log function, other things you might do is given a different feature x2, you may replace it with x2 log of x2 plus one. This would be a different way of transforming x2. And more generally, log of x2 plus c would be one example of a formula you can use to change x2 to try to make it more Gaussian. Or for a different feature, you might try taking a square root or really the square root of x cubed is x3 to the power of one half and you may change that exponentiation term. So for a different feature x4, you might use x4 to the power of one third, for example. So when I'm building an anomaly detection system, I'll sometimes take a look at my features and if I see any of the highly non-Gaussian by plotting a histogram, I might choose transformations like these or others in order to try to make it more Gaussian. It turns out a larger value of c will end up transforming this distribution less. But in practice, I just try a bunch of different values of c and then try to take a look to pick one that looks better in terms of making the distribution more Gaussian. Now let me illustrate how I actually do this in a Jupyter notebook. So this is what the process of exploring different transformations in the features might look like. When you have a feature x, you can plot a histogram of it as follows. It actually looks like, um, this is a pretty coarse histogram. Let me increase the number of bins in my histogram to 50. So bins equals 50. There that's my histogram bins. Oh, and by the way, if you want to change the color, you can also do so as follows. And if you want to try a different transformation, you can try, for example, to plot x square root of x. So x to the power of 0.5 with again, 50 histogram bins, in which case it might look like this. And this actually looks somewhat more Gaussian, but not perfectly. And let's try a different parameter. So let me try to the power of 0.25. Maybe I adjusted a little bit too far. Still 0.4. That looks pretty Gaussian. So one thing you could do is replace x with x to the power of 0.4. And so you would set x to be equal to x to the power of 0.4 and just, you know, use the value of x in your training process instead. Well, let me show you another transformation. Here I'm going to try taking the log of x. So log of x, let's plot it with 50 bins. I'm going to use the NumPy log function as follows. And oops, it turns out you get an error because it turns out that x in this example has some values that are equal to 0 and well, log of 0 is negative infinity is not defined. So common trick is to add just a very tiny number there. So x plus 0.001 becomes non-negative. And so you get a histogram that looks like this. But if you want the distribution to look more Gaussian, you can also play around with this parameter to try to see if there's a value that causes the data to look more symmetric and maybe look more Gaussian as follows. And just as I'm doing right now in real time, you can see that you can very quickly change these parameters and plot the histogram in order to try to take a look and try to get something a bit more Gaussian than was the original data x that you saw in this histogram up above. If you read the machine learning literature, there are some ways to automatically measure how close these distributions are to Gaussians, but I found that in practice, it doesn't make a big difference. If you just try a few values and pick something that looks right to you, that will work well for our practical purposes. So by trying things out in a Jupyter Notebook, you can try to pick a transformation that makes your data more Gaussian. And just as a reminder, whatever transformations you apply to the training set, please remember to apply the same transformation to your cross validation and test set data as well. Other than making sure that your data is approximately Gaussian, after you've trained your anomaly detection algorithm, if it doesn't work that well on your cross validation set, you can also carry out an error analysis process for anomaly detection. In other words, you can try to look at where the algorithm is not yet doing well, where it's making errors, and then use that to try to come up with improvements. So as a reminder, what we want is for p of x to be large for normal examples x, so greater than or equal to epsilon, and p of x to be small or less than epsilon for the anomalous examples x. When you've learned to model p of x from your unlabeled data, the most common problem that you may run into is that p of x is comparable in value, say is large for both normal and for anomalous examples. As a concrete example, if this is your data set, you might fit that Gaussian to it. And if you have an example in your cross validation set or test set that is over here, that is anomalous, then this has a pretty high probability. And in fact, it looks quite similar to the other examples in your training set. And so even though this is an anomaly, p of x is actually pretty large. And so the algorithm will fail to flag this particular example as an anomaly. In that case, what I would normally do is try to look at that example and try to figure out what is it that made me think is an anomaly, even if this feature x1 took on values similar to other training examples. And if I can identify some new feature, say x2, that helps distinguish this example from the normal examples, then adding that feature can help improve the performance of the algorithm. Here's a picture showing what I mean. If I can come up with a new feature x2, say I'm trying to detect fraudulent behavior, and if x1 is the number of transactions they make, maybe this user looks like they're making similar transactions as everyone else. But if I discover that this user has some insanely fast typing speed, and if I were to add a new feature x2, that is the typing speed of this user. And if it turns out that when I plot this data using the old feature x1 and this new feature x2 causes x2 to stand out over here, then it becomes much easier for the anomaly detection algorithm to recognize that x2 is an anomalous user. Because when you have this new feature x2, the learning algorithm may fit a Gaussian distribution that assigns high probability to points in this region, a bit lower in this region, and a bit lower in this region. And so this example, because of the very anomalous value of x2, becomes easier to detect as an anomaly. So just to summarize, the development process I'll often go through is to train a model and then to see what anomalies in the cross validation set the algorithm is failing to detect. And then to look at those examples to see if that can inspire the creation of new features that would allow the algorithm to spot that. That example takes on unusually large or unusually small values on the new features so that it can now successfully flag those examples as anomalies. Just as one more example, let's say you're building an anomaly detection system to monitor computers in a data center to try to figure out if a computer may be behaving strangely and deserves a closer look, maybe because of a hardware failure or because it's been hacked into or something. So what you'd like to do is to choose features that might take on unusually large or small values in the event of an anomaly. You might start off with features like x1 is the memory use, x2 is number of disk accesses per second, then the CPU load and the volume of network traffic. And if you train the algorithm, you may find that it detects some anomalies but fails to detect some other anomalies. In that case, it's not unusual to create new features by combining old features. So for example, if you find that there's a computer that is behaving very strangely, that neither is CPU load nor network traffic is that unusual. But what is unusual is it has a really high CPU load while having a very low network traffic volume. If you're running a data center that streams videos, then computers may have high CPU load and high network traffic or low CPU load and no network traffic. But what's unusual about this one machine is it has very high CPU load despite a very low traffic volume. In that case, you might create a new feature x5, which is a ratio of CPU load to network traffic. And this new feature would help the anomaly detection algorithm flag future examples like the specific machine you may be seeing as anomalous. Or you can also consider other features like the square of the CPU load divided by the network traffic volume. And you can play around with different choices of these features in order to try to get it so that p of x is still large for the normal examples, but it becomes small in the anomalies in your cross validation set. So that's it. Thanks for sticking with me to the end of this week. I hope you enjoyed hearing about both clustering algorithms and anomaly detection algorithms and that you also enjoy playing with these ideas in the optional labs and the practice labs. Next week, we'll go on to talk about recommender systems. When you go to a website and it recommends products or movies or other things to you, how does that algorithm actually work? This is one of the most commercially important algorithms in machine learning that gets talked about surprisingly little. But next week, we'll take a look at how these algorithms work so that you understand the next time you go to a website and it recommends something to you, maybe how that came about as well as you be able to build other algorithms like that for yourself as well. So have fun with the labs and I look forward to seeing you next week.
[{"start": 0.0, "end": 8.28, "text": " When building an anomaly detection algorithm, I found that choosing a good choice of features"}, {"start": 8.28, "end": 11.040000000000001, "text": " turns out to be really important."}, {"start": 11.040000000000001, "end": 15.08, "text": " In supervised learning, if you don't have the features quite right, or if you have a"}, {"start": 15.08, "end": 20.8, "text": " few extra features that are not relevant to the problem, that often turns out to be okay"}, {"start": 20.8, "end": 26.52, "text": " because the algorithm has the supervised signal, that is enough labels y for the algorithm"}, {"start": 26.52, "end": 32.08, "text": " to figure out what features ignore or how to rescale a feature and to take the best"}, {"start": 32.08, "end": 35.36, "text": " advantage of the features you do give it."}, {"start": 35.36, "end": 41.4, "text": " But for anomaly detection, which runs or learns just from unlabeled data, it's harder for"}, {"start": 41.4, "end": 44.519999999999996, "text": " the algorithm to figure out what features to ignore."}, {"start": 44.519999999999996, "end": 49.239999999999995, "text": " So I've found that carefully choosing the features is even more important for anomaly"}, {"start": 49.239999999999995, "end": 53.239999999999995, "text": " detection than for supervised learning approaches."}, {"start": 53.24, "end": 58.32, "text": " Let's take a look at this video at some practical tips for how to tune the features for anomaly"}, {"start": 58.32, "end": 61.84, "text": " detection to try to get you the best possible performance."}, {"start": 61.84, "end": 67.48, "text": " One step that can help your anomaly detection algorithm is to try to make sure the features"}, {"start": 67.48, "end": 71.92, "text": " you give it are more or less Gaussian."}, {"start": 71.92, "end": 77.88, "text": " And if your features are not Gaussian, sometimes you can change it to make it a little bit"}, {"start": 77.88, "end": 78.88, "text": " more Gaussian."}, {"start": 78.88, "end": 81.4, "text": " Let me show you what I mean."}, {"start": 81.4, "end": 89.56, "text": " If you have a feature x, I will often plot a histogram of the feature, which you can"}, {"start": 89.56, "end": 94.68, "text": " do using the Python command plt.his."}, {"start": 94.68, "end": 97.72, "text": " You see this in the practice lab as well."}, {"start": 97.72, "end": 101.34, "text": " In order to look at a histogram of the data."}, {"start": 101.34, "end": 104.10000000000001, "text": " This distribution here looks pretty Gaussian."}, {"start": 104.10000000000001, "end": 109.04, "text": " So this would be a good candidate feature if you think this is a feature that helps"}, {"start": 109.04, "end": 114.12, "text": " distinguish between anomalies and normal examples."}, {"start": 114.12, "end": 119.04, "text": " But quite often when you plot a histogram of your features, you may find that a feature"}, {"start": 119.04, "end": 121.54, "text": " has a distribution like this."}, {"start": 121.54, "end": 127.32000000000001, "text": " This does not at all look like that symmetric bell shaped curve."}, {"start": 127.32000000000001, "end": 136.48000000000002, "text": " When that is the case, I would consider if you can take this feature x and transform"}, {"start": 136.48, "end": 139.79999999999998, "text": " it in order to make it more Gaussian."}, {"start": 139.79999999999998, "end": 147.0, "text": " For example, maybe if you were to compute the log of x and plot a histogram of log of"}, {"start": 147.0, "end": 152.6, "text": " x, it'll look like this and this looks much more Gaussian."}, {"start": 152.6, "end": 159.12, "text": " And so if this feature was feature x1, then instead of using the original feature x1,"}, {"start": 159.12, "end": 165.79999999999998, "text": " which looks like this on the left, you might instead replace that feature with log of x1"}, {"start": 165.8, "end": 168.84, "text": " to get this distribution over here."}, {"start": 168.84, "end": 177.96, "text": " Because when x1 is made more Gaussian, when anomaly detection models p of x1 using a Gaussian"}, {"start": 177.96, "end": 182.68, "text": " distribution like that is more likely to be a good fit to the data."}, {"start": 182.68, "end": 187.52, "text": " Other than the log function, other things you might do is given a different feature"}, {"start": 187.52, "end": 193.28, "text": " x2, you may replace it with x2 log of x2 plus one."}, {"start": 193.28, "end": 197.92, "text": " This would be a different way of transforming x2."}, {"start": 197.92, "end": 204.92000000000002, "text": " And more generally, log of x2 plus c would be one example of a formula you can use to"}, {"start": 204.92000000000002, "end": 209.08, "text": " change x2 to try to make it more Gaussian."}, {"start": 209.08, "end": 213.6, "text": " Or for a different feature, you might try taking a square root or really the square"}, {"start": 213.6, "end": 219.88, "text": " root of x cubed is x3 to the power of one half and you may change that exponentiation"}, {"start": 219.88, "end": 221.24, "text": " term."}, {"start": 221.24, "end": 227.74, "text": " So for a different feature x4, you might use x4 to the power of one third, for example."}, {"start": 227.74, "end": 232.76000000000002, "text": " So when I'm building an anomaly detection system, I'll sometimes take a look at my features"}, {"start": 232.76000000000002, "end": 239.64000000000001, "text": " and if I see any of the highly non-Gaussian by plotting a histogram, I might choose transformations"}, {"start": 239.64000000000001, "end": 243.64000000000001, "text": " like these or others in order to try to make it more Gaussian."}, {"start": 243.64, "end": 251.32, "text": " It turns out a larger value of c will end up transforming this distribution less."}, {"start": 251.32, "end": 257.28, "text": " But in practice, I just try a bunch of different values of c and then try to take a look to"}, {"start": 257.28, "end": 262.64, "text": " pick one that looks better in terms of making the distribution more Gaussian."}, {"start": 262.64, "end": 267.5, "text": " Now let me illustrate how I actually do this in a Jupyter notebook."}, {"start": 267.5, "end": 272.03999999999996, "text": " So this is what the process of exploring different transformations in the features might look"}, {"start": 272.04, "end": 273.84000000000003, "text": " like."}, {"start": 273.84000000000003, "end": 279.52000000000004, "text": " When you have a feature x, you can plot a histogram of it as follows."}, {"start": 279.52000000000004, "end": 284.08000000000004, "text": " It actually looks like, um, this is a pretty coarse histogram."}, {"start": 284.08000000000004, "end": 287.84000000000003, "text": " Let me increase the number of bins in my histogram to 50."}, {"start": 287.84000000000003, "end": 291.32000000000005, "text": " So bins equals 50."}, {"start": 291.32000000000005, "end": 292.76, "text": " There that's my histogram bins."}, {"start": 292.76, "end": 300.68, "text": " Oh, and by the way, if you want to change the color, you can also do so as follows."}, {"start": 300.68, "end": 308.28000000000003, "text": " And if you want to try a different transformation, you can try, for example, to plot x square"}, {"start": 308.28000000000003, "end": 309.28000000000003, "text": " root of x."}, {"start": 309.28000000000003, "end": 318.52, "text": " So x to the power of 0.5 with again, 50 histogram bins, in which case it might look like this."}, {"start": 318.52, "end": 323.64, "text": " And this actually looks somewhat more Gaussian, but not perfectly."}, {"start": 323.64, "end": 325.64, "text": " And let's try a different parameter."}, {"start": 325.64, "end": 331.32, "text": " So let me try to the power of 0.25."}, {"start": 331.32, "end": 333.76, "text": " Maybe I adjusted a little bit too far."}, {"start": 333.76, "end": 334.84, "text": " Still 0.4."}, {"start": 334.84, "end": 336.12, "text": " That looks pretty Gaussian."}, {"start": 336.12, "end": 342.47999999999996, "text": " So one thing you could do is replace x with x to the power of 0.4."}, {"start": 342.47999999999996, "end": 349.71999999999997, "text": " And so you would set x to be equal to x to the power of 0.4 and just, you know, use the"}, {"start": 349.71999999999997, "end": 353.59999999999997, "text": " value of x in your training process instead."}, {"start": 353.6, "end": 356.64000000000004, "text": " Well, let me show you another transformation."}, {"start": 356.64000000000004, "end": 359.48, "text": " Here I'm going to try taking the log of x."}, {"start": 359.48, "end": 366.16, "text": " So log of x, let's plot it with 50 bins."}, {"start": 366.16, "end": 370.12, "text": " I'm going to use the NumPy log function as follows."}, {"start": 370.12, "end": 376.56, "text": " And oops, it turns out you get an error because it turns out that x in this example has some"}, {"start": 376.56, "end": 383.12, "text": " values that are equal to 0 and well, log of 0 is negative infinity is not defined."}, {"start": 383.12, "end": 389.18, "text": " So common trick is to add just a very tiny number there."}, {"start": 389.18, "end": 393.6, "text": " So x plus 0.001 becomes non-negative."}, {"start": 393.6, "end": 396.76, "text": " And so you get a histogram that looks like this."}, {"start": 396.76, "end": 401.08, "text": " But if you want the distribution to look more Gaussian, you can also play around with this"}, {"start": 401.08, "end": 408.4, "text": " parameter to try to see if there's a value that causes the data to look more symmetric"}, {"start": 408.4, "end": 412.48, "text": " and maybe look more Gaussian as follows."}, {"start": 412.48, "end": 418.92, "text": " And just as I'm doing right now in real time, you can see that you can very quickly change"}, {"start": 418.92, "end": 424.8, "text": " these parameters and plot the histogram in order to try to take a look and try to get"}, {"start": 424.8, "end": 433.92, "text": " something a bit more Gaussian than was the original data x that you saw in this histogram"}, {"start": 433.92, "end": 434.92, "text": " up above."}, {"start": 434.92, "end": 439.64000000000004, "text": " If you read the machine learning literature, there are some ways to automatically measure"}, {"start": 439.64, "end": 445.0, "text": " how close these distributions are to Gaussians, but I found that in practice, it doesn't make"}, {"start": 445.0, "end": 446.0, "text": " a big difference."}, {"start": 446.0, "end": 450.4, "text": " If you just try a few values and pick something that looks right to you, that will work well"}, {"start": 450.4, "end": 452.8, "text": " for our practical purposes."}, {"start": 452.8, "end": 459.3, "text": " So by trying things out in a Jupyter Notebook, you can try to pick a transformation that"}, {"start": 459.3, "end": 461.91999999999996, "text": " makes your data more Gaussian."}, {"start": 461.91999999999996, "end": 468.2, "text": " And just as a reminder, whatever transformations you apply to the training set, please remember"}, {"start": 468.2, "end": 473.44, "text": " to apply the same transformation to your cross validation and test set data as well."}, {"start": 473.44, "end": 480.32, "text": " Other than making sure that your data is approximately Gaussian, after you've trained your anomaly"}, {"start": 480.32, "end": 487.7, "text": " detection algorithm, if it doesn't work that well on your cross validation set, you can"}, {"start": 487.7, "end": 493.32, "text": " also carry out an error analysis process for anomaly detection."}, {"start": 493.32, "end": 498.0, "text": " In other words, you can try to look at where the algorithm is not yet doing well, where"}, {"start": 498.0, "end": 504.0, "text": " it's making errors, and then use that to try to come up with improvements."}, {"start": 504.0, "end": 512.8, "text": " So as a reminder, what we want is for p of x to be large for normal examples x, so greater"}, {"start": 512.8, "end": 519.2, "text": " than or equal to epsilon, and p of x to be small or less than epsilon for the anomalous"}, {"start": 519.2, "end": 520.6, "text": " examples x."}, {"start": 520.6, "end": 526.92, "text": " When you've learned to model p of x from your unlabeled data, the most common problem that"}, {"start": 526.92, "end": 533.4, "text": " you may run into is that p of x is comparable in value, say is large for both normal and"}, {"start": 533.4, "end": 535.5999999999999, "text": " for anomalous examples."}, {"start": 535.5999999999999, "end": 542.92, "text": " As a concrete example, if this is your data set, you might fit that Gaussian to it."}, {"start": 542.92, "end": 549.76, "text": " And if you have an example in your cross validation set or test set that is over here, that is"}, {"start": 549.76, "end": 552.7199999999999, "text": " anomalous, then this has a pretty high probability."}, {"start": 552.72, "end": 557.5600000000001, "text": " And in fact, it looks quite similar to the other examples in your training set."}, {"start": 557.5600000000001, "end": 563.6, "text": " And so even though this is an anomaly, p of x is actually pretty large."}, {"start": 563.6, "end": 568.44, "text": " And so the algorithm will fail to flag this particular example as an anomaly."}, {"start": 568.44, "end": 576.1600000000001, "text": " In that case, what I would normally do is try to look at that example and try to figure"}, {"start": 576.16, "end": 584.8399999999999, "text": " out what is it that made me think is an anomaly, even if this feature x1 took on values similar"}, {"start": 584.8399999999999, "end": 587.9599999999999, "text": " to other training examples."}, {"start": 587.9599999999999, "end": 597.1999999999999, "text": " And if I can identify some new feature, say x2, that helps distinguish this example from"}, {"start": 597.1999999999999, "end": 603.48, "text": " the normal examples, then adding that feature can help improve the performance of the algorithm."}, {"start": 603.48, "end": 605.76, "text": " Here's a picture showing what I mean."}, {"start": 605.76, "end": 612.84, "text": " If I can come up with a new feature x2, say I'm trying to detect fraudulent behavior,"}, {"start": 612.84, "end": 620.24, "text": " and if x1 is the number of transactions they make, maybe this user looks like they're making"}, {"start": 620.24, "end": 623.08, "text": " similar transactions as everyone else."}, {"start": 623.08, "end": 630.72, "text": " But if I discover that this user has some insanely fast typing speed, and if I were to"}, {"start": 630.72, "end": 635.48, "text": " add a new feature x2, that is the typing speed of this user."}, {"start": 635.48, "end": 640.6800000000001, "text": " And if it turns out that when I plot this data using the old feature x1 and this new"}, {"start": 640.6800000000001, "end": 647.9200000000001, "text": " feature x2 causes x2 to stand out over here, then it becomes much easier for the anomaly"}, {"start": 647.9200000000001, "end": 653.04, "text": " detection algorithm to recognize that x2 is an anomalous user."}, {"start": 653.04, "end": 658.48, "text": " Because when you have this new feature x2, the learning algorithm may fit a Gaussian"}, {"start": 658.48, "end": 663.88, "text": " distribution that assigns high probability to points in this region, a bit lower in this"}, {"start": 663.88, "end": 667.12, "text": " region, and a bit lower in this region."}, {"start": 667.12, "end": 674.52, "text": " And so this example, because of the very anomalous value of x2, becomes easier to detect as an"}, {"start": 674.52, "end": 675.52, "text": " anomaly."}, {"start": 675.52, "end": 683.0, "text": " So just to summarize, the development process I'll often go through is to train a model"}, {"start": 683.0, "end": 688.52, "text": " and then to see what anomalies in the cross validation set the algorithm is failing to"}, {"start": 688.52, "end": 689.8, "text": " detect."}, {"start": 689.8, "end": 695.4, "text": " And then to look at those examples to see if that can inspire the creation of new features"}, {"start": 695.4, "end": 699.4799999999999, "text": " that would allow the algorithm to spot that."}, {"start": 699.4799999999999, "end": 705.92, "text": " That example takes on unusually large or unusually small values on the new features so that it"}, {"start": 705.92, "end": 709.68, "text": " can now successfully flag those examples as anomalies."}, {"start": 709.68, "end": 714.76, "text": " Just as one more example, let's say you're building an anomaly detection system to monitor"}, {"start": 714.76, "end": 720.52, "text": " computers in a data center to try to figure out if a computer may be behaving strangely"}, {"start": 720.52, "end": 725.08, "text": " and deserves a closer look, maybe because of a hardware failure or because it's been"}, {"start": 725.08, "end": 727.24, "text": " hacked into or something."}, {"start": 727.24, "end": 731.64, "text": " So what you'd like to do is to choose features that might take on unusually large or small"}, {"start": 731.64, "end": 735.2, "text": " values in the event of an anomaly."}, {"start": 735.2, "end": 740.56, "text": " You might start off with features like x1 is the memory use, x2 is number of disk accesses"}, {"start": 740.56, "end": 745.76, "text": " per second, then the CPU load and the volume of network traffic."}, {"start": 745.76, "end": 753.0799999999999, "text": " And if you train the algorithm, you may find that it detects some anomalies but fails to"}, {"start": 753.0799999999999, "end": 755.9599999999999, "text": " detect some other anomalies."}, {"start": 755.9599999999999, "end": 762.0, "text": " In that case, it's not unusual to create new features by combining old features."}, {"start": 762.0, "end": 769.04, "text": " So for example, if you find that there's a computer that is behaving very strangely,"}, {"start": 769.04, "end": 774.12, "text": " that neither is CPU load nor network traffic is that unusual."}, {"start": 774.12, "end": 781.0, "text": " But what is unusual is it has a really high CPU load while having a very low network traffic"}, {"start": 781.0, "end": 782.5999999999999, "text": " volume."}, {"start": 782.5999999999999, "end": 788.4, "text": " If you're running a data center that streams videos, then computers may have high CPU load"}, {"start": 788.4, "end": 792.64, "text": " and high network traffic or low CPU load and no network traffic."}, {"start": 792.64, "end": 796.9599999999999, "text": " But what's unusual about this one machine is it has very high CPU load despite a very"}, {"start": 796.9599999999999, "end": 798.7199999999999, "text": " low traffic volume."}, {"start": 798.72, "end": 803.12, "text": " In that case, you might create a new feature x5, which is a ratio of CPU load to network"}, {"start": 803.12, "end": 804.32, "text": " traffic."}, {"start": 804.32, "end": 810.1600000000001, "text": " And this new feature would help the anomaly detection algorithm flag future examples like"}, {"start": 810.1600000000001, "end": 814.12, "text": " the specific machine you may be seeing as anomalous."}, {"start": 814.12, "end": 822.32, "text": " Or you can also consider other features like the square of the CPU load divided by the"}, {"start": 822.32, "end": 824.6800000000001, "text": " network traffic volume."}, {"start": 824.68, "end": 830.1999999999999, "text": " And you can play around with different choices of these features in order to try to get it"}, {"start": 830.1999999999999, "end": 838.8, "text": " so that p of x is still large for the normal examples, but it becomes small in the anomalies"}, {"start": 838.8, "end": 841.0799999999999, "text": " in your cross validation set."}, {"start": 841.0799999999999, "end": 842.0799999999999, "text": " So that's it."}, {"start": 842.0799999999999, "end": 844.7199999999999, "text": " Thanks for sticking with me to the end of this week."}, {"start": 844.7199999999999, "end": 850.68, "text": " I hope you enjoyed hearing about both clustering algorithms and anomaly detection algorithms"}, {"start": 850.68, "end": 856.3199999999999, "text": " and that you also enjoy playing with these ideas in the optional labs and the practice"}, {"start": 856.3199999999999, "end": 858.52, "text": " labs."}, {"start": 858.52, "end": 862.4799999999999, "text": " Next week, we'll go on to talk about recommender systems."}, {"start": 862.4799999999999, "end": 867.66, "text": " When you go to a website and it recommends products or movies or other things to you,"}, {"start": 867.66, "end": 870.88, "text": " how does that algorithm actually work?"}, {"start": 870.88, "end": 876.68, "text": " This is one of the most commercially important algorithms in machine learning that gets talked"}, {"start": 876.68, "end": 879.16, "text": " about surprisingly little."}, {"start": 879.16, "end": 883.16, "text": " But next week, we'll take a look at how these algorithms work so that you understand the"}, {"start": 883.16, "end": 888.1999999999999, "text": " next time you go to a website and it recommends something to you, maybe how that came about"}, {"start": 888.1999999999999, "end": 893.24, "text": " as well as you be able to build other algorithms like that for yourself as well."}, {"start": 893.24, "end": 909.84, "text": " So have fun with the labs and I look forward to seeing you next week."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=1aKPXsx54Ms
9.1 Recommender System | Making recommendations -- [Machine Learning| Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Welcome to this second to last week of the machine learning specialization. I'm really happy that together we're almost all the way to the finish line. What we'll do this week is discuss recommender systems. This is one of the topics that has received quite a bit of attention in academia, but the commercial impact and the actual number of practical use cases of recommender systems seems to me to be even vastly greater than the amount of attention it has received in academia. Every time you go to an online shopping website like Amazon or a movie streaming sites like Netflix or go to one of the apps or sites that do food delivery, many of these sites will recommend things to you that they think you may want to buy or movies they think you may want to watch or restaurants that they think you may want to try out. And for many companies, a large fraction of sales is driven by their recommender systems. So today for many companies, the economics or the value driven by recommender systems is very large. And so what we're doing this week is take a look at how they work. So with that, let's dive in and take a look at what is a recommender system. I'm going to use as a running example, the application of predicting movie ratings. So say you run a large movie streaming website and your users have rated movies using one to five stars. And so in a typical recommender system, you have a set of users here. We have four users, Alice, Bob, Carol and Dave, which I've numbered users one through four as well as a set of movies, Love at Last, Romance Forever, Keep Up, Piece of Love, and then Non-Stop Car Chases and Smalls vs Karate. And what the users have done is rated these movies one to five stars. Or in fact, to make some of these examples a little bit easier, I'm just going to let them rate the movies from zero to five stars. So say Alice is rated Love at Last five stars, Romance Forever five stars. Maybe she has not yet watched Keep Puppies of Love, so you don't have a rating for that. And I'm going to denote that by a question mark. And she thinks Non-Stop Car Chases and Smalls vs Karate deserve zero stars. Bob rates that five stars, has not watched that, so you don't have a rating. Rates that four stars, zero, zero. Carol, on the other hand, thinks that deserves zero stars, has not watched that, zero stars. And she loves Non-Stop Car Chases and Smalls vs Karate. And Dave rates the movies as follows. In the typical recommender system, you will have some number of users, as well as some number of items, in this case, the items and movies that you want to recommend to the users. And even though I'm using movies in this example, the same logic or the same framework works for recommending anything from products or website myself, to restaurants, to even which media articles or social media articles to show to a user that may be more interesting for them. The notation I'm going to use is, I'm going to use NU to denote the number of users. So in this example, NU is equal to four because you have four users. And NM to denote the number of movies or really the number of items. So in this example, NM is equal to five because we have five movies. I'm going to set Rij to be equal to one if user j has rated movie i. So for example, user one, that is Alice, has rated movie one but has not rated movie three. And so R11 would be equal to one because she has rated movie one, but R31 would be equal to zero because she has not rated movie number three. Then finally, I'm going to use Yij to denote the rating given by user j to movie i. So for example, this rating here would be that movie three was rated by user two to be equal to four. Notice that not every user rates every movie and it's important for the system to know which users have rated which movies. That's why we're going to define Rij to be equal to one if user j has rated movie i and it will be equal to zero if user j has not rated movie i. So with this framework for recommender systems, one possible way to approach the problem is to look at the movies that users have not rated and to try to predict how users would rate those movies because then we can try to recommend to users things that they are more likely to rate as five stars. And in the next video, we'll start to develop an algorithm for doing exactly that, but making one very special assumption, which is we're going to assume temporarily that we have access to features or extra information about the movies, such as which movies are romance movies, which movies are action movies, and using that will start to develop an algorithm. But later this week, we'll actually come back and ask what if we don't have these features, how can we still get the algorithm to work then. But let's go on to the next video to start building up this algorithm.
[{"start": 0.0, "end": 6.8, "text": " Welcome to this second to last week of the machine learning specialization."}, {"start": 6.8, "end": 11.68, "text": " I'm really happy that together we're almost all the way to the finish line."}, {"start": 11.68, "end": 15.24, "text": " What we'll do this week is discuss recommender systems."}, {"start": 15.24, "end": 20.44, "text": " This is one of the topics that has received quite a bit of attention in academia, but"}, {"start": 20.44, "end": 25.28, "text": " the commercial impact and the actual number of practical use cases of recommender systems"}, {"start": 25.28, "end": 30.68, "text": " seems to me to be even vastly greater than the amount of attention it has received in"}, {"start": 30.68, "end": 32.52, "text": " academia."}, {"start": 32.52, "end": 38.44, "text": " Every time you go to an online shopping website like Amazon or a movie streaming sites like"}, {"start": 38.44, "end": 45.32, "text": " Netflix or go to one of the apps or sites that do food delivery, many of these sites"}, {"start": 45.32, "end": 49.84, "text": " will recommend things to you that they think you may want to buy or movies they think you"}, {"start": 49.84, "end": 53.92, "text": " may want to watch or restaurants that they think you may want to try out."}, {"start": 53.92, "end": 59.32, "text": " And for many companies, a large fraction of sales is driven by their recommender systems."}, {"start": 59.32, "end": 65.12, "text": " So today for many companies, the economics or the value driven by recommender systems"}, {"start": 65.12, "end": 67.22, "text": " is very large."}, {"start": 67.22, "end": 70.82, "text": " And so what we're doing this week is take a look at how they work."}, {"start": 70.82, "end": 75.0, "text": " So with that, let's dive in and take a look at what is a recommender system."}, {"start": 75.0, "end": 81.0, "text": " I'm going to use as a running example, the application of predicting movie ratings."}, {"start": 81.0, "end": 88.08, "text": " So say you run a large movie streaming website and your users have rated movies using one"}, {"start": 88.08, "end": 89.7, "text": " to five stars."}, {"start": 89.7, "end": 94.44, "text": " And so in a typical recommender system, you have a set of users here."}, {"start": 94.44, "end": 98.96000000000001, "text": " We have four users, Alice, Bob, Carol and Dave, which I've numbered users one through"}, {"start": 98.96000000000001, "end": 104.28, "text": " four as well as a set of movies, Love at Last, Romance Forever, Keep Up, Piece of Love, and"}, {"start": 104.28, "end": 108.48, "text": " then Non-Stop Car Chases and Smalls vs Karate."}, {"start": 108.48, "end": 113.28, "text": " And what the users have done is rated these movies one to five stars."}, {"start": 113.28, "end": 118.36, "text": " Or in fact, to make some of these examples a little bit easier, I'm just going to let"}, {"start": 118.36, "end": 121.64, "text": " them rate the movies from zero to five stars."}, {"start": 121.64, "end": 126.74000000000001, "text": " So say Alice is rated Love at Last five stars, Romance Forever five stars."}, {"start": 126.74000000000001, "end": 130.6, "text": " Maybe she has not yet watched Keep Puppies of Love, so you don't have a rating for that."}, {"start": 130.6, "end": 133.44, "text": " And I'm going to denote that by a question mark."}, {"start": 133.44, "end": 139.16, "text": " And she thinks Non-Stop Car Chases and Smalls vs Karate deserve zero stars."}, {"start": 139.16, "end": 144.07999999999998, "text": " Bob rates that five stars, has not watched that, so you don't have a rating."}, {"start": 144.07999999999998, "end": 146.96, "text": " Rates that four stars, zero, zero."}, {"start": 146.96, "end": 152.84, "text": " Carol, on the other hand, thinks that deserves zero stars, has not watched that, zero stars."}, {"start": 152.84, "end": 156.64, "text": " And she loves Non-Stop Car Chases and Smalls vs Karate."}, {"start": 156.64, "end": 160.88, "text": " And Dave rates the movies as follows."}, {"start": 160.88, "end": 168.0, "text": " In the typical recommender system, you will have some number of users, as well as some"}, {"start": 168.0, "end": 176.4, "text": " number of items, in this case, the items and movies that you want to recommend to the users."}, {"start": 176.4, "end": 181.34, "text": " And even though I'm using movies in this example, the same logic or the same framework works"}, {"start": 181.34, "end": 187.04, "text": " for recommending anything from products or website myself, to restaurants, to even which"}, {"start": 187.04, "end": 191.2, "text": " media articles or social media articles to show to a user that may be more interesting"}, {"start": 191.2, "end": 192.2, "text": " for them."}, {"start": 192.2, "end": 199.32, "text": " The notation I'm going to use is, I'm going to use NU to denote the number of users."}, {"start": 199.32, "end": 203.92, "text": " So in this example, NU is equal to four because you have four users."}, {"start": 203.92, "end": 208.92, "text": " And NM to denote the number of movies or really the number of items."}, {"start": 208.92, "end": 213.62, "text": " So in this example, NM is equal to five because we have five movies."}, {"start": 213.62, "end": 222.72, "text": " I'm going to set Rij to be equal to one if user j has rated movie i."}, {"start": 222.72, "end": 231.36, "text": " So for example, user one, that is Alice, has rated movie one but has not rated movie three."}, {"start": 231.36, "end": 240.56, "text": " And so R11 would be equal to one because she has rated movie one, but R31 would be equal"}, {"start": 240.56, "end": 244.56, "text": " to zero because she has not rated movie number three."}, {"start": 244.56, "end": 250.96, "text": " Then finally, I'm going to use Yij to denote the rating given by user j to movie i."}, {"start": 250.96, "end": 257.24, "text": " So for example, this rating here would be that movie three was rated by user two to"}, {"start": 257.24, "end": 259.56, "text": " be equal to four."}, {"start": 259.56, "end": 264.6, "text": " Notice that not every user rates every movie and it's important for the system to know"}, {"start": 264.6, "end": 267.28, "text": " which users have rated which movies."}, {"start": 267.28, "end": 273.96, "text": " That's why we're going to define Rij to be equal to one if user j has rated movie i and"}, {"start": 273.96, "end": 278.23999999999995, "text": " it will be equal to zero if user j has not rated movie i."}, {"start": 278.23999999999995, "end": 283.5, "text": " So with this framework for recommender systems, one possible way to approach the problem is"}, {"start": 283.5, "end": 289.4, "text": " to look at the movies that users have not rated and to try to predict how users would"}, {"start": 289.4, "end": 294.08, "text": " rate those movies because then we can try to recommend to users things that they are"}, {"start": 294.08, "end": 298.64, "text": " more likely to rate as five stars."}, {"start": 298.64, "end": 303.32, "text": " And in the next video, we'll start to develop an algorithm for doing exactly that, but making"}, {"start": 303.32, "end": 308.59999999999997, "text": " one very special assumption, which is we're going to assume temporarily that we have access"}, {"start": 308.59999999999997, "end": 314.96, "text": " to features or extra information about the movies, such as which movies are romance movies,"}, {"start": 314.96, "end": 320.34, "text": " which movies are action movies, and using that will start to develop an algorithm."}, {"start": 320.34, "end": 325.11999999999995, "text": " But later this week, we'll actually come back and ask what if we don't have these features,"}, {"start": 325.11999999999995, "end": 328.52, "text": " how can we still get the algorithm to work then."}, {"start": 328.52, "end": 351.24, "text": " But let's go on to the next video to start building up this algorithm."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=udHt4CiJH6M
9.2 Collaborative Filtering | Using per-item features-- [Machine Learning| Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
So, let's take a look at how we can develop a recommender system if we had features of each item or features of each movie. So here's the same data set that we had previously with the four users having rated some but not all of the five movies. What if we additionally have features of the movies? So here I've added two features, X1 and X2, that tell us how much each of these is a romance movie and how much each of these is an action movie. So for example, Love at Last is a very romantic movie, so this feature takes on 0.9, but it's not a lot of action movies, so this feature takes on 0. But it turns out Non-Stop Car Chases has just a little bit of romance in it, so it's 0.1, but it has a ton of action, so that feature takes on the value of 1.0. So you recall that I had used the notation NU to denote the number of users, which is 4, and M to denote the number of movies, which is 5. I'm going to also introduce N to denote the number of features we have here, and so N is equal to 2, because we have two features, X1 and X2, for each movie. With these features, we have, for example, that the features for movie 1, that is the movie Love at Last, would be 0.9, 0, and the features for the third movie, Cute Puppies of Love, would be 0.99, and 0. And let's start by taking a look at how we might make predictions for Alice's movie ratings. So for user 1, that is Alice, let's say we predict the rating for movie i as w dot x of feature i plus b. So this is just a lot like linear regression. For example, if we end up choosing the parameter w1 equal 5, 0, and say b1 is equal to 0, then at our prediction for movie 3, where the features are 0.99 and 0, which we just copied from here, first feature is 0.99, second feature is 0, our prediction would be w dot x3 plus b equals 0.99 times 5 plus 0 times 0, which turns out to be equal to 4.95. And this rating seems pretty plausible. It looks like Alice has given high ratings to Love at Last and Romance Forever to two highly romantic movies, but given low ratings to the action movies Nonstop Conchases and Souls vs Karate. So if we look at Cute Puppies of Love, well predicting that she might rate that 4.95 seems quite plausible. And so these parameters w and b for Alice seems like a reasonable model for predicting her movie ratings. Just to add a little bit of notation, because we have not just one user but multiple users, or really nu equals four users, I'm going to add a superscript 1 here to denote that this is the parameter w1 for user 1 and add a superscript 1 there as well. And similarly here and here as well, so that we would actually have different parameters for each of the four users of our data set. And more generally, in this model, we can for user j, not just user 1 now, we can predict user j's rating for movie i as wj dot product xi plus bj. So here the parameters wj and bj are the parameters used to predict user j's rating for movie i, which is a function of xi, which is a features of movie i. And this is a lot like linear regression, except that we're fitting a different linear regression model for each of the four users in the data set. So let's take a look at how we can formulate the cost function for this algorithm. As a reminder, our notation is that rij is equal to 1 if user j has rated movie i or zero otherwise, and yij is the rating given by user j on movie i. And on the previous slide, we defined wj, bj as a parameters for user j, and xi as the feature vector for movie i. So the model we have is for user j and movie i predict the rating to be wj dot product xi plus bj. I'm going to introduce just one new piece of notation, which is I'm going to use mj to denote the number of movies rated by user j. So if a user has rated four movies, then mj would be equal to four. And if a user has rated three movies, then mj would be equal to three. So what we'd like to do is to learn the parameters wj and bj given the data that we have, that is given the ratings a user has given of a set of movies. So the algorithm we're going to use is very similar to linear regression. So let's write out the cost function for learning the parameters wj and bj for a given user j. This focus on one user on user j for now, I'm going to use the mean squared error criteria. So the cost will be the prediction, which is wj dot xi plus bj minus the actual rating that the user had given so minus y i j squared. And we're trying to choose parameters w and b to minimize the squared error between the predicted rating and the actual rating that was observed. But the user hasn't rated all the movies. So if we're going to sum over this, we're going to sum over only over the values of i where r i j is equal to one. So we're going to sum only over the movies i that user j has actually rated. So that's what this is. Sum over all values of i where r i j is equal to one, meaning that user j has rated that movie i. And then finally, we can take the usual normalization one over two mj. And this is very much like the cost function we had for linear regression with m or really mj training examples where you're summing over the mj movies for which you have a rating taken a squared error and then normalizing by this one over two mj. And this is going to be a cost function j of wj bj. And if we minimize this as a function of wj and bj, then you should come up with a pretty good choice of parameters wj and bj for making predictions for user j's ratings. Let me add just one more term to this cost function, which is the regularization term to prevent overfitting. And so here's our usual regularization parameter lambda divided by two mj and then times the sum of the squared values of the parameters w. And so n is a number of numbers in xi and that's the same as the number of numbers in wj. If you were to minimize this cost function j as a function of w and b, you should get a pretty good set of parameters for predicting user j's ratings for other movies. Now before moving on, it turns out that for recommender systems, it would be convenient to actually eliminate this division by mj term. mj is just a constant in this expression and so even if you take it out, you should end up with the same value of w and b. Now let me take this cost function down here at the bottom and copy it to the next slide. So we have that to learn the parameters wj, bj for user j, we would minimize this cost function as a function of wj and bj. But instead of focusing on a single user, let's look at how we learn the parameters for all of the users. To learn the parameters w1, b1, w2, b2 through wnu, bnu for all of our n subscript u users, we would take this cost function on top and sum it over all the nu users. So we would have sum from j equals 1 to nu of the same cost function that we had written up above and this becomes the cost for learning all the parameters for all of the users. Even if we use gradient descent or any other optimization algorithm to minimize this as a function of w1, b1 all the way through wnu, bnu, then you have a pretty good set of parameters for predicting movie ratings for all the users. And you may notice that this algorithm is a lot like linear regression where that plays a role similar to the output f of x of linear regression, only now we're training a different linear regression model for each of the n subscript u users. So that's how you can learn parameters and predict movie ratings if you had access to these features x1 and x2 that tell you how much is each of the movies a romance movie and how much is each of the movies an action movie. But where did these features come from and what if you don't have access to such features that give you enough detail about the movies with which to make these predictions? In the next video we'll look at a modification of this algorithm that will let you make predictions, let you make recommendations even if you don't have in advance features that describe the items of the movies in sufficient detail to run the algorithm that we just saw. Let's go on and take a look at that in the next video.
[{"start": 0.0, "end": 7.5200000000000005, "text": " So, let's take a look at how we can develop a recommender system if we had features of"}, {"start": 7.5200000000000005, "end": 10.72, "text": " each item or features of each movie."}, {"start": 10.72, "end": 16.22, "text": " So here's the same data set that we had previously with the four users having rated some but"}, {"start": 16.22, "end": 19.240000000000002, "text": " not all of the five movies."}, {"start": 19.240000000000002, "end": 22.84, "text": " What if we additionally have features of the movies?"}, {"start": 22.84, "end": 29.88, "text": " So here I've added two features, X1 and X2, that tell us how much each of these is a romance"}, {"start": 29.88, "end": 34.24, "text": " movie and how much each of these is an action movie."}, {"start": 34.24, "end": 41.04, "text": " So for example, Love at Last is a very romantic movie, so this feature takes on 0.9, but it's"}, {"start": 41.04, "end": 44.879999999999995, "text": " not a lot of action movies, so this feature takes on 0."}, {"start": 44.879999999999995, "end": 52.28, "text": " But it turns out Non-Stop Car Chases has just a little bit of romance in it, so it's 0.1,"}, {"start": 52.28, "end": 58.16, "text": " but it has a ton of action, so that feature takes on the value of 1.0."}, {"start": 58.16, "end": 65.39999999999999, "text": " So you recall that I had used the notation NU to denote the number of users, which is"}, {"start": 65.39999999999999, "end": 69.2, "text": " 4, and M to denote the number of movies, which is 5."}, {"start": 69.2, "end": 74.92, "text": " I'm going to also introduce N to denote the number of features we have here, and so N"}, {"start": 74.92, "end": 80.19999999999999, "text": " is equal to 2, because we have two features, X1 and X2, for each movie."}, {"start": 80.19999999999999, "end": 86.56, "text": " With these features, we have, for example, that the features for movie 1, that is the"}, {"start": 86.56, "end": 94.2, "text": " movie Love at Last, would be 0.9, 0, and the features for the third movie, Cute Puppies"}, {"start": 94.2, "end": 101.08, "text": " of Love, would be 0.99, and 0."}, {"start": 101.08, "end": 108.82000000000001, "text": " And let's start by taking a look at how we might make predictions for Alice's movie ratings."}, {"start": 108.82, "end": 118.33999999999999, "text": " So for user 1, that is Alice, let's say we predict the rating for movie i as w dot x"}, {"start": 118.33999999999999, "end": 121.32, "text": " of feature i plus b."}, {"start": 121.32, "end": 125.72, "text": " So this is just a lot like linear regression."}, {"start": 125.72, "end": 136.16, "text": " For example, if we end up choosing the parameter w1 equal 5, 0, and say b1 is equal to 0, then"}, {"start": 136.16, "end": 144.64, "text": " at our prediction for movie 3, where the features are 0.99 and 0, which we just copied from"}, {"start": 144.64, "end": 154.04, "text": " here, first feature is 0.99, second feature is 0, our prediction would be w dot x3 plus"}, {"start": 154.04, "end": 164.76, "text": " b equals 0.99 times 5 plus 0 times 0, which turns out to be equal to 4.95."}, {"start": 164.76, "end": 166.92, "text": " And this rating seems pretty plausible."}, {"start": 166.92, "end": 172.72, "text": " It looks like Alice has given high ratings to Love at Last and Romance Forever to two"}, {"start": 172.72, "end": 178.44, "text": " highly romantic movies, but given low ratings to the action movies Nonstop Conchases and"}, {"start": 178.44, "end": 180.32, "text": " Souls vs Karate."}, {"start": 180.32, "end": 186.23999999999998, "text": " So if we look at Cute Puppies of Love, well predicting that she might rate that 4.95 seems"}, {"start": 186.23999999999998, "end": 188.23999999999998, "text": " quite plausible."}, {"start": 188.24, "end": 194.82000000000002, "text": " And so these parameters w and b for Alice seems like a reasonable model for predicting"}, {"start": 194.82000000000002, "end": 197.28, "text": " her movie ratings."}, {"start": 197.28, "end": 202.44, "text": " Just to add a little bit of notation, because we have not just one user but multiple users,"}, {"start": 202.44, "end": 208.64000000000001, "text": " or really nu equals four users, I'm going to add a superscript 1 here to denote that"}, {"start": 208.64000000000001, "end": 215.48000000000002, "text": " this is the parameter w1 for user 1 and add a superscript 1 there as well."}, {"start": 215.48, "end": 223.48, "text": " And similarly here and here as well, so that we would actually have different parameters"}, {"start": 223.48, "end": 226.56, "text": " for each of the four users of our data set."}, {"start": 226.56, "end": 234.73999999999998, "text": " And more generally, in this model, we can for user j, not just user 1 now, we can predict"}, {"start": 234.73999999999998, "end": 242.76, "text": " user j's rating for movie i as wj dot product xi plus bj."}, {"start": 242.76, "end": 251.76, "text": " So here the parameters wj and bj are the parameters used to predict user j's rating for movie"}, {"start": 251.76, "end": 256.68, "text": " i, which is a function of xi, which is a features of movie i."}, {"start": 256.68, "end": 261.56, "text": " And this is a lot like linear regression, except that we're fitting a different linear"}, {"start": 261.56, "end": 265.44, "text": " regression model for each of the four users in the data set."}, {"start": 265.44, "end": 272.32, "text": " So let's take a look at how we can formulate the cost function for this algorithm."}, {"start": 272.32, "end": 279.12, "text": " As a reminder, our notation is that rij is equal to 1 if user j has rated movie i or"}, {"start": 279.12, "end": 286.44, "text": " zero otherwise, and yij is the rating given by user j on movie i."}, {"start": 286.44, "end": 293.92, "text": " And on the previous slide, we defined wj, bj as a parameters for user j, and xi as the"}, {"start": 293.92, "end": 296.88, "text": " feature vector for movie i."}, {"start": 296.88, "end": 303.4, "text": " So the model we have is for user j and movie i predict the rating to be wj dot product"}, {"start": 303.4, "end": 307.08, "text": " xi plus bj."}, {"start": 307.08, "end": 312.76, "text": " I'm going to introduce just one new piece of notation, which is I'm going to use mj"}, {"start": 312.76, "end": 316.32, "text": " to denote the number of movies rated by user j."}, {"start": 316.32, "end": 321.56, "text": " So if a user has rated four movies, then mj would be equal to four."}, {"start": 321.56, "end": 326.46, "text": " And if a user has rated three movies, then mj would be equal to three."}, {"start": 326.46, "end": 335.76, "text": " So what we'd like to do is to learn the parameters wj and bj given the data that we have, that"}, {"start": 335.76, "end": 341.38, "text": " is given the ratings a user has given of a set of movies."}, {"start": 341.38, "end": 347.2, "text": " So the algorithm we're going to use is very similar to linear regression."}, {"start": 347.2, "end": 352.56, "text": " So let's write out the cost function for learning the parameters wj and bj for a given user"}, {"start": 352.56, "end": 353.56, "text": " j."}, {"start": 353.56, "end": 361.72, "text": " This focus on one user on user j for now, I'm going to use the mean squared error criteria."}, {"start": 361.72, "end": 372.0, "text": " So the cost will be the prediction, which is wj dot xi plus bj minus the actual rating"}, {"start": 372.0, "end": 378.26, "text": " that the user had given so minus y i j squared."}, {"start": 378.26, "end": 384.68, "text": " And we're trying to choose parameters w and b to minimize the squared error between the"}, {"start": 384.68, "end": 390.56, "text": " predicted rating and the actual rating that was observed."}, {"start": 390.56, "end": 393.76, "text": " But the user hasn't rated all the movies."}, {"start": 393.76, "end": 400.36, "text": " So if we're going to sum over this, we're going to sum over only over the values of"}, {"start": 400.36, "end": 406.68, "text": " i where r i j is equal to one."}, {"start": 406.68, "end": 414.34000000000003, "text": " So we're going to sum only over the movies i that user j has actually rated."}, {"start": 414.34000000000003, "end": 415.72, "text": " So that's what this is."}, {"start": 415.72, "end": 422.06, "text": " Sum over all values of i where r i j is equal to one, meaning that user j has rated that"}, {"start": 422.06, "end": 424.0, "text": " movie i."}, {"start": 424.0, "end": 432.3, "text": " And then finally, we can take the usual normalization one over two mj."}, {"start": 432.3, "end": 438.24, "text": " And this is very much like the cost function we had for linear regression with m or really"}, {"start": 438.24, "end": 444.12, "text": " mj training examples where you're summing over the mj movies for which you have a rating"}, {"start": 444.12, "end": 448.76, "text": " taken a squared error and then normalizing by this one over two mj."}, {"start": 448.76, "end": 458.82, "text": " And this is going to be a cost function j of wj bj."}, {"start": 458.82, "end": 466.52, "text": " And if we minimize this as a function of wj and bj, then you should come up with a pretty"}, {"start": 466.52, "end": 472.2, "text": " good choice of parameters wj and bj for making predictions for user j's ratings."}, {"start": 472.2, "end": 476.03999999999996, "text": " Let me add just one more term to this cost function, which is the regularization term"}, {"start": 476.03999999999996, "end": 478.28, "text": " to prevent overfitting."}, {"start": 478.28, "end": 485.12, "text": " And so here's our usual regularization parameter lambda divided by two mj and then times the"}, {"start": 485.12, "end": 492.88, "text": " sum of the squared values of the parameters w."}, {"start": 492.88, "end": 498.36, "text": " And so n is a number of numbers in xi and that's the same as the number of numbers in"}, {"start": 498.36, "end": 500.64, "text": " wj."}, {"start": 500.64, "end": 506.9, "text": " If you were to minimize this cost function j as a function of w and b, you should get"}, {"start": 506.9, "end": 513.12, "text": " a pretty good set of parameters for predicting user j's ratings for other movies."}, {"start": 513.12, "end": 519.4, "text": " Now before moving on, it turns out that for recommender systems, it would be convenient"}, {"start": 519.4, "end": 525.52, "text": " to actually eliminate this division by mj term."}, {"start": 525.52, "end": 530.4, "text": " mj is just a constant in this expression and so even if you take it out, you should end"}, {"start": 530.4, "end": 533.68, "text": " up with the same value of w and b."}, {"start": 533.68, "end": 540.86, "text": " Now let me take this cost function down here at the bottom and copy it to the next slide."}, {"start": 540.86, "end": 547.0, "text": " So we have that to learn the parameters wj, bj for user j, we would minimize this cost"}, {"start": 547.0, "end": 551.98, "text": " function as a function of wj and bj."}, {"start": 551.98, "end": 556.7, "text": " But instead of focusing on a single user, let's look at how we learn the parameters"}, {"start": 556.7, "end": 559.4, "text": " for all of the users."}, {"start": 559.4, "end": 569.54, "text": " To learn the parameters w1, b1, w2, b2 through wnu, bnu for all of our n subscript u users,"}, {"start": 569.54, "end": 576.06, "text": " we would take this cost function on top and sum it over all the nu users."}, {"start": 576.06, "end": 587.18, "text": " So we would have sum from j equals 1 to nu of the same cost function that we had written"}, {"start": 587.18, "end": 598.64, "text": " up above and this becomes the cost for learning all the parameters for all of the users."}, {"start": 598.64, "end": 604.84, "text": " Even if we use gradient descent or any other optimization algorithm to minimize this as"}, {"start": 604.84, "end": 612.84, "text": " a function of w1, b1 all the way through wnu, bnu, then you have a pretty good set of parameters"}, {"start": 612.84, "end": 616.4399999999999, "text": " for predicting movie ratings for all the users."}, {"start": 616.4399999999999, "end": 621.72, "text": " And you may notice that this algorithm is a lot like linear regression where that plays"}, {"start": 621.72, "end": 629.94, "text": " a role similar to the output f of x of linear regression, only now we're training a different"}, {"start": 629.94, "end": 636.0600000000001, "text": " linear regression model for each of the n subscript u users."}, {"start": 636.0600000000001, "end": 642.14, "text": " So that's how you can learn parameters and predict movie ratings if you had access to"}, {"start": 642.14, "end": 648.1600000000001, "text": " these features x1 and x2 that tell you how much is each of the movies a romance movie"}, {"start": 648.1600000000001, "end": 651.6600000000001, "text": " and how much is each of the movies an action movie."}, {"start": 651.66, "end": 657.18, "text": " But where did these features come from and what if you don't have access to such features"}, {"start": 657.18, "end": 662.06, "text": " that give you enough detail about the movies with which to make these predictions?"}, {"start": 662.06, "end": 668.1, "text": " In the next video we'll look at a modification of this algorithm that will let you make predictions,"}, {"start": 668.1, "end": 673.86, "text": " let you make recommendations even if you don't have in advance features that describe the"}, {"start": 673.86, "end": 679.06, "text": " items of the movies in sufficient detail to run the algorithm that we just saw."}, {"start": 679.06, "end": 681.9399999999999, "text": " Let's go on and take a look at that in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=lrNPMtBH75w
9.3 Collaborative Filtering | Collaborative filtering algorithm-- [Machine Learning| Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
In the last video, you saw how if you have features for each movie, such as features X1 and X2, they tell you how much is this a romance movie and how much is this an action movie. Then you could use basically linear regression to learn to predict movie ratings. But what if you don't have those features, X1 and X2? Let's take a look at how you can learn or come up with those features, X1 and X2, from the data. So here's the data that we had before. But what if instead of having these numbers for X1 and X2, we didn't know in advance what the values of the features X1 and X2 are. So I'm going to replace them with question marks over here. Now just for the purposes of illustration, let's say we had somehow already learned parameters for the four users. So let's say that we learned parameters W1 equals 5 and 0 and B1 equals 0 for user 1. W2 is also 5, 0. B2 is 0. W3 is 0, 5. B3 is 0. And for user 4, W4 is also 0, 5 and B4 is equal to 0. We'll worry later about how we might have come up with these parameters W and B, but let's say we have them already. And as a reminder, to predict user j's rating on movie i, we're going to use wj.product the features of x i plus bj. So to simplify this example, all the values of b are actually equal to 0. So just to reduce a little bit of writing, I'm going to ignore b for the rest of this example. Let's take a look at how we can try to guess what might be reasonable features for movie 1. If these are the parameters you have on the left, then given that Alice rated movie 1 5, we should have that W1.x1 should be about equal to 5. And W2.x2 should also be about equal to 5 because Bob rated it 5. W3.x1 should be close to 0 and W4.x1 should be close to 0 as well. So the question is, given these values for W that we have up here, what choice for x1 would cause these values to be right? Well one possible choice would be if the features for that first movie were 1 0, in which case W1.x1 would equal to 5, W2.x1 would equal to 5, and similarly W3 or W4.product with this feature vector x1 would be equal to 0. So what we have is that if you have the parameters for all four users here, and if you have four ratings in this example that you want to try to match, you can take a reasonable guess at what is the feature vector x1 for movie 1 that would make good predictions for these four ratings up on top. And similarly, if you have these parameter vectors, you can also try to come up with a feature vector x2 for the second movie, a feature vector x3 for the third movie, and so on to try to make the algorithms predictions on these additional movies close to what was actually the ratings given by the users. Let's come up with a cost function for actually learning the values of x1 and x2. And by the way, notice that this works only because we have parameters for four users. That's what allows us to try to guess appropriate features x1. This is why in a typical linear regression application, if you are just a single user, you don't actually have enough information to figure out what would be the features x1 and x2, which is why in the linear regression context that you saw in course 1, you can't come up with features x1 and x2 from scratch. But in collective filtering, it is because you have ratings from multiple users of the same item with the same movie. That's what makes it possible to try to guess what are possible values for these features. So given w1, b1, w2, b2, and so on through w, n, u, b, n, u for the n subscript u users, if you want to learn the features xi for a specific movie i, here's a cost function we could use, which is that I'm going to want to minimize squared error as usual. So if the predicted rating by user j on movie i is given by this, let's take the squared difference from the actual movie rating yij. And as before, let's sum over all the users j, but this will be a sum over all values of j where rij is equal to 1. And I'll add a one half there as usual. And so if I define this as a cost function for xi, then if we minimize this as a function of xi, you'd be choosing the features xi for movie i so that for all the users j that had rated movie i, we would try to minimize the squared difference between what your choice of features xi results in in terms of the predicted movie rating minus the actual movie rating that the user had given it. And finally, if we want to add a regularization term, we add the usual plus lambda over 2, k equals 1 through n, where n is usual as the number of features of xi k squared. Lastly, to learn all the features x1 through xnm because we have nm movies, we can take this cost function on top and sum it over all the movies, so sum from i equals 1 through the number of movies, and then just take this term from above and this becomes a cost function for learning the features for all of the movies in the data set. And so if you have parameters w and b for all the users, then minimizing this cost function as a function of x1 through xnm using gradient descent or some other algorithm, this will actually allow you to take a pretty good guess at learning good features for the movies. And this is pretty remarkable. For most machine learning applications, the features had to be externally given, but in this algorithm, we can actually learn the features for a given movie. But in what we've done so far in this video, we assume you had those parameters w and b for the different users. Where do you get those parameters from? Well let's put together the algorithm from the last video for learning w and b and what we just talked about in this video for learning x, and that will give us our collaborative filtering algorithm. This is the cost function for learning the features. This is what we had derived on the last slide. Now it turns out that if we put these two together, this term here is exactly the same as this term here. Notice that sum over j of all values of i is that rij equals 1 is the same as summing over all values of i with all j where rij is equal to 1. This summation is just summing over all user movie pairs where there is a rating. And so what I'm going to do is put these two cost functions together and have this where I'm just writing out the summation more explicitly as summing over all pairs i and j where we do have a rating of the usual squared cost function. And then let me take the regularization term from learning the parameters w and b and put that here and take the regularization term from learning the features x and put them here and this ends up being our overall cost function for learning w, b, and x. And it turns out that if you minimize this cost function as a function of w and b as well as x, then this algorithm actually works. Here's what I mean. If we had three users and two movies and if you have ratings for these four movies but not those two, over here it does is it sums over all the users and for user one, it has the term the cost function for this for user two, it has terms and cost function for these for user three, it has a term and cost function for this. So we're summing over users first and then having one term for each movie where there is a rating. But an alternative way to carry out this summation is to first look at movie one, that's what this summation here does, and then to include all the users that rated movie one and then look at movie two and have a term for all the users that had rated movie two. And you see that in both cases we're just summing over these four pairs where the user had rated the corresponding movie. So that's why this summation on top and this summation here, there are two ways of summing over all of the pairs where the user had rated the movie. So how do you minimize this cost function as a function of W, B and X? One thing you could do is to use gradient descent. So in course one when we learned about linear regression, this is the gradient descent algorithm you had seen where we had a cost function J which is a function of the parameters W and B and we'd apply gradient descent as follows. With collaborative filtering, the cost function isn't a function of just W and B, it's now a function of W, B and X. And I'm using W and B here to denote the parameters for all of the users and X here just informally to denote the features for all of the movies. But if you're able to take partial derivatives with respect to the different parameters, you can then continue to update the parameters as follows. So now we need to optimize this with respect to X as well. So we also will want to update each of these parameters X using gradient descent as follows. And it turns out that if you do this, then you actually find pretty good values of W and B as well as X. And in this formulation of the problem, the parameters are W and B and X is also a parameter. And then finally to learn the values of X, we also will update X as X minus the partial derivative with respect to X of the cost WBX. I'm using the notation here a little bit informally and not keeping very careful track of this superstrips and subscripts. But the key takeaway I hope you have from this is that the parameters of this model are W and B and X now is also a parameter, which is why we minimize the cost function as a function of all three of these sets of parameters W and B as well as X. So the algorithm we just arrived is called collaborative filtering. And the name collaborative filtering refers to the sense that because multiple users have rated the same movie kind of collaboratively, given your sense of what this movie may be like that allows you to guess what are appropriate features for that movie. And this in turn allows you to predict how other users that haven't yet rated that same movie may decide to rate it in the future. So this collaborative filtering is this gathering of data from multiple users, this collaboration between users to help you predict ratings for even other users in the future. So far, our problem formulation has used movie ratings from one to five stars or from zero to five stars. A very common use case of recommender systems is when you have binary labels such as the user favorites or like or interact with an item. In the next video, let's take a look at the generalization of the model you've seen so far to binary labels. Let's go see that in the next video.
[{"start": 0.0, "end": 7.44, "text": " In the last video, you saw how if you have features for each movie, such as features"}, {"start": 7.44, "end": 12.88, "text": " X1 and X2, they tell you how much is this a romance movie and how much is this an action"}, {"start": 12.88, "end": 13.88, "text": " movie."}, {"start": 13.88, "end": 18.92, "text": " Then you could use basically linear regression to learn to predict movie ratings."}, {"start": 18.92, "end": 22.32, "text": " But what if you don't have those features, X1 and X2?"}, {"start": 22.32, "end": 27.76, "text": " Let's take a look at how you can learn or come up with those features, X1 and X2, from"}, {"start": 27.76, "end": 28.76, "text": " the data."}, {"start": 28.76, "end": 31.880000000000003, "text": " So here's the data that we had before."}, {"start": 31.880000000000003, "end": 38.64, "text": " But what if instead of having these numbers for X1 and X2, we didn't know in advance what"}, {"start": 38.64, "end": 41.160000000000004, "text": " the values of the features X1 and X2 are."}, {"start": 41.160000000000004, "end": 45.0, "text": " So I'm going to replace them with question marks over here."}, {"start": 45.0, "end": 52.040000000000006, "text": " Now just for the purposes of illustration, let's say we had somehow already learned parameters"}, {"start": 52.040000000000006, "end": 53.96, "text": " for the four users."}, {"start": 53.96, "end": 61.120000000000005, "text": " So let's say that we learned parameters W1 equals 5 and 0 and B1 equals 0 for user 1."}, {"start": 61.120000000000005, "end": 64.24, "text": " W2 is also 5, 0."}, {"start": 64.24, "end": 66.52, "text": " B2 is 0."}, {"start": 66.52, "end": 69.76, "text": " W3 is 0, 5."}, {"start": 69.76, "end": 72.84, "text": " B3 is 0."}, {"start": 72.84, "end": 78.64, "text": " And for user 4, W4 is also 0, 5 and B4 is equal to 0."}, {"start": 78.64, "end": 83.32, "text": " We'll worry later about how we might have come up with these parameters W and B, but"}, {"start": 83.32, "end": 86.67999999999999, "text": " let's say we have them already."}, {"start": 86.67999999999999, "end": 96.83999999999999, "text": " And as a reminder, to predict user j's rating on movie i, we're going to use wj.product"}, {"start": 96.83999999999999, "end": 102.0, "text": " the features of x i plus bj."}, {"start": 102.0, "end": 106.82, "text": " So to simplify this example, all the values of b are actually equal to 0."}, {"start": 106.82, "end": 110.8, "text": " So just to reduce a little bit of writing, I'm going to ignore b for the rest of this"}, {"start": 110.8, "end": 111.8, "text": " example."}, {"start": 111.8, "end": 118.08, "text": " Let's take a look at how we can try to guess what might be reasonable features for movie"}, {"start": 118.08, "end": 119.36, "text": " 1."}, {"start": 119.36, "end": 125.32, "text": " If these are the parameters you have on the left, then given that Alice rated movie 1"}, {"start": 125.32, "end": 133.4, "text": " 5, we should have that W1.x1 should be about equal to 5."}, {"start": 133.4, "end": 140.56, "text": " And W2.x2 should also be about equal to 5 because Bob rated it 5."}, {"start": 140.56, "end": 149.28, "text": " W3.x1 should be close to 0 and W4.x1 should be close to 0 as well."}, {"start": 149.28, "end": 158.32, "text": " So the question is, given these values for W that we have up here, what choice for x1"}, {"start": 158.32, "end": 164.96, "text": " would cause these values to be right?"}, {"start": 164.96, "end": 171.24, "text": " Well one possible choice would be if the features for that first movie were 1 0, in which case"}, {"start": 171.24, "end": 184.32, "text": " W1.x1 would equal to 5, W2.x1 would equal to 5, and similarly W3 or W4.product with"}, {"start": 184.32, "end": 188.34, "text": " this feature vector x1 would be equal to 0."}, {"start": 188.34, "end": 194.92000000000002, "text": " So what we have is that if you have the parameters for all four users here, and if you have four"}, {"start": 194.92, "end": 200.55999999999997, "text": " ratings in this example that you want to try to match, you can take a reasonable guess"}, {"start": 200.55999999999997, "end": 207.88, "text": " at what is the feature vector x1 for movie 1 that would make good predictions for these"}, {"start": 207.88, "end": 210.39999999999998, "text": " four ratings up on top."}, {"start": 210.39999999999998, "end": 217.16, "text": " And similarly, if you have these parameter vectors, you can also try to come up with"}, {"start": 217.16, "end": 223.88, "text": " a feature vector x2 for the second movie, a feature vector x3 for the third movie, and"}, {"start": 223.88, "end": 233.84, "text": " so on to try to make the algorithms predictions on these additional movies close to what was"}, {"start": 233.84, "end": 237.24, "text": " actually the ratings given by the users."}, {"start": 237.24, "end": 244.88, "text": " Let's come up with a cost function for actually learning the values of x1 and x2."}, {"start": 244.88, "end": 251.78, "text": " And by the way, notice that this works only because we have parameters for four users."}, {"start": 251.78, "end": 257.08, "text": " That's what allows us to try to guess appropriate features x1."}, {"start": 257.08, "end": 262.1, "text": " This is why in a typical linear regression application, if you are just a single user,"}, {"start": 262.1, "end": 266.2, "text": " you don't actually have enough information to figure out what would be the features x1"}, {"start": 266.2, "end": 272.94, "text": " and x2, which is why in the linear regression context that you saw in course 1, you can't"}, {"start": 272.94, "end": 277.36, "text": " come up with features x1 and x2 from scratch."}, {"start": 277.36, "end": 282.32, "text": " But in collective filtering, it is because you have ratings from multiple users of the"}, {"start": 282.32, "end": 284.68, "text": " same item with the same movie."}, {"start": 284.68, "end": 290.04, "text": " That's what makes it possible to try to guess what are possible values for these features."}, {"start": 290.04, "end": 299.96000000000004, "text": " So given w1, b1, w2, b2, and so on through w, n, u, b, n, u for the n subscript u users,"}, {"start": 299.96000000000004, "end": 306.2, "text": " if you want to learn the features xi for a specific movie i, here's a cost function we"}, {"start": 306.2, "end": 314.46, "text": " could use, which is that I'm going to want to minimize squared error as usual."}, {"start": 314.46, "end": 324.18, "text": " So if the predicted rating by user j on movie i is given by this, let's take the squared"}, {"start": 324.18, "end": 329.8, "text": " difference from the actual movie rating yij."}, {"start": 329.8, "end": 337.2, "text": " And as before, let's sum over all the users j, but this will be a sum over all values"}, {"start": 337.2, "end": 342.44, "text": " of j where rij is equal to 1."}, {"start": 342.44, "end": 344.54, "text": " And I'll add a one half there as usual."}, {"start": 344.54, "end": 352.68, "text": " And so if I define this as a cost function for xi, then if we minimize this as a function"}, {"start": 352.68, "end": 361.32, "text": " of xi, you'd be choosing the features xi for movie i so that for all the users j that had"}, {"start": 361.32, "end": 368.40000000000003, "text": " rated movie i, we would try to minimize the squared difference between what your choice"}, {"start": 368.40000000000003, "end": 374.96000000000004, "text": " of features xi results in in terms of the predicted movie rating minus the actual movie"}, {"start": 374.96000000000004, "end": 377.2, "text": " rating that the user had given it."}, {"start": 377.2, "end": 384.0, "text": " And finally, if we want to add a regularization term, we add the usual plus lambda over 2,"}, {"start": 384.0, "end": 391.32, "text": " k equals 1 through n, where n is usual as the number of features of xi k squared."}, {"start": 391.32, "end": 400.15999999999997, "text": " Lastly, to learn all the features x1 through xnm because we have nm movies, we can take"}, {"start": 400.16, "end": 407.64000000000004, "text": " this cost function on top and sum it over all the movies, so sum from i equals 1 through"}, {"start": 407.64000000000004, "end": 416.76000000000005, "text": " the number of movies, and then just take this term from above and this becomes a cost function"}, {"start": 416.76000000000005, "end": 422.82000000000005, "text": " for learning the features for all of the movies in the data set."}, {"start": 422.82, "end": 431.0, "text": " And so if you have parameters w and b for all the users, then minimizing this cost function"}, {"start": 431.0, "end": 437.08, "text": " as a function of x1 through xnm using gradient descent or some other algorithm, this will"}, {"start": 437.08, "end": 442.92, "text": " actually allow you to take a pretty good guess at learning good features for the movies."}, {"start": 442.92, "end": 445.2, "text": " And this is pretty remarkable."}, {"start": 445.2, "end": 451.15999999999997, "text": " For most machine learning applications, the features had to be externally given, but in"}, {"start": 451.16, "end": 455.84000000000003, "text": " this algorithm, we can actually learn the features for a given movie."}, {"start": 455.84000000000003, "end": 460.92, "text": " But in what we've done so far in this video, we assume you had those parameters w and b"}, {"start": 460.92, "end": 462.68, "text": " for the different users."}, {"start": 462.68, "end": 465.24, "text": " Where do you get those parameters from?"}, {"start": 465.24, "end": 469.88, "text": " Well let's put together the algorithm from the last video for learning w and b and what"}, {"start": 469.88, "end": 475.28000000000003, "text": " we just talked about in this video for learning x, and that will give us our collaborative"}, {"start": 475.28000000000003, "end": 477.8, "text": " filtering algorithm."}, {"start": 477.8, "end": 481.44, "text": " This is the cost function for learning the features."}, {"start": 481.44, "end": 484.84000000000003, "text": " This is what we had derived on the last slide."}, {"start": 484.84000000000003, "end": 493.16, "text": " Now it turns out that if we put these two together, this term here is exactly the same"}, {"start": 493.16, "end": 495.46000000000004, "text": " as this term here."}, {"start": 495.46000000000004, "end": 502.14, "text": " Notice that sum over j of all values of i is that rij equals 1 is the same as summing"}, {"start": 502.14, "end": 508.08, "text": " over all values of i with all j where rij is equal to 1."}, {"start": 508.08, "end": 514.02, "text": " This summation is just summing over all user movie pairs where there is a rating."}, {"start": 514.02, "end": 522.92, "text": " And so what I'm going to do is put these two cost functions together and have this where"}, {"start": 522.92, "end": 530.2, "text": " I'm just writing out the summation more explicitly as summing over all pairs i and j where we"}, {"start": 530.2, "end": 535.1600000000001, "text": " do have a rating of the usual squared cost function."}, {"start": 535.1600000000001, "end": 543.08, "text": " And then let me take the regularization term from learning the parameters w and b and put"}, {"start": 543.08, "end": 549.2800000000001, "text": " that here and take the regularization term from learning the features x and put them"}, {"start": 549.2800000000001, "end": 559.0400000000001, "text": " here and this ends up being our overall cost function for learning w, b, and x."}, {"start": 559.04, "end": 564.48, "text": " And it turns out that if you minimize this cost function as a function of w and b as"}, {"start": 564.48, "end": 568.52, "text": " well as x, then this algorithm actually works."}, {"start": 568.52, "end": 569.52, "text": " Here's what I mean."}, {"start": 569.52, "end": 578.12, "text": " If we had three users and two movies and if you have ratings for these four movies but"}, {"start": 578.12, "end": 584.52, "text": " not those two, over here it does is it sums over all the users and for user one, it has"}, {"start": 584.52, "end": 589.04, "text": " the term the cost function for this for user two, it has terms and cost function for these"}, {"start": 589.04, "end": 591.36, "text": " for user three, it has a term and cost function for this."}, {"start": 591.36, "end": 599.16, "text": " So we're summing over users first and then having one term for each movie where there"}, {"start": 599.16, "end": 600.16, "text": " is a rating."}, {"start": 600.16, "end": 605.88, "text": " But an alternative way to carry out this summation is to first look at movie one, that's what"}, {"start": 605.88, "end": 612.24, "text": " this summation here does, and then to include all the users that rated movie one and then"}, {"start": 612.24, "end": 619.08, "text": " look at movie two and have a term for all the users that had rated movie two."}, {"start": 619.08, "end": 626.2, "text": " And you see that in both cases we're just summing over these four pairs where the user"}, {"start": 626.2, "end": 628.96, "text": " had rated the corresponding movie."}, {"start": 628.96, "end": 633.8, "text": " So that's why this summation on top and this summation here, there are two ways of summing"}, {"start": 633.8, "end": 638.5600000000001, "text": " over all of the pairs where the user had rated the movie."}, {"start": 638.56, "end": 643.8, "text": " So how do you minimize this cost function as a function of W, B and X?"}, {"start": 643.8, "end": 648.04, "text": " One thing you could do is to use gradient descent."}, {"start": 648.04, "end": 655.1999999999999, "text": " So in course one when we learned about linear regression, this is the gradient descent algorithm"}, {"start": 655.1999999999999, "end": 660.28, "text": " you had seen where we had a cost function J which is a function of the parameters W"}, {"start": 660.28, "end": 663.56, "text": " and B and we'd apply gradient descent as follows."}, {"start": 663.56, "end": 669.88, "text": " With collaborative filtering, the cost function isn't a function of just W and B, it's now"}, {"start": 669.88, "end": 673.64, "text": " a function of W, B and X."}, {"start": 673.64, "end": 679.4, "text": " And I'm using W and B here to denote the parameters for all of the users and X here just informally"}, {"start": 679.4, "end": 682.28, "text": " to denote the features for all of the movies."}, {"start": 682.28, "end": 687.5999999999999, "text": " But if you're able to take partial derivatives with respect to the different parameters,"}, {"start": 687.5999999999999, "end": 692.56, "text": " you can then continue to update the parameters as follows."}, {"start": 692.56, "end": 695.9599999999999, "text": " So now we need to optimize this with respect to X as well."}, {"start": 695.9599999999999, "end": 704.76, "text": " So we also will want to update each of these parameters X using gradient descent as follows."}, {"start": 704.76, "end": 712.0, "text": " And it turns out that if you do this, then you actually find pretty good values of W"}, {"start": 712.0, "end": 714.4799999999999, "text": " and B as well as X."}, {"start": 714.48, "end": 724.96, "text": " And in this formulation of the problem, the parameters are W and B and X is also a parameter."}, {"start": 724.96, "end": 733.2, "text": " And then finally to learn the values of X, we also will update X as X minus the partial"}, {"start": 733.2, "end": 739.64, "text": " derivative with respect to X of the cost WBX."}, {"start": 739.64, "end": 744.44, "text": " I'm using the notation here a little bit informally and not keeping very careful track of this"}, {"start": 744.44, "end": 746.48, "text": " superstrips and subscripts."}, {"start": 746.48, "end": 751.6400000000001, "text": " But the key takeaway I hope you have from this is that the parameters of this model"}, {"start": 751.6400000000001, "end": 759.32, "text": " are W and B and X now is also a parameter, which is why we minimize the cost function"}, {"start": 759.32, "end": 765.6400000000001, "text": " as a function of all three of these sets of parameters W and B as well as X."}, {"start": 765.6400000000001, "end": 770.44, "text": " So the algorithm we just arrived is called collaborative filtering."}, {"start": 770.44, "end": 776.12, "text": " And the name collaborative filtering refers to the sense that because multiple users have"}, {"start": 776.12, "end": 781.96, "text": " rated the same movie kind of collaboratively, given your sense of what this movie may be"}, {"start": 781.96, "end": 787.3000000000001, "text": " like that allows you to guess what are appropriate features for that movie."}, {"start": 787.3000000000001, "end": 792.5600000000001, "text": " And this in turn allows you to predict how other users that haven't yet rated that same"}, {"start": 792.5600000000001, "end": 796.22, "text": " movie may decide to rate it in the future."}, {"start": 796.22, "end": 803.02, "text": " So this collaborative filtering is this gathering of data from multiple users, this collaboration"}, {"start": 803.02, "end": 809.72, "text": " between users to help you predict ratings for even other users in the future."}, {"start": 809.72, "end": 815.52, "text": " So far, our problem formulation has used movie ratings from one to five stars or from zero"}, {"start": 815.52, "end": 817.32, "text": " to five stars."}, {"start": 817.32, "end": 823.0, "text": " A very common use case of recommender systems is when you have binary labels such as the"}, {"start": 823.0, "end": 827.36, "text": " user favorites or like or interact with an item."}, {"start": 827.36, "end": 831.32, "text": " In the next video, let's take a look at the generalization of the model you've seen so"}, {"start": 831.32, "end": 833.8, "text": " far to binary labels."}, {"start": 833.8, "end": 854.0, "text": " Let's go see that in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=tnIiuLQk63I
9.4 Collaborative Filtering | Binary labels: favs, likes and clicks-- [Machine Learning| Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Many important applications of recommended systems or collaborative filtering algorithms involve binary labels where instead of a user giving you a 1 to 5 star or 0 to 5 star rating, they just somehow give you a sense of they like this item or they did not like this item. Let's take a look at how to generalize the algorithm you've seen to this setting. The process we'll use to generalize the algorithm will be very much reminiscent to how we had gone from linear regression to logistic regression to predicting numbers to predicting a binary label back in course 1. Let's take a look. Here's an example of a collaborative filtering dataset with binary labels. A 1 denotes that the user liked or engaged with a particular movie, so label 1 could mean that Alice watched the movie Love at Last all the way to the end and watched Romance Forever all the way to the end, but after playing a few minutes of non-stop car chases decided to stop the video and move on. Or it could mean that she explicitly hit like or favorite on an app to indicate that she liked these movies, but after checking out non-stop car chases and Souls vs Karate did not hit like, and the question mark usually means the user has not yet seen the item and so they weren't in a position to decide whether or not to hit like or favorite on that particular item. So the question is, how can we take the collaborative filtering algorithm that you saw in the last video and get it to work on this dataset? And by predicting how likely Alice, Bob, Carol and Dave are to like the items that they have not yet rated, we can then decide how much we should recommend these items to them. There are many ways of defining what is a label 1 and what is a label 0 and what is a label question mark in collaborative filtering with binary labels. Let's take a look at a few examples. In an online shopping website, the label could denote whether or not user J chose to purchase an item after they were exposed to it, after they were shown the item. So 1 would denote that they purchased it, 0 would denote that they did not purchase it and the question mark would denote that they were not even shown, were not even exposed to the item. Or in a social media setting, the labels 1 or 0 could denote to the user favorites or like an item after they were shown it and question mark would be if they've not yet been shown the item. For many sites, instead of asking for explicit user rating, we'll use the user behavior to try to guess if the user liked the item. So for example, you can measure if a user spends at least 30 seconds of an item and if they did, then assign that a label 1 because the user found the item engaging or if a user was shown an item but did not spend at least 30 seconds with it, then assign that a label 0 or if the user was not shown the item yet then assign it a question mark. Another way to generate a rating implicitly as a function of the user behavior will be to see that the user clicked on an item. This is often done in online advertising where if the user has been shown an ad, if they clicked on it, assign it a label 1, if they did not click, assign it a label 0 and the question mark will refer to if the user has not even been shown that ad in the first place. So often these binary labels will have a rough meaning as follows. A label of 1 means that the user engaged after being shown an item. An engaged could mean that they clicked or spent 30 seconds or explicitly favored or liked or purchased the item. A 0 would reflect the user not engaging after being shown the item and the question mark would reflect the item not yet having been shown to the user. So given these binary labels, let's look at how we can generalize our algorithm which is a lot like linear regression from the previous couple of videos to predicting these binary outputs. Previously, we were predicting label yij as wj.prime.xi plus b. So this was a lot like a linear regression model. But binary labels were going to predict that the probability of yij being equal to 1 is given by not wj.xi plus b but instead by g of this formula where now g of z is 1 over 1 plus e to the negative z. So this is the logistic function just like we saw in logistic regression. And what we would do is take what was a lot like a linear regression model and turn it into something that would be a lot like a logistic regression model where we'll now predict the probability of yij being 1 that is of the user having engaged with or liked the item using this model. In order to build this algorithm, we'll also have to modify the cost function from the squared error cost function to a cost function that is more appropriate for binary labels for a logistic regression like model. So previously, this was the cost function that we had where this term played a role similar to f of x, the prediction of the algorithm. When you now have binary labels yij when the labels are 1 or 0 or question mark, then the prediction f of x becomes instead of wj.xi plus bj, it becomes g of this where g is the logistic function. And similar to when we had derived logistic regression, we had written out the following loss function for a single example, which was that the loss, if the algorithm predicts f of x and the true label was y, the loss was this. It was negative y log f minus 1 minus y log 1 minus f. This is also sometimes called the binary cross entropy cost function, but this is the standard cost function that we had used for logistic regression as well as for the binary classification problems when we were training neural networks. And so to adapt this to the collaborative filtering setting, let me write out the cost function, which is now a function of all the parameters w and b, as well as all the parameters x, which are the features of the individual rubies or items of, we now need to sum over all the pairs ij where r ij is equal to 1. Notice this is just similar to the summation up on top. And now instead of this squared error cost function, we're going to use that loss function as a function of f of x comma y ij, where f of x here, that's my abbreviation, my shorthand for g of w j dot x i plus v j. And if you plug this into here, then this gives you the cost function they can use for collaborative filtering on binary labels. So that's it. That's how you can take the linear regression like collaborative filtering algorithm and generalize it to work with binary labels. And this actually very significantly opens up the set of applications you can address with this algorithm. Now even though you've seen the key structure and cost function of the algorithm, there are also some implementational tips that will make your algorithm work much better. Let's go on to the next video to take a look at some details of how you implement this and some little modifications that will make the algorithm run much faster. Let's go on to the next video.
[{"start": 0.0, "end": 8.22, "text": " Many important applications of recommended systems or collaborative filtering algorithms"}, {"start": 8.22, "end": 15.700000000000001, "text": " involve binary labels where instead of a user giving you a 1 to 5 star or 0 to 5 star rating,"}, {"start": 15.700000000000001, "end": 21.32, "text": " they just somehow give you a sense of they like this item or they did not like this item."}, {"start": 21.32, "end": 25.5, "text": " Let's take a look at how to generalize the algorithm you've seen to this setting."}, {"start": 25.5, "end": 30.44, "text": " The process we'll use to generalize the algorithm will be very much reminiscent to how we had"}, {"start": 30.44, "end": 37.04, "text": " gone from linear regression to logistic regression to predicting numbers to predicting a binary"}, {"start": 37.04, "end": 39.0, "text": " label back in course 1."}, {"start": 39.0, "end": 40.0, "text": " Let's take a look."}, {"start": 40.0, "end": 45.8, "text": " Here's an example of a collaborative filtering dataset with binary labels."}, {"start": 45.8, "end": 53.28, "text": " A 1 denotes that the user liked or engaged with a particular movie, so label 1 could"}, {"start": 53.28, "end": 58.44, "text": " mean that Alice watched the movie Love at Last all the way to the end and watched Romance"}, {"start": 58.44, "end": 63.44, "text": " Forever all the way to the end, but after playing a few minutes of non-stop car chases"}, {"start": 63.44, "end": 66.64, "text": " decided to stop the video and move on."}, {"start": 66.64, "end": 73.32, "text": " Or it could mean that she explicitly hit like or favorite on an app to indicate that she"}, {"start": 73.32, "end": 77.88, "text": " liked these movies, but after checking out non-stop car chases and Souls vs Karate did"}, {"start": 77.88, "end": 83.6, "text": " not hit like, and the question mark usually means the user has not yet seen the item and"}, {"start": 83.6, "end": 89.52, "text": " so they weren't in a position to decide whether or not to hit like or favorite on that particular"}, {"start": 89.52, "end": 90.52, "text": " item."}, {"start": 90.52, "end": 94.91999999999999, "text": " So the question is, how can we take the collaborative filtering algorithm that you saw in the last"}, {"start": 94.91999999999999, "end": 97.94, "text": " video and get it to work on this dataset?"}, {"start": 97.94, "end": 103.91999999999999, "text": " And by predicting how likely Alice, Bob, Carol and Dave are to like the items that they have"}, {"start": 103.92, "end": 111.6, "text": " not yet rated, we can then decide how much we should recommend these items to them."}, {"start": 111.6, "end": 116.92, "text": " There are many ways of defining what is a label 1 and what is a label 0 and what is"}, {"start": 116.92, "end": 121.76, "text": " a label question mark in collaborative filtering with binary labels."}, {"start": 121.76, "end": 123.72, "text": " Let's take a look at a few examples."}, {"start": 123.72, "end": 131.88, "text": " In an online shopping website, the label could denote whether or not user J chose to purchase"}, {"start": 131.88, "end": 136.68, "text": " an item after they were exposed to it, after they were shown the item."}, {"start": 136.68, "end": 140.6, "text": " So 1 would denote that they purchased it, 0 would denote that they did not purchase"}, {"start": 140.6, "end": 145.04, "text": " it and the question mark would denote that they were not even shown, were not even exposed"}, {"start": 145.04, "end": 146.2, "text": " to the item."}, {"start": 146.2, "end": 152.84, "text": " Or in a social media setting, the labels 1 or 0 could denote to the user favorites or"}, {"start": 152.84, "end": 157.16, "text": " like an item after they were shown it and question mark would be if they've not yet"}, {"start": 157.16, "end": 158.84, "text": " been shown the item."}, {"start": 158.84, "end": 165.36, "text": " For many sites, instead of asking for explicit user rating, we'll use the user behavior"}, {"start": 165.36, "end": 168.92000000000002, "text": " to try to guess if the user liked the item."}, {"start": 168.92000000000002, "end": 175.44, "text": " So for example, you can measure if a user spends at least 30 seconds of an item and"}, {"start": 175.44, "end": 181.36, "text": " if they did, then assign that a label 1 because the user found the item engaging or if a user"}, {"start": 181.36, "end": 187.08, "text": " was shown an item but did not spend at least 30 seconds with it, then assign that a label"}, {"start": 187.08, "end": 192.92000000000002, "text": " 0 or if the user was not shown the item yet then assign it a question mark."}, {"start": 192.92000000000002, "end": 198.60000000000002, "text": " Another way to generate a rating implicitly as a function of the user behavior will be"}, {"start": 198.60000000000002, "end": 201.52, "text": " to see that the user clicked on an item."}, {"start": 201.52, "end": 207.08, "text": " This is often done in online advertising where if the user has been shown an ad, if they"}, {"start": 207.08, "end": 212.96, "text": " clicked on it, assign it a label 1, if they did not click, assign it a label 0 and the"}, {"start": 212.96, "end": 217.92000000000002, "text": " question mark will refer to if the user has not even been shown that ad in the first place."}, {"start": 217.92000000000002, "end": 223.08, "text": " So often these binary labels will have a rough meaning as follows."}, {"start": 223.08, "end": 227.52, "text": " A label of 1 means that the user engaged after being shown an item."}, {"start": 227.52, "end": 232.32, "text": " An engaged could mean that they clicked or spent 30 seconds or explicitly favored or"}, {"start": 232.32, "end": 234.28, "text": " liked or purchased the item."}, {"start": 234.28, "end": 239.3, "text": " A 0 would reflect the user not engaging after being shown the item and the question mark"}, {"start": 239.3, "end": 243.56, "text": " would reflect the item not yet having been shown to the user."}, {"start": 243.56, "end": 249.52, "text": " So given these binary labels, let's look at how we can generalize our algorithm which"}, {"start": 249.52, "end": 255.52, "text": " is a lot like linear regression from the previous couple of videos to predicting these binary"}, {"start": 255.52, "end": 256.52, "text": " outputs."}, {"start": 256.52, "end": 263.04, "text": " Previously, we were predicting label yij as wj.prime.xi plus b."}, {"start": 263.04, "end": 266.28000000000003, "text": " So this was a lot like a linear regression model."}, {"start": 266.28, "end": 274.28, "text": " But binary labels were going to predict that the probability of yij being equal to 1 is"}, {"start": 274.28, "end": 287.03999999999996, "text": " given by not wj.xi plus b but instead by g of this formula where now g of z is 1 over"}, {"start": 287.03999999999996, "end": 288.47999999999996, "text": " 1 plus e to the negative z."}, {"start": 288.47999999999996, "end": 292.52, "text": " So this is the logistic function just like we saw in logistic regression."}, {"start": 292.52, "end": 298.79999999999995, "text": " And what we would do is take what was a lot like a linear regression model and turn it"}, {"start": 298.79999999999995, "end": 304.68, "text": " into something that would be a lot like a logistic regression model where we'll now"}, {"start": 304.68, "end": 313.0, "text": " predict the probability of yij being 1 that is of the user having engaged with or liked"}, {"start": 313.0, "end": 316.47999999999996, "text": " the item using this model."}, {"start": 316.47999999999996, "end": 322.03999999999996, "text": " In order to build this algorithm, we'll also have to modify the cost function from the"}, {"start": 322.04, "end": 331.08000000000004, "text": " squared error cost function to a cost function that is more appropriate for binary labels"}, {"start": 331.08000000000004, "end": 334.56, "text": " for a logistic regression like model."}, {"start": 334.56, "end": 340.56, "text": " So previously, this was the cost function that we had where this term played a role"}, {"start": 340.56, "end": 344.48, "text": " similar to f of x, the prediction of the algorithm."}, {"start": 344.48, "end": 352.68, "text": " When you now have binary labels yij when the labels are 1 or 0 or question mark, then the"}, {"start": 352.68, "end": 365.24, "text": " prediction f of x becomes instead of wj.xi plus bj, it becomes g of this where g is the"}, {"start": 365.24, "end": 366.86, "text": " logistic function."}, {"start": 366.86, "end": 372.14000000000004, "text": " And similar to when we had derived logistic regression, we had written out the following"}, {"start": 372.14, "end": 378.2, "text": " loss function for a single example, which was that the loss, if the algorithm predicts"}, {"start": 378.2, "end": 382.68, "text": " f of x and the true label was y, the loss was this."}, {"start": 382.68, "end": 391.47999999999996, "text": " It was negative y log f minus 1 minus y log 1 minus f."}, {"start": 391.47999999999996, "end": 397.4, "text": " This is also sometimes called the binary cross entropy cost function, but this is the standard"}, {"start": 397.4, "end": 402.35999999999996, "text": " cost function that we had used for logistic regression as well as for the binary classification"}, {"start": 402.35999999999996, "end": 405.76, "text": " problems when we were training neural networks."}, {"start": 405.76, "end": 411.44, "text": " And so to adapt this to the collaborative filtering setting, let me write out the cost"}, {"start": 411.44, "end": 418.79999999999995, "text": " function, which is now a function of all the parameters w and b, as well as all the parameters"}, {"start": 418.79999999999995, "end": 426.59999999999997, "text": " x, which are the features of the individual rubies or items of, we now need to sum over"}, {"start": 426.6, "end": 432.88, "text": " all the pairs ij where r ij is equal to 1."}, {"start": 432.88, "end": 436.86, "text": " Notice this is just similar to the summation up on top."}, {"start": 436.86, "end": 443.08000000000004, "text": " And now instead of this squared error cost function, we're going to use that loss function"}, {"start": 443.08000000000004, "end": 453.16, "text": " as a function of f of x comma y ij, where f of x here, that's my abbreviation, my shorthand"}, {"start": 453.16, "end": 458.04, "text": " for g of w j dot x i plus v j."}, {"start": 458.04, "end": 465.36, "text": " And if you plug this into here, then this gives you the cost function they can use for"}, {"start": 465.36, "end": 468.52000000000004, "text": " collaborative filtering on binary labels."}, {"start": 468.52000000000004, "end": 469.52000000000004, "text": " So that's it."}, {"start": 469.52000000000004, "end": 474.68, "text": " That's how you can take the linear regression like collaborative filtering algorithm and"}, {"start": 474.68, "end": 477.64000000000004, "text": " generalize it to work with binary labels."}, {"start": 477.64000000000004, "end": 482.56, "text": " And this actually very significantly opens up the set of applications you can address"}, {"start": 482.56, "end": 484.92, "text": " with this algorithm."}, {"start": 484.92, "end": 490.92, "text": " Now even though you've seen the key structure and cost function of the algorithm, there"}, {"start": 490.92, "end": 496.08, "text": " are also some implementational tips that will make your algorithm work much better."}, {"start": 496.08, "end": 501.06, "text": " Let's go on to the next video to take a look at some details of how you implement this"}, {"start": 501.06, "end": 505.6, "text": " and some little modifications that will make the algorithm run much faster."}, {"start": 505.6, "end": 512.6, "text": " Let's go on to the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=vF96LPtVM-A
9.5 Recommender Systems implementation | Mean normalization-- [Machine Learning| Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Back in the first course, you had seen how for linear regression, feature normalization can help the algorithm run faster. In the case of building and recommend a system with numbers y, such as movie ratings from 1 to 5 or 0 to 5 stars, it turns out your algorithm will run more efficiently and also perform a bit better if you first carry out mean normalization. That is if you normalize the movie ratings to have a consistent average value. Let's take a look at what that means. So here's the data set that we've been using and down below is the cost function you would use to learn the parameters for the model. In order to explain mean normalization, I'm actually going to add a fifth user, Eve, who has not yet rated any movies. So you see in a little bit that adding mean normalization will help the algorithm make better predictions on the user Eve. In fact, if you were to train a collaborative filtering algorithm on this data, then because we are trying to make the parameters w small because of this regularization term, if you were to run the algorithm on this data set, you actually end up with the parameters w for the fifth user, for the user Eve, to be equal to 0, 0, as well as quite likely b5 equals 0. Because Eve hasn't rated any movies yet, the parameters w and b don't affect this first term in the cost function because none of Eve's movies rating play a role in this squared error cost function. And so minimizing this means making the parameters w as small as possible. We didn't really regularize b, but if you initialize b to 0 as a default, you end up with b5 equals 0 as well. But if these are the parameters for user 5, that is for Eve, then what the algorithm will end up doing is predict that all of Eve's movies ratings would be w5.x for movie i plus b5, and this is equal to 0 if w and b above equal to 0. And so this algorithm would predict that if you have a new user that has not yet rated anything, we think they'll rate all movies with 0 stars, and that's not particularly helpful. So in this video, we'll see that mean normalization will help this algorithm come up with better predictions of the movie ratings for a new user that has not yet rated any movies. In order to describe mean normalization, let me take all of the values here, including all the question marks for Eve, and put them in a two dimensional matrix like this, just to write out all the ratings, including the question marks, in a more succinct or a more compact way. To carry out mean normalization, what we're going to do is take all of these ratings and for each movie, compute the average rating that was given. So movie 1 had two 5s and two 0s, and so the average rating is 2.5. Movie 2 had a 5 and a 0, so that averages out to 2.5. Movie 3, 4 and 0, averages out to 2. Movie 4 averages out to 2.25 rating, and movie 5, not that popular, has an average 1.25 rating. So I'm going to take all of these five numbers and gather them into a vector, which I'm going to call mu, because this is the vector of the average ratings that each of the movies had, averaging over just the users that did rate that particular movie. Instead of using these original 0 to 5 star ratings over here, I'm going to take this and subtract from every rating the mean rating that it was given. So for example, this movie rating was 5, I'm going to subtract 2.5, giving me 2.5 over here. This movie had a 0 star rating, going to subtract 2.25, giving me a negative 2.25 rating, and so on. So I'm going to take all of the now five users, including the new user Eve, as well as for all five movies. Then these new values on the right become your new values of yij. We're going to pretend that user 1 had given a 2.5 rating to movie 1 and a negative 2.25 rating to movie 4. And using this, you can then learn wj, bj and xi. Same as before, for user j on movie i, you would predict wj.xi plus bj. But because we had subtracted off mu i for movie i during this mean normalization step, in order to predict not a negative star rating, which isn't possible if a user rates from zero to five stars, we have to add back this mu i, which is just the value we had subtracted out. So as a concrete example, if we look at what happens with user 5, with the new user Eve, because she had not yet rated any movies, the algorithm might learn parameters w5 equals zero and say b5 equals zero. And so if we look at the predicted rating for movie 1, we will predict that Eve would rate it w5.x1 plus b5, but this is zero, and then plus mu 1, which is equal to 2.5. So this seems more reasonable to think Eve was likely to rate this movie 2.5 rather than think Eve will rate all movies zero stars just because she hasn't rated any movies yet. And in fact, the effect of this algorithm is it will cause the initial guesses for the new user Eve to be just equal to the mean of whatever other users have rated these five movies. And that seems more reasonable to take the average rating of the movies rather than to guess that all the ratings by Eve will be zero. It turns out that by normalizing the mean of the different movies ratings to be zero, the optimization algorithm for the recommended system will also run just a little bit faster. But it does make the algorithm behave much better for users that have rated no movies or very small numbers of movies and the predictions will become more reasonable. In this example, what we did was normalize each of the rows of this matrix to have zero mean and we saw this helps when there's a new user that hasn't rated a lot of movies yet. There's one other alternative that you could use, which is to instead normalize the columns of this matrix to have zero mean. And that would be a reasonable thing to do too. But I think in this application, normalizing the rows so that you can give reasonable ratings for a new user seems more important than normalizing the columns. Normalizing the columns would help if there was a brand new movie that no one has rated yet. But if there's a brand new movie that no one has rated yet, you probably shouldn't show that movie to too many users initially because you don't know that much about that movie. So normalizing columns to help with the case of a movie with no ratings seems less important to me than normalizing the rows to help with the case of a new user that's hardly rated any movies yet. And when you're building your own recommender system in this week's practice lab, normalizing just the rows should work fine. So that's mean normalization. It makes the algorithm run a little bit faster, but even more important, it makes the algorithm give much better, much more reasonable predictions when there are users that have rated very few movies or even no movies at all. This implementational detail of mean normalization will make your recommender system work much better. Next, let's go on to the next video to talk about how you can implement this for yourself in TensorFlow.
[{"start": 0.0, "end": 7.16, "text": " Back in the first course, you had seen how for linear regression, feature normalization"}, {"start": 7.16, "end": 9.9, "text": " can help the algorithm run faster."}, {"start": 9.9, "end": 15.96, "text": " In the case of building and recommend a system with numbers y, such as movie ratings from"}, {"start": 15.96, "end": 20.98, "text": " 1 to 5 or 0 to 5 stars, it turns out your algorithm will run more efficiently and also"}, {"start": 20.98, "end": 25.560000000000002, "text": " perform a bit better if you first carry out mean normalization."}, {"start": 25.56, "end": 31.2, "text": " That is if you normalize the movie ratings to have a consistent average value."}, {"start": 31.2, "end": 34.0, "text": " Let's take a look at what that means."}, {"start": 34.0, "end": 39.32, "text": " So here's the data set that we've been using and down below is the cost function you would"}, {"start": 39.32, "end": 42.92, "text": " use to learn the parameters for the model."}, {"start": 42.92, "end": 51.019999999999996, "text": " In order to explain mean normalization, I'm actually going to add a fifth user, Eve, who"}, {"start": 51.019999999999996, "end": 53.96, "text": " has not yet rated any movies."}, {"start": 53.96, "end": 60.04, "text": " So you see in a little bit that adding mean normalization will help the algorithm make"}, {"start": 60.04, "end": 63.160000000000004, "text": " better predictions on the user Eve."}, {"start": 63.160000000000004, "end": 69.92, "text": " In fact, if you were to train a collaborative filtering algorithm on this data, then because"}, {"start": 69.92, "end": 76.2, "text": " we are trying to make the parameters w small because of this regularization term, if you"}, {"start": 76.2, "end": 83.48, "text": " were to run the algorithm on this data set, you actually end up with the parameters w"}, {"start": 83.48, "end": 92.36, "text": " for the fifth user, for the user Eve, to be equal to 0, 0, as well as quite likely b5"}, {"start": 92.36, "end": 93.96000000000001, "text": " equals 0."}, {"start": 93.96000000000001, "end": 100.68, "text": " Because Eve hasn't rated any movies yet, the parameters w and b don't affect this first"}, {"start": 100.68, "end": 107.32000000000001, "text": " term in the cost function because none of Eve's movies rating play a role in this squared"}, {"start": 107.32000000000001, "end": 109.36, "text": " error cost function."}, {"start": 109.36, "end": 116.03999999999999, "text": " And so minimizing this means making the parameters w as small as possible."}, {"start": 116.03999999999999, "end": 121.64, "text": " We didn't really regularize b, but if you initialize b to 0 as a default, you end up"}, {"start": 121.64, "end": 124.34, "text": " with b5 equals 0 as well."}, {"start": 124.34, "end": 130.76, "text": " But if these are the parameters for user 5, that is for Eve, then what the algorithm will"}, {"start": 130.76, "end": 141.84, "text": " end up doing is predict that all of Eve's movies ratings would be w5.x for movie i plus"}, {"start": 141.84, "end": 147.28, "text": " b5, and this is equal to 0 if w and b above equal to 0."}, {"start": 147.28, "end": 151.84, "text": " And so this algorithm would predict that if you have a new user that has not yet rated"}, {"start": 151.84, "end": 156.44, "text": " anything, we think they'll rate all movies with 0 stars, and that's not particularly"}, {"start": 156.44, "end": 158.23999999999998, "text": " helpful."}, {"start": 158.24, "end": 164.08, "text": " So in this video, we'll see that mean normalization will help this algorithm come up with better"}, {"start": 164.08, "end": 170.68, "text": " predictions of the movie ratings for a new user that has not yet rated any movies."}, {"start": 170.68, "end": 177.64000000000001, "text": " In order to describe mean normalization, let me take all of the values here, including"}, {"start": 177.64000000000001, "end": 183.12, "text": " all the question marks for Eve, and put them in a two dimensional matrix like this, just"}, {"start": 183.12, "end": 188.68, "text": " to write out all the ratings, including the question marks, in a more succinct or a more"}, {"start": 188.68, "end": 190.44, "text": " compact way."}, {"start": 190.44, "end": 196.6, "text": " To carry out mean normalization, what we're going to do is take all of these ratings and"}, {"start": 196.6, "end": 201.64000000000001, "text": " for each movie, compute the average rating that was given."}, {"start": 201.64000000000001, "end": 207.68, "text": " So movie 1 had two 5s and two 0s, and so the average rating is 2.5."}, {"start": 207.68, "end": 211.76, "text": " Movie 2 had a 5 and a 0, so that averages out to 2.5."}, {"start": 211.76, "end": 214.88, "text": " Movie 3, 4 and 0, averages out to 2."}, {"start": 214.88, "end": 225.39999999999998, "text": " Movie 4 averages out to 2.25 rating, and movie 5, not that popular, has an average 1.25 rating."}, {"start": 225.39999999999998, "end": 229.72, "text": " So I'm going to take all of these five numbers and gather them into a vector, which I'm going"}, {"start": 229.72, "end": 235.23999999999998, "text": " to call mu, because this is the vector of the average ratings that each of the movies"}, {"start": 235.23999999999998, "end": 240.07999999999998, "text": " had, averaging over just the users that did rate that particular movie."}, {"start": 240.08, "end": 245.96, "text": " Instead of using these original 0 to 5 star ratings over here, I'm going to take this"}, {"start": 245.96, "end": 251.28, "text": " and subtract from every rating the mean rating that it was given."}, {"start": 251.28, "end": 259.72, "text": " So for example, this movie rating was 5, I'm going to subtract 2.5, giving me 2.5 over"}, {"start": 259.72, "end": 261.04, "text": " here."}, {"start": 261.04, "end": 268.68, "text": " This movie had a 0 star rating, going to subtract 2.25, giving me a negative 2.25 rating, and"}, {"start": 268.68, "end": 269.68, "text": " so on."}, {"start": 269.68, "end": 274.12, "text": " So I'm going to take all of the now five users, including the new user Eve, as well as for"}, {"start": 274.12, "end": 275.92, "text": " all five movies."}, {"start": 275.92, "end": 279.44, "text": " Then these new values on the right become your new values of yij."}, {"start": 279.44, "end": 287.0, "text": " We're going to pretend that user 1 had given a 2.5 rating to movie 1 and a negative 2.25"}, {"start": 287.0, "end": 288.6, "text": " rating to movie 4."}, {"start": 288.6, "end": 294.92, "text": " And using this, you can then learn wj, bj and xi."}, {"start": 294.92, "end": 305.12, "text": " Same as before, for user j on movie i, you would predict wj.xi plus bj."}, {"start": 305.12, "end": 312.88, "text": " But because we had subtracted off mu i for movie i during this mean normalization step,"}, {"start": 312.88, "end": 318.56, "text": " in order to predict not a negative star rating, which isn't possible if a user rates from"}, {"start": 318.56, "end": 325.56, "text": " zero to five stars, we have to add back this mu i, which is just the value we had subtracted"}, {"start": 325.56, "end": 326.56, "text": " out."}, {"start": 326.56, "end": 333.52, "text": " So as a concrete example, if we look at what happens with user 5, with the new user Eve,"}, {"start": 333.52, "end": 338.76, "text": " because she had not yet rated any movies, the algorithm might learn parameters w5 equals"}, {"start": 338.76, "end": 343.14, "text": " zero and say b5 equals zero."}, {"start": 343.14, "end": 349.91999999999996, "text": " And so if we look at the predicted rating for movie 1, we will predict that Eve would"}, {"start": 349.91999999999996, "end": 365.59999999999997, "text": " rate it w5.x1 plus b5, but this is zero, and then plus mu 1, which is equal to 2.5."}, {"start": 365.59999999999997, "end": 371.97999999999996, "text": " So this seems more reasonable to think Eve was likely to rate this movie 2.5 rather than"}, {"start": 371.98, "end": 376.12, "text": " think Eve will rate all movies zero stars just because she hasn't rated any movies"}, {"start": 376.12, "end": 377.12, "text": " yet."}, {"start": 377.12, "end": 383.96000000000004, "text": " And in fact, the effect of this algorithm is it will cause the initial guesses for the"}, {"start": 383.96000000000004, "end": 390.28000000000003, "text": " new user Eve to be just equal to the mean of whatever other users have rated these five"}, {"start": 390.28000000000003, "end": 391.28000000000003, "text": " movies."}, {"start": 391.28000000000003, "end": 395.76, "text": " And that seems more reasonable to take the average rating of the movies rather than to"}, {"start": 395.76, "end": 400.38, "text": " guess that all the ratings by Eve will be zero."}, {"start": 400.38, "end": 406.44, "text": " It turns out that by normalizing the mean of the different movies ratings to be zero,"}, {"start": 406.44, "end": 412.15999999999997, "text": " the optimization algorithm for the recommended system will also run just a little bit faster."}, {"start": 412.15999999999997, "end": 417.76, "text": " But it does make the algorithm behave much better for users that have rated no movies"}, {"start": 417.76, "end": 424.28, "text": " or very small numbers of movies and the predictions will become more reasonable."}, {"start": 424.28, "end": 429.04, "text": " In this example, what we did was normalize each of the rows of this matrix to have zero"}, {"start": 429.04, "end": 433.84000000000003, "text": " mean and we saw this helps when there's a new user that hasn't rated a lot of movies"}, {"start": 433.84000000000003, "end": 434.84000000000003, "text": " yet."}, {"start": 434.84000000000003, "end": 441.0, "text": " There's one other alternative that you could use, which is to instead normalize the columns"}, {"start": 441.0, "end": 444.24, "text": " of this matrix to have zero mean."}, {"start": 444.24, "end": 446.88, "text": " And that would be a reasonable thing to do too."}, {"start": 446.88, "end": 453.20000000000005, "text": " But I think in this application, normalizing the rows so that you can give reasonable ratings"}, {"start": 453.2, "end": 460.0, "text": " for a new user seems more important than normalizing the columns."}, {"start": 460.0, "end": 464.24, "text": " Normalizing the columns would help if there was a brand new movie that no one has rated"}, {"start": 464.24, "end": 465.24, "text": " yet."}, {"start": 465.24, "end": 470.03999999999996, "text": " But if there's a brand new movie that no one has rated yet, you probably shouldn't show"}, {"start": 470.03999999999996, "end": 475.88, "text": " that movie to too many users initially because you don't know that much about that movie."}, {"start": 475.88, "end": 482.41999999999996, "text": " So normalizing columns to help with the case of a movie with no ratings seems less important"}, {"start": 482.42, "end": 487.94, "text": " to me than normalizing the rows to help with the case of a new user that's hardly rated"}, {"start": 487.94, "end": 489.62, "text": " any movies yet."}, {"start": 489.62, "end": 495.24, "text": " And when you're building your own recommender system in this week's practice lab, normalizing"}, {"start": 495.24, "end": 497.84000000000003, "text": " just the rows should work fine."}, {"start": 497.84000000000003, "end": 499.92, "text": " So that's mean normalization."}, {"start": 499.92, "end": 504.44, "text": " It makes the algorithm run a little bit faster, but even more important, it makes the algorithm"}, {"start": 504.44, "end": 510.1, "text": " give much better, much more reasonable predictions when there are users that have rated very"}, {"start": 510.1, "end": 513.44, "text": " few movies or even no movies at all."}, {"start": 513.44, "end": 518.24, "text": " This implementational detail of mean normalization will make your recommender system work much"}, {"start": 518.24, "end": 519.24, "text": " better."}, {"start": 519.24, "end": 524.5400000000001, "text": " Next, let's go on to the next video to talk about how you can implement this for yourself"}, {"start": 524.54, "end": 541.0799999999999, "text": " in TensorFlow."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=VIecm37hBuA
9.6 Recommender Systems implementation detail|TensorFlow implementation of collaborative filtering
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
In this video, we'll take a look at how you can use TensorFlow to implement the collaborative filtering algorithm. You might be used to thinking of TensorFlow as a tool for building neural networks, and it is. It's a great tool for building neural networks. And it turns out that TensorFlow can also be very helpful for building other types of learning algorithms as well, like the collaborative filtering algorithm. One of the reasons I like using TensorFlow for tasks like these is that for many applications, in order to implement gradient descent, say, you need to find the derivatives of the cost function. But TensorFlow can automatically figure out for you what are the derivatives of a cost function. All you have to do is implement the cost function. And without needing to know any calculus, without needing to take derivatives yourself, you can get TensorFlow with just a few lines of code to compute that derivative term that can then be used to optimize the cost function. Let's take a look at how all this works. You might remember this diagram here on the right from course one. This is exactly the diagram that we had looked at when we talked about optimizing W when we were working through our first linear regression example. And at that time, we had set B equals to zero. And so the model was just predicting f of x equals W dot x. And we wanted to find the value of W that minimizes the cost function J. So the way we were doing that was via a gradient descent update, which looks like this, where W gets repeatedly updated as W minus the learning rate alpha times the derivative term. If you were updating B as well, this is the expression you would use. But if you set B equals zero, you just forego the second update. And you keep on performing this gradient descent update until convergence. Sometimes computing this derivative or partial derivative term can be difficult. And it turns out that TensorFlow can help with that. Let's see how. I'm going to use a very simple cost function J equals W x minus one squared. So W x is our simplified f W of x. And y is equal to one. And so this would be the cost function. If we had f of x equals W x, y equals one for the one training example that we have. And if we were not optimizing this with respect to B. So the gradient descent algorithm would repeat until convergence this update over here. It turns out that if you implement the cost function J over here, TensorFlow can automatically compute for you this derivative term and thereby get gradient descent to work. I'll give you a high level overview of what this code does. If you equals tf.variable three takes the parameter W and initializes it to the value of three telling TensorFlow that W is a variable is how we tell it that W is a parameter that we want to optimize. I'm going to set x equals one y equals one and the learning rate alpha to be equal to zero point zero one. And let's run gradient descent for 30 iterations. So in this code will do for either arrange iterations. So for 30 iterations. And this is a syntax to get TensorFlow to automatically compute derivatives for you. TensorFlow has a feature called a gradient tape. And if you write this with TF by gradient tape as tape F, this is compute f of x as W times x and compute J as f of x minus y squared. Then by telling TensorFlow how to compute the cost J and by doing it with a gradient tape syntax as follows, TensorFlow will automatically record the sequence of steps, the sequence of operations needed to compute the cost J. And this is needed to enable automatic differentiation. Next, TensorFlow will have saved the sequence of operations in tape in the gradient tape. And with this syntax, TensorFlow will automatically compute this derivative term, which I'm going to call DJ DW. And TensorFlow knows you want to take the derivative with respect to W that W is a parameter you want to optimize because you had told it so up here and because we're also specifying it down here. So now that you've computed derivatives, finally, you can carry out this update by taking W and subtracting from it the learning rate alpha times that derivative term that we just got from up above. TensorFlow variables, tier variables, require special handling, which is why instead of setting W to be W minus alpha times the derivative in the usual way, we use this assign add function. But when you get to the practice lab, don't worry about it, we'll give you all the syntax you need in order to implement a collaborative filtering algorithm correctly. So notice that with the gradient tape feature of TensorFlow, the main work you need to do is to tell it how to compute the cost function J. And the rest of the syntax causes TensorFlow to automatically figure out for you what is that derivative. And with this, TensorFlow will start with finding the slope of this at three shown by this dashed line, take a gradient step and update W and compute the derivative again and update W over and over until eventually it gets to the optimal value of W, which is at W equals one. So this procedure allows you to implement gradient descent without ever having to figure out yourself how to compute this derivative term. This is a very powerful feature of TensorFlow called auto diff. And some other machine learning packages like PyTorch also support auto diff. Sometimes you hear people call this auto grad. The technically correct term is auto diff and auto grad is actually the name of a specific software package for doing automatic differentiation, for taking derivatives automatically. And sometimes if you hear someone refer to auto grad, they're just referring to this same concept of automatically taking derivatives. So let's take this and look at how you can implement the collaborative filtering algorithm using auto diff. And in fact, once you can compute derivatives automatically, you're not limited to just gradient descent. You can also use a more powerful optimization algorithm like the Adam optimization algorithm. In order to implement the collaborative filtering algorithm in TensorFlow, this is the syntax you can use. Let's start with specifying that the optimizer is Keras optimizers, Adam with learning rate specified here. And then for say 200 iterations, here's the syntax as before with TF gradient tape as tape. You need to provide code to compute the value of the cost function J. So recall that in collaborative filtering, the cost function J takes this input parameters X, W and B as well as the ratings mean normalized. So that's why I'm writing Y norm, R ij specifying which values have a rating, number of users or N U in a notation, number of movies or N M in a notation just now, as well as the regularization parameter lambda. And if you can implement this cost function J, then this syntax will cause TensorFlow to figure out the derivatives for you. Then this syntax will cause TensorFlow to record the sequence of operations used to compute the cost. And then by asking it to give you grads equals tape dot gradient, this will give you the derivative of the cost function with respect to X, W and B. And finally, with the optimizer that we had specified up on top as the Adam optimizer, you can use the optimizer with the gradients that we just computed. And does it function in Python is just a function that rearranges the numbers into an appropriate ordering for the applied gradients function. If you are using gradient descent for collaborative filtering, recall that the cost function J would be a function of W, B as well as X. And if you're applying gradient descent, you take the partial derivative with respect to W and then update W as follows. And you'd also take the partial derivative of this with respect to B and update B as follows and similarly update the features X as follows and you repeat until conversions. But as I mentioned earlier, with TensorFlow and auto diff, you're not limited to just gradient descent. You can also use a more powerful optimization algorithm like the Adam optimizer. The data set you use in the practice lab is a real data set comprising actual movies rated by actual people. This is the movie lens data set and is due to Harper and constant. And I hope you enjoy running this algorithm on a real data set of movies and ratings and see for yourself the results that this algorithm can get. So that's it. That's how you can implement the cloud filtering algorithm in TensorFlow. If you're wondering, why do we have to do it this way? Why couldn't we use a dense layer and then model compile and model fit? The reason we couldn't use that old recipe is the collaborative filtering algorithm and cost function, it doesn't neatly fit into the dense layer or the other standard neural network layer types of TensorFlow. That's why we had to implement it this other way, where we would implement the cost function ourselves, but then use TensorFlow's tools for automatic differentiation, also called auto diff and use TensorFlow's implementation of the Adam optimization algorithm to let it do a lot of the work for us of optimizing the cost function. If the model you have is a sequence of dense neural network layers or other types of layers supported by TensorFlow, then the old implementation recipe of model compile model fit works. But even when it isn't, these tools in TensorFlow give you a very effective way to implement other learning algorithms as well. And so I hope you enjoy playing more with the collaborative filtering exercise in this week's practice lab, and if it looks like there's a lot of code and lots of syntax, don't worry about it. We'll make sure you have what you need to complete that exercise successfully. And in the next video, I'd like to also move on to discuss more of the nuances of collaborative filtering and specifically the question of how do you find related items given one movie or the other movies similar to this one. Let's go on to the next video.
[{"start": 0.0, "end": 7.24, "text": " In this video, we'll take a look at how you can use TensorFlow to implement the collaborative"}, {"start": 7.24, "end": 9.44, "text": " filtering algorithm."}, {"start": 9.44, "end": 14.08, "text": " You might be used to thinking of TensorFlow as a tool for building neural networks, and"}, {"start": 14.08, "end": 15.08, "text": " it is."}, {"start": 15.08, "end": 17.56, "text": " It's a great tool for building neural networks."}, {"start": 17.56, "end": 22.32, "text": " And it turns out that TensorFlow can also be very helpful for building other types of"}, {"start": 22.32, "end": 27.240000000000002, "text": " learning algorithms as well, like the collaborative filtering algorithm."}, {"start": 27.24, "end": 33.879999999999995, "text": " One of the reasons I like using TensorFlow for tasks like these is that for many applications,"}, {"start": 33.879999999999995, "end": 38.68, "text": " in order to implement gradient descent, say, you need to find the derivatives of the cost"}, {"start": 38.68, "end": 39.68, "text": " function."}, {"start": 39.68, "end": 46.44, "text": " But TensorFlow can automatically figure out for you what are the derivatives of a cost"}, {"start": 46.44, "end": 47.519999999999996, "text": " function."}, {"start": 47.519999999999996, "end": 50.72, "text": " All you have to do is implement the cost function."}, {"start": 50.72, "end": 55.28, "text": " And without needing to know any calculus, without needing to take derivatives yourself,"}, {"start": 55.28, "end": 60.120000000000005, "text": " you can get TensorFlow with just a few lines of code to compute that derivative term that"}, {"start": 60.120000000000005, "end": 63.28, "text": " can then be used to optimize the cost function."}, {"start": 63.28, "end": 65.94, "text": " Let's take a look at how all this works."}, {"start": 65.94, "end": 70.92, "text": " You might remember this diagram here on the right from course one."}, {"start": 70.92, "end": 76.88, "text": " This is exactly the diagram that we had looked at when we talked about optimizing W when"}, {"start": 76.88, "end": 81.68, "text": " we were working through our first linear regression example."}, {"start": 81.68, "end": 85.02000000000001, "text": " And at that time, we had set B equals to zero."}, {"start": 85.02, "end": 89.47999999999999, "text": " And so the model was just predicting f of x equals W dot x."}, {"start": 89.47999999999999, "end": 94.75999999999999, "text": " And we wanted to find the value of W that minimizes the cost function J."}, {"start": 94.75999999999999, "end": 101.36, "text": " So the way we were doing that was via a gradient descent update, which looks like this, where"}, {"start": 101.36, "end": 107.8, "text": " W gets repeatedly updated as W minus the learning rate alpha times the derivative term."}, {"start": 107.8, "end": 112.03999999999999, "text": " If you were updating B as well, this is the expression you would use."}, {"start": 112.04, "end": 116.80000000000001, "text": " But if you set B equals zero, you just forego the second update."}, {"start": 116.80000000000001, "end": 122.16000000000001, "text": " And you keep on performing this gradient descent update until convergence."}, {"start": 122.16000000000001, "end": 128.68, "text": " Sometimes computing this derivative or partial derivative term can be difficult."}, {"start": 128.68, "end": 132.68, "text": " And it turns out that TensorFlow can help with that."}, {"start": 132.68, "end": 133.76, "text": " Let's see how."}, {"start": 133.76, "end": 143.0, "text": " I'm going to use a very simple cost function J equals W x minus one squared."}, {"start": 143.0, "end": 148.6, "text": " So W x is our simplified f W of x."}, {"start": 148.6, "end": 150.68, "text": " And y is equal to one."}, {"start": 150.68, "end": 153.85999999999999, "text": " And so this would be the cost function."}, {"start": 153.85999999999999, "end": 162.2, "text": " If we had f of x equals W x, y equals one for the one training example that we have."}, {"start": 162.2, "end": 165.48, "text": " And if we were not optimizing this with respect to B."}, {"start": 165.48, "end": 171.2, "text": " So the gradient descent algorithm would repeat until convergence this update over here."}, {"start": 171.2, "end": 177.39999999999998, "text": " It turns out that if you implement the cost function J over here, TensorFlow can automatically"}, {"start": 177.39999999999998, "end": 182.72, "text": " compute for you this derivative term and thereby get gradient descent to work."}, {"start": 182.72, "end": 187.67999999999998, "text": " I'll give you a high level overview of what this code does."}, {"start": 187.68, "end": 195.20000000000002, "text": " If you equals tf.variable three takes the parameter W and initializes it to the value"}, {"start": 195.20000000000002, "end": 203.20000000000002, "text": " of three telling TensorFlow that W is a variable is how we tell it that W is a parameter that"}, {"start": 203.20000000000002, "end": 204.68, "text": " we want to optimize."}, {"start": 204.68, "end": 209.64000000000001, "text": " I'm going to set x equals one y equals one and the learning rate alpha to be equal to"}, {"start": 209.64000000000001, "end": 211.76000000000002, "text": " zero point zero one."}, {"start": 211.76000000000002, "end": 215.38, "text": " And let's run gradient descent for 30 iterations."}, {"start": 215.38, "end": 218.79999999999998, "text": " So in this code will do for either arrange iterations."}, {"start": 218.79999999999998, "end": 220.84, "text": " So for 30 iterations."}, {"start": 220.84, "end": 226.24, "text": " And this is a syntax to get TensorFlow to automatically compute derivatives for you."}, {"start": 226.24, "end": 229.88, "text": " TensorFlow has a feature called a gradient tape."}, {"start": 229.88, "end": 238.16, "text": " And if you write this with TF by gradient tape as tape F, this is compute f of x as"}, {"start": 238.16, "end": 245.44, "text": " W times x and compute J as f of x minus y squared."}, {"start": 245.44, "end": 251.92, "text": " Then by telling TensorFlow how to compute the cost J and by doing it with a gradient"}, {"start": 251.92, "end": 257.64, "text": " tape syntax as follows, TensorFlow will automatically record the sequence of steps, the sequence"}, {"start": 257.64, "end": 265.36, "text": " of operations needed to compute the cost J. And this is needed to enable automatic differentiation."}, {"start": 265.36, "end": 273.08000000000004, "text": " Next, TensorFlow will have saved the sequence of operations in tape in the gradient tape."}, {"start": 273.08000000000004, "end": 279.68, "text": " And with this syntax, TensorFlow will automatically compute this derivative term, which I'm going"}, {"start": 279.68, "end": 283.32, "text": " to call DJ DW."}, {"start": 283.32, "end": 289.04, "text": " And TensorFlow knows you want to take the derivative with respect to W that W is a parameter"}, {"start": 289.04, "end": 294.12, "text": " you want to optimize because you had told it so up here and because we're also specifying"}, {"start": 294.12, "end": 295.84000000000003, "text": " it down here."}, {"start": 295.84000000000003, "end": 302.08, "text": " So now that you've computed derivatives, finally, you can carry out this update by"}, {"start": 302.08, "end": 308.28000000000003, "text": " taking W and subtracting from it the learning rate alpha times that derivative term that"}, {"start": 308.28000000000003, "end": 311.08, "text": " we just got from up above."}, {"start": 311.08, "end": 316.24, "text": " TensorFlow variables, tier variables, require special handling, which is why instead of"}, {"start": 316.24, "end": 323.48, "text": " setting W to be W minus alpha times the derivative in the usual way, we use this assign add function."}, {"start": 323.48, "end": 327.48, "text": " But when you get to the practice lab, don't worry about it, we'll give you all the syntax"}, {"start": 327.48, "end": 331.36, "text": " you need in order to implement a collaborative filtering algorithm correctly."}, {"start": 331.36, "end": 338.16, "text": " So notice that with the gradient tape feature of TensorFlow, the main work you need to do"}, {"start": 338.16, "end": 344.64000000000004, "text": " is to tell it how to compute the cost function J. And the rest of the syntax causes TensorFlow"}, {"start": 344.64000000000004, "end": 350.56, "text": " to automatically figure out for you what is that derivative."}, {"start": 350.56, "end": 355.24, "text": " And with this, TensorFlow will start with finding the slope of this at three shown by"}, {"start": 355.24, "end": 364.0, "text": " this dashed line, take a gradient step and update W and compute the derivative again"}, {"start": 364.0, "end": 370.72, "text": " and update W over and over until eventually it gets to the optimal value of W, which is"}, {"start": 370.72, "end": 373.06, "text": " at W equals one."}, {"start": 373.06, "end": 378.56, "text": " So this procedure allows you to implement gradient descent without ever having to figure"}, {"start": 378.56, "end": 382.72, "text": " out yourself how to compute this derivative term."}, {"start": 382.72, "end": 388.0, "text": " This is a very powerful feature of TensorFlow called auto diff."}, {"start": 388.0, "end": 394.16, "text": " And some other machine learning packages like PyTorch also support auto diff."}, {"start": 394.16, "end": 397.2, "text": " Sometimes you hear people call this auto grad."}, {"start": 397.2, "end": 402.4, "text": " The technically correct term is auto diff and auto grad is actually the name of a specific"}, {"start": 402.4, "end": 407.56, "text": " software package for doing automatic differentiation, for taking derivatives automatically."}, {"start": 407.56, "end": 411.4, "text": " And sometimes if you hear someone refer to auto grad, they're just referring to this"}, {"start": 411.4, "end": 414.68, "text": " same concept of automatically taking derivatives."}, {"start": 414.68, "end": 419.32, "text": " So let's take this and look at how you can implement the collaborative filtering algorithm"}, {"start": 419.32, "end": 421.68, "text": " using auto diff."}, {"start": 421.68, "end": 426.52, "text": " And in fact, once you can compute derivatives automatically, you're not limited to just"}, {"start": 426.52, "end": 427.52, "text": " gradient descent."}, {"start": 427.52, "end": 434.12, "text": " You can also use a more powerful optimization algorithm like the Adam optimization algorithm."}, {"start": 434.12, "end": 439.04, "text": " In order to implement the collaborative filtering algorithm in TensorFlow, this is the syntax"}, {"start": 439.04, "end": 440.04, "text": " you can use."}, {"start": 440.04, "end": 448.0, "text": " Let's start with specifying that the optimizer is Keras optimizers, Adam with learning rate"}, {"start": 448.0, "end": 449.92, "text": " specified here."}, {"start": 449.92, "end": 457.36, "text": " And then for say 200 iterations, here's the syntax as before with TF gradient tape as"}, {"start": 457.36, "end": 458.36, "text": " tape."}, {"start": 458.36, "end": 463.32, "text": " You need to provide code to compute the value of the cost function J."}, {"start": 463.32, "end": 470.08, "text": " So recall that in collaborative filtering, the cost function J takes this input parameters"}, {"start": 470.08, "end": 475.52, "text": " X, W and B as well as the ratings mean normalized."}, {"start": 475.52, "end": 482.88, "text": " So that's why I'm writing Y norm, R ij specifying which values have a rating, number of users"}, {"start": 482.88, "end": 488.32, "text": " or N U in a notation, number of movies or N M in a notation just now, as well as the"}, {"start": 488.32, "end": 490.68, "text": " regularization parameter lambda."}, {"start": 490.68, "end": 495.72, "text": " And if you can implement this cost function J, then this syntax will cause TensorFlow"}, {"start": 495.72, "end": 498.32, "text": " to figure out the derivatives for you."}, {"start": 498.32, "end": 502.16, "text": " Then this syntax will cause TensorFlow to record the sequence of operations used to"}, {"start": 502.16, "end": 503.66, "text": " compute the cost."}, {"start": 503.66, "end": 509.6, "text": " And then by asking it to give you grads equals tape dot gradient, this will give you the"}, {"start": 509.6, "end": 516.92, "text": " derivative of the cost function with respect to X, W and B."}, {"start": 516.92, "end": 522.8399999999999, "text": " And finally, with the optimizer that we had specified up on top as the Adam optimizer,"}, {"start": 522.8399999999999, "end": 528.4, "text": " you can use the optimizer with the gradients that we just computed."}, {"start": 528.4, "end": 533.16, "text": " And does it function in Python is just a function that rearranges the numbers into an appropriate"}, {"start": 533.16, "end": 535.76, "text": " ordering for the applied gradients function."}, {"start": 535.76, "end": 541.04, "text": " If you are using gradient descent for collaborative filtering, recall that the cost function J"}, {"start": 541.04, "end": 547.56, "text": " would be a function of W, B as well as X. And if you're applying gradient descent, you"}, {"start": 547.56, "end": 553.06, "text": " take the partial derivative with respect to W and then update W as follows."}, {"start": 553.06, "end": 557.64, "text": " And you'd also take the partial derivative of this with respect to B and update B as"}, {"start": 557.64, "end": 565.0799999999999, "text": " follows and similarly update the features X as follows and you repeat until conversions."}, {"start": 565.08, "end": 571.24, "text": " But as I mentioned earlier, with TensorFlow and auto diff, you're not limited to just"}, {"start": 571.24, "end": 572.5200000000001, "text": " gradient descent."}, {"start": 572.5200000000001, "end": 577.8000000000001, "text": " You can also use a more powerful optimization algorithm like the Adam optimizer."}, {"start": 577.8000000000001, "end": 583.36, "text": " The data set you use in the practice lab is a real data set comprising actual movies rated"}, {"start": 583.36, "end": 585.58, "text": " by actual people."}, {"start": 585.58, "end": 590.36, "text": " This is the movie lens data set and is due to Harper and constant."}, {"start": 590.36, "end": 595.8000000000001, "text": " And I hope you enjoy running this algorithm on a real data set of movies and ratings and"}, {"start": 595.8000000000001, "end": 599.08, "text": " see for yourself the results that this algorithm can get."}, {"start": 599.08, "end": 600.08, "text": " So that's it."}, {"start": 600.08, "end": 604.44, "text": " That's how you can implement the cloud filtering algorithm in TensorFlow."}, {"start": 604.44, "end": 606.6800000000001, "text": " If you're wondering, why do we have to do it this way?"}, {"start": 606.6800000000001, "end": 611.88, "text": " Why couldn't we use a dense layer and then model compile and model fit?"}, {"start": 611.88, "end": 616.6, "text": " The reason we couldn't use that old recipe is the collaborative filtering algorithm and"}, {"start": 616.6, "end": 622.08, "text": " cost function, it doesn't neatly fit into the dense layer or the other standard neural"}, {"start": 622.08, "end": 624.76, "text": " network layer types of TensorFlow."}, {"start": 624.76, "end": 628.84, "text": " That's why we had to implement it this other way, where we would implement the cost function"}, {"start": 628.84, "end": 634.08, "text": " ourselves, but then use TensorFlow's tools for automatic differentiation, also called"}, {"start": 634.08, "end": 639.52, "text": " auto diff and use TensorFlow's implementation of the Adam optimization algorithm to let"}, {"start": 639.52, "end": 644.38, "text": " it do a lot of the work for us of optimizing the cost function."}, {"start": 644.38, "end": 650.84, "text": " If the model you have is a sequence of dense neural network layers or other types of layers"}, {"start": 650.84, "end": 657.72, "text": " supported by TensorFlow, then the old implementation recipe of model compile model fit works."}, {"start": 657.72, "end": 663.04, "text": " But even when it isn't, these tools in TensorFlow give you a very effective way to implement"}, {"start": 663.04, "end": 665.58, "text": " other learning algorithms as well."}, {"start": 665.58, "end": 670.56, "text": " And so I hope you enjoy playing more with the collaborative filtering exercise in this"}, {"start": 670.56, "end": 674.5999999999999, "text": " week's practice lab, and if it looks like there's a lot of code and lots of syntax,"}, {"start": 674.5999999999999, "end": 675.5999999999999, "text": " don't worry about it."}, {"start": 675.5999999999999, "end": 680.8399999999999, "text": " We'll make sure you have what you need to complete that exercise successfully."}, {"start": 680.8399999999999, "end": 687.64, "text": " And in the next video, I'd like to also move on to discuss more of the nuances of collaborative"}, {"start": 687.64, "end": 693.9599999999999, "text": " filtering and specifically the question of how do you find related items given one movie"}, {"start": 693.9599999999999, "end": 696.4399999999999, "text": " or the other movies similar to this one."}, {"start": 696.44, "end": 703.44, "text": " Let's go on to the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=uXMa7YwDVbE
9.7 Collaborative Filtering | Finding related items-- [Machine Learning| Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
If you go to an online shopping website and are looking at a specific item, say maybe a specific book, the website may show you things like, here are some other books similar to this one. Or if you're browsing a specific movie, it may say, here are some other movies similar to this one. How do the websites do that so that when you look at one item, it gives you other similar related items to consider? It turns out the collaborative filtering algorithm that we've been talking about gives you a nice way to find related items. Let's take a look. As part of the collaborative filtering we've discussed, you learn features XI for every item I, for every movie I or other type of item that you're recommending to users. Whereas earlier this week, I had used a hypothetical example of the features representing how much a movie is a romance movie versus an action movie. In practice, when you use this algorithm to learn the features XI automatically, looking at the individual features like X1, X2, X3, you find them to be quite hard to interpret. It's quite hard to look at the features and say, oh, X1 is an action movie and X2 is a foreign film and so on. But nonetheless, these learned features collectively, X1 and X2 or X1, X2, X3, however many features in you have, collectively these features do convey something about what that movie is like. And it turns out that given features XI of item I, if you want to find other items, say other movies related to movie I, then what you can do is try to find the item K with features XK that is similar to XI. And in particular, given a feature vector XK, the way we determine whether or not it's similar to the feature XI is as follows. Use the sum from L equals 1 through N with N features of XKL minus XIL squared. This turns out to be the square distance between XK and XI. And in math, this square distance between these two vectors XK and XI is sometimes written as follows as well. And if you find not just the one movie with the smallest distance between XK and XI, but find, say, the five or 10 items with the most similar feature vectors, then you end up finding five or 10 related items to the item XI. So if you're building a website and want to help users find related products to a specific product they're looking at, this would be a nice way to do so. Because the features XI give a sense of what item I is about, other items XK with similar features will turn out to be similar to item I. It turns out later this week, this idea of finding related items will be a small building block that we'll use to get to an even more powerful recommender system as well. Before wrapping up this section, I want to mention a few limitations of collaborative filtering. In collaborative filtering, you have a set of items and a set of users, and the users have rated some subset of items. One of its weaknesses is that it's not very good at the code start problem. For example, if there's a new item in your catalog, say someone's just published a new movie and hardly anyone has rated that movie yet, how do you rank the new item if very few users have rated it before? Similarly, for new users that rated only a few items, how can we make sure we show them something reasonable? We did see in an earlier video how mean normalization can help with this, and it does help a lot. But perhaps there are even better ways to show users that rated very few items, things that are likely to interest them. This is called the code start problem because when you have a new item that few users have rated, or when you have a new user that's rated very few items, the results of collaborative filtering for that item or for that user may not be very accurate. A second limitation of collaborative filtering is it doesn't give you a natural way to use side information or additional information about items or users. For example, for a given movie in your catalog, you might know what is the genre of the movie, who are the movie stars, what is the studio, what is the budget, and so on. You may have a lot of features about a given movie. Or for a single user, you may know something about their demographics, such as their age, gender, location. They express preferences such as if they tell you they like certain movie genres but not other movie genres. Or it turns out if you know the user's IP address, that can tell you a lot about the user's location. And knowing the user's location might also help you guess what might the user be interested in. Or if you know whether the user is accessing your site on a mobile or on a desktop, or if you know what web browser they're using, it turns out all of these are little cues you can get that can be surprisingly correlated with the preferences of a user. It turns out, by the way, that it's known that users that use the Chrome versus Firefox versus the Safari versus the Microsoft Edge browser, they actually behave in very different ways. So even knowing the user web browser can give you a hint when you have collected an updater of what this particular user might like. So even though collective filtering, where you have multiple users give you ratings of multiple items is a very powerful set of algorithms, it also has some limitations. In the next video, let's go on to develop content based filtering algorithms, which can address a lot of these limitations. Content based filtering algorithms are a state of the art technique used in many commercial applications today. Let's go take a look at how they work.
[{"start": 0.0, "end": 8.2, "text": " If you go to an online shopping website and are looking at a specific item, say maybe"}, {"start": 8.2, "end": 13.280000000000001, "text": " a specific book, the website may show you things like, here are some other books similar"}, {"start": 13.280000000000001, "end": 14.280000000000001, "text": " to this one."}, {"start": 14.280000000000001, "end": 19.12, "text": " Or if you're browsing a specific movie, it may say, here are some other movies similar"}, {"start": 19.12, "end": 20.36, "text": " to this one."}, {"start": 20.36, "end": 25.0, "text": " How do the websites do that so that when you look at one item, it gives you other similar"}, {"start": 25.0, "end": 27.240000000000002, "text": " related items to consider?"}, {"start": 27.24, "end": 31.72, "text": " It turns out the collaborative filtering algorithm that we've been talking about gives you a"}, {"start": 31.72, "end": 34.4, "text": " nice way to find related items."}, {"start": 34.4, "end": 35.64, "text": " Let's take a look."}, {"start": 35.64, "end": 42.08, "text": " As part of the collaborative filtering we've discussed, you learn features XI for every"}, {"start": 42.08, "end": 47.68, "text": " item I, for every movie I or other type of item that you're recommending to users."}, {"start": 47.68, "end": 54.2, "text": " Whereas earlier this week, I had used a hypothetical example of the features representing how much"}, {"start": 54.2, "end": 57.96, "text": " a movie is a romance movie versus an action movie."}, {"start": 57.96, "end": 63.14, "text": " In practice, when you use this algorithm to learn the features XI automatically, looking"}, {"start": 63.14, "end": 70.52000000000001, "text": " at the individual features like X1, X2, X3, you find them to be quite hard to interpret."}, {"start": 70.52000000000001, "end": 78.0, "text": " It's quite hard to look at the features and say, oh, X1 is an action movie and X2 is a"}, {"start": 78.0, "end": 80.08, "text": " foreign film and so on."}, {"start": 80.08, "end": 88.44, "text": " But nonetheless, these learned features collectively, X1 and X2 or X1, X2, X3, however many features"}, {"start": 88.44, "end": 95.96, "text": " in you have, collectively these features do convey something about what that movie is"}, {"start": 95.96, "end": 97.16, "text": " like."}, {"start": 97.16, "end": 104.28, "text": " And it turns out that given features XI of item I, if you want to find other items, say"}, {"start": 104.28, "end": 111.96000000000001, "text": " other movies related to movie I, then what you can do is try to find the item K with"}, {"start": 111.96000000000001, "end": 118.44, "text": " features XK that is similar to XI."}, {"start": 118.44, "end": 125.28, "text": " And in particular, given a feature vector XK, the way we determine whether or not it's"}, {"start": 125.28, "end": 128.96, "text": " similar to the feature XI is as follows."}, {"start": 128.96, "end": 136.76000000000002, "text": " Use the sum from L equals 1 through N with N features of XKL minus XIL squared."}, {"start": 136.76000000000002, "end": 143.12, "text": " This turns out to be the square distance between XK and XI."}, {"start": 143.12, "end": 151.4, "text": " And in math, this square distance between these two vectors XK and XI is sometimes written"}, {"start": 151.4, "end": 152.96, "text": " as follows as well."}, {"start": 152.96, "end": 160.0, "text": " And if you find not just the one movie with the smallest distance between XK and XI, but"}, {"start": 160.0, "end": 167.20000000000002, "text": " find, say, the five or 10 items with the most similar feature vectors, then you end up finding"}, {"start": 167.20000000000002, "end": 171.72, "text": " five or 10 related items to the item XI."}, {"start": 171.72, "end": 176.76000000000002, "text": " So if you're building a website and want to help users find related products to a specific"}, {"start": 176.76000000000002, "end": 181.42000000000002, "text": " product they're looking at, this would be a nice way to do so."}, {"start": 181.42, "end": 189.33999999999997, "text": " Because the features XI give a sense of what item I is about, other items XK with similar"}, {"start": 189.33999999999997, "end": 193.32, "text": " features will turn out to be similar to item I."}, {"start": 193.32, "end": 198.51999999999998, "text": " It turns out later this week, this idea of finding related items will be a small building"}, {"start": 198.51999999999998, "end": 205.11999999999998, "text": " block that we'll use to get to an even more powerful recommender system as well."}, {"start": 205.12, "end": 211.68, "text": " Before wrapping up this section, I want to mention a few limitations of collaborative"}, {"start": 211.68, "end": 212.68, "text": " filtering."}, {"start": 212.68, "end": 216.72, "text": " In collaborative filtering, you have a set of items and a set of users, and the users"}, {"start": 216.72, "end": 219.48000000000002, "text": " have rated some subset of items."}, {"start": 219.48000000000002, "end": 224.96, "text": " One of its weaknesses is that it's not very good at the code start problem."}, {"start": 224.96, "end": 229.84, "text": " For example, if there's a new item in your catalog, say someone's just published a new"}, {"start": 229.84, "end": 236.32, "text": " movie and hardly anyone has rated that movie yet, how do you rank the new item if very"}, {"start": 236.32, "end": 239.4, "text": " few users have rated it before?"}, {"start": 239.4, "end": 246.16, "text": " Similarly, for new users that rated only a few items, how can we make sure we show them"}, {"start": 246.16, "end": 248.36, "text": " something reasonable?"}, {"start": 248.36, "end": 255.66, "text": " We did see in an earlier video how mean normalization can help with this, and it does help a lot."}, {"start": 255.66, "end": 261.2, "text": " But perhaps there are even better ways to show users that rated very few items, things"}, {"start": 261.2, "end": 263.52, "text": " that are likely to interest them."}, {"start": 263.52, "end": 269.84, "text": " This is called the code start problem because when you have a new item that few users have"}, {"start": 269.84, "end": 276.6, "text": " rated, or when you have a new user that's rated very few items, the results of collaborative"}, {"start": 276.6, "end": 281.08, "text": " filtering for that item or for that user may not be very accurate."}, {"start": 281.08, "end": 286.03999999999996, "text": " A second limitation of collaborative filtering is it doesn't give you a natural way to use"}, {"start": 286.03999999999996, "end": 290.59999999999997, "text": " side information or additional information about items or users."}, {"start": 290.59999999999997, "end": 296.76, "text": " For example, for a given movie in your catalog, you might know what is the genre of the movie,"}, {"start": 296.76, "end": 301.52, "text": " who are the movie stars, what is the studio, what is the budget, and so on."}, {"start": 301.52, "end": 305.36, "text": " You may have a lot of features about a given movie."}, {"start": 305.36, "end": 310.79999999999995, "text": " Or for a single user, you may know something about their demographics, such as their age,"}, {"start": 310.8, "end": 312.44, "text": " gender, location."}, {"start": 312.44, "end": 317.76, "text": " They express preferences such as if they tell you they like certain movie genres but not"}, {"start": 317.76, "end": 320.0, "text": " other movie genres."}, {"start": 320.0, "end": 324.56, "text": " Or it turns out if you know the user's IP address, that can tell you a lot about the"}, {"start": 324.56, "end": 326.48, "text": " user's location."}, {"start": 326.48, "end": 331.8, "text": " And knowing the user's location might also help you guess what might the user be interested"}, {"start": 331.8, "end": 332.8, "text": " in."}, {"start": 332.8, "end": 339.44, "text": " Or if you know whether the user is accessing your site on a mobile or on a desktop, or"}, {"start": 339.44, "end": 343.64, "text": " if you know what web browser they're using, it turns out all of these are little cues"}, {"start": 343.64, "end": 348.64, "text": " you can get that can be surprisingly correlated with the preferences of a user."}, {"start": 348.64, "end": 353.28, "text": " It turns out, by the way, that it's known that users that use the Chrome versus Firefox"}, {"start": 353.28, "end": 358.24, "text": " versus the Safari versus the Microsoft Edge browser, they actually behave in very different"}, {"start": 358.24, "end": 359.24, "text": " ways."}, {"start": 359.24, "end": 364.2, "text": " So even knowing the user web browser can give you a hint when you have collected an updater"}, {"start": 364.2, "end": 366.4, "text": " of what this particular user might like."}, {"start": 366.4, "end": 371.59999999999997, "text": " So even though collective filtering, where you have multiple users give you ratings of"}, {"start": 371.59999999999997, "end": 377.4, "text": " multiple items is a very powerful set of algorithms, it also has some limitations."}, {"start": 377.4, "end": 382.32, "text": " In the next video, let's go on to develop content based filtering algorithms, which"}, {"start": 382.32, "end": 385.59999999999997, "text": " can address a lot of these limitations."}, {"start": 385.59999999999997, "end": 389.79999999999995, "text": " Content based filtering algorithms are a state of the art technique used in many commercial"}, {"start": 389.79999999999995, "end": 391.35999999999996, "text": " applications today."}, {"start": 391.36, "end": 398.36, "text": " Let's go take a look at how they work."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=UgSBaD7s5HU
9.8 Content-based Filtering | Collaborative filtering vs Content-based filtering-- [ML |Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
In this video, we'll start to develop a second type of recommender system called a content-based filtering algorithm. To get started, let's compare and contrast the collaborative filtering approach that we've been looking at so far with this new content-based filtering approach. Let's take a look. With collaborative filtering, the general approach is that we would recommend items to you based on ratings of users who gave similar ratings as you. So we have some number of users give some ratings for some items, and the algorithm figures out how to use that to recommend new items to you. In contrast, content-based filtering takes a different approach to deciding what to recommend to you. A content-based filtering algorithm will recommend items to you based on the features of users and features of the items to find a good match. In other words, it requires having some features of each user as well as some features of each item, and it uses those features to try to decide which items and users might be a good match for each other. With a content-based filtering algorithm, you still have data where users have rated some items. So with content-based filtering, we'll continue to use our ij to denote whether or not user j has rated item i, and we'll continue to use yij to denote the rating that user j has given item i, if it's defined. But the key to content-based filtering is that we'll be able to make good use of features of the user and of the items to find better matches than potentially a pure collaborative filtering approach might be able to. Let's take a look at how this works. In the case of movie recommendations, here are some examples of features. You may know the age of the user, or you may have the gender of the user. So this could be a one-hot feature, similar to what you saw when we were talking about decision trees, where you could have a one-hot feature with three values based on whether the user's self-identified gender is male or female or unknown. And you may know the country of the user. So if there are about 200 countries in the world, then does it be a one-hot feature with about 200 possible values? You can also look at past behaviors of the user to construct this feature vector. For example, if you look at the top thousand movies in your catalog, you might construct a thousand features that tells you of the thousand most popular movies in the world, which of these has the user watched. And in fact, you can also take ratings the user might have already given in order to construct new features. So it turns out that if you have a set of movies and if you know what genre each movie is in, then the average rating per genre that the user has given. So of all the romance movies that the user has rated, what was the average rating? Of all the action movies that the user has rated, what was the average rating? And so on for all the other genres, this too can be a powerful feature to describe the user. One interesting thing about this feature is that it actually depends on the ratings that the user had given, but there's nothing wrong with that. Constructing a feature vector that depends on the user's ratings is a completely fine way to develop a feature vector to describe that user. So with such features like these, you can then come up with a feature vector X subscript U, U stands for user, superscript J for user J. Similarly, you can also come up with a set of features for each movie or for each item, such as what was the year of the movie? What's the genre or genres of the movie of known? If there are critic reviews of the movie, you can construct one or multiple features to capture something about what the critics are saying about the movie. Or once again, you can actually take user ratings of the movie to construct a feature of say the average rating of this movie. This feature again depends on the ratings that users had given, but again, there's nothing wrong with that. You can construct a feature for a given movie that depends on the ratings the movie had received, such as the average rating of the movie. Or if you wish, you can also have average rating per country or average rating per user demographic and so on to construct other types of features of the movies as well. And so with this for each movie, you can then construct a feature vector, which I'm going to denote X subscript M, M stands for movie and superscript I for movie I. Given features like this, the task is to try to figure out whether a given movie I is going to be a good match for user J. Notice that the user features and movie features can be very different in size. For example, maybe the user features could be 1500 numbers and the movie features could be just 50 numbers. And that's okay too. In content based filtering, we're going to develop an algorithm that learns to match users and movies. Previously, we were predicting the rating of user J on movie I as WJ dot product of XI plus BJ. In order to develop content based filtering, I'm going to get rid of BJ. It turns out this won't hurt the performance of the content based filtering at all. Instead of writing WJ for a user J and XI for a movie I, I'm instead going to just replace this notation with VJ U. This V here stands for a vector. It'll be a list of numbers computed for user J and the U subscript here stands for user. And instead of XI, I'm going to compute a separate vector, subscript M, this stands for movie and for movie I is what the superscript stands for. So VJ U is a vector, is a list of numbers computed from the features of user J. And VIM is a list of numbers computed from the features like the ones you saw in the previous slide of user I. And if we're able to come up with an appropriate choice of these vectors, VJ U and VIM, then hopefully the dot product between these two vectors will be a good prediction of the rating that user J gives movie I. Just to illustrate what a learning algorithm could come up with, if VU that is a user vector turns out to capture the user's preferences, say is 4.9, 0.1, and so on, list of numbers like that. And the first number captures how much do they like romance movies? And then the second number captures how much do they like action movies? And so on. And if VM, the movie vector is 4.5, 0.2, and so on and so forth. With these numbers capturing how much is this a romance movie, how much this is an action movie, and so on. Then the dot product which multiplies these list of numbers element-wise and then takes a sum, hopefully will give a sense of how much this particular user will like this particular movie. So the challenge is given features of a user, say XJ U, how can we compute this vector VJ U that represents succinctly or compactly the user's preferences? And similarly given features of a movie, how can we compute VIM? Notice that whereas XU and XM could be different in size, one could be a very long list of numbers, one could be much shorter list, V here have to be the same size. Because if you want to take a dot product between VU and VM, then both of them have to have the same dimension, such as maybe both of these are, say, 32 numbers. So to summarize, in collaborative filtering, we had number of users give ratings of different items. In contrast, in content-based filtering, we have features of users and features of items, and we want to find a way to find good matches between the users and the items. And the way we're going to do so is to compute these vectors, VU for the users and VM for the items of the movies, and then take dot products between them to try to find good matches. How do we compute VU and VM? Let's take a look at that in the next video.
[{"start": 0.0, "end": 8.0, "text": " In this video, we'll start to develop a second type of recommender system called a content-based"}, {"start": 8.0, "end": 9.8, "text": " filtering algorithm."}, {"start": 9.8, "end": 13.8, "text": " To get started, let's compare and contrast the collaborative filtering approach that"}, {"start": 13.8, "end": 17.48, "text": " we've been looking at so far with this new content-based filtering approach."}, {"start": 17.48, "end": 19.240000000000002, "text": " Let's take a look."}, {"start": 19.240000000000002, "end": 25.16, "text": " With collaborative filtering, the general approach is that we would recommend items"}, {"start": 25.16, "end": 31.14, "text": " to you based on ratings of users who gave similar ratings as you."}, {"start": 31.14, "end": 36.44, "text": " So we have some number of users give some ratings for some items, and the algorithm"}, {"start": 36.44, "end": 40.96, "text": " figures out how to use that to recommend new items to you."}, {"start": 40.96, "end": 48.04, "text": " In contrast, content-based filtering takes a different approach to deciding what to recommend"}, {"start": 48.04, "end": 49.04, "text": " to you."}, {"start": 49.04, "end": 55.04, "text": " A content-based filtering algorithm will recommend items to you based on the features of users"}, {"start": 55.04, "end": 58.68, "text": " and features of the items to find a good match."}, {"start": 58.68, "end": 65.44, "text": " In other words, it requires having some features of each user as well as some features of each"}, {"start": 65.44, "end": 72.08, "text": " item, and it uses those features to try to decide which items and users might be a good"}, {"start": 72.08, "end": 73.9, "text": " match for each other."}, {"start": 73.9, "end": 79.36, "text": " With a content-based filtering algorithm, you still have data where users have rated"}, {"start": 79.36, "end": 80.6, "text": " some items."}, {"start": 80.6, "end": 88.28, "text": " So with content-based filtering, we'll continue to use our ij to denote whether or not user"}, {"start": 88.28, "end": 97.0, "text": " j has rated item i, and we'll continue to use yij to denote the rating that user j has"}, {"start": 97.0, "end": 99.8, "text": " given item i, if it's defined."}, {"start": 99.8, "end": 106.39999999999999, "text": " But the key to content-based filtering is that we'll be able to make good use of features"}, {"start": 106.4, "end": 113.72, "text": " of the user and of the items to find better matches than potentially a pure collaborative"}, {"start": 113.72, "end": 116.16000000000001, "text": " filtering approach might be able to."}, {"start": 116.16000000000001, "end": 117.96000000000001, "text": " Let's take a look at how this works."}, {"start": 117.96000000000001, "end": 122.12, "text": " In the case of movie recommendations, here are some examples of features."}, {"start": 122.12, "end": 128.6, "text": " You may know the age of the user, or you may have the gender of the user."}, {"start": 128.6, "end": 134.52, "text": " So this could be a one-hot feature, similar to what you saw when we were talking about"}, {"start": 134.52, "end": 140.28, "text": " decision trees, where you could have a one-hot feature with three values based on whether"}, {"start": 140.28, "end": 146.20000000000002, "text": " the user's self-identified gender is male or female or unknown."}, {"start": 146.20000000000002, "end": 149.48000000000002, "text": " And you may know the country of the user."}, {"start": 149.48000000000002, "end": 155.76000000000002, "text": " So if there are about 200 countries in the world, then does it be a one-hot feature with"}, {"start": 155.76000000000002, "end": 158.16000000000003, "text": " about 200 possible values?"}, {"start": 158.16000000000003, "end": 163.48000000000002, "text": " You can also look at past behaviors of the user to construct this feature vector."}, {"start": 163.48, "end": 168.79999999999998, "text": " For example, if you look at the top thousand movies in your catalog, you might construct"}, {"start": 168.79999999999998, "end": 174.28, "text": " a thousand features that tells you of the thousand most popular movies in the world,"}, {"start": 174.28, "end": 177.2, "text": " which of these has the user watched."}, {"start": 177.2, "end": 182.83999999999997, "text": " And in fact, you can also take ratings the user might have already given in order to"}, {"start": 182.83999999999997, "end": 184.92, "text": " construct new features."}, {"start": 184.92, "end": 190.79999999999998, "text": " So it turns out that if you have a set of movies and if you know what genre each movie"}, {"start": 190.8, "end": 196.48000000000002, "text": " is in, then the average rating per genre that the user has given."}, {"start": 196.48000000000002, "end": 202.84, "text": " So of all the romance movies that the user has rated, what was the average rating?"}, {"start": 202.84, "end": 207.58, "text": " Of all the action movies that the user has rated, what was the average rating?"}, {"start": 207.58, "end": 214.4, "text": " And so on for all the other genres, this too can be a powerful feature to describe the"}, {"start": 214.4, "end": 215.60000000000002, "text": " user."}, {"start": 215.6, "end": 221.44, "text": " One interesting thing about this feature is that it actually depends on the ratings that"}, {"start": 221.44, "end": 224.92, "text": " the user had given, but there's nothing wrong with that."}, {"start": 224.92, "end": 229.88, "text": " Constructing a feature vector that depends on the user's ratings is a completely fine"}, {"start": 229.88, "end": 233.79999999999998, "text": " way to develop a feature vector to describe that user."}, {"start": 233.79999999999998, "end": 240.68, "text": " So with such features like these, you can then come up with a feature vector X subscript"}, {"start": 240.68, "end": 244.56, "text": " U, U stands for user, superscript J for user J."}, {"start": 244.56, "end": 250.28, "text": " Similarly, you can also come up with a set of features for each movie or for each item,"}, {"start": 250.28, "end": 253.12, "text": " such as what was the year of the movie?"}, {"start": 253.12, "end": 256.7, "text": " What's the genre or genres of the movie of known?"}, {"start": 256.7, "end": 262.72, "text": " If there are critic reviews of the movie, you can construct one or multiple features"}, {"start": 262.72, "end": 267.04, "text": " to capture something about what the critics are saying about the movie."}, {"start": 267.04, "end": 272.44, "text": " Or once again, you can actually take user ratings of the movie to construct a feature"}, {"start": 272.44, "end": 276.02, "text": " of say the average rating of this movie."}, {"start": 276.02, "end": 283.12, "text": " This feature again depends on the ratings that users had given, but again, there's nothing"}, {"start": 283.12, "end": 284.12, "text": " wrong with that."}, {"start": 284.12, "end": 289.32, "text": " You can construct a feature for a given movie that depends on the ratings the movie had"}, {"start": 289.32, "end": 292.24, "text": " received, such as the average rating of the movie."}, {"start": 292.24, "end": 298.8, "text": " Or if you wish, you can also have average rating per country or average rating per user"}, {"start": 298.8, "end": 303.28000000000003, "text": " demographic and so on to construct other types of features of the movies as well."}, {"start": 303.28000000000003, "end": 308.32, "text": " And so with this for each movie, you can then construct a feature vector, which I'm going"}, {"start": 308.32, "end": 315.08000000000004, "text": " to denote X subscript M, M stands for movie and superscript I for movie I."}, {"start": 315.08000000000004, "end": 323.6, "text": " Given features like this, the task is to try to figure out whether a given movie I is going"}, {"start": 323.6, "end": 327.16, "text": " to be a good match for user J."}, {"start": 327.16, "end": 333.92, "text": " Notice that the user features and movie features can be very different in size."}, {"start": 333.92, "end": 341.48, "text": " For example, maybe the user features could be 1500 numbers and the movie features could"}, {"start": 341.48, "end": 343.64000000000004, "text": " be just 50 numbers."}, {"start": 343.64000000000004, "end": 344.88, "text": " And that's okay too."}, {"start": 344.88, "end": 349.12, "text": " In content based filtering, we're going to develop an algorithm that learns to match"}, {"start": 349.12, "end": 351.56, "text": " users and movies."}, {"start": 351.56, "end": 359.92, "text": " Previously, we were predicting the rating of user J on movie I as WJ dot product of"}, {"start": 359.92, "end": 363.0, "text": " XI plus BJ."}, {"start": 363.0, "end": 369.2, "text": " In order to develop content based filtering, I'm going to get rid of BJ."}, {"start": 369.2, "end": 372.92, "text": " It turns out this won't hurt the performance of the content based filtering at all."}, {"start": 372.92, "end": 382.28000000000003, "text": " Instead of writing WJ for a user J and XI for a movie I, I'm instead going to just replace"}, {"start": 382.28000000000003, "end": 386.64000000000004, "text": " this notation with VJ U."}, {"start": 386.64000000000004, "end": 388.56, "text": " This V here stands for a vector."}, {"start": 388.56, "end": 397.38, "text": " It'll be a list of numbers computed for user J and the U subscript here stands for user."}, {"start": 397.38, "end": 403.04, "text": " And instead of XI, I'm going to compute a separate vector, subscript M, this stands"}, {"start": 403.04, "end": 408.36, "text": " for movie and for movie I is what the superscript stands for."}, {"start": 408.36, "end": 418.84, "text": " So VJ U is a vector, is a list of numbers computed from the features of user J. And"}, {"start": 418.84, "end": 425.96, "text": " VIM is a list of numbers computed from the features like the ones you saw in the previous"}, {"start": 425.96, "end": 428.88, "text": " slide of user I."}, {"start": 428.88, "end": 437.35999999999996, "text": " And if we're able to come up with an appropriate choice of these vectors, VJ U and VIM, then"}, {"start": 437.35999999999996, "end": 443.24, "text": " hopefully the dot product between these two vectors will be a good prediction of the rating"}, {"start": 443.24, "end": 445.84, "text": " that user J gives movie I."}, {"start": 445.84, "end": 454.7, "text": " Just to illustrate what a learning algorithm could come up with, if VU that is a user vector"}, {"start": 454.7, "end": 463.24, "text": " turns out to capture the user's preferences, say is 4.9, 0.1, and so on, list of numbers"}, {"start": 463.24, "end": 464.24, "text": " like that."}, {"start": 464.24, "end": 469.4, "text": " And the first number captures how much do they like romance movies?"}, {"start": 469.4, "end": 473.88, "text": " And then the second number captures how much do they like action movies?"}, {"start": 473.88, "end": 474.88, "text": " And so on."}, {"start": 474.88, "end": 483.91999999999996, "text": " And if VM, the movie vector is 4.5, 0.2, and so on and so forth."}, {"start": 483.92, "end": 488.8, "text": " With these numbers capturing how much is this a romance movie, how much this is an action"}, {"start": 488.8, "end": 490.72, "text": " movie, and so on."}, {"start": 490.72, "end": 496.72, "text": " Then the dot product which multiplies these list of numbers element-wise and then takes"}, {"start": 496.72, "end": 502.72, "text": " a sum, hopefully will give a sense of how much this particular user will like this particular"}, {"start": 502.72, "end": 503.72, "text": " movie."}, {"start": 503.72, "end": 511.68, "text": " So the challenge is given features of a user, say XJ U, how can we compute this vector VJ"}, {"start": 511.68, "end": 517.48, "text": " U that represents succinctly or compactly the user's preferences?"}, {"start": 517.48, "end": 523.4, "text": " And similarly given features of a movie, how can we compute VIM?"}, {"start": 523.4, "end": 530.48, "text": " Notice that whereas XU and XM could be different in size, one could be a very long list of"}, {"start": 530.48, "end": 537.12, "text": " numbers, one could be much shorter list, V here have to be the same size."}, {"start": 537.12, "end": 542.24, "text": " Because if you want to take a dot product between VU and VM, then both of them have"}, {"start": 542.24, "end": 548.96, "text": " to have the same dimension, such as maybe both of these are, say, 32 numbers."}, {"start": 548.96, "end": 555.92, "text": " So to summarize, in collaborative filtering, we had number of users give ratings of different"}, {"start": 555.92, "end": 557.2, "text": " items."}, {"start": 557.2, "end": 563.16, "text": " In contrast, in content-based filtering, we have features of users and features of items,"}, {"start": 563.16, "end": 568.64, "text": " and we want to find a way to find good matches between the users and the items."}, {"start": 568.64, "end": 575.0799999999999, "text": " And the way we're going to do so is to compute these vectors, VU for the users and VM for"}, {"start": 575.0799999999999, "end": 579.92, "text": " the items of the movies, and then take dot products between them to try to find good"}, {"start": 579.92, "end": 580.92, "text": " matches."}, {"start": 580.92, "end": 583.36, "text": " How do we compute VU and VM?"}, {"start": 583.36, "end": 594.04, "text": " Let's take a look at that in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=jhgnQB7fYKM
9.9 Content-based Filtering | Deep learning for content-based filtering-- [ML |Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
A good way to develop a content-based filtering algorithm is to use deep learning. The approach you see in this video is the way that many important commercial, state-of-the-art content-based filtering algorithms are built today. Let's take a look. Recall that in our approach, given a feature vector describing a user, such as the age and gender and country and so on, we have to compute a vector v u. And similarly, given a vector describing a movie, such as its year release, the stars in the movie and so on, we have to compute a vector v m. In order to do the former, we're going to use a neural network. And the first neural network will be what we'll call the user network. Here's an example of a user network. It takes as input the list of features of the user, x u, so the age, the gender, the country of the user, and so on. And then using a few layers, say dense neural network layers, it will output this vector v u that describes the user. Notice that in this neural network, the output layer has 32 units, and so v u is actually a list of 32 numbers. Unlike most of the neural networks that we're using earlier, the final layer is not a layer with one unit. There's a layer with 32 units. Similarly, to compute v m for a movie, we can have a movie network as follows that takes as input features of the movie and through a few layers of a neural network as an outputting v m, that vector that describes the movie. Finally, we'll predict the rating of this user on that movie as v u dot product with v m. Notice that the user network and movie network can hypothetically have different numbers of hidden layers and different numbers of units per hidden layer. Only the output layer needs to have the same size or the same dimension. In the description you've seen so far, we were predicting the 1 to 5 or 0.25 star movie rating. If you had binary labels, if y was to the user like or favorite in item, then you can also modify this algorithm to output instead of v u dot v m, you can apply the sigmoid function to that and use this to predict the probability that y i j is 1. To flesh out this notation, we can also add superscripts i and j here. If you want to emphasize that this is the prediction by user j on movie i. I've drawn here the user network and the movie network as two separate neural networks, but it turns out that we can actually draw them together in a single diagram as if it was a single neural network. This is what it looks like. On the upper portion of this diagram, we have the user network which inputs x u and ends up computing v u. On the lower portion of this diagram, we have what was the movie network that inputs x m and ends up computing v m. And these two vectors are then dot producted together. This dot here represents dot product and this gives us our prediction. Now this model has a lot of parameters. Each of these layers of a neural network has a usual set of parameters of the neural network. So how do you train all the parameters of both the user network and the movie network? What we're going to do is construct a cost function j which is going to be very similar to the cost function that you saw in collaborative filtering, which is assuming that you do have some data of some users having rated some movies. We're going to sum over all pairs i and j of where you have labels where i j equals one of the difference between the prediction so that that would be v u j dot product with v m i minus y i j squared. And the way we would train this model is depending on the parameters of the neural network, you end up with different vectors here for the users and for the movies. And so what we like to do is train the parameters of the neural network so that you end up with vectors for the users and for the movies that results in small squared error in the predictions you get out here. And so to be clear, there's no separate training procedure for the user and movie networks. This expression down here, this is the cost function used to train all the parameters of the user and the movie networks. We're going to judge the two networks according to how well v u and v m predict y i j. And with this cost function, we're going to use gradient descent or some other optimization algorithm to tune the parameters of the neural network to cause the cost function j to be as small as possible. Oh, and if you want to regularize this model, we can also add the usual neural network regularization term to encourage the neural networks to keep the values of their parameters small. It turns out after you've trained this model, you can also use this to find similar items. This is akin to what we have seen with collaborative filtering features, helping you find similar items as well. Let's take a look. So v u j is a vector of length 32 that describes a user j that had features x u j. And similarly, v i m is a vector of length 32 that describes a movie with these features over here. So given a specific movie, what if you want to find other movies similar to it? Well, this vector v i m describes the movie i. So if you want to find other movies similar to it, you can then look for other movies k so that the distance between the vector describing movie k and the vector describing movie i that that distance or square distance is small. And this expression plays a role similar to what we had previously with collaborative filtering, where we talked about finding a movie with features x k that was similar to the features x i. And thus with this approach, you can also find items similar to a given item. One final note, this can be pre computed ahead of time. And by that, I mean, you can run a compute server overnight to go through the list of all your movies and for every movie find a similar movies to it. So that tomorrow, if a user comes to the website and they're browsing a specific movie, you can already have pre computed the 10 or 20 most similar movies to show to the user at that time. The fact that you can pre compute ahead of time what's similar to a given movie will turn out to be important later when we talk about scaling up this approach to a very large catalog of movies. So that's how you can use deep learning to build a content based filtering algorithm. You might remember when we were talking about decision trees and the pros and cons of decision trees versus neural networks. I've mentioned that one of the benefits of neural networks is that it's easier to take multiple neural networks and put them together to make them work in concert to build a larger system. And what you just saw was actually an example of that, where we could take a user network and the movie network and put them together and then take the inner product of the output and this ability to put two neural networks together. This how we've managed to come up with a more complex architecture that turns out to be quite powerful. One note, if you're implementing these algorithms in practice, I find that developers often end up spending a lot of time carefully designing the features needed to feed into these content based filtering algorithms. So if you end up building one of these systems commercially, it may be worth spending some time engineering good features for this application as well. And in terms of these applications, one limitation of the algorithm as we've described it is it can be computational, very expensive to run if you have a large catalog of a lot of different movies you may want to recommend. So in the next video, let's take a look at some of the practical issues and how you can modify this algorithm to make a skill to working on even very large item catalogs. Let's go see that in the next video.
[{"start": 0.0, "end": 6.8, "text": " A good way to develop a content-based filtering algorithm is to use deep learning."}, {"start": 6.8, "end": 13.56, "text": " The approach you see in this video is the way that many important commercial, state-of-the-art"}, {"start": 13.56, "end": 15.56, "text": " content-based filtering algorithms are built today."}, {"start": 15.56, "end": 17.2, "text": " Let's take a look."}, {"start": 17.2, "end": 25.240000000000002, "text": " Recall that in our approach, given a feature vector describing a user, such as the age"}, {"start": 25.24, "end": 31.68, "text": " and gender and country and so on, we have to compute a vector v u."}, {"start": 31.68, "end": 37.96, "text": " And similarly, given a vector describing a movie, such as its year release, the stars"}, {"start": 37.96, "end": 42.86, "text": " in the movie and so on, we have to compute a vector v m."}, {"start": 42.86, "end": 48.239999999999995, "text": " In order to do the former, we're going to use a neural network."}, {"start": 48.239999999999995, "end": 53.4, "text": " And the first neural network will be what we'll call the user network."}, {"start": 53.4, "end": 57.12, "text": " Here's an example of a user network."}, {"start": 57.12, "end": 63.519999999999996, "text": " It takes as input the list of features of the user, x u, so the age, the gender, the"}, {"start": 63.519999999999996, "end": 66.28, "text": " country of the user, and so on."}, {"start": 66.28, "end": 74.28, "text": " And then using a few layers, say dense neural network layers, it will output this vector"}, {"start": 74.28, "end": 78.28, "text": " v u that describes the user."}, {"start": 78.28, "end": 87.08, "text": " Notice that in this neural network, the output layer has 32 units, and so v u is actually"}, {"start": 87.08, "end": 89.56, "text": " a list of 32 numbers."}, {"start": 89.56, "end": 95.08, "text": " Unlike most of the neural networks that we're using earlier, the final layer is not a layer"}, {"start": 95.08, "end": 96.08, "text": " with one unit."}, {"start": 96.08, "end": 99.24000000000001, "text": " There's a layer with 32 units."}, {"start": 99.24, "end": 108.36, "text": " Similarly, to compute v m for a movie, we can have a movie network as follows that takes"}, {"start": 108.36, "end": 115.56, "text": " as input features of the movie and through a few layers of a neural network as an outputting"}, {"start": 115.56, "end": 119.72, "text": " v m, that vector that describes the movie."}, {"start": 119.72, "end": 129.76, "text": " Finally, we'll predict the rating of this user on that movie as v u dot product with"}, {"start": 129.76, "end": 131.24, "text": " v m."}, {"start": 131.24, "end": 136.8, "text": " Notice that the user network and movie network can hypothetically have different numbers"}, {"start": 136.8, "end": 140.32, "text": " of hidden layers and different numbers of units per hidden layer."}, {"start": 140.32, "end": 145.56, "text": " Only the output layer needs to have the same size or the same dimension."}, {"start": 145.56, "end": 152.76, "text": " In the description you've seen so far, we were predicting the 1 to 5 or 0.25 star movie"}, {"start": 152.76, "end": 153.76, "text": " rating."}, {"start": 153.76, "end": 160.46, "text": " If you had binary labels, if y was to the user like or favorite in item, then you can"}, {"start": 160.46, "end": 168.64000000000001, "text": " also modify this algorithm to output instead of v u dot v m, you can apply the sigmoid"}, {"start": 168.64, "end": 177.07999999999998, "text": " function to that and use this to predict the probability that y i j is 1."}, {"start": 177.07999999999998, "end": 182.11999999999998, "text": " To flesh out this notation, we can also add superscripts i and j here."}, {"start": 182.11999999999998, "end": 188.6, "text": " If you want to emphasize that this is the prediction by user j on movie i."}, {"start": 188.6, "end": 193.64, "text": " I've drawn here the user network and the movie network as two separate neural networks, but"}, {"start": 193.64, "end": 198.64, "text": " it turns out that we can actually draw them together in a single diagram as if it was"}, {"start": 198.64, "end": 201.23999999999998, "text": " a single neural network."}, {"start": 201.23999999999998, "end": 203.23999999999998, "text": " This is what it looks like."}, {"start": 203.23999999999998, "end": 210.23999999999998, "text": " On the upper portion of this diagram, we have the user network which inputs x u and ends"}, {"start": 210.23999999999998, "end": 212.23999999999998, "text": " up computing v u."}, {"start": 212.23999999999998, "end": 217.92, "text": " On the lower portion of this diagram, we have what was the movie network that inputs x m"}, {"start": 217.92, "end": 221.16, "text": " and ends up computing v m."}, {"start": 221.16, "end": 225.56, "text": " And these two vectors are then dot producted together."}, {"start": 225.56, "end": 232.28, "text": " This dot here represents dot product and this gives us our prediction."}, {"start": 232.28, "end": 236.44, "text": " Now this model has a lot of parameters."}, {"start": 236.44, "end": 241.51999999999998, "text": " Each of these layers of a neural network has a usual set of parameters of the neural network."}, {"start": 241.51999999999998, "end": 249.28, "text": " So how do you train all the parameters of both the user network and the movie network?"}, {"start": 249.28, "end": 255.24, "text": " What we're going to do is construct a cost function j which is going to be very similar"}, {"start": 255.24, "end": 260.98, "text": " to the cost function that you saw in collaborative filtering, which is assuming that you do have"}, {"start": 260.98, "end": 265.36, "text": " some data of some users having rated some movies."}, {"start": 265.36, "end": 272.0, "text": " We're going to sum over all pairs i and j of where you have labels where i j equals"}, {"start": 272.0, "end": 280.08, "text": " one of the difference between the prediction so that that would be v u j dot product with"}, {"start": 280.08, "end": 286.88, "text": " v m i minus y i j squared."}, {"start": 286.88, "end": 293.32, "text": " And the way we would train this model is depending on the parameters of the neural network, you"}, {"start": 293.32, "end": 299.28, "text": " end up with different vectors here for the users and for the movies."}, {"start": 299.28, "end": 304.79999999999995, "text": " And so what we like to do is train the parameters of the neural network so that you end up with"}, {"start": 304.79999999999995, "end": 311.79999999999995, "text": " vectors for the users and for the movies that results in small squared error in the predictions"}, {"start": 311.79999999999995, "end": 313.59999999999997, "text": " you get out here."}, {"start": 313.59999999999997, "end": 320.71999999999997, "text": " And so to be clear, there's no separate training procedure for the user and movie networks."}, {"start": 320.71999999999997, "end": 326.23999999999995, "text": " This expression down here, this is the cost function used to train all the parameters"}, {"start": 326.24, "end": 329.6, "text": " of the user and the movie networks."}, {"start": 329.6, "end": 336.84000000000003, "text": " We're going to judge the two networks according to how well v u and v m predict y i j."}, {"start": 336.84000000000003, "end": 341.64, "text": " And with this cost function, we're going to use gradient descent or some other optimization"}, {"start": 341.64, "end": 346.76, "text": " algorithm to tune the parameters of the neural network to cause the cost function j to be"}, {"start": 346.76, "end": 348.40000000000003, "text": " as small as possible."}, {"start": 348.40000000000003, "end": 355.56, "text": " Oh, and if you want to regularize this model, we can also add the usual neural network regularization"}, {"start": 355.56, "end": 362.04, "text": " term to encourage the neural networks to keep the values of their parameters small."}, {"start": 362.04, "end": 368.2, "text": " It turns out after you've trained this model, you can also use this to find similar items."}, {"start": 368.2, "end": 373.56, "text": " This is akin to what we have seen with collaborative filtering features, helping you find similar"}, {"start": 373.56, "end": 374.56, "text": " items as well."}, {"start": 374.56, "end": 376.56, "text": " Let's take a look."}, {"start": 376.56, "end": 385.64, "text": " So v u j is a vector of length 32 that describes a user j that had features x u j."}, {"start": 385.64, "end": 393.68, "text": " And similarly, v i m is a vector of length 32 that describes a movie with these features"}, {"start": 393.68, "end": 395.2, "text": " over here."}, {"start": 395.2, "end": 401.84000000000003, "text": " So given a specific movie, what if you want to find other movies similar to it?"}, {"start": 401.84, "end": 408.41999999999996, "text": " Well, this vector v i m describes the movie i."}, {"start": 408.41999999999996, "end": 414.12, "text": " So if you want to find other movies similar to it, you can then look for other movies"}, {"start": 414.12, "end": 421.7, "text": " k so that the distance between the vector describing movie k and the vector describing"}, {"start": 421.7, "end": 426.35999999999996, "text": " movie i that that distance or square distance is small."}, {"start": 426.36, "end": 431.92, "text": " And this expression plays a role similar to what we had previously with collaborative"}, {"start": 431.92, "end": 439.08000000000004, "text": " filtering, where we talked about finding a movie with features x k that was similar to"}, {"start": 439.08000000000004, "end": 441.52000000000004, "text": " the features x i."}, {"start": 441.52000000000004, "end": 447.72, "text": " And thus with this approach, you can also find items similar to a given item."}, {"start": 447.72, "end": 451.84000000000003, "text": " One final note, this can be pre computed ahead of time."}, {"start": 451.84, "end": 458.02, "text": " And by that, I mean, you can run a compute server overnight to go through the list of"}, {"start": 458.02, "end": 462.61999999999995, "text": " all your movies and for every movie find a similar movies to it."}, {"start": 462.61999999999995, "end": 468.64, "text": " So that tomorrow, if a user comes to the website and they're browsing a specific movie, you"}, {"start": 468.64, "end": 474.0, "text": " can already have pre computed the 10 or 20 most similar movies to show to the user at"}, {"start": 474.0, "end": 475.0, "text": " that time."}, {"start": 475.0, "end": 480.09999999999997, "text": " The fact that you can pre compute ahead of time what's similar to a given movie will"}, {"start": 480.1, "end": 485.92, "text": " turn out to be important later when we talk about scaling up this approach to a very large"}, {"start": 485.92, "end": 487.92, "text": " catalog of movies."}, {"start": 487.92, "end": 494.64000000000004, "text": " So that's how you can use deep learning to build a content based filtering algorithm."}, {"start": 494.64000000000004, "end": 499.28000000000003, "text": " You might remember when we were talking about decision trees and the pros and cons of decision"}, {"start": 499.28000000000003, "end": 501.24, "text": " trees versus neural networks."}, {"start": 501.24, "end": 506.28000000000003, "text": " I've mentioned that one of the benefits of neural networks is that it's easier to take"}, {"start": 506.28, "end": 511.11999999999995, "text": " multiple neural networks and put them together to make them work in concert to build a larger"}, {"start": 511.11999999999995, "end": 512.12, "text": " system."}, {"start": 512.12, "end": 517.8, "text": " And what you just saw was actually an example of that, where we could take a user network"}, {"start": 517.8, "end": 522.74, "text": " and the movie network and put them together and then take the inner product of the output"}, {"start": 522.74, "end": 527.16, "text": " and this ability to put two neural networks together."}, {"start": 527.16, "end": 531.92, "text": " This how we've managed to come up with a more complex architecture that turns out to be"}, {"start": 531.92, "end": 534.16, "text": " quite powerful."}, {"start": 534.16, "end": 538.9599999999999, "text": " One note, if you're implementing these algorithms in practice, I find that developers often"}, {"start": 538.9599999999999, "end": 544.9599999999999, "text": " end up spending a lot of time carefully designing the features needed to feed into these content"}, {"start": 544.9599999999999, "end": 545.9599999999999, "text": " based filtering algorithms."}, {"start": 545.9599999999999, "end": 550.42, "text": " So if you end up building one of these systems commercially, it may be worth spending some"}, {"start": 550.42, "end": 555.76, "text": " time engineering good features for this application as well."}, {"start": 555.76, "end": 562.1999999999999, "text": " And in terms of these applications, one limitation of the algorithm as we've described it is"}, {"start": 562.2, "end": 567.32, "text": " it can be computational, very expensive to run if you have a large catalog of a lot of"}, {"start": 567.32, "end": 570.1400000000001, "text": " different movies you may want to recommend."}, {"start": 570.1400000000001, "end": 573.96, "text": " So in the next video, let's take a look at some of the practical issues and how you can"}, {"start": 573.96, "end": 580.5600000000001, "text": " modify this algorithm to make a skill to working on even very large item catalogs."}, {"start": 580.56, "end": 593.3599999999999, "text": " Let's go see that in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=nNJPU5fwc8E
9.10 Advanced implementation | Recommending from a large catalogue -- [ML |Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Today's recommended systems will sometimes need to pick a handful of items to recommend from a catalog of thousands or millions or tens of millions or even more items. How do you do this efficiently computationally? Let's take a look. Here's the neural network we've been using to make predictions about how a user might rate an item. Today, a large movie streaming site may have thousands of movies or a system that is trying to decide what ad to show may have a catalog of millions of ads to choose from or a music streaming site may have tens of millions of songs to choose from and large online shopping sites can have millions or even tens of millions of products to choose from. When a user shows up on your website, they have some feature XU, but if you need to take thousands or millions of items to feed through this neural network in order to compute the inner product to figure out which products you should recommend, then having to run neural network inference thousands or millions of times every time a user shows up on your website becomes computationally infeasible. Many large scale recommender systems are implemented as two steps, which are called the retrieval and ranking steps. The idea is during the retrieval step will generate a large list of plausible item candidates that tries to cover a lot of possible things you might recommend to the user. And it's okay during the retrieval step if you include a lot of items that the user is not likely to like and then during the ranking step will fine tune and pick the best items to recommend to the user. So here's an example. During the retrieval step, we might do something like for each of the last 10 movies that the user has watched, find the 10 most similar movies. So this means for example, if a user has watched the movie I with vector v i m, you can find the movies Hey, with vector v k m that is similar to that. And as you saw in the last video, finding the similar movies to a given movie can be pre computed. So having pre computed the most similar movies to given movie, you can just pull up the results using a lookup table. This would give you an initial set of maybe somewhat plausible movies to recommend to a user that just showed up on your website. Additionally, you might decide to add to it for whatever are the most viewed three genres of the user. Say there's a user that's watched a lot of romance movies, and a lot of comedy movies and a lot of historical dramas. Then we would add to the list of plausible item candidates to top 10 movies in each of these three genres. And then maybe we would also add to this list, the top 20 movies in the country of the user. So this retrieval step can be done very quickly and you may end up with a list of 100 or maybe hundreds of plausible movies to recommend to the user. And hopefully this list will recommend some good options, but it's also okay if it includes some options that the user won't like at all. The goal of the retrieval step is to ensure broad coverage, to have enough movies to at least have many good ones in there. Finally, we would then take all the items we retrieve during the retrieval step and combine them into a list, removing duplicates and removing items that the user has already watched or that the user has already purchased and that you may not want to recommend to them again. The second step of this is then the ranking step. During the ranking step, you would take the list retrieved during the retrieval step, so this may be just hundreds of possible movies and rank them using the learned model. And what that means is you will feed the user feature vector and the movie feature vector into this neural network. And for each of the user movie pairs, compute the predicted rating. And based on this, you now have all of the say 100 plus movies, the ones that the user is most likely to give a high rating to. And then you can just display the rank list of items to the user depending on what you think the user will give the highest rating to. One additional optimization is that if you have computed Vm for all the movies in advance, then all you need to do is to do inference on this part of the neural network a single time to compute Vu and then take that Vu that you just computed for the user on your website right now and take the inner product between Vu and Vm for the movies that you have retrieved during the retrieval step. So this computation can be done relatively quickly if the retrieval step just brings up say hundreds of movies. One of the decisions you need to make for this algorithm is how many items do you want to retrieve during the retrieval step to feed into the more accurate ranking step. During the retrieval step, retrieving more items will tend to result in better performance, but the algorithm will end up being slower. To analyze or to optimize the trade-off between how many items to retrieve, do you retrieve a hundred or five hundred or a thousand items, I would recommend carrying out offline experiments to see how much retrieving additional items results in more relevant recommendations. And in particular, if the estimated probability that yij is equal to one, according to your neural network model, or if the estimated rating of y being high of the retrieved items according to your model's prediction ends up being much higher, if only you were to retrieve say five hundred items instead of only one hundred items, then that would argue for maybe retrieving more items even if it slows down the algorithm a bit. So with the separate retrieval step and the ranking step, this allows many recommender systems today to give both fast as well as accurate results because retrieval step tries to prune out a lot of items that are just not worth doing the more detailed inference and inner product on, and then the ranking step makes a more careful prediction for what are the items that the user is actually likely to enjoy. So that's it. This is how you would make your recommender system work efficiently, even on very large catalogs of movies or products or what have you. Now it turns out that as commercially important as our recommender systems, there are some significant ethical issues associated with them as well. And unfortunately, there have been recommender systems that have created harm. So as you build your own recommender system, I hope you take an ethical approach and use it to serve your users and society as large as well as yourself and the company that you might be working for. Let's take a look at the ethical issues associated with recommender systems in the next video.
[{"start": 0.0, "end": 6.44, "text": " Today's recommended systems will sometimes need to pick a handful of items to recommend"}, {"start": 6.44, "end": 11.6, "text": " from a catalog of thousands or millions or tens of millions or even more items."}, {"start": 11.6, "end": 13.72, "text": " How do you do this efficiently computationally?"}, {"start": 13.72, "end": 16.04, "text": " Let's take a look."}, {"start": 16.04, "end": 21.080000000000002, "text": " Here's the neural network we've been using to make predictions about how a user might"}, {"start": 21.080000000000002, "end": 22.84, "text": " rate an item."}, {"start": 22.84, "end": 32.24, "text": " Today, a large movie streaming site may have thousands of movies or a system that is trying"}, {"start": 32.24, "end": 40.72, "text": " to decide what ad to show may have a catalog of millions of ads to choose from or a music"}, {"start": 40.72, "end": 47.94, "text": " streaming site may have tens of millions of songs to choose from and large online shopping"}, {"start": 47.94, "end": 52.8, "text": " sites can have millions or even tens of millions of products to choose from."}, {"start": 52.8, "end": 59.559999999999995, "text": " When a user shows up on your website, they have some feature XU, but if you need to take"}, {"start": 59.559999999999995, "end": 66.5, "text": " thousands or millions of items to feed through this neural network in order to compute the"}, {"start": 66.5, "end": 71.56, "text": " inner product to figure out which products you should recommend, then having to run neural"}, {"start": 71.56, "end": 77.2, "text": " network inference thousands or millions of times every time a user shows up on your website"}, {"start": 77.2, "end": 80.6, "text": " becomes computationally infeasible."}, {"start": 80.6, "end": 86.75999999999999, "text": " Many large scale recommender systems are implemented as two steps, which are called the retrieval"}, {"start": 86.75999999999999, "end": 89.08, "text": " and ranking steps."}, {"start": 89.08, "end": 97.63999999999999, "text": " The idea is during the retrieval step will generate a large list of plausible item candidates"}, {"start": 97.63999999999999, "end": 103.19999999999999, "text": " that tries to cover a lot of possible things you might recommend to the user."}, {"start": 103.19999999999999, "end": 107.96, "text": " And it's okay during the retrieval step if you include a lot of items that the user is"}, {"start": 107.96, "end": 115.32, "text": " not likely to like and then during the ranking step will fine tune and pick the best items"}, {"start": 115.32, "end": 117.36, "text": " to recommend to the user."}, {"start": 117.36, "end": 119.19999999999999, "text": " So here's an example."}, {"start": 119.19999999999999, "end": 125.0, "text": " During the retrieval step, we might do something like for each of the last 10 movies that the"}, {"start": 125.0, "end": 130.44, "text": " user has watched, find the 10 most similar movies."}, {"start": 130.44, "end": 138.85999999999999, "text": " So this means for example, if a user has watched the movie I with vector v i m, you can find"}, {"start": 138.85999999999999, "end": 145.32, "text": " the movies Hey, with vector v k m that is similar to that."}, {"start": 145.32, "end": 150.36, "text": " And as you saw in the last video, finding the similar movies to a given movie can be"}, {"start": 150.36, "end": 152.16, "text": " pre computed."}, {"start": 152.16, "end": 156.96, "text": " So having pre computed the most similar movies to given movie, you can just pull up the results"}, {"start": 156.96, "end": 158.72, "text": " using a lookup table."}, {"start": 158.72, "end": 163.35999999999999, "text": " This would give you an initial set of maybe somewhat plausible movies to recommend to"}, {"start": 163.35999999999999, "end": 165.88, "text": " a user that just showed up on your website."}, {"start": 165.88, "end": 172.5, "text": " Additionally, you might decide to add to it for whatever are the most viewed three genres"}, {"start": 172.5, "end": 174.07999999999998, "text": " of the user."}, {"start": 174.07999999999998, "end": 179.04, "text": " Say there's a user that's watched a lot of romance movies, and a lot of comedy movies"}, {"start": 179.04, "end": 181.92, "text": " and a lot of historical dramas."}, {"start": 181.92, "end": 187.56, "text": " Then we would add to the list of plausible item candidates to top 10 movies in each of"}, {"start": 187.56, "end": 189.56, "text": " these three genres."}, {"start": 189.56, "end": 196.72, "text": " And then maybe we would also add to this list, the top 20 movies in the country of the user."}, {"start": 196.72, "end": 202.92000000000002, "text": " So this retrieval step can be done very quickly and you may end up with a list of 100 or maybe"}, {"start": 202.92000000000002, "end": 207.4, "text": " hundreds of plausible movies to recommend to the user."}, {"start": 207.4, "end": 214.2, "text": " And hopefully this list will recommend some good options, but it's also okay if it includes"}, {"start": 214.2, "end": 217.56, "text": " some options that the user won't like at all."}, {"start": 217.56, "end": 223.2, "text": " The goal of the retrieval step is to ensure broad coverage, to have enough movies to at"}, {"start": 223.2, "end": 226.28, "text": " least have many good ones in there."}, {"start": 226.28, "end": 232.12, "text": " Finally, we would then take all the items we retrieve during the retrieval step and"}, {"start": 232.12, "end": 237.35999999999999, "text": " combine them into a list, removing duplicates and removing items that the user has already"}, {"start": 237.35999999999999, "end": 241.16, "text": " watched or that the user has already purchased and that you may not want to recommend to"}, {"start": 241.16, "end": 243.04, "text": " them again."}, {"start": 243.04, "end": 246.32, "text": " The second step of this is then the ranking step."}, {"start": 246.32, "end": 251.51999999999998, "text": " During the ranking step, you would take the list retrieved during the retrieval step,"}, {"start": 251.51999999999998, "end": 259.12, "text": " so this may be just hundreds of possible movies and rank them using the learned model."}, {"start": 259.12, "end": 265.68, "text": " And what that means is you will feed the user feature vector and the movie feature vector"}, {"start": 265.68, "end": 267.56, "text": " into this neural network."}, {"start": 267.56, "end": 273.48, "text": " And for each of the user movie pairs, compute the predicted rating."}, {"start": 273.48, "end": 280.04, "text": " And based on this, you now have all of the say 100 plus movies, the ones that the user"}, {"start": 280.04, "end": 282.96, "text": " is most likely to give a high rating to."}, {"start": 282.96, "end": 287.96, "text": " And then you can just display the rank list of items to the user depending on what you"}, {"start": 287.96, "end": 291.1, "text": " think the user will give the highest rating to."}, {"start": 291.1, "end": 298.76000000000005, "text": " One additional optimization is that if you have computed Vm for all the movies in advance,"}, {"start": 298.76000000000005, "end": 304.32000000000005, "text": " then all you need to do is to do inference on this part of the neural network a single"}, {"start": 304.32000000000005, "end": 310.64000000000004, "text": " time to compute Vu and then take that Vu that you just computed for the user on your website"}, {"start": 310.64000000000004, "end": 316.76000000000005, "text": " right now and take the inner product between Vu and Vm for the movies that you have retrieved"}, {"start": 316.76000000000005, "end": 318.72, "text": " during the retrieval step."}, {"start": 318.72, "end": 324.3, "text": " So this computation can be done relatively quickly if the retrieval step just brings"}, {"start": 324.3, "end": 326.88000000000005, "text": " up say hundreds of movies."}, {"start": 326.88000000000005, "end": 331.84000000000003, "text": " One of the decisions you need to make for this algorithm is how many items do you want"}, {"start": 331.84000000000003, "end": 339.20000000000005, "text": " to retrieve during the retrieval step to feed into the more accurate ranking step."}, {"start": 339.20000000000005, "end": 345.14000000000004, "text": " During the retrieval step, retrieving more items will tend to result in better performance,"}, {"start": 345.14, "end": 349.59999999999997, "text": " but the algorithm will end up being slower."}, {"start": 349.59999999999997, "end": 355.12, "text": " To analyze or to optimize the trade-off between how many items to retrieve, do you retrieve"}, {"start": 355.12, "end": 361.76, "text": " a hundred or five hundred or a thousand items, I would recommend carrying out offline experiments"}, {"start": 361.76, "end": 367.71999999999997, "text": " to see how much retrieving additional items results in more relevant recommendations."}, {"start": 367.71999999999997, "end": 375.12, "text": " And in particular, if the estimated probability that yij is equal to one, according to your"}, {"start": 375.12, "end": 382.08, "text": " neural network model, or if the estimated rating of y being high of the retrieved items"}, {"start": 382.08, "end": 387.4, "text": " according to your model's prediction ends up being much higher, if only you were to"}, {"start": 387.4, "end": 393.16, "text": " retrieve say five hundred items instead of only one hundred items, then that would argue"}, {"start": 393.16, "end": 399.24, "text": " for maybe retrieving more items even if it slows down the algorithm a bit."}, {"start": 399.24, "end": 405.08, "text": " So with the separate retrieval step and the ranking step, this allows many recommender"}, {"start": 405.08, "end": 412.08, "text": " systems today to give both fast as well as accurate results because retrieval step tries"}, {"start": 412.08, "end": 418.38, "text": " to prune out a lot of items that are just not worth doing the more detailed inference"}, {"start": 418.38, "end": 424.08, "text": " and inner product on, and then the ranking step makes a more careful prediction for what"}, {"start": 424.08, "end": 428.36, "text": " are the items that the user is actually likely to enjoy."}, {"start": 428.36, "end": 429.36, "text": " So that's it."}, {"start": 429.36, "end": 434.52000000000004, "text": " This is how you would make your recommender system work efficiently, even on very large"}, {"start": 434.52000000000004, "end": 438.68, "text": " catalogs of movies or products or what have you."}, {"start": 438.68, "end": 445.92, "text": " Now it turns out that as commercially important as our recommender systems, there are some"}, {"start": 445.92, "end": 449.96000000000004, "text": " significant ethical issues associated with them as well."}, {"start": 449.96000000000004, "end": 455.42, "text": " And unfortunately, there have been recommender systems that have created harm."}, {"start": 455.42, "end": 460.28000000000003, "text": " So as you build your own recommender system, I hope you take an ethical approach and use"}, {"start": 460.28000000000003, "end": 466.36, "text": " it to serve your users and society as large as well as yourself and the company that you"}, {"start": 466.36, "end": 468.04, "text": " might be working for."}, {"start": 468.04, "end": 486.44, "text": " Let's take a look at the ethical issues associated with recommender systems in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=GGnPWaNjaGM
9.11 Advanced implementation | Ethical use of recommender systems -- [ML |Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Even though recommender systems have been very profitable for some businesses, there have been some use cases that have left people and society at large worse off. However you use recommender systems, or for that matter other learning algorithms, I hope you only do things that make society at large and people better off. Let's take a look at some of the problematic use cases of recommender systems as well as ameliorations to reduce harm or to increase the amount of good that they can do. As you've seen in the last few videos, there are many ways of configuring a recommender system. When we saw binary labels, the label Y could be does the user engage or do they click or do they explicitly like an item. So when designing a recommender system, choices in setting the goal of the recommender system and a lot of choices in deciding what to recommend to users. For example, you can decide to recommend to users movies most likely to be rated 5 stars by that user. So that seems fine. This seems like a fine way to show users movies that they will like. Or maybe you can recommend to the user products that they are most likely to purchase. And that seems like a very reasonable use of a recommender system as well. Ameliorations of recommender systems can also be used to decide what ads to show to a user. And one thing you could do is to recommend or really to show to the user as the most likely to be clicked on. Actually, what many companies will do is try to show ads that are likely to click on and where the advertiser had put in a high bid because for many ad models, the revenue that the company collects depends on whether the ad was clicked on and what the advertiser had bid per click. And so while this is a profit maximizing strategy, there are also some possible negative implications of this type of advertising. I'll give a specific example on the next slide. One other thing that many companies do is try to recommend products that generate the largest profit. If you go to a website and search for a product today, there are many websites that are not showing you the most relevant product or the product that you are most likely to purchase, but is instead trying to show you the products that would generate the largest profit for the company. And so if a certain product is more profitable for them because they can buy it more cheaply and sell it at a higher price, that gets ranked higher in the recommendations. Now many companies feel a pressure to maximize profit. So this doesn't seem like an unreasonable thing to do, but on the flip side, from the user perspective, when a website recommends to you a product, sometimes it feels like it would be nice if the website was transparent with you about the criteria by which it is deciding what to show you. Is it trying to maximize their profits or trying to show you things that are most useful to you? On video websites or social media websites, a recommended system can also be modified to try to show you the content that leads to the maximum watch time. So specifically, websites that earn ad revenue tend to have an incentive to keep you on the website for a long time. And so try to maximize the time you spend on the site is one way for the site to try to get more of your time so they can show you more ads. And recommended systems today are used to try to maximize user engagements or to maximize the amount of time that someone spends on a site or a specific app. So whereas the first two of these seem quite innocuous, the third, fourth, and fifth, they may be just fine. They may not cause any harm at all, or they could also be problematic use cases for recommended systems. Let's take a deeper look at some of these potentially problematic use cases. Let me start with the advertising example. It turns out that the advertising industry can sometimes be an amplifier of some of the most harmful businesses. It can also be an amplifier of some of the best and the most fruitful businesses. Let me illustrate with a good example and a bad example. Take the travel industry. I think in the travel industry, the way to succeed is to try to give good travel experiences to users, to really try to serve users. Now it turns out that if there's a really good travel company that can sell your trip to fantastic destinations and make sure you and your friends and family have a lot of fun, then a good travel business, I think, will often end up being more profitable. And if a business is more profitable, it can then bid higher for ads. It can afford to pay more to get users. And because it can afford to bid higher for ads, an online advertising site will show its ads more often and drive more users to this good company. And this is a virtuous cycle where the more users you serve well, the more profitable the business, and the more you can bid more for ads and the more traffic you get and so on. And this virtuous circle will maybe even tend to help the good travel companies do even better. So this is a good example. Let's look at a problematic example. The payday loan industry tends to charge extremely high interest rates, often to low income individuals. And one of the ways to do well in the payday loan business is to be really efficient at squeezing customers for every single dollar you can get out of them. So if there's a payday loan company that is very good at exploiting customers, really squeezing customers for every single dollar, then that company will be more profitable. And thus, they can bid higher for ads. And because they can bid higher for ads, they will get more traffic sent to them. And this allows them to squeeze even more customers and exploit even more people for profit. And this in turn also creates a positive feedback loop. So a positive feedback loop that can cause the most exploitative, the most harmful payday loan companies to get sent more traffic. And this seems like the opposite effect than what we think would be good for society. I don't know that there's an easy solution to this. And these are very difficult problems that recommend the systems face. One amelioration might be to refuse to set ads from exploitative businesses. Of course, that's easy to say, but how do you define what is an exploitative business and what is not is a very difficult question. But as we build recommended systems for advertising or for other things, I think these are questions that each one of us working on these technologies should ask ourselves so that we can hopefully invite open discussion and debate, get multiple opinions from multiple people and try to come up with design choices that allows our systems to try to do much more good than potential harm. Let's look at some of the examples. It's been widely reported in news that maximizing user engagement, such as the amount of time that someone watches videos on a website or the amount of time someone spends on social media. This has led to large social media and video sharing sites to amplify conspiracy theories or hate and toxicity because conspiracy theories and certain types of hate toxic content is highly engaging and causes people to spend a lot of time on it. Even if the effect of amplifying conspiracy theories or amplifying hate and toxicity turns out to be harmful to individuals and to society at large. One amelioration for this, partial and imperfect, is to try to filter our problematic content, such as hate speech, fraud, scams, maybe certain types of violent content. And again, the definitions of what exactly we should filter out is surprisingly tricky to develop. And this is a set of problems that I think companies and individuals and even governments have to continue to wrestle with. Just one last example, when a user goes to many apps or websites, I think users think the Apple website are trying to recommend to the user things that they will like. And I think many users don't realize that many apps and websites are trying to maximize their profit rather than necessarily the user's enjoyment of the media items that are being recommended. I would encourage you and other companies, if at all possible, to be transparent with users about the criteria by which you are deciding what to recommend to them. I know this isn't always easy, but ultimately, I hope that being more transparent with users about what we're showing them and why will increase trust and also cause our systems to do more good for society. So recommend the systems that are very powerful technology, a very profitable, a very lucrative technology, and there are also some problematic use cases. If you are building one of these systems using recommended technology or really any other machine learning or other technology, I hope you think through not just the benefits you can create, but also the possible harm and invite diverse perspectives and discuss and debate and please only build things and do things that you really believe can leave society better off. I hope that collectively, all of us in AI can only do work that makes people better off. Thanks for listening. And we have just one more video to go in recommender systems in which we'll take a look at some practical tips for how to implement a content-based filtering algorithm in TensorFlow. So let's go on to that last video on recommender systems.
[{"start": 0.0, "end": 6.8, "text": " Even though recommender systems have been very profitable for some businesses, there"}, {"start": 6.8, "end": 13.96, "text": " have been some use cases that have left people and society at large worse off."}, {"start": 13.96, "end": 19.2, "text": " However you use recommender systems, or for that matter other learning algorithms, I hope"}, {"start": 19.2, "end": 24.44, "text": " you only do things that make society at large and people better off."}, {"start": 24.44, "end": 29.84, "text": " Let's take a look at some of the problematic use cases of recommender systems as well as"}, {"start": 29.84, "end": 35.34, "text": " ameliorations to reduce harm or to increase the amount of good that they can do."}, {"start": 35.34, "end": 40.2, "text": " As you've seen in the last few videos, there are many ways of configuring a recommender"}, {"start": 40.2, "end": 41.62, "text": " system."}, {"start": 41.62, "end": 47.28, "text": " When we saw binary labels, the label Y could be does the user engage or do they click or"}, {"start": 47.28, "end": 50.8, "text": " do they explicitly like an item."}, {"start": 50.8, "end": 57.32, "text": " So when designing a recommender system, choices in setting the goal of the recommender system"}, {"start": 57.32, "end": 62.28, "text": " and a lot of choices in deciding what to recommend to users."}, {"start": 62.28, "end": 69.24, "text": " For example, you can decide to recommend to users movies most likely to be rated 5 stars"}, {"start": 69.24, "end": 70.44, "text": " by that user."}, {"start": 70.44, "end": 71.44, "text": " So that seems fine."}, {"start": 71.44, "end": 75.84, "text": " This seems like a fine way to show users movies that they will like."}, {"start": 75.84, "end": 81.72, "text": " Or maybe you can recommend to the user products that they are most likely to purchase."}, {"start": 81.72, "end": 86.92, "text": " And that seems like a very reasonable use of a recommender system as well."}, {"start": 86.92, "end": 94.52, "text": " Ameliorations of recommender systems can also be used to decide what ads to show to a user."}, {"start": 94.52, "end": 100.44, "text": " And one thing you could do is to recommend or really to show to the user as the most"}, {"start": 100.44, "end": 102.44, "text": " likely to be clicked on."}, {"start": 102.44, "end": 108.64, "text": " Actually, what many companies will do is try to show ads that are likely to click on and"}, {"start": 108.64, "end": 117.32, "text": " where the advertiser had put in a high bid because for many ad models, the revenue that"}, {"start": 117.32, "end": 122.36, "text": " the company collects depends on whether the ad was clicked on and what the advertiser"}, {"start": 122.36, "end": 125.08, "text": " had bid per click."}, {"start": 125.08, "end": 133.16, "text": " And so while this is a profit maximizing strategy, there are also some possible negative implications"}, {"start": 133.16, "end": 134.64, "text": " of this type of advertising."}, {"start": 134.64, "end": 138.52, "text": " I'll give a specific example on the next slide."}, {"start": 138.52, "end": 144.48000000000002, "text": " One other thing that many companies do is try to recommend products that generate the"}, {"start": 144.48000000000002, "end": 146.38000000000002, "text": " largest profit."}, {"start": 146.38000000000002, "end": 152.28, "text": " If you go to a website and search for a product today, there are many websites that are not"}, {"start": 152.28, "end": 158.60000000000002, "text": " showing you the most relevant product or the product that you are most likely to purchase,"}, {"start": 158.60000000000002, "end": 163.4, "text": " but is instead trying to show you the products that would generate the largest profit for"}, {"start": 163.4, "end": 165.28, "text": " the company."}, {"start": 165.28, "end": 171.4, "text": " And so if a certain product is more profitable for them because they can buy it more cheaply"}, {"start": 171.4, "end": 176.56, "text": " and sell it at a higher price, that gets ranked higher in the recommendations."}, {"start": 176.56, "end": 180.32, "text": " Now many companies feel a pressure to maximize profit."}, {"start": 180.32, "end": 186.2, "text": " So this doesn't seem like an unreasonable thing to do, but on the flip side, from the"}, {"start": 186.2, "end": 191.32, "text": " user perspective, when a website recommends to you a product, sometimes it feels like"}, {"start": 191.32, "end": 196.28, "text": " it would be nice if the website was transparent with you about the criteria by which it is"}, {"start": 196.28, "end": 197.88, "text": " deciding what to show you."}, {"start": 197.88, "end": 203.51999999999998, "text": " Is it trying to maximize their profits or trying to show you things that are most useful"}, {"start": 203.51999999999998, "end": 205.0, "text": " to you?"}, {"start": 205.0, "end": 211.95999999999998, "text": " On video websites or social media websites, a recommended system can also be modified"}, {"start": 211.95999999999998, "end": 219.04, "text": " to try to show you the content that leads to the maximum watch time."}, {"start": 219.04, "end": 225.84, "text": " So specifically, websites that earn ad revenue tend to have an incentive to keep you on the"}, {"start": 225.84, "end": 228.0, "text": " website for a long time."}, {"start": 228.0, "end": 233.79999999999998, "text": " And so try to maximize the time you spend on the site is one way for the site to try"}, {"start": 233.79999999999998, "end": 238.07999999999998, "text": " to get more of your time so they can show you more ads."}, {"start": 238.07999999999998, "end": 244.34, "text": " And recommended systems today are used to try to maximize user engagements or to maximize"}, {"start": 244.34, "end": 248.32, "text": " the amount of time that someone spends on a site or a specific app."}, {"start": 248.32, "end": 254.23999999999998, "text": " So whereas the first two of these seem quite innocuous, the third, fourth, and fifth, they"}, {"start": 254.23999999999998, "end": 255.23999999999998, "text": " may be just fine."}, {"start": 255.23999999999998, "end": 261.64, "text": " They may not cause any harm at all, or they could also be problematic use cases for recommended"}, {"start": 261.64, "end": 263.4, "text": " systems."}, {"start": 263.4, "end": 268.88, "text": " Let's take a deeper look at some of these potentially problematic use cases."}, {"start": 268.88, "end": 272.96, "text": " Let me start with the advertising example."}, {"start": 272.96, "end": 278.76, "text": " It turns out that the advertising industry can sometimes be an amplifier of some of the"}, {"start": 278.76, "end": 281.44, "text": " most harmful businesses."}, {"start": 281.44, "end": 287.4, "text": " It can also be an amplifier of some of the best and the most fruitful businesses."}, {"start": 287.4, "end": 290.84, "text": " Let me illustrate with a good example and a bad example."}, {"start": 290.84, "end": 292.44, "text": " Take the travel industry."}, {"start": 292.44, "end": 298.0, "text": " I think in the travel industry, the way to succeed is to try to give good travel experiences"}, {"start": 298.0, "end": 301.2, "text": " to users, to really try to serve users."}, {"start": 301.2, "end": 306.56, "text": " Now it turns out that if there's a really good travel company that can sell your trip"}, {"start": 306.56, "end": 310.88, "text": " to fantastic destinations and make sure you and your friends and family have a lot of"}, {"start": 310.88, "end": 317.88, "text": " fun, then a good travel business, I think, will often end up being more profitable."}, {"start": 317.88, "end": 323.08, "text": " And if a business is more profitable, it can then bid higher for ads."}, {"start": 323.08, "end": 327.08, "text": " It can afford to pay more to get users."}, {"start": 327.08, "end": 333.47999999999996, "text": " And because it can afford to bid higher for ads, an online advertising site will show"}, {"start": 333.47999999999996, "end": 337.88, "text": " its ads more often and drive more users to this good company."}, {"start": 337.88, "end": 342.47999999999996, "text": " And this is a virtuous cycle where the more users you serve well, the more profitable"}, {"start": 342.47999999999996, "end": 346.88, "text": " the business, and the more you can bid more for ads and the more traffic you get and so"}, {"start": 346.88, "end": 347.88, "text": " on."}, {"start": 347.88, "end": 353.71999999999997, "text": " And this virtuous circle will maybe even tend to help the good travel companies do even"}, {"start": 353.71999999999997, "end": 354.71999999999997, "text": " better."}, {"start": 354.71999999999997, "end": 356.52, "text": " So this is a good example."}, {"start": 356.52, "end": 358.96, "text": " Let's look at a problematic example."}, {"start": 358.96, "end": 367.32, "text": " The payday loan industry tends to charge extremely high interest rates, often to low income individuals."}, {"start": 367.32, "end": 371.96, "text": " And one of the ways to do well in the payday loan business is to be really efficient at"}, {"start": 371.96, "end": 376.2, "text": " squeezing customers for every single dollar you can get out of them."}, {"start": 376.2, "end": 382.2, "text": " So if there's a payday loan company that is very good at exploiting customers, really"}, {"start": 382.2, "end": 388.12, "text": " squeezing customers for every single dollar, then that company will be more profitable."}, {"start": 388.12, "end": 390.88, "text": " And thus, they can bid higher for ads."}, {"start": 390.88, "end": 396.44, "text": " And because they can bid higher for ads, they will get more traffic sent to them."}, {"start": 396.44, "end": 400.84, "text": " And this allows them to squeeze even more customers and exploit even more people for"}, {"start": 400.84, "end": 402.12, "text": " profit."}, {"start": 402.12, "end": 406.2, "text": " And this in turn also creates a positive feedback loop."}, {"start": 406.2, "end": 412.24, "text": " So a positive feedback loop that can cause the most exploitative, the most harmful payday"}, {"start": 412.24, "end": 416.2, "text": " loan companies to get sent more traffic."}, {"start": 416.2, "end": 422.32, "text": " And this seems like the opposite effect than what we think would be good for society."}, {"start": 422.32, "end": 425.2, "text": " I don't know that there's an easy solution to this."}, {"start": 425.2, "end": 430.48, "text": " And these are very difficult problems that recommend the systems face."}, {"start": 430.48, "end": 436.22, "text": " One amelioration might be to refuse to set ads from exploitative businesses."}, {"start": 436.22, "end": 440.84000000000003, "text": " Of course, that's easy to say, but how do you define what is an exploitative business"}, {"start": 440.84000000000003, "end": 444.44, "text": " and what is not is a very difficult question."}, {"start": 444.44, "end": 449.96000000000004, "text": " But as we build recommended systems for advertising or for other things, I think these are questions"}, {"start": 449.96000000000004, "end": 456.86, "text": " that each one of us working on these technologies should ask ourselves so that we can hopefully"}, {"start": 456.86, "end": 462.96000000000004, "text": " invite open discussion and debate, get multiple opinions from multiple people and try to come"}, {"start": 462.96000000000004, "end": 469.92, "text": " up with design choices that allows our systems to try to do much more good than potential"}, {"start": 469.92, "end": 470.92, "text": " harm."}, {"start": 470.92, "end": 472.92, "text": " Let's look at some of the examples."}, {"start": 472.92, "end": 478.48, "text": " It's been widely reported in news that maximizing user engagement, such as the amount of time"}, {"start": 478.48, "end": 484.28000000000003, "text": " that someone watches videos on a website or the amount of time someone spends on social"}, {"start": 484.28000000000003, "end": 485.6, "text": " media."}, {"start": 485.6, "end": 491.06, "text": " This has led to large social media and video sharing sites to amplify conspiracy theories"}, {"start": 491.06, "end": 498.48, "text": " or hate and toxicity because conspiracy theories and certain types of hate toxic content is"}, {"start": 498.48, "end": 502.78000000000003, "text": " highly engaging and causes people to spend a lot of time on it."}, {"start": 502.78000000000003, "end": 509.12, "text": " Even if the effect of amplifying conspiracy theories or amplifying hate and toxicity turns"}, {"start": 509.12, "end": 513.64, "text": " out to be harmful to individuals and to society at large."}, {"start": 513.64, "end": 519.4399999999999, "text": " One amelioration for this, partial and imperfect, is to try to filter our problematic content,"}, {"start": 519.4399999999999, "end": 524.08, "text": " such as hate speech, fraud, scams, maybe certain types of violent content."}, {"start": 524.08, "end": 530.68, "text": " And again, the definitions of what exactly we should filter out is surprisingly tricky"}, {"start": 530.68, "end": 532.22, "text": " to develop."}, {"start": 532.22, "end": 537.74, "text": " And this is a set of problems that I think companies and individuals and even governments"}, {"start": 537.74, "end": 540.22, "text": " have to continue to wrestle with."}, {"start": 540.22, "end": 547.32, "text": " Just one last example, when a user goes to many apps or websites, I think users think"}, {"start": 547.32, "end": 552.5400000000001, "text": " the Apple website are trying to recommend to the user things that they will like."}, {"start": 552.5400000000001, "end": 557.48, "text": " And I think many users don't realize that many apps and websites are trying to maximize"}, {"start": 557.48, "end": 564.4, "text": " their profit rather than necessarily the user's enjoyment of the media items that are being"}, {"start": 564.4, "end": 565.4, "text": " recommended."}, {"start": 565.4, "end": 570.3199999999999, "text": " I would encourage you and other companies, if at all possible, to be transparent with"}, {"start": 570.3199999999999, "end": 574.92, "text": " users about the criteria by which you are deciding what to recommend to them."}, {"start": 574.92, "end": 582.0799999999999, "text": " I know this isn't always easy, but ultimately, I hope that being more transparent with users"}, {"start": 582.0799999999999, "end": 587.92, "text": " about what we're showing them and why will increase trust and also cause our systems"}, {"start": 587.92, "end": 590.12, "text": " to do more good for society."}, {"start": 590.12, "end": 595.1999999999999, "text": " So recommend the systems that are very powerful technology, a very profitable, a very lucrative"}, {"start": 595.2, "end": 599.7800000000001, "text": " technology, and there are also some problematic use cases."}, {"start": 599.7800000000001, "end": 604.5600000000001, "text": " If you are building one of these systems using recommended technology or really any other"}, {"start": 604.5600000000001, "end": 609.94, "text": " machine learning or other technology, I hope you think through not just the benefits you"}, {"start": 609.94, "end": 615.6, "text": " can create, but also the possible harm and invite diverse perspectives and discuss and"}, {"start": 615.6, "end": 622.24, "text": " debate and please only build things and do things that you really believe can leave society"}, {"start": 622.24, "end": 624.46, "text": " better off."}, {"start": 624.46, "end": 629.88, "text": " I hope that collectively, all of us in AI can only do work that makes people better"}, {"start": 629.88, "end": 630.88, "text": " off."}, {"start": 630.88, "end": 633.48, "text": " Thanks for listening."}, {"start": 633.48, "end": 638.76, "text": " And we have just one more video to go in recommender systems in which we'll take a look at some"}, {"start": 638.76, "end": 644.9200000000001, "text": " practical tips for how to implement a content-based filtering algorithm in TensorFlow."}, {"start": 644.92, "end": 655.36, "text": " So let's go on to that last video on recommender systems."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=_ofRJ2tS_v8
9.12 Content-based Filtering| TensorFlow implementation of content-based filtering -[ML |Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
In the practice lab, you'll see how to implement content-based filtering in TensorFlow. What I'd like to do in this video is just step through with you a few of the key concepts in the code that you get to play with. Let's take a look. Recall that our code has started with a user network as well as a movie network. And the way you can implement this in TensorFlow is very similar to how we had previously implemented a neural network with a set of dense layers. We're going to use a sequential model. We then, in this example, have two dense layers with the number of hidden units specified here. And then the final layer has 32 units and outputs 32 numbers. Then for the movie network, I'm going to call it the item network because the movies are the items here. This is what the code looks like. Once again, we have a couple of dense hidden layers followed by this layer, which outputs 32 numbers. And for the hidden layers, we'll use our default choice of activation function, which is the ReLU activation function. Next, we need to tell TensorFlow carers how to feed the user features or the item features, that is the movie features, to the two neural networks. This is the syntax for doing so. That extracts out the input features for the user and then feeds it to the user and then we had to find up here to compute VU, the vector for the user. And then one additional step that turns out to make this algorithm work a bit better is add this line here, which normalizes the vector VU to have length one. So this normalizes the length, also called the L2 norm, but basically the length of the vector VU to be equal to one. And then we do the same thing for the item network, for the movie network. This extracts out the item features and feeds it to the item neural network that we define up there. And this computes the movie vector VM. And then finally, this step also normalizes that vector to have length one. After having computed VU and VM, we then have to take the dot product between these two vectors. And this is the syntax for doing so. Carers has a special layer type. Notice we had here TF, carers layers dense. Here this is TF, carers layers dot. It turns out that there's a special carers layer that just takes a dot product between two numbers. And so we're going to use that to take the dot product between the vectors VU and VM. And this gives the output of the neural network. This gives the final prediction. Finally to tell Carers what are the inputs and outputs of the model, this line tells it that the overall model is a model with inputs being the user features and movie or the item features and the output is this output that we just defined up above. And the cost function that we use to train this model is going to be the mean squared error cost function. So these are the key code snippets for implementing content based filtering as a neural network. And you see the rest of the code in the practice lab. But hopefully you'll be able to play with that and see how all these code snippets fit together into a working TensorFlow implementation of a content based filtering algorithm. It turns out that there's one other step that I didn't talk about previously, but if you do this, which is normalize the length of the vector VU, that makes the algorithm work a bit better. And so TensorFlow has this L2 normalize function that normalizes the vector. It's also called normalizing the L2 norm of the vector, hence the name of the function. And so that's it. Thanks for sticking with me through all this material on recommender systems. It's an exciting technology and I hope you enjoy playing with these ideas in code in the practice lab for this week. And so that takes us to the last of these videos on recommender systems and to the end of the next to final week for this specialization. I look forward to seeing you next week as well. We'll talk about the exciting technology of reinforcement learning. Hope you have fun with the quizzes and with the practice labs and I look forward to seeing you next week.
[{"start": 0.0, "end": 7.28, "text": " In the practice lab, you'll see how to implement content-based filtering in TensorFlow."}, {"start": 7.28, "end": 11.94, "text": " What I'd like to do in this video is just step through with you a few of the key concepts"}, {"start": 11.94, "end": 14.42, "text": " in the code that you get to play with."}, {"start": 14.42, "end": 15.42, "text": " Let's take a look."}, {"start": 15.42, "end": 23.02, "text": " Recall that our code has started with a user network as well as a movie network."}, {"start": 23.02, "end": 29.86, "text": " And the way you can implement this in TensorFlow is very similar to how we had previously implemented"}, {"start": 29.86, "end": 34.44, "text": " a neural network with a set of dense layers."}, {"start": 34.44, "end": 36.519999999999996, "text": " We're going to use a sequential model."}, {"start": 36.519999999999996, "end": 41.36, "text": " We then, in this example, have two dense layers with the number of hidden units specified"}, {"start": 41.36, "end": 42.36, "text": " here."}, {"start": 42.36, "end": 47.44, "text": " And then the final layer has 32 units and outputs 32 numbers."}, {"start": 47.44, "end": 52.64, "text": " Then for the movie network, I'm going to call it the item network because the movies are"}, {"start": 52.64, "end": 54.019999999999996, "text": " the items here."}, {"start": 54.019999999999996, "end": 55.84, "text": " This is what the code looks like."}, {"start": 55.84, "end": 62.440000000000005, "text": " Once again, we have a couple of dense hidden layers followed by this layer, which outputs"}, {"start": 62.440000000000005, "end": 63.440000000000005, "text": " 32 numbers."}, {"start": 63.440000000000005, "end": 69.84, "text": " And for the hidden layers, we'll use our default choice of activation function, which is the"}, {"start": 69.84, "end": 72.60000000000001, "text": " ReLU activation function."}, {"start": 72.60000000000001, "end": 80.68, "text": " Next, we need to tell TensorFlow carers how to feed the user features or the item features,"}, {"start": 80.68, "end": 84.32000000000001, "text": " that is the movie features, to the two neural networks."}, {"start": 84.32, "end": 86.55999999999999, "text": " This is the syntax for doing so."}, {"start": 86.55999999999999, "end": 94.0, "text": " That extracts out the input features for the user and then feeds it to the user and then"}, {"start": 94.0, "end": 99.6, "text": " we had to find up here to compute VU, the vector for the user."}, {"start": 99.6, "end": 103.88, "text": " And then one additional step that turns out to make this algorithm work a bit better is"}, {"start": 103.88, "end": 109.72, "text": " add this line here, which normalizes the vector VU to have length one."}, {"start": 109.72, "end": 114.96, "text": " So this normalizes the length, also called the L2 norm, but basically the length of the"}, {"start": 114.96, "end": 117.88, "text": " vector VU to be equal to one."}, {"start": 117.88, "end": 122.48, "text": " And then we do the same thing for the item network, for the movie network."}, {"start": 122.48, "end": 129.82, "text": " This extracts out the item features and feeds it to the item neural network that we define"}, {"start": 129.82, "end": 131.1, "text": " up there."}, {"start": 131.1, "end": 136.32, "text": " And this computes the movie vector VM."}, {"start": 136.32, "end": 142.12, "text": " And then finally, this step also normalizes that vector to have length one."}, {"start": 142.12, "end": 149.72, "text": " After having computed VU and VM, we then have to take the dot product between these two"}, {"start": 149.72, "end": 150.72, "text": " vectors."}, {"start": 150.72, "end": 153.48, "text": " And this is the syntax for doing so."}, {"start": 153.48, "end": 156.92, "text": " Carers has a special layer type."}, {"start": 156.92, "end": 160.88, "text": " Notice we had here TF, carers layers dense."}, {"start": 160.88, "end": 163.98, "text": " Here this is TF, carers layers dot."}, {"start": 163.98, "end": 168.16, "text": " It turns out that there's a special carers layer that just takes a dot product between"}, {"start": 168.16, "end": 169.6, "text": " two numbers."}, {"start": 169.6, "end": 176.51999999999998, "text": " And so we're going to use that to take the dot product between the vectors VU and VM."}, {"start": 176.51999999999998, "end": 180.1, "text": " And this gives the output of the neural network."}, {"start": 180.1, "end": 183.44, "text": " This gives the final prediction."}, {"start": 183.44, "end": 188.72, "text": " Finally to tell Carers what are the inputs and outputs of the model, this line tells"}, {"start": 188.72, "end": 195.8, "text": " it that the overall model is a model with inputs being the user features and movie or"}, {"start": 195.8, "end": 201.72, "text": " the item features and the output is this output that we just defined up above."}, {"start": 201.72, "end": 207.12, "text": " And the cost function that we use to train this model is going to be the mean squared"}, {"start": 207.12, "end": 208.96, "text": " error cost function."}, {"start": 208.96, "end": 215.88, "text": " So these are the key code snippets for implementing content based filtering as a neural network."}, {"start": 215.88, "end": 220.24, "text": " And you see the rest of the code in the practice lab."}, {"start": 220.24, "end": 225.51999999999998, "text": " But hopefully you'll be able to play with that and see how all these code snippets fit"}, {"start": 225.51999999999998, "end": 231.48, "text": " together into a working TensorFlow implementation of a content based filtering algorithm."}, {"start": 231.48, "end": 235.16, "text": " It turns out that there's one other step that I didn't talk about previously, but if you"}, {"start": 235.16, "end": 240.94, "text": " do this, which is normalize the length of the vector VU, that makes the algorithm work"}, {"start": 240.94, "end": 242.26, "text": " a bit better."}, {"start": 242.26, "end": 248.95999999999998, "text": " And so TensorFlow has this L2 normalize function that normalizes the vector."}, {"start": 248.95999999999998, "end": 254.0, "text": " It's also called normalizing the L2 norm of the vector, hence the name of the function."}, {"start": 254.0, "end": 255.6, "text": " And so that's it."}, {"start": 255.6, "end": 260.28, "text": " Thanks for sticking with me through all this material on recommender systems."}, {"start": 260.28, "end": 265.08, "text": " It's an exciting technology and I hope you enjoy playing with these ideas in code in"}, {"start": 265.08, "end": 267.59999999999997, "text": " the practice lab for this week."}, {"start": 267.6, "end": 272.8, "text": " And so that takes us to the last of these videos on recommender systems and to the end"}, {"start": 272.8, "end": 277.28000000000003, "text": " of the next to final week for this specialization."}, {"start": 277.28000000000003, "end": 279.96000000000004, "text": " I look forward to seeing you next week as well."}, {"start": 279.96000000000004, "end": 283.72, "text": " We'll talk about the exciting technology of reinforcement learning."}, {"start": 283.72, "end": 287.24, "text": " Hope you have fun with the quizzes and with the practice labs and I look forward to seeing"}, {"start": 287.24, "end": 297.92, "text": " you next week."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=I-z8EreP1Bs
10.1 Reinforcement Learning Introduction | What is Reinforcement Learning? -[ML |Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Welcome to this final week of the machine learning specialization. It's a little bit bittersweet for me that we're approaching the end of this specialization, but I'm looking forward to this week sharing with you some exciting ideas about reinforcement learning. In machine learning, reinforcement learning is one of those ideas that, while not very widely applied in commercial applications yet today, is one of the pillars of machine learning and has lots of exciting research backing it up and improving it every single day. So let's start by taking a look at what is reinforcement learning. Let's start with an example. Here's a picture of an autonomous helicopter. This is actually the Stanford autonomous helicopter, weighs 32 pounds and is actually sitting in my office right now. Like many other autonomous helicopters, it's instrumented with an onboard computer, GPS, accelerometers, gyroscope, and magnetic compass, so it knows where it is at all times quite accurately. And if I were to give you the keys to this helicopter and ask you to write a program to fly it, how would you do so? Radio-controlled helicopters are controlled with joysticks like these, and so the task is 10 times per second you're given the position and orientation and speed and so on of the helicopter, and you have to decide how to move these two control sticks in order to keep the helicopter balanced in the air. By the way, I've flown radio-controlled helicopters as well as quad-rotor drones myself, and radio-controlled helicopters are actually quite a bit harder to fly, quite a bit harder to keep balanced in the air. So how would you write a program to do this automatically? Let me show you a fun video of something we got the Stanford autonomous helicopter to do. Here's a video of it flying under the control of a reinforcement learning algorithm, and let me play the video. I was actually the cameraman that day, and this is a helicopter flying under computer control, and if I zoom out the video, you see the trees planted in the sky. So using reinforcement learning, we actually got this helicopter to learn to fly upside down. We told it to fly upside down, and so reinforcement learning has been used to get helicopters to fly a wide range of stunts, or we call them aerobatic maneuvers. By the way, if you're interested in seeing other videos, you can also check them out at this URL. So how do you get a helicopter to fly itself using reinforcement learning? The task is, given the position of the helicopter, to decide how to move the control sticks. In reinforcement learning, we call the position and orientation and speed and so on of the helicopter the state s, and so the task is to find a function that maps from the state of the helicopter to an action a, meaning how far to push the two control sticks in order to keep the helicopter balanced in the air and flying and without crashing. One way you could attempt this problem is to use supervised learning. It turns out this is not a great approach for autonomous helicopter flying, but you could say, well, if we could get a bunch of observations of states and maybe have an expert human pilot tell us what's the best action y to take, you could then train a neural network using supervised learning to directly learn the mapping from the state s, which I'm calling x here, to an action a, which I'm calling the label y here. But it turns out that when the helicopter is moving through the air, it's actually very ambiguous what is the exact one right action to take. Do you tilt a bit to the left or a lot more to the left or increase the helicopter stress a little bit or a lot? It's actually very difficult to get a data set of x and the ideal action y. So that's why for a lot of tasks of controlling a robot like a helicopter and other robots, the supervised learning approach doesn't work well and we instead use reinforcement learning. Now, a key input to a reinforcement learning is something called the reward or the reward function, which tells the helicopter when it's doing well and when it's doing poorly. So the way I like to think of a reward function is it's a bit like training a dog. When I was growing up, my family had a dog and it was my job to train the dog or the puppy to behave. So how do you get the puppy to behave well? Well, you can't demonstrate that much to the puppy. Instead, you let us do a thing and whenever it does something good, you go, oh, good dog. And whenever it did something bad, you go, bad dog. And then hopefully it learns by itself how to do more of the good dog and fewer of the bad dog things. So training for reinforcement learning is like that. When the helicopter is flying well, you go, oh, good helicopter. And if it does something bad, like trash, you go, bad helicopter. And then it's a reinforcement learning algorithm's job to figure out how to get more of the good helicopter and fewer of the bad helicopter outcomes. One way to think of why reinforcement learning is so powerful is you have to tell it what to do rather than how to do it and specifying the reward function rather than the actual action gives you a lot more flexibility in how you design the system. Concretely, for flying the helicopter, whenever it is flying well, you may give it a reward of plus one every second that is flying well. And maybe whenever it's flying poorly, you may give it a negative reward. Or if it ever crashes, you may give it a very large negative reward, like negative one thousand. And so this will incentivize the helicopter to spend a lot more time flying well and hopefully to never crash. But here's another fun video. I was using the good dog, bad dog analogy for reinforcement learning for many years. And then one day I actually managed to get my hands on a robotic dog and could actually use this reinforcement learning good dog, bad dog methodology to train a robot dog to get over obstacles. So this is a video of a robot dog that using reinforcement learning, which rewards it moving toward the left of the screen, has learned how to place his feet carefully or climb over a variety of obstacles. And if you think about what it takes to program a dog like this, I have no idea. I really don't know how to tell it what's the best way to place his legs to get over a given obstacle. All of these things were figured out automatically by the robot just by giving it rewards that incentivizes it, making progress toward the goal on the left of the screen. Today, reinforcement learning has been successfully applied to a variety of applications ranging from controlling robots. And in fact, later this week in the practice lab, you implement for yourself a reinforcement learning algorithm to land a lunar lander in simulation. It's also been used for factory optimization. How do you rearrange things in the factory to maximize throughput and efficiency as well as financial stock trading? For example, one of my friends was working on efficient stock execution. So if you've decided to sell a million shares over the next several days, well, you may not want to dump a million shares on the stock market suddenly because that will move prices against you. So what's the best way to sequence out your trades over time so that you can sell the shares you want to sell and hopefully get the best possible price for them? Finally, there have also been many applications of reinforcement learning to play in games, everything from checkers to chess to the card game of bridge to go as well as for playing many video games. So that's reinforcement learning. Even though reinforcement learning is not used nearly as much as supervised learning, it is still used in a few applications today. And the key idea is rather than you needing to tell the algorithm what is the right output, why for every single input, all you have to do instead is specify a reward function that tells it when it's doing well and when it's doing poorly. And it's a job of the algorithm to automatically figure out how to choose good actions. With that, let's now go into the next video where we'll formalize the reinforcement learning problem and also start to develop algorithms for automatically picking good actions.
[{"start": 0.0, "end": 6.0, "text": " Welcome to this final week of the machine learning specialization."}, {"start": 6.0, "end": 10.0, "text": " It's a little bit bittersweet for me that we're approaching the end of this specialization,"}, {"start": 10.0, "end": 15.0, "text": " but I'm looking forward to this week sharing with you some exciting ideas about reinforcement learning."}, {"start": 15.0, "end": 20.0, "text": " In machine learning, reinforcement learning is one of those ideas that,"}, {"start": 20.0, "end": 24.0, "text": " while not very widely applied in commercial applications yet today,"}, {"start": 24.0, "end": 29.0, "text": " is one of the pillars of machine learning and has lots of exciting research backing it up"}, {"start": 29.0, "end": 31.0, "text": " and improving it every single day."}, {"start": 31.0, "end": 36.0, "text": " So let's start by taking a look at what is reinforcement learning."}, {"start": 36.0, "end": 38.0, "text": " Let's start with an example."}, {"start": 38.0, "end": 41.0, "text": " Here's a picture of an autonomous helicopter."}, {"start": 41.0, "end": 43.0, "text": " This is actually the Stanford autonomous helicopter,"}, {"start": 43.0, "end": 47.0, "text": " weighs 32 pounds and is actually sitting in my office right now."}, {"start": 47.0, "end": 51.0, "text": " Like many other autonomous helicopters, it's instrumented with an onboard computer,"}, {"start": 51.0, "end": 55.0, "text": " GPS, accelerometers, gyroscope, and magnetic compass,"}, {"start": 55.0, "end": 59.0, "text": " so it knows where it is at all times quite accurately."}, {"start": 59.0, "end": 65.0, "text": " And if I were to give you the keys to this helicopter and ask you to write a program to fly it,"}, {"start": 65.0, "end": 67.0, "text": " how would you do so?"}, {"start": 67.0, "end": 70.0, "text": " Radio-controlled helicopters are controlled with joysticks like these,"}, {"start": 70.0, "end": 76.0, "text": " and so the task is 10 times per second you're given the position and orientation and speed"}, {"start": 76.0, "end": 78.0, "text": " and so on of the helicopter,"}, {"start": 78.0, "end": 84.0, "text": " and you have to decide how to move these two control sticks in order to keep the helicopter balanced in the air."}, {"start": 84.0, "end": 90.0, "text": " By the way, I've flown radio-controlled helicopters as well as quad-rotor drones myself,"}, {"start": 90.0, "end": 93.0, "text": " and radio-controlled helicopters are actually quite a bit harder to fly,"}, {"start": 93.0, "end": 95.0, "text": " quite a bit harder to keep balanced in the air."}, {"start": 95.0, "end": 99.0, "text": " So how would you write a program to do this automatically?"}, {"start": 99.0, "end": 104.0, "text": " Let me show you a fun video of something we got the Stanford autonomous helicopter to do."}, {"start": 104.0, "end": 109.0, "text": " Here's a video of it flying under the control of a reinforcement learning algorithm,"}, {"start": 109.0, "end": 111.0, "text": " and let me play the video."}, {"start": 111.0, "end": 116.0, "text": " I was actually the cameraman that day, and this is a helicopter flying under computer control,"}, {"start": 116.0, "end": 121.0, "text": " and if I zoom out the video, you see the trees planted in the sky."}, {"start": 121.0, "end": 126.0, "text": " So using reinforcement learning, we actually got this helicopter to learn to fly upside down."}, {"start": 126.0, "end": 128.0, "text": " We told it to fly upside down,"}, {"start": 128.0, "end": 134.0, "text": " and so reinforcement learning has been used to get helicopters to fly a wide range of stunts,"}, {"start": 134.0, "end": 137.0, "text": " or we call them aerobatic maneuvers."}, {"start": 137.0, "end": 142.0, "text": " By the way, if you're interested in seeing other videos, you can also check them out at this URL."}, {"start": 142.0, "end": 148.0, "text": " So how do you get a helicopter to fly itself using reinforcement learning?"}, {"start": 148.0, "end": 155.0, "text": " The task is, given the position of the helicopter, to decide how to move the control sticks."}, {"start": 155.0, "end": 163.0, "text": " In reinforcement learning, we call the position and orientation and speed and so on of the helicopter the state s,"}, {"start": 163.0, "end": 171.0, "text": " and so the task is to find a function that maps from the state of the helicopter to an action a,"}, {"start": 171.0, "end": 180.0, "text": " meaning how far to push the two control sticks in order to keep the helicopter balanced in the air and flying and without crashing."}, {"start": 180.0, "end": 185.0, "text": " One way you could attempt this problem is to use supervised learning."}, {"start": 185.0, "end": 189.0, "text": " It turns out this is not a great approach for autonomous helicopter flying,"}, {"start": 189.0, "end": 201.0, "text": " but you could say, well, if we could get a bunch of observations of states and maybe have an expert human pilot tell us what's the best action y to take,"}, {"start": 201.0, "end": 208.0, "text": " you could then train a neural network using supervised learning to directly learn the mapping from the state s,"}, {"start": 208.0, "end": 214.0, "text": " which I'm calling x here, to an action a, which I'm calling the label y here."}, {"start": 214.0, "end": 224.0, "text": " But it turns out that when the helicopter is moving through the air, it's actually very ambiguous what is the exact one right action to take."}, {"start": 224.0, "end": 230.0, "text": " Do you tilt a bit to the left or a lot more to the left or increase the helicopter stress a little bit or a lot?"}, {"start": 230.0, "end": 236.0, "text": " It's actually very difficult to get a data set of x and the ideal action y."}, {"start": 236.0, "end": 242.0, "text": " So that's why for a lot of tasks of controlling a robot like a helicopter and other robots,"}, {"start": 242.0, "end": 248.0, "text": " the supervised learning approach doesn't work well and we instead use reinforcement learning."}, {"start": 248.0, "end": 256.0, "text": " Now, a key input to a reinforcement learning is something called the reward or the reward function,"}, {"start": 256.0, "end": 261.0, "text": " which tells the helicopter when it's doing well and when it's doing poorly."}, {"start": 261.0, "end": 266.0, "text": " So the way I like to think of a reward function is it's a bit like training a dog."}, {"start": 266.0, "end": 273.0, "text": " When I was growing up, my family had a dog and it was my job to train the dog or the puppy to behave."}, {"start": 273.0, "end": 276.0, "text": " So how do you get the puppy to behave well?"}, {"start": 276.0, "end": 279.0, "text": " Well, you can't demonstrate that much to the puppy."}, {"start": 279.0, "end": 284.0, "text": " Instead, you let us do a thing and whenever it does something good, you go, oh, good dog."}, {"start": 284.0, "end": 287.0, "text": " And whenever it did something bad, you go, bad dog."}, {"start": 287.0, "end": 294.0, "text": " And then hopefully it learns by itself how to do more of the good dog and fewer of the bad dog things."}, {"start": 294.0, "end": 297.0, "text": " So training for reinforcement learning is like that."}, {"start": 297.0, "end": 300.0, "text": " When the helicopter is flying well, you go, oh, good helicopter."}, {"start": 300.0, "end": 304.0, "text": " And if it does something bad, like trash, you go, bad helicopter."}, {"start": 304.0, "end": 308.0, "text": " And then it's a reinforcement learning algorithm's job to figure out how to get more of the good helicopter"}, {"start": 308.0, "end": 312.0, "text": " and fewer of the bad helicopter outcomes."}, {"start": 312.0, "end": 320.0, "text": " One way to think of why reinforcement learning is so powerful is you have to tell it what to do rather than how to do it"}, {"start": 320.0, "end": 328.0, "text": " and specifying the reward function rather than the actual action gives you a lot more flexibility in how you design the system."}, {"start": 328.0, "end": 339.0, "text": " Concretely, for flying the helicopter, whenever it is flying well, you may give it a reward of plus one every second that is flying well."}, {"start": 339.0, "end": 344.0, "text": " And maybe whenever it's flying poorly, you may give it a negative reward."}, {"start": 344.0, "end": 350.0, "text": " Or if it ever crashes, you may give it a very large negative reward, like negative one thousand."}, {"start": 350.0, "end": 357.0, "text": " And so this will incentivize the helicopter to spend a lot more time flying well and hopefully to never crash."}, {"start": 357.0, "end": 360.0, "text": " But here's another fun video."}, {"start": 360.0, "end": 366.0, "text": " I was using the good dog, bad dog analogy for reinforcement learning for many years."}, {"start": 366.0, "end": 371.0, "text": " And then one day I actually managed to get my hands on a robotic dog"}, {"start": 371.0, "end": 378.0, "text": " and could actually use this reinforcement learning good dog, bad dog methodology to train a robot dog to get over obstacles."}, {"start": 378.0, "end": 384.0, "text": " So this is a video of a robot dog that using reinforcement learning,"}, {"start": 384.0, "end": 393.0, "text": " which rewards it moving toward the left of the screen, has learned how to place his feet carefully or climb over a variety of obstacles."}, {"start": 393.0, "end": 399.0, "text": " And if you think about what it takes to program a dog like this, I have no idea."}, {"start": 399.0, "end": 405.0, "text": " I really don't know how to tell it what's the best way to place his legs to get over a given obstacle."}, {"start": 405.0, "end": 412.0, "text": " All of these things were figured out automatically by the robot just by giving it rewards that incentivizes it,"}, {"start": 412.0, "end": 416.0, "text": " making progress toward the goal on the left of the screen."}, {"start": 416.0, "end": 424.0, "text": " Today, reinforcement learning has been successfully applied to a variety of applications ranging from controlling robots."}, {"start": 424.0, "end": 435.0, "text": " And in fact, later this week in the practice lab, you implement for yourself a reinforcement learning algorithm to land a lunar lander in simulation."}, {"start": 435.0, "end": 438.0, "text": " It's also been used for factory optimization."}, {"start": 438.0, "end": 446.0, "text": " How do you rearrange things in the factory to maximize throughput and efficiency as well as financial stock trading?"}, {"start": 446.0, "end": 451.0, "text": " For example, one of my friends was working on efficient stock execution."}, {"start": 451.0, "end": 455.0, "text": " So if you've decided to sell a million shares over the next several days,"}, {"start": 455.0, "end": 462.0, "text": " well, you may not want to dump a million shares on the stock market suddenly because that will move prices against you."}, {"start": 462.0, "end": 472.0, "text": " So what's the best way to sequence out your trades over time so that you can sell the shares you want to sell and hopefully get the best possible price for them?"}, {"start": 472.0, "end": 477.0, "text": " Finally, there have also been many applications of reinforcement learning to play in games,"}, {"start": 477.0, "end": 486.0, "text": " everything from checkers to chess to the card game of bridge to go as well as for playing many video games."}, {"start": 486.0, "end": 488.0, "text": " So that's reinforcement learning."}, {"start": 488.0, "end": 497.0, "text": " Even though reinforcement learning is not used nearly as much as supervised learning, it is still used in a few applications today."}, {"start": 497.0, "end": 505.0, "text": " And the key idea is rather than you needing to tell the algorithm what is the right output, why for every single input,"}, {"start": 505.0, "end": 512.0, "text": " all you have to do instead is specify a reward function that tells it when it's doing well and when it's doing poorly."}, {"start": 512.0, "end": 518.0, "text": " And it's a job of the algorithm to automatically figure out how to choose good actions."}, {"start": 518.0, "end": 524.0, "text": " With that, let's now go into the next video where we'll formalize the reinforcement learning problem"}, {"start": 524.0, "end": 536.0, "text": " and also start to develop algorithms for automatically picking good actions."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=vxmDWRTi0PM
10.2 Reinforcement Learning formalism | Mars rover example -[ML |Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
To flesh out the reinforcement learning formalism, instead of looking at something as complicated as a helicopter or a robot dog, we're going to use a simplified example that's loosely inspired by the Mars Rover. This is adapted from an example due to Stanford professor Emma Brunskell and one of my collaborators Jack Rite-Argerwal who had actually written code that is actually controlling the Mars Rover right now. It also helped me talk through and help develop this example. Let's take a look. We'll develop reinforcement learning using a simplified example inspired by the Mars Rover. In this application, the Rover can be in any of six positions, as shown by the six boxes here. The Rover might start off, say, in this position in the fourth box shown here. The position of the Mars Rover is called the state in reinforcement learning. I'm going to call these six states, state one, state two, state three, state four, state five, and state six. So the Rover is starting off in state four. Now, the Rover was sent to Mars to try to carry out different science missions. It can go to different places to use its sensors, such as a drill or radar or spectrometer, to analyze the rock at different places on the planet, or go to different places to take interesting pictures for scientists on Earth to look at. In this example, state one, here on the left, has a very interesting surface that scientists would love for the robot to sample. State six also has a pretty interesting surface that scientists would quite like the Rover to sample but not as interesting as state one. So we would more like to carry out the science mission at state one than at state six, but state one is further away. The way we will reflect state one being potentially more valuable is through the reward function. So the reward at state one is 100, and the reward at state six is 40, and the rewards at all of the other states in between, I'm going to write as a reward of zero because there's not as much interesting science to be done at these states two, three, four, and five. On each step, the Rover gets to choose one of two actions. It can either go to the left or it can go to the right. So the question is, what should the Rover do? In reinforcement learning, we pay a lot of attention to the rewards because that's how we know if the robot is doing well or poorly. So let's look at some examples of what might happen. If the robot were to go left starting from state four, then initially starting from state four, it will receive a reward of zero, and after going left, it gets to state three where it receives again a reward of zero, then it gets to state two, receives a reward of zero, and finally it gets to state one where it receives a reward of 100. For this application, I'm going to assume that when it gets to either state one or state six, that the day ends. So in reinforcement learning, we sometimes call this a terminal state. What that means is that after it gets to one of these terminal states, it gets a reward at that state, but then nothing more happens after that. Maybe the robots run out of fuel or run out of time for the day, which is why it only gets to either enjoy the 100 or the 40 reward, but then that's it for the day, and it doesn't get to earn additional rewards after that. Now, instead of going left, the robot could also choose to go to the right, in which case from state four, it would first have a reward of zero, and then it'll move right and get to state five, have another reward of zero, and then it will get to this other terminal state on the right, state six, and get a reward of 40. But going left and going right aren't the only options. One thing the robot could do is it could start from state four and decide to move to the right. So it goes from state four to five, gets a reward of zero in state four, reward of zero in state five, and then maybe it changes its mind and it decides to start going to the left. As follows, in which case, it would get a reward of zero at state four, at state three, at state two, and then a reward of 100 when it gets to state one. In this sequence of actions and states, the robot is wasting a bit of time. So this maybe isn't such a great way to take actions, but it is one choice that the algorithm could pick. But hopefully, it won't pick this one. So to summarize, at every time step, the robot is in some state, which I'm going to call s, and it gets to choose an action, and it also enjoys some rewards, r of s, that it gets from that state. As a result of this action, it gets to some new state, s prime. So as a concrete example, when the robot was in state four and it took the action, go left, it enjoyed, well, maybe didn't enjoy the reward of zero associated with that state four, and it wound up in a new state three. When you learn about specific reinforcement learning algorithms, you see that these four things, the state, action, the reward, and the next state, which is what happens basically every time you take an action, that this be a core elements of what reinforcement learning algorithms will look at when deciding how to take actions. Just for clarity, the reward here, r of s, this is the reward associated with this state. So this reward of zero is associated with the state four rather than with the state three. So that's the formalism of how a reinforcement learning application works. In the next video, let's take a look at how we specify exactly what we want the reinforcement learning algorithm to do. In particular, we'll talk about an important idea in reinforcement learning called the return. Let's go on to the next video to see what that means.
[{"start": 0.0, "end": 5.5600000000000005, "text": " To flesh out the reinforcement learning formalism,"}, {"start": 5.5600000000000005, "end": 10.8, "text": " instead of looking at something as complicated as a helicopter or a robot dog,"}, {"start": 10.8, "end": 17.240000000000002, "text": " we're going to use a simplified example that's loosely inspired by the Mars Rover."}, {"start": 17.240000000000002, "end": 22.16, "text": " This is adapted from an example due to Stanford professor Emma Brunskell and one of"}, {"start": 22.16, "end": 26.52, "text": " my collaborators Jack Rite-Argerwal who had actually written code"}, {"start": 26.52, "end": 29.48, "text": " that is actually controlling the Mars Rover right now."}, {"start": 29.48, "end": 32.96, "text": " It also helped me talk through and help develop this example."}, {"start": 32.96, "end": 34.24, "text": " Let's take a look."}, {"start": 34.24, "end": 38.28, "text": " We'll develop reinforcement learning using"}, {"start": 38.28, "end": 43.24, "text": " a simplified example inspired by the Mars Rover."}, {"start": 43.24, "end": 48.84, "text": " In this application, the Rover can be in any of six positions,"}, {"start": 48.84, "end": 51.72, "text": " as shown by the six boxes here."}, {"start": 51.72, "end": 54.120000000000005, "text": " The Rover might start off, say,"}, {"start": 54.120000000000005, "end": 58.480000000000004, "text": " in this position in the fourth box shown here."}, {"start": 58.48, "end": 64.08, "text": " The position of the Mars Rover is called the state in reinforcement learning."}, {"start": 64.08, "end": 66.47999999999999, "text": " I'm going to call these six states,"}, {"start": 66.47999999999999, "end": 69.2, "text": " state one, state two, state three,"}, {"start": 69.2, "end": 72.36, "text": " state four, state five, and state six."}, {"start": 72.36, "end": 75.8, "text": " So the Rover is starting off in state four."}, {"start": 75.8, "end": 81.84, "text": " Now, the Rover was sent to Mars to try to carry out different science missions."}, {"start": 81.84, "end": 86.24, "text": " It can go to different places to use its sensors,"}, {"start": 86.24, "end": 89.19999999999999, "text": " such as a drill or radar or spectrometer,"}, {"start": 89.19999999999999, "end": 92.64, "text": " to analyze the rock at different places on the planet,"}, {"start": 92.64, "end": 94.36, "text": " or go to different places to take"}, {"start": 94.36, "end": 96.96, "text": " interesting pictures for scientists on Earth to look at."}, {"start": 96.96, "end": 99.83999999999999, "text": " In this example, state one, here on the left,"}, {"start": 99.83999999999999, "end": 105.47999999999999, "text": " has a very interesting surface that scientists would love for the robot to sample."}, {"start": 105.47999999999999, "end": 110.0, "text": " State six also has a pretty interesting surface that scientists would"}, {"start": 110.0, "end": 114.11999999999999, "text": " quite like the Rover to sample but not as interesting as state one."}, {"start": 114.12, "end": 121.80000000000001, "text": " So we would more like to carry out the science mission at state one than at state six,"}, {"start": 121.80000000000001, "end": 124.12, "text": " but state one is further away."}, {"start": 124.12, "end": 131.32, "text": " The way we will reflect state one being potentially more valuable is through the reward function."}, {"start": 131.32, "end": 135.88, "text": " So the reward at state one is 100,"}, {"start": 135.88, "end": 139.72, "text": " and the reward at state six is 40,"}, {"start": 139.72, "end": 143.12, "text": " and the rewards at all of the other states in between,"}, {"start": 143.12, "end": 146.56, "text": " I'm going to write as a reward of zero because there's"}, {"start": 146.56, "end": 152.08, "text": " not as much interesting science to be done at these states two, three, four, and five."}, {"start": 152.08, "end": 156.92000000000002, "text": " On each step, the Rover gets to choose one of two actions."}, {"start": 156.92000000000002, "end": 162.64000000000001, "text": " It can either go to the left or it can go to the right."}, {"start": 162.64000000000001, "end": 166.20000000000002, "text": " So the question is, what should the Rover do?"}, {"start": 166.20000000000002, "end": 170.72, "text": " In reinforcement learning, we pay a lot of attention to the rewards because that's how"}, {"start": 170.72, "end": 174.08, "text": " we know if the robot is doing well or poorly."}, {"start": 174.08, "end": 178.0, "text": " So let's look at some examples of what might happen."}, {"start": 178.0, "end": 181.52, "text": " If the robot were to go left starting from state four,"}, {"start": 181.52, "end": 185.44, "text": " then initially starting from state four,"}, {"start": 185.44, "end": 187.68, "text": " it will receive a reward of zero,"}, {"start": 187.68, "end": 189.64, "text": " and after going left,"}, {"start": 189.64, "end": 193.52, "text": " it gets to state three where it receives again a reward of zero,"}, {"start": 193.52, "end": 195.36, "text": " then it gets to state two,"}, {"start": 195.36, "end": 197.04, "text": " receives a reward of zero,"}, {"start": 197.04, "end": 199.8, "text": " and finally it gets to state one where it receives"}, {"start": 199.8, "end": 201.96, "text": " a reward of 100."}, {"start": 201.96, "end": 207.72, "text": " For this application, I'm going to assume that when it gets to either state one or state six,"}, {"start": 207.72, "end": 209.52, "text": " that the day ends."}, {"start": 209.52, "end": 211.12, "text": " So in reinforcement learning,"}, {"start": 211.12, "end": 215.32000000000002, "text": " we sometimes call this a terminal state."}, {"start": 215.32000000000002, "end": 219.60000000000002, "text": " What that means is that after it gets to one of these terminal states,"}, {"start": 219.60000000000002, "end": 221.44, "text": " it gets a reward at that state,"}, {"start": 221.44, "end": 223.72000000000003, "text": " but then nothing more happens after that."}, {"start": 223.72000000000003, "end": 227.36, "text": " Maybe the robots run out of fuel or run out of time for the day,"}, {"start": 227.36, "end": 234.36, "text": " which is why it only gets to either enjoy the 100 or the 40 reward,"}, {"start": 234.36, "end": 236.4, "text": " but then that's it for the day,"}, {"start": 236.4, "end": 239.32000000000002, "text": " and it doesn't get to earn additional rewards after that."}, {"start": 239.32000000000002, "end": 241.68, "text": " Now, instead of going left,"}, {"start": 241.68, "end": 244.52, "text": " the robot could also choose to go to the right,"}, {"start": 244.52, "end": 246.4, "text": " in which case from state four,"}, {"start": 246.4, "end": 250.84, "text": " it would first have a reward of zero,"}, {"start": 250.84, "end": 253.4, "text": " and then it'll move right and get to state five,"}, {"start": 253.4, "end": 255.52, "text": " have another reward of zero,"}, {"start": 255.52, "end": 258.92, "text": " and then it will get to this other terminal state on the right,"}, {"start": 258.92, "end": 261.72, "text": " state six, and get a reward of 40."}, {"start": 261.72, "end": 266.52, "text": " But going left and going right aren't the only options."}, {"start": 266.52, "end": 272.84000000000003, "text": " One thing the robot could do is it could start from state four and decide to move to the right."}, {"start": 272.84000000000003, "end": 275.36, "text": " So it goes from state four to five,"}, {"start": 275.36, "end": 277.40000000000003, "text": " gets a reward of zero in state four,"}, {"start": 277.40000000000003, "end": 279.0, "text": " reward of zero in state five,"}, {"start": 279.0, "end": 283.32, "text": " and then maybe it changes its mind and it decides to start going to the left."}, {"start": 283.32, "end": 287.84, "text": " As follows, in which case, it would get a reward of zero at state four,"}, {"start": 287.84, "end": 289.8, "text": " at state three, at state two,"}, {"start": 289.8, "end": 293.24, "text": " and then a reward of 100 when it gets to state one."}, {"start": 293.24, "end": 296.8, "text": " In this sequence of actions and states,"}, {"start": 296.8, "end": 298.88, "text": " the robot is wasting a bit of time."}, {"start": 298.88, "end": 302.2, "text": " So this maybe isn't such a great way to take actions,"}, {"start": 302.2, "end": 304.96, "text": " but it is one choice that the algorithm could pick."}, {"start": 304.96, "end": 307.12, "text": " But hopefully, it won't pick this one."}, {"start": 307.12, "end": 310.92, "text": " So to summarize, at every time step,"}, {"start": 310.92, "end": 313.40000000000003, "text": " the robot is in some state,"}, {"start": 313.40000000000003, "end": 315.64000000000004, "text": " which I'm going to call s,"}, {"start": 315.64000000000004, "end": 319.40000000000003, "text": " and it gets to choose an action,"}, {"start": 319.40000000000003, "end": 323.44, "text": " and it also enjoys some rewards,"}, {"start": 323.44, "end": 326.64000000000004, "text": " r of s, that it gets from that state."}, {"start": 326.64000000000004, "end": 328.68, "text": " As a result of this action,"}, {"start": 328.68, "end": 332.08000000000004, "text": " it gets to some new state, s prime."}, {"start": 332.08000000000004, "end": 334.0, "text": " So as a concrete example,"}, {"start": 334.0, "end": 339.08000000000004, "text": " when the robot was in state four and it took the action, go left,"}, {"start": 339.08, "end": 341.12, "text": " it enjoyed, well,"}, {"start": 341.12, "end": 346.03999999999996, "text": " maybe didn't enjoy the reward of zero associated with that state four,"}, {"start": 346.03999999999996, "end": 349.4, "text": " and it wound up in a new state three."}, {"start": 349.4, "end": 352.8, "text": " When you learn about specific reinforcement learning algorithms,"}, {"start": 352.8, "end": 355.4, "text": " you see that these four things,"}, {"start": 355.4, "end": 357.2, "text": " the state, action, the reward,"}, {"start": 357.2, "end": 361.24, "text": " and the next state, which is what happens basically every time you take an action,"}, {"start": 361.24, "end": 365.03999999999996, "text": " that this be a core elements of what reinforcement learning algorithms will"}, {"start": 365.03999999999996, "end": 368.79999999999995, "text": " look at when deciding how to take actions."}, {"start": 368.8, "end": 372.28000000000003, "text": " Just for clarity, the reward here,"}, {"start": 372.28000000000003, "end": 376.04, "text": " r of s, this is the reward associated with this state."}, {"start": 376.04, "end": 381.44, "text": " So this reward of zero is associated with the state four rather than with the state three."}, {"start": 381.44, "end": 386.96000000000004, "text": " So that's the formalism of how a reinforcement learning application works."}, {"start": 386.96000000000004, "end": 390.36, "text": " In the next video, let's take a look at how we specify"}, {"start": 390.36, "end": 393.88, "text": " exactly what we want the reinforcement learning algorithm to do."}, {"start": 393.88, "end": 396.40000000000003, "text": " In particular, we'll talk about an important idea in"}, {"start": 396.4, "end": 398.96, "text": " reinforcement learning called the return."}, {"start": 398.96, "end": 426.52, "text": " Let's go on to the next video to see what that means."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=hdG6-fCsKh4
10.3 Reinforcement Learning formalism | The Return in reinforcement learning -[ML |Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
You saw in the last video what are the states of a reinforcement learning application, as well as how depending on the actions you take, you go through different states and also get to enjoy different rewards. But how do you know if a particular set of rewards is better or worse than a different set of rewards? The return in reinforcement learning, which we'll define in this video, allows us to capture that. So if you go through this, one analogy that you might find helpful is if you imagine you have a $5 bill at your feet, you can reach down and pick up, or half an hour across town, you can walk half an hour and pick up a $10 bill. Which one would you rather go after? $10 is much better than $5, but if you need to walk for half an hour to go and get that $10 bill, then maybe it'd be more convenient to just pick up the $5 bill instead. So the concept of a return captures that rewards you can get quicker are maybe more attractive than rewards that take you a long time to get to. Let's take a look at exactly how that works. Here's our Mars rover example. If starting from state four, you go to the left, we saw that the rewards you get would be zero on the first step from state four, zero from state three, zero from state two, and then 100 at state one, the terminal state. And so the return is defined as the sum of these rewards, but weighted by one additional factor, which is called the discount factor. So the discount factor is a number a little bit less than one. So let me pick 0.9 as the discount factor. I'm going to weight the reward on the first step is just zero. The reward on the second step is a discount factor, 0.9 times that reward, and then plus the discount factor squared times that reward, and then plus the discount factor cubed times that reward. And if you calculate this out, this turns out to be 0.729 times 100, which is 72.9. The more general formula for the return is that if your robot goes through some sequence of states and gets reward R1 on the first step and R2 on the second step and R3 on the third step and so on, then the return is R1 plus the discount factor gamma, that's Greek alphabet gamma, which I've set to 0.9 as this example, the gamma times R2 plus gamma squared times R3 plus gamma cubed times R4 and so on until you get to the terminal state. What the discount factor gamma does is it has the effect of making the reinforcement learning algorithm a little bit impatient because the return gives full credit to the first reward is 100%, is 1 times R1, but then it gives a little bit less credit to the reward you get at the second step, that's multiplied by 0.9, and then even less credit to the reward you get at the next time step R3 and so on. And so getting rewards sooner results in a higher value for the total return. In many reinforcement learning algorithms, a common choice for the discount factor would be a number pretty close to 1, like 0.9 or 0.99 or even 0.999. But for illustrative purposes, in the running example I'm going to use, I'm actually going to use a discount factor of 0.5. So this very heavily down weights or very heavily we say discounts rewards in the future, because with every additional passing time step, you get only half as much credit as rewards that you would have gotten one step earlier. And so if gamma were equal to 0.5, the return under the example above would have been 0.9, 0 plus 0.5 times 0, replacing this equation on top, plus 0.5 squared, 0 plus 0.5 cubed times 100. That's the last reward because state 1 is a terminal state, and this turns out to be a return of 12.5. In financial applications, the discount factor also has a very natural interpretation as the interest rate or the time value of money. So if you can have a dollar today, that may be worth a little bit more than if you could only get a dollar in the future, because even a dollar today, you can put in the bank, earn some interest and end up with a little bit more money a year from now. So for financial applications, often that discount factor represents how much less is a dollar in the future worth compared to a dollar today. Let's look at some concrete examples of returns. The return you get depends on the rewards, and the rewards depends on the actions you take. And so the return depends on the actions you take. Let's use our usual example and say for this example, I'm going to always go to the left. And so we already saw previously that if the robot were to start off in state 4, the return is 12.5. As we worked out on the previous slide, it turns out that if it were to start off in state 3, the return would be 25 because it gets to the 100 reward one step sooner. And so it's discounted less. If it were to start off in state 2, the return would be 50. And if it were to just start off in state 1, well, it gets the reward of 100 right away. So it's not discounted at all. And so the return, if it were to start off in state 1, would be 100. And then the return in these two states are 6.25. It turns out if you start off in state 6, which is terminal state, you just get the reward and thus the return of 40. Now if you were to take a different set of actions, the returns would actually be different. For example, if we were to always go to the right, if those were our actions, then if you were to start in state 4, get a reward of 0, then you get to state 5, get a reward of 0, and then get to state 6, and get a reward of 40. In this case, the return would be 0 plus 0.5, the discount factor, times 0 plus 0.5 squared times 40. And that turns out to be equal to 0.5 squared is one quarter. So one quarter of 40 is 10. And so the return from this state, from state 4, is 10. If you were to take actions, always go to the right. And through similar reasoning, the return from this state is 20, the return from this state is 5, the return from this state is 2.5, and then the return at the terminal state is 140. By the way, if these numbers don't fully make sense, feel free to pause the video and double check the math and see if you can convince yourself that these are the appropriate values for the return, for if you start from different states and if you were to always go to the right. And so we see that if we were to always go to the right, the return you expect to get is lower from most states. So maybe always going to the right isn't as good an idea as always going to the left. But it turns out that we don't have to always go to the left or always go to the right. We could also decide if you're in state 2, go left. If you're in state 3, go left. If you're in state 4, go left. But if you're in state 5, then you're so close to this reward, let's go right. So this would be a different way of choosing actions to take based on what state you're in. And it turns out that the return you get from the different states will be 100, 50, 25, 12.5, 20, and 40. Just to illustrate one case, if you were to start off in state 5, here you would go to the right and so the rewards you get would be 0 first in state 5 and then 40. And so the return is 0, the first reward, plus the discount factor 0.5 times 40, which is 20, which is why the return from this state is 20 if you take actions shown here. So to summarize, the return in reinforcement learning is the sum of the rewards that the system gets, but weighted by the discount factor, where rewards in the far future are weighted by the discount factor raised to a higher power. Now this actually has an interesting effect when you have systems with negative rewards. In the example we went through, all the rewards were 0 or positive, but if there are any rewards are negative, then the discount factor actually incentivizes the system to push out the negative rewards as far into the future as possible. Taking a financial example, if you had to pay someone $10, maybe that's a negative reward of minus 10, but if you could postpone payment by a few years, then you're actually better off because $10 a few years from now, because of the interest rate, is actually worth less than $10 that you had to pay today. So for systems with negative rewards, it causes the algorithm to try to push out the negative rewards as far into the future as possible. And for financial applications and for other applications, that actually turns out to be the right thing for the system to do. You now know what is the return in reinforcement learning. Let's go on to the next video to formalize the goal of reinforcement learning algorithm.
[{"start": 0.0, "end": 6.76, "text": " You saw in the last video what are the states of a reinforcement learning application, as"}, {"start": 6.76, "end": 12.120000000000001, "text": " well as how depending on the actions you take, you go through different states and also get"}, {"start": 12.120000000000001, "end": 14.76, "text": " to enjoy different rewards."}, {"start": 14.76, "end": 19.64, "text": " But how do you know if a particular set of rewards is better or worse than a different"}, {"start": 19.64, "end": 21.16, "text": " set of rewards?"}, {"start": 21.16, "end": 26.04, "text": " The return in reinforcement learning, which we'll define in this video, allows us to capture"}, {"start": 26.04, "end": 27.04, "text": " that."}, {"start": 27.04, "end": 31.96, "text": " So if you go through this, one analogy that you might find helpful is if you imagine you"}, {"start": 31.96, "end": 37.96, "text": " have a $5 bill at your feet, you can reach down and pick up, or half an hour across town,"}, {"start": 37.96, "end": 41.44, "text": " you can walk half an hour and pick up a $10 bill."}, {"start": 41.44, "end": 43.879999999999995, "text": " Which one would you rather go after?"}, {"start": 43.879999999999995, "end": 49.96, "text": " $10 is much better than $5, but if you need to walk for half an hour to go and get that"}, {"start": 49.96, "end": 54.92, "text": " $10 bill, then maybe it'd be more convenient to just pick up the $5 bill instead."}, {"start": 54.92, "end": 60.64, "text": " So the concept of a return captures that rewards you can get quicker are maybe more attractive"}, {"start": 60.64, "end": 63.6, "text": " than rewards that take you a long time to get to."}, {"start": 63.6, "end": 66.2, "text": " Let's take a look at exactly how that works."}, {"start": 66.2, "end": 68.36, "text": " Here's our Mars rover example."}, {"start": 68.36, "end": 74.04, "text": " If starting from state four, you go to the left, we saw that the rewards you get would"}, {"start": 74.04, "end": 80.04, "text": " be zero on the first step from state four, zero from state three, zero from state two,"}, {"start": 80.04, "end": 83.88, "text": " and then 100 at state one, the terminal state."}, {"start": 83.88, "end": 91.75999999999999, "text": " And so the return is defined as the sum of these rewards, but weighted by one additional"}, {"start": 91.75999999999999, "end": 95.25999999999999, "text": " factor, which is called the discount factor."}, {"start": 95.25999999999999, "end": 99.44, "text": " So the discount factor is a number a little bit less than one."}, {"start": 99.44, "end": 102.28, "text": " So let me pick 0.9 as the discount factor."}, {"start": 102.28, "end": 105.52, "text": " I'm going to weight the reward on the first step is just zero."}, {"start": 105.52, "end": 112.32, "text": " The reward on the second step is a discount factor, 0.9 times that reward, and then plus"}, {"start": 112.32, "end": 118.39999999999999, "text": " the discount factor squared times that reward, and then plus the discount factor cubed times"}, {"start": 118.39999999999999, "end": 119.6, "text": " that reward."}, {"start": 119.6, "end": 129.44, "text": " And if you calculate this out, this turns out to be 0.729 times 100, which is 72.9."}, {"start": 129.44, "end": 135.56, "text": " The more general formula for the return is that if your robot goes through some sequence"}, {"start": 135.56, "end": 143.04, "text": " of states and gets reward R1 on the first step and R2 on the second step and R3 on the"}, {"start": 143.04, "end": 152.4, "text": " third step and so on, then the return is R1 plus the discount factor gamma, that's Greek"}, {"start": 152.4, "end": 160.32, "text": " alphabet gamma, which I've set to 0.9 as this example, the gamma times R2 plus gamma squared"}, {"start": 160.32, "end": 168.72, "text": " times R3 plus gamma cubed times R4 and so on until you get to the terminal state."}, {"start": 168.72, "end": 176.07999999999998, "text": " What the discount factor gamma does is it has the effect of making the reinforcement"}, {"start": 176.07999999999998, "end": 181.79999999999998, "text": " learning algorithm a little bit impatient because the return gives full credit to the"}, {"start": 181.79999999999998, "end": 188.64, "text": " first reward is 100%, is 1 times R1, but then it gives a little bit less credit to the reward"}, {"start": 188.64, "end": 193.48, "text": " you get at the second step, that's multiplied by 0.9, and then even less credit to the reward"}, {"start": 193.48, "end": 197.29999999999998, "text": " you get at the next time step R3 and so on."}, {"start": 197.29999999999998, "end": 203.07999999999998, "text": " And so getting rewards sooner results in a higher value for the total return."}, {"start": 203.07999999999998, "end": 207.79999999999998, "text": " In many reinforcement learning algorithms, a common choice for the discount factor would"}, {"start": 207.79999999999998, "end": 215.23999999999998, "text": " be a number pretty close to 1, like 0.9 or 0.99 or even 0.999."}, {"start": 215.24, "end": 219.76000000000002, "text": " But for illustrative purposes, in the running example I'm going to use, I'm actually going"}, {"start": 219.76000000000002, "end": 224.0, "text": " to use a discount factor of 0.5."}, {"start": 224.0, "end": 230.88, "text": " So this very heavily down weights or very heavily we say discounts rewards in the future,"}, {"start": 230.88, "end": 235.8, "text": " because with every additional passing time step, you get only half as much credit as"}, {"start": 235.8, "end": 239.66000000000003, "text": " rewards that you would have gotten one step earlier."}, {"start": 239.66000000000003, "end": 245.20000000000002, "text": " And so if gamma were equal to 0.5, the return under the example above would have been 0.9,"}, {"start": 245.2, "end": 256.52, "text": " 0 plus 0.5 times 0, replacing this equation on top, plus 0.5 squared, 0 plus 0.5 cubed"}, {"start": 256.52, "end": 258.52, "text": " times 100."}, {"start": 258.52, "end": 263.28, "text": " That's the last reward because state 1 is a terminal state, and this turns out to be"}, {"start": 263.28, "end": 266.88, "text": " a return of 12.5."}, {"start": 266.88, "end": 272.2, "text": " In financial applications, the discount factor also has a very natural interpretation as"}, {"start": 272.2, "end": 275.44, "text": " the interest rate or the time value of money."}, {"start": 275.44, "end": 281.64, "text": " So if you can have a dollar today, that may be worth a little bit more than if you could"}, {"start": 281.64, "end": 286.52, "text": " only get a dollar in the future, because even a dollar today, you can put in the bank, earn"}, {"start": 286.52, "end": 290.41999999999996, "text": " some interest and end up with a little bit more money a year from now."}, {"start": 290.41999999999996, "end": 295.59999999999997, "text": " So for financial applications, often that discount factor represents how much less is"}, {"start": 295.59999999999997, "end": 299.4, "text": " a dollar in the future worth compared to a dollar today."}, {"start": 299.4, "end": 303.15999999999997, "text": " Let's look at some concrete examples of returns."}, {"start": 303.15999999999997, "end": 308.4, "text": " The return you get depends on the rewards, and the rewards depends on the actions you"}, {"start": 308.4, "end": 309.4, "text": " take."}, {"start": 309.4, "end": 313.35999999999996, "text": " And so the return depends on the actions you take."}, {"start": 313.35999999999996, "end": 321.59999999999997, "text": " Let's use our usual example and say for this example, I'm going to always go to the left."}, {"start": 321.59999999999997, "end": 328.79999999999995, "text": " And so we already saw previously that if the robot were to start off in state 4, the return"}, {"start": 328.8, "end": 329.8, "text": " is 12.5."}, {"start": 329.8, "end": 336.12, "text": " As we worked out on the previous slide, it turns out that if it were to start off in"}, {"start": 336.12, "end": 344.74, "text": " state 3, the return would be 25 because it gets to the 100 reward one step sooner."}, {"start": 344.74, "end": 347.62, "text": " And so it's discounted less."}, {"start": 347.62, "end": 351.6, "text": " If it were to start off in state 2, the return would be 50."}, {"start": 351.6, "end": 356.62, "text": " And if it were to just start off in state 1, well, it gets the reward of 100 right away."}, {"start": 356.62, "end": 358.0, "text": " So it's not discounted at all."}, {"start": 358.0, "end": 362.02, "text": " And so the return, if it were to start off in state 1, would be 100."}, {"start": 362.02, "end": 365.88, "text": " And then the return in these two states are 6.25."}, {"start": 365.88, "end": 370.52, "text": " It turns out if you start off in state 6, which is terminal state, you just get the"}, {"start": 370.52, "end": 374.56, "text": " reward and thus the return of 40."}, {"start": 374.56, "end": 380.82, "text": " Now if you were to take a different set of actions, the returns would actually be different."}, {"start": 380.82, "end": 387.72, "text": " For example, if we were to always go to the right, if those were our actions, then if"}, {"start": 387.72, "end": 394.16, "text": " you were to start in state 4, get a reward of 0, then you get to state 5, get a reward"}, {"start": 394.16, "end": 398.82000000000005, "text": " of 0, and then get to state 6, and get a reward of 40."}, {"start": 398.82000000000005, "end": 407.84000000000003, "text": " In this case, the return would be 0 plus 0.5, the discount factor, times 0 plus 0.5 squared"}, {"start": 407.84000000000003, "end": 409.56, "text": " times 40."}, {"start": 409.56, "end": 413.8, "text": " And that turns out to be equal to 0.5 squared is one quarter."}, {"start": 413.8, "end": 416.18, "text": " So one quarter of 40 is 10."}, {"start": 416.18, "end": 420.36, "text": " And so the return from this state, from state 4, is 10."}, {"start": 420.36, "end": 423.72, "text": " If you were to take actions, always go to the right."}, {"start": 423.72, "end": 428.24, "text": " And through similar reasoning, the return from this state is 20, the return from this"}, {"start": 428.24, "end": 434.68, "text": " state is 5, the return from this state is 2.5, and then the return at the terminal state"}, {"start": 434.68, "end": 437.68, "text": " is 140."}, {"start": 437.68, "end": 441.96000000000004, "text": " By the way, if these numbers don't fully make sense, feel free to pause the video and double"}, {"start": 441.96, "end": 446.79999999999995, "text": " check the math and see if you can convince yourself that these are the appropriate values"}, {"start": 446.79999999999995, "end": 452.59999999999997, "text": " for the return, for if you start from different states and if you were to always go to the"}, {"start": 452.59999999999997, "end": 454.15999999999997, "text": " right."}, {"start": 454.15999999999997, "end": 459.91999999999996, "text": " And so we see that if we were to always go to the right, the return you expect to get"}, {"start": 459.91999999999996, "end": 462.71999999999997, "text": " is lower from most states."}, {"start": 462.71999999999997, "end": 469.35999999999996, "text": " So maybe always going to the right isn't as good an idea as always going to the left."}, {"start": 469.36, "end": 473.72, "text": " But it turns out that we don't have to always go to the left or always go to the right."}, {"start": 473.72, "end": 478.1, "text": " We could also decide if you're in state 2, go left."}, {"start": 478.1, "end": 480.04, "text": " If you're in state 3, go left."}, {"start": 480.04, "end": 481.76, "text": " If you're in state 4, go left."}, {"start": 481.76, "end": 488.08000000000004, "text": " But if you're in state 5, then you're so close to this reward, let's go right."}, {"start": 488.08000000000004, "end": 493.6, "text": " So this would be a different way of choosing actions to take based on what state you're"}, {"start": 493.6, "end": 494.6, "text": " in."}, {"start": 494.6, "end": 504.6, "text": " And it turns out that the return you get from the different states will be 100, 50, 25,"}, {"start": 504.6, "end": 509.28000000000003, "text": " 12.5, 20, and 40."}, {"start": 509.28000000000003, "end": 515.44, "text": " Just to illustrate one case, if you were to start off in state 5, here you would go to"}, {"start": 515.44, "end": 522.36, "text": " the right and so the rewards you get would be 0 first in state 5 and then 40."}, {"start": 522.36, "end": 528.8000000000001, "text": " And so the return is 0, the first reward, plus the discount factor 0.5 times 40, which"}, {"start": 528.8000000000001, "end": 535.3000000000001, "text": " is 20, which is why the return from this state is 20 if you take actions shown here."}, {"start": 535.3000000000001, "end": 540.6, "text": " So to summarize, the return in reinforcement learning is the sum of the rewards that the"}, {"start": 540.6, "end": 546.08, "text": " system gets, but weighted by the discount factor, where rewards in the far future are"}, {"start": 546.08, "end": 550.84, "text": " weighted by the discount factor raised to a higher power."}, {"start": 550.84, "end": 556.5600000000001, "text": " Now this actually has an interesting effect when you have systems with negative rewards."}, {"start": 556.5600000000001, "end": 562.64, "text": " In the example we went through, all the rewards were 0 or positive, but if there are any rewards"}, {"start": 562.64, "end": 569.24, "text": " are negative, then the discount factor actually incentivizes the system to push out the negative"}, {"start": 569.24, "end": 572.64, "text": " rewards as far into the future as possible."}, {"start": 572.64, "end": 578.0400000000001, "text": " Taking a financial example, if you had to pay someone $10, maybe that's a negative reward"}, {"start": 578.04, "end": 584.5999999999999, "text": " of minus 10, but if you could postpone payment by a few years, then you're actually better"}, {"start": 584.5999999999999, "end": 590.88, "text": " off because $10 a few years from now, because of the interest rate, is actually worth less"}, {"start": 590.88, "end": 594.74, "text": " than $10 that you had to pay today."}, {"start": 594.74, "end": 600.92, "text": " So for systems with negative rewards, it causes the algorithm to try to push out the negative"}, {"start": 600.92, "end": 603.88, "text": " rewards as far into the future as possible."}, {"start": 603.88, "end": 608.0, "text": " And for financial applications and for other applications, that actually turns out to be"}, {"start": 608.0, "end": 611.0, "text": " the right thing for the system to do."}, {"start": 611.0, "end": 614.2, "text": " You now know what is the return in reinforcement learning."}, {"start": 614.2, "end": 634.6, "text": " Let's go on to the next video to formalize the goal of reinforcement learning algorithm."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=Zd9npj3C6QE
10.4 Reinforcement Learning formalism | Making decisions: Policies in reinforcement learning -ML| Ng
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Let's formalize how a reinforcement learning algorithm picks actions. In this video, you'll learn about what is a policy in a reinforcement learning algorithm. Let's take a look. As we've seen, there are many different ways that you can take actions in a reinforcement learning problem. For example, we could decide to always go for the nearer reward. So you go left if this leftmost reward is nearer, or go right if this rightmost reward is nearer. Another way we could choose actions is to always go for the larger reward. Or we could always go for the smaller reward. It doesn't seem like a good idea, but it is another option. Or you could choose to go left unless you're just one step away from the lesser reward, in which case you go for that one. In reinforcement learning, our goal is to come up with a function which is called a policy pi whose job it is to take as input any state s and map it to some action a that it wants us to take. So for example, for this policy here at the bottom, this policy would say that if you're in state two, then it maps us to the left action. If you're in state three, the policy says go left. If you're in state four, also go left. And if you're in state five, go right. And so pi applied to state s tells us what action it wants us to take in that state. And so the goal of reinforcement learning is to find a policy pi or pi of s that tells you what action to take in every state so as to maximize the return. By the way, I don't know if policy is the most descriptive term of what pi is, but it's one of those terms that's become standard in reinforcement learning. Maybe calling pi a controller rather than a policy would be more natural terminology. But policy is what everyone in reinforcement learning now calls this. In the last video, we've gone through quite a few concepts in reinforcement learning from states to actions to rewards to returns to policies. Let's do a quick review of them in the next video and then we'll go on to start developing algorithms for finding good policies. Let's go on to the next video.
[{"start": 0.0, "end": 6.12, "text": " Let's formalize how a reinforcement learning algorithm picks actions."}, {"start": 6.12, "end": 10.88, "text": " In this video, you'll learn about what is a policy in a reinforcement learning algorithm."}, {"start": 10.88, "end": 12.76, "text": " Let's take a look."}, {"start": 12.76, "end": 17.88, "text": " As we've seen, there are many different ways that you can take actions in a reinforcement"}, {"start": 17.88, "end": 19.64, "text": " learning problem."}, {"start": 19.64, "end": 25.96, "text": " For example, we could decide to always go for the nearer reward."}, {"start": 25.96, "end": 32.0, "text": " So you go left if this leftmost reward is nearer, or go right if this rightmost reward"}, {"start": 32.0, "end": 34.28, "text": " is nearer."}, {"start": 34.28, "end": 39.88, "text": " Another way we could choose actions is to always go for the larger reward."}, {"start": 39.88, "end": 43.480000000000004, "text": " Or we could always go for the smaller reward."}, {"start": 43.480000000000004, "end": 47.24, "text": " It doesn't seem like a good idea, but it is another option."}, {"start": 47.24, "end": 52.88, "text": " Or you could choose to go left unless you're just one step away from the lesser reward,"}, {"start": 52.88, "end": 55.68, "text": " in which case you go for that one."}, {"start": 55.68, "end": 63.0, "text": " In reinforcement learning, our goal is to come up with a function which is called a"}, {"start": 63.0, "end": 73.68, "text": " policy pi whose job it is to take as input any state s and map it to some action a that"}, {"start": 73.68, "end": 75.96000000000001, "text": " it wants us to take."}, {"start": 75.96000000000001, "end": 82.28, "text": " So for example, for this policy here at the bottom, this policy would say that if you're"}, {"start": 82.28, "end": 87.84, "text": " in state two, then it maps us to the left action."}, {"start": 87.84, "end": 91.52, "text": " If you're in state three, the policy says go left."}, {"start": 91.52, "end": 94.76, "text": " If you're in state four, also go left."}, {"start": 94.76, "end": 97.84, "text": " And if you're in state five, go right."}, {"start": 97.84, "end": 105.24000000000001, "text": " And so pi applied to state s tells us what action it wants us to take in that state."}, {"start": 105.24, "end": 112.96, "text": " And so the goal of reinforcement learning is to find a policy pi or pi of s that tells"}, {"start": 112.96, "end": 117.67999999999999, "text": " you what action to take in every state so as to maximize the return."}, {"start": 117.67999999999999, "end": 125.03999999999999, "text": " By the way, I don't know if policy is the most descriptive term of what pi is, but it's"}, {"start": 125.03999999999999, "end": 128.56, "text": " one of those terms that's become standard in reinforcement learning."}, {"start": 128.56, "end": 134.51999999999998, "text": " Maybe calling pi a controller rather than a policy would be more natural terminology."}, {"start": 134.52, "end": 139.68, "text": " But policy is what everyone in reinforcement learning now calls this."}, {"start": 139.68, "end": 144.28, "text": " In the last video, we've gone through quite a few concepts in reinforcement learning from"}, {"start": 144.28, "end": 148.60000000000002, "text": " states to actions to rewards to returns to policies."}, {"start": 148.60000000000002, "end": 152.64000000000001, "text": " Let's do a quick review of them in the next video and then we'll go on to start developing"}, {"start": 152.64000000000001, "end": 155.92000000000002, "text": " algorithms for finding good policies."}, {"start": 155.92, "end": 178.56, "text": " Let's go on to the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=zRhXn9n05b4
10.5 Reinforcement Learning formalism | Review of key concepts -[Machine Learning| Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
We've developed a reinforcement learning formalism using the six-state mass rover example. Let's do a quick review of the key concepts and also see how this set of concepts can be used for other applications as well. Some of the concepts we've discussed are states of a reinforcement learning problem, the set of actions, the rewards, a discount factor, then how rewards and the discount factor are together used to compute the return, and then finally a policy whose job it is to help you pick actions so as to maximize the return. For the mass rover example, we had six states that we numbered one to six and the actions were to go left or to go right. The rewards were 100 for the leftmost state, 40 for the rightmost state, and 0 in between, and I was using a discount factor of 0.5. The return was given by this formula and we could have different policies pi that pick actions, depending on what state you're in. This same formalism of states, actions, rewards, and so on can be used for many other applications as well. Take the problem of flying an autonomous helicopter. The set of states would be the set of possible positions and orientations and speeds and so on of the helicopter. The possible actions would be the set of possible ways to move the control stick of a helicopter. And the rewards may be a plus one if it's flying well and a negative 1000 if it doesn't feel really bad or crashes. So a reward function that tells you how well the helicopter is flying. The discount factor, a number slightly less than one, maybe say 0.99. And then based on the rewards and the discount factor, you compute the return using the same formula. And the job of reinforcement learning algorithm would be to find some policy pi of s so that given as input the position of the helicopter s, it tells you what action to take, that is, tells you how to move the control sticks. Here's one more example. Here's a game playing one. Say you want to use reinforcement learning to learn to play chess. The state of this problem would be the position of all the pieces on the board. By the way, if you play chess and know the rules well, I know that there's a little bit more information than just the position of the pieces as important for chess. I'll simplify a little bit for this video. The actions are the possible legal moves in the game. And then a common choice of reward would be if you give your system a reward of plus one, if it wins a game, minus one, if it loses a game, and a reward of zero, if it ties a game. For chess, usually a discount factor very close to one would be used. So maybe 0.99 or even 0.995 or 0.999. And the return uses the same formula as the other applications. And once again, the goal is given a board position to pick a good action using a policy pie. This formalism of a reinforcement learning application actually has a name. It's called a Markov decision process. And I know that sounds like a big technical complicated term. But if you ever hear this term Markov decision process or MDP for short, that's just the formalism that we've been talking about in the last few videos. The term Markov in the MDP or Markov decision process refers to that the future only depends on the current state and not on anything that might have occurred prior to getting to the current state. In other words, in a Markov decision process, the future depends only on where you are now, not on how you got here. One other way to think of the Markov decision process formalism is that we have a robot or some other agent that we wish to control. And what we get to do is choose actions A and based on those actions, something will happen in the world or in the environment, such as opposition in the world changes, or we get to sample a piece of rock and execute the science mission. The way we choose the actions A is with a policy pie. And based on what happens in the world, we then get to see or we observe back what state we're in as well as what reward are that we get. And so you sometimes see different authors use a diagram like this to represent the Markov decision process or the MDP formalism. But this is just another way of illustrating the set of concepts that you learned about in the last few videos. So you now know how a reinforcement learning problem works. In the next video, we'll start to develop an algorithm for picking good actions. The first step to that will be to define and then eventually learn to compute the state action value function. This turns out to be one of the key quantities for when we want to develop a learning algorithm. Let's go on to the next video to see what is this state action value function.
[{"start": 0.0, "end": 8.76, "text": " We've developed a reinforcement learning formalism using the six-state mass rover example."}, {"start": 8.76, "end": 13.44, "text": " Let's do a quick review of the key concepts and also see how this set of concepts can"}, {"start": 13.44, "end": 16.84, "text": " be used for other applications as well."}, {"start": 16.84, "end": 23.32, "text": " Some of the concepts we've discussed are states of a reinforcement learning problem, the set"}, {"start": 23.32, "end": 30.0, "text": " of actions, the rewards, a discount factor, then how rewards and the discount factor are"}, {"start": 30.0, "end": 36.4, "text": " together used to compute the return, and then finally a policy whose job it is to help you"}, {"start": 36.4, "end": 39.36, "text": " pick actions so as to maximize the return."}, {"start": 39.36, "end": 45.5, "text": " For the mass rover example, we had six states that we numbered one to six and the actions"}, {"start": 45.5, "end": 48.72, "text": " were to go left or to go right."}, {"start": 48.72, "end": 56.12, "text": " The rewards were 100 for the leftmost state, 40 for the rightmost state, and 0 in between,"}, {"start": 56.12, "end": 59.72, "text": " and I was using a discount factor of 0.5."}, {"start": 59.72, "end": 65.08, "text": " The return was given by this formula and we could have different policies pi that pick"}, {"start": 65.08, "end": 68.03999999999999, "text": " actions, depending on what state you're in."}, {"start": 68.03999999999999, "end": 73.36, "text": " This same formalism of states, actions, rewards, and so on can be used for many other applications"}, {"start": 73.36, "end": 74.84, "text": " as well."}, {"start": 74.84, "end": 78.28, "text": " Take the problem of flying an autonomous helicopter."}, {"start": 78.28, "end": 84.56, "text": " The set of states would be the set of possible positions and orientations and speeds and"}, {"start": 84.56, "end": 86.56, "text": " so on of the helicopter."}, {"start": 86.56, "end": 93.44, "text": " The possible actions would be the set of possible ways to move the control stick of a helicopter."}, {"start": 93.44, "end": 99.36, "text": " And the rewards may be a plus one if it's flying well and a negative 1000 if it doesn't"}, {"start": 99.36, "end": 101.4, "text": " feel really bad or crashes."}, {"start": 101.4, "end": 105.56, "text": " So a reward function that tells you how well the helicopter is flying."}, {"start": 105.56, "end": 111.0, "text": " The discount factor, a number slightly less than one, maybe say 0.99."}, {"start": 111.0, "end": 116.04, "text": " And then based on the rewards and the discount factor, you compute the return using the same"}, {"start": 116.04, "end": 117.88, "text": " formula."}, {"start": 117.88, "end": 124.56, "text": " And the job of reinforcement learning algorithm would be to find some policy pi of s so that"}, {"start": 124.56, "end": 130.08, "text": " given as input the position of the helicopter s, it tells you what action to take, that"}, {"start": 130.08, "end": 132.56, "text": " is, tells you how to move the control sticks."}, {"start": 132.56, "end": 134.04, "text": " Here's one more example."}, {"start": 134.04, "end": 135.56, "text": " Here's a game playing one."}, {"start": 135.56, "end": 139.23999999999998, "text": " Say you want to use reinforcement learning to learn to play chess."}, {"start": 139.23999999999998, "end": 144.79999999999998, "text": " The state of this problem would be the position of all the pieces on the board."}, {"start": 144.79999999999998, "end": 149.51999999999998, "text": " By the way, if you play chess and know the rules well, I know that there's a little bit"}, {"start": 149.51999999999998, "end": 153.56, "text": " more information than just the position of the pieces as important for chess."}, {"start": 153.56, "end": 156.64, "text": " I'll simplify a little bit for this video."}, {"start": 156.64, "end": 161.0, "text": " The actions are the possible legal moves in the game."}, {"start": 161.0, "end": 166.4, "text": " And then a common choice of reward would be if you give your system a reward of plus one,"}, {"start": 166.4, "end": 172.8, "text": " if it wins a game, minus one, if it loses a game, and a reward of zero, if it ties a"}, {"start": 172.8, "end": 174.04, "text": " game."}, {"start": 174.04, "end": 178.92000000000002, "text": " For chess, usually a discount factor very close to one would be used."}, {"start": 178.92000000000002, "end": 185.32, "text": " So maybe 0.99 or even 0.995 or 0.999."}, {"start": 185.32, "end": 188.8, "text": " And the return uses the same formula as the other applications."}, {"start": 188.8, "end": 197.28, "text": " And once again, the goal is given a board position to pick a good action using a policy"}, {"start": 197.28, "end": 198.4, "text": " pie."}, {"start": 198.4, "end": 203.76000000000002, "text": " This formalism of a reinforcement learning application actually has a name."}, {"start": 203.76000000000002, "end": 206.76000000000002, "text": " It's called a Markov decision process."}, {"start": 206.76000000000002, "end": 210.20000000000002, "text": " And I know that sounds like a big technical complicated term."}, {"start": 210.20000000000002, "end": 216.96, "text": " But if you ever hear this term Markov decision process or MDP for short, that's just the"}, {"start": 216.96, "end": 220.68, "text": " formalism that we've been talking about in the last few videos."}, {"start": 220.68, "end": 228.12, "text": " The term Markov in the MDP or Markov decision process refers to that the future only depends"}, {"start": 228.12, "end": 233.12, "text": " on the current state and not on anything that might have occurred prior to getting to the"}, {"start": 233.12, "end": 234.4, "text": " current state."}, {"start": 234.4, "end": 240.76000000000002, "text": " In other words, in a Markov decision process, the future depends only on where you are now,"}, {"start": 240.76000000000002, "end": 242.84, "text": " not on how you got here."}, {"start": 242.84, "end": 250.24, "text": " One other way to think of the Markov decision process formalism is that we have a robot"}, {"start": 250.24, "end": 255.32, "text": " or some other agent that we wish to control."}, {"start": 255.32, "end": 264.32, "text": " And what we get to do is choose actions A and based on those actions, something will"}, {"start": 264.32, "end": 271.0, "text": " happen in the world or in the environment, such as opposition in the world changes, or"}, {"start": 271.0, "end": 275.0, "text": " we get to sample a piece of rock and execute the science mission."}, {"start": 275.0, "end": 278.68, "text": " The way we choose the actions A is with a policy pie."}, {"start": 278.68, "end": 285.16, "text": " And based on what happens in the world, we then get to see or we observe back what state"}, {"start": 285.16, "end": 291.62, "text": " we're in as well as what reward are that we get."}, {"start": 291.62, "end": 297.32, "text": " And so you sometimes see different authors use a diagram like this to represent the Markov"}, {"start": 297.32, "end": 300.56, "text": " decision process or the MDP formalism."}, {"start": 300.56, "end": 305.04, "text": " But this is just another way of illustrating the set of concepts that you learned about"}, {"start": 305.04, "end": 306.72, "text": " in the last few videos."}, {"start": 306.72, "end": 311.72, "text": " So you now know how a reinforcement learning problem works."}, {"start": 311.72, "end": 316.56, "text": " In the next video, we'll start to develop an algorithm for picking good actions."}, {"start": 316.56, "end": 321.52, "text": " The first step to that will be to define and then eventually learn to compute the state"}, {"start": 321.52, "end": 323.28, "text": " action value function."}, {"start": 323.28, "end": 329.6, "text": " This turns out to be one of the key quantities for when we want to develop a learning algorithm."}, {"start": 329.6, "end": 334.08000000000004, "text": " Let's go on to the next video to see what is this state action value function."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=dVmFCFiON3E
10.6 State-action value function | State-action value function definition -[ML| Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
When we start to develop reinforcement learning arrows later this week, you see that there's a key quantity that reinforcement learning arrows will try to compute, and that's called the state action value function. Let's take a look at what this function is. The state action value function is a function typically denoted by the letter uppercase Q and is a function of a state you might be in, as well as the action you might choose to take in that state. Q of s, a will give a number that equals the return. If you start in that state s and take the action a just once, and after taking action a once, you then behave optimally after that. After that, you take whatever actions will result in the highest possible return. Now, you might be thinking, there's something a little bit strange about this definition, because how do we know what is the optimal behavior? If we knew what's the optimal behavior, if we already knew what's the best action to take in every state, why do we still need to compute Q of s, a because we already have the optimal policy? I do want to acknowledge that there's something a little bit strange about this definition. There's almost something a little bit circular about this definition. But rest assured, when we look at specific reinforcement learning arrows later, we'll resolve this slightly circular definition, and we'll come up with a way to compute the Q function. Even before, we've come up with the optimal policy, but you see that in the later video, so don't worry about this for now. Let's look at an example. We saw previously that this is a pretty good policy. Go left from states 2, 3, and 4, and go right from state 5. It turns out that this is actually the optimal policy for the Mars Rover application when the discount factor gamma is 0.5. So Q of s, a will be equal to the total return. If you start from state s, take the action a, and then behave optimally after that, meaning take actions according to this policy shown over here. Let's figure out what Q of s, a is for a few different states. Let's look at say Q of state 2, and what if we take the action to go right? Well, if you're in state 2 and you go right, then you end up at state 3, and then after that, if you behave optimally, you're going to go left from state 3, and then go left from state 2, and then eventually you get the reward of a 100. In this case, the rewards you get would be 0 from state 2, 0 when you get to state 3, 0 when you get back to state 2, and then 100 when you finally get to the terminal state 1. So the return will be 0 plus 0.5 times that, plus 0.5 squared times that plus 0.5 cubed times 100, and this turns out to be 12.5. So Q of state 2 of going right is equal to 12.5. Note that this passes no judgment on whether going right is a good idea or not. It's actually not that good an idea from state 2 to go right, but it just faithfully reports out the return if you take action A and then behave optimally afterward. Here's another example. If you're in state 2 and you were to go left, then the sequence of rewards you get will be 0 when you're in state 2, followed by 100, and so the return is 0 plus 0.5 times 100, and that's equal to 50. In order to write down the values of QsA in this diagram, I'm going to write 12.5 here on the right to denote that this is Q of state 2 going to the right, and then when I write a little 50 here on the left, to denote that this is Q of state 2 and going to the left. Just to take one more example, what if we're in state 4 and we decide to go left? Well, if you're in state 4 and you go left, you get reward 0 and then you take action left here. So 0 again, take action left here, 0 and then 100. So Q of 4 left results in rewards 0, because the first action is left and then because we follow the optimal policy afterwards, you get rewards 0, 0, 100, and so the return is 0 plus 0.5 times that plus 0.5 squared times that, plus 0.5 cubed times that, which is therefore equal to 12.5. So Q of 4 left is 12.5, I'm going to write this here as 12.5. It turns out if you were to carry out this exercise for all of the other states and all of the other actions, you end up with this being the Q of SA for different states and different actions. Then finally, at the terminal state, well, it doesn't matter what you do, you just get that terminal reward 100 or 40. So I'm just write down those terminal rewards over here. So this is Q of SA for every state, state 1 through 6, and for the two actions, action left and action right. Because the state action value function is almost always denoted by the letter Q, this is also often called the Q function. So the terms Q function and state action value function are used interchangeably, and it tells you what are your returns or really what is the value, how good is it to take action A and state S, and then behave optimally after that. Now, it turns out that once you can compute the Q function, this would give you a way to pick actions as well. Here's the policy and return, and here are the values Q of SA from the previous slide. You notice one interesting thing when you look at the different states, which is that if you take state 2, taking the action left results in a Q value or state action value of 50, which is actually the best possible return you can get from that state. In state 3, Q of SA for the action left also gives you that higher return. In state 4, the action left gives you the return you want, and in state 5, is actually the action going to the right that gives you that higher return of 20. It turns out that the best possible return from any state S is the largest value of Q of SA maximizing over A. Just to make sure this is clear, what I'm saying is that in state 4, there is Q of state 4 left, which is 12.5, and Q of state 4 right, which turns out to be 10, and the larger of these two values, which is 12.5, is the best possible return from that state 4. In other words, the highest return you can hope to get from state 4 is 12.5, and it's actually the larger of these two numbers, 12.5 and 10. Moreover, if you want your Mars rover to enjoy a return of 12.5 rather than say 10, then the action you should take is the action A that gives you the larger value of Q of SA. The best possible action in state S is the action A that actually maximizes Q of SA. This might give you a hint for why computing Q of SA is an important part of the reinforcement learning algorithm that we'll build later. Namely, if you have a way of computing Q of SA for every state and for every action, then when you're in some state S, all you have to do is look at the different actions A, and pick the action A that maximizes Q of SA. So pi of S can just pick the action A that gives the largest value of Q of SA, and that will turn out to be a good action. In fact, it will turn out to be the optimal action. Another intuition about why this makes sense is Q of SA is returned if you start in state S and take the action A and then behave optimally after that. So in order to earn the biggest possible return, what you really want is to take the action A that results in the biggest total return. That's why if only we have a way of computing Q of SA for every state, taking the action A that maximizes return under these circumstances seems like it's the best action to take in that state. Although this isn't something you need to know for this course, I want to mention also that if you look online or look at the reinforcement learning literature, sometimes you also see this Q function written as Q star instead of Q, and this Q function is sometimes also called the optimal Q function. These terms just refer to the Q function exactly as we've defined it. So if you look at the reinforcement learning literature and read about Q star or the optimal Q function, that just means the state action value function that we've been talking about. But for the purposes of this course, you don't need to worry about this. So to summarize, if you can compute Q of SA for every state and every action, then that gives us a good way to compute the optimal policy pi of S. So that's the state action value function or the Q function. We'll talk later about how to come up with an algorithm to compute them, despite the slightly circular aspect of the definition of the Q function. But first, let's take a look at the next video as some specific examples of what these values QSA actually look like.
[{"start": 0.0, "end": 5.72, "text": " When we start to develop reinforcement learning arrows later this week,"}, {"start": 5.72, "end": 8.34, "text": " you see that there's a key quantity that"}, {"start": 8.34, "end": 10.74, "text": " reinforcement learning arrows will try to compute,"}, {"start": 10.74, "end": 13.9, "text": " and that's called the state action value function."}, {"start": 13.9, "end": 16.38, "text": " Let's take a look at what this function is."}, {"start": 16.38, "end": 21.32, "text": " The state action value function is a function typically denoted by"}, {"start": 21.32, "end": 28.080000000000002, "text": " the letter uppercase Q and is a function of a state you might be in,"}, {"start": 28.08, "end": 32.76, "text": " as well as the action you might choose to take in that state."}, {"start": 32.76, "end": 39.08, "text": " Q of s, a will give a number that equals the return."}, {"start": 39.08, "end": 44.959999999999994, "text": " If you start in that state s and take the action a just once,"}, {"start": 44.959999999999994, "end": 47.519999999999996, "text": " and after taking action a once,"}, {"start": 47.519999999999996, "end": 51.08, "text": " you then behave optimally after that."}, {"start": 51.08, "end": 56.019999999999996, "text": " After that, you take whatever actions will result in the highest possible return."}, {"start": 56.02, "end": 59.760000000000005, "text": " Now, you might be thinking, there's something a little bit strange about this definition,"}, {"start": 59.760000000000005, "end": 63.2, "text": " because how do we know what is the optimal behavior?"}, {"start": 63.2, "end": 65.48, "text": " If we knew what's the optimal behavior,"}, {"start": 65.48, "end": 68.92, "text": " if we already knew what's the best action to take in every state,"}, {"start": 68.92, "end": 71.12, "text": " why do we still need to compute Q of s,"}, {"start": 71.12, "end": 73.92, "text": " a because we already have the optimal policy?"}, {"start": 73.92, "end": 78.16, "text": " I do want to acknowledge that there's something a little bit strange about this definition."}, {"start": 78.16, "end": 80.98, "text": " There's almost something a little bit circular about this definition."}, {"start": 80.98, "end": 85.72, "text": " But rest assured, when we look at specific reinforcement learning arrows later,"}, {"start": 85.72, "end": 89.76, "text": " we'll resolve this slightly circular definition,"}, {"start": 89.76, "end": 92.6, "text": " and we'll come up with a way to compute the Q function."}, {"start": 92.6, "end": 95.48, "text": " Even before, we've come up with the optimal policy,"}, {"start": 95.48, "end": 97.2, "text": " but you see that in the later video,"}, {"start": 97.2, "end": 99.2, "text": " so don't worry about this for now."}, {"start": 99.2, "end": 101.28, "text": " Let's look at an example."}, {"start": 101.28, "end": 106.08, "text": " We saw previously that this is a pretty good policy."}, {"start": 106.08, "end": 108.44, "text": " Go left from states 2, 3,"}, {"start": 108.44, "end": 110.82, "text": " and 4, and go right from state 5."}, {"start": 110.82, "end": 114.08, "text": " It turns out that this is actually the optimal policy for"}, {"start": 114.08, "end": 119.6, "text": " the Mars Rover application when the discount factor gamma is 0.5."}, {"start": 119.6, "end": 125.34, "text": " So Q of s, a will be equal to the total return."}, {"start": 125.34, "end": 127.08, "text": " If you start from state s,"}, {"start": 127.08, "end": 128.8, "text": " take the action a,"}, {"start": 128.8, "end": 132.32, "text": " and then behave optimally after that,"}, {"start": 132.32, "end": 136.82, "text": " meaning take actions according to this policy shown over here."}, {"start": 136.82, "end": 138.56, "text": " Let's figure out what Q of s,"}, {"start": 138.56, "end": 141.36, "text": " a is for a few different states."}, {"start": 141.36, "end": 146.16000000000003, "text": " Let's look at say Q of state 2,"}, {"start": 146.16000000000003, "end": 150.08, "text": " and what if we take the action to go right?"}, {"start": 150.08, "end": 153.12, "text": " Well, if you're in state 2 and you go right,"}, {"start": 153.12, "end": 156.32000000000002, "text": " then you end up at state 3,"}, {"start": 156.32000000000002, "end": 157.88000000000002, "text": " and then after that,"}, {"start": 157.88000000000002, "end": 159.34, "text": " if you behave optimally,"}, {"start": 159.34, "end": 161.56, "text": " you're going to go left from state 3,"}, {"start": 161.56, "end": 162.96, "text": " and then go left from state 2,"}, {"start": 162.96, "end": 165.76000000000002, "text": " and then eventually you get the reward of a 100."}, {"start": 165.76000000000002, "end": 170.48000000000002, "text": " In this case, the rewards you get would be 0 from state 2,"}, {"start": 170.48, "end": 172.88, "text": " 0 when you get to state 3,"}, {"start": 172.88, "end": 175.29999999999998, "text": " 0 when you get back to state 2,"}, {"start": 175.29999999999998, "end": 180.39999999999998, "text": " and then 100 when you finally get to the terminal state 1."}, {"start": 180.39999999999998, "end": 185.92, "text": " So the return will be 0 plus 0.5 times that,"}, {"start": 185.92, "end": 190.72, "text": " plus 0.5 squared times that plus 0.5 cubed times 100,"}, {"start": 190.72, "end": 194.51999999999998, "text": " and this turns out to be 12.5."}, {"start": 194.51999999999998, "end": 199.6, "text": " So Q of state 2 of going right is equal to 12.5."}, {"start": 199.6, "end": 202.23999999999998, "text": " Note that this passes no judgment on"}, {"start": 202.23999999999998, "end": 204.44, "text": " whether going right is a good idea or not."}, {"start": 204.44, "end": 207.68, "text": " It's actually not that good an idea from state 2 to go right,"}, {"start": 207.68, "end": 211.92, "text": " but it just faithfully reports out the return if you"}, {"start": 211.92, "end": 214.92, "text": " take action A and then behave optimally afterward."}, {"start": 214.92, "end": 216.64, "text": " Here's another example."}, {"start": 216.64, "end": 221.24, "text": " If you're in state 2 and you were to go left,"}, {"start": 221.24, "end": 226.12, "text": " then the sequence of rewards you get will be 0 when you're in state 2,"}, {"start": 226.12, "end": 227.95999999999998, "text": " followed by 100,"}, {"start": 227.96, "end": 232.48000000000002, "text": " and so the return is 0 plus 0.5 times 100,"}, {"start": 232.48000000000002, "end": 234.52, "text": " and that's equal to 50."}, {"start": 234.52, "end": 241.28, "text": " In order to write down the values of QsA in this diagram,"}, {"start": 241.28, "end": 245.56, "text": " I'm going to write 12.5 here on the right to"}, {"start": 245.56, "end": 250.16, "text": " denote that this is Q of state 2 going to the right,"}, {"start": 250.16, "end": 253.12, "text": " and then when I write a little 50 here on the left,"}, {"start": 253.12, "end": 258.12, "text": " to denote that this is Q of state 2 and going to the left."}, {"start": 258.12, "end": 260.56, "text": " Just to take one more example,"}, {"start": 260.56, "end": 264.44, "text": " what if we're in state 4 and we decide to go left?"}, {"start": 264.44, "end": 267.12, "text": " Well, if you're in state 4 and you go left,"}, {"start": 267.12, "end": 271.4, "text": " you get reward 0 and then you take action left here."}, {"start": 271.4, "end": 276.08, "text": " So 0 again, take action left here, 0 and then 100."}, {"start": 276.08, "end": 282.4, "text": " So Q of 4 left results in rewards 0,"}, {"start": 282.4, "end": 285.52, "text": " because the first action is left and then because we"}, {"start": 285.52, "end": 288.23999999999995, "text": " follow the optimal policy afterwards,"}, {"start": 288.23999999999995, "end": 292.03999999999996, "text": " you get rewards 0, 0, 100,"}, {"start": 292.03999999999996, "end": 297.47999999999996, "text": " and so the return is 0 plus 0.5 times that plus 0.5 squared times that,"}, {"start": 297.47999999999996, "end": 299.59999999999997, "text": " plus 0.5 cubed times that,"}, {"start": 299.59999999999997, "end": 303.59999999999997, "text": " which is therefore equal to 12.5."}, {"start": 303.59999999999997, "end": 306.52, "text": " So Q of 4 left is 12.5,"}, {"start": 306.52, "end": 310.44, "text": " I'm going to write this here as 12.5."}, {"start": 310.44, "end": 314.8, "text": " It turns out if you were to carry out this exercise for all of"}, {"start": 314.8, "end": 316.8, "text": " the other states and all of the other actions,"}, {"start": 316.8, "end": 320.44, "text": " you end up with this being the Q of"}, {"start": 320.44, "end": 324.68, "text": " SA for different states and different actions."}, {"start": 324.68, "end": 326.8, "text": " Then finally, at the terminal state,"}, {"start": 326.8, "end": 328.68, "text": " well, it doesn't matter what you do,"}, {"start": 328.68, "end": 332.56, "text": " you just get that terminal reward 100 or 40."}, {"start": 332.56, "end": 335.56, "text": " So I'm just write down those terminal rewards over here."}, {"start": 335.56, "end": 340.76, "text": " So this is Q of SA for every state, state 1 through 6,"}, {"start": 340.76, "end": 342.28000000000003, "text": " and for the two actions,"}, {"start": 342.28000000000003, "end": 344.8, "text": " action left and action right."}, {"start": 344.8, "end": 351.72, "text": " Because the state action value function is almost always denoted by the letter Q,"}, {"start": 351.72, "end": 355.88, "text": " this is also often called the Q function."}, {"start": 355.88, "end": 361.2, "text": " So the terms Q function and state action value function are used interchangeably,"}, {"start": 361.2, "end": 365.28, "text": " and it tells you what are your returns or really what is the value,"}, {"start": 365.28, "end": 369.03999999999996, "text": " how good is it to take action A and state S,"}, {"start": 369.03999999999996, "end": 371.47999999999996, "text": " and then behave optimally after that."}, {"start": 371.47999999999996, "end": 376.52, "text": " Now, it turns out that once you can compute the Q function,"}, {"start": 376.52, "end": 379.4, "text": " this would give you a way to pick actions as well."}, {"start": 379.4, "end": 382.08, "text": " Here's the policy and return,"}, {"start": 382.08, "end": 386.44, "text": " and here are the values Q of SA from the previous slide."}, {"start": 386.44, "end": 391.15999999999997, "text": " You notice one interesting thing when you look at the different states,"}, {"start": 391.15999999999997, "end": 393.91999999999996, "text": " which is that if you take state 2,"}, {"start": 393.92, "end": 400.44, "text": " taking the action left results in a Q value or state action value of 50,"}, {"start": 400.44, "end": 403.56, "text": " which is actually the best possible return you can get from that state."}, {"start": 403.56, "end": 410.12, "text": " In state 3, Q of SA for the action left also gives you that higher return."}, {"start": 410.12, "end": 415.16, "text": " In state 4, the action left gives you the return you want,"}, {"start": 415.16, "end": 417.20000000000005, "text": " and in state 5,"}, {"start": 417.20000000000005, "end": 423.08000000000004, "text": " is actually the action going to the right that gives you that higher return of 20."}, {"start": 423.08, "end": 429.24, "text": " It turns out that the best possible return from any state S is"}, {"start": 429.24, "end": 433.84, "text": " the largest value of Q of SA maximizing over A."}, {"start": 433.84, "end": 435.44, "text": " Just to make sure this is clear,"}, {"start": 435.44, "end": 438.88, "text": " what I'm saying is that in state 4,"}, {"start": 438.88, "end": 441.91999999999996, "text": " there is Q of state 4 left,"}, {"start": 441.91999999999996, "end": 443.88, "text": " which is 12.5,"}, {"start": 443.88, "end": 447.28, "text": " and Q of state 4 right,"}, {"start": 447.28, "end": 449.24, "text": " which turns out to be 10,"}, {"start": 449.24, "end": 452.71999999999997, "text": " and the larger of these two values,"}, {"start": 452.72, "end": 457.8, "text": " which is 12.5, is the best possible return from that state 4."}, {"start": 457.8, "end": 462.04, "text": " In other words, the highest return you can hope to get from state 4 is 12.5,"}, {"start": 462.04, "end": 466.12, "text": " and it's actually the larger of these two numbers, 12.5 and 10."}, {"start": 466.12, "end": 474.0, "text": " Moreover, if you want your Mars rover to enjoy a return of 12.5 rather than say 10,"}, {"start": 474.0, "end": 482.04, "text": " then the action you should take is the action A that gives you the larger value of Q of SA."}, {"start": 482.04, "end": 490.52000000000004, "text": " The best possible action in state S is the action A that actually maximizes Q of SA."}, {"start": 490.52000000000004, "end": 497.16, "text": " This might give you a hint for why computing Q of SA is"}, {"start": 497.16, "end": 501.88, "text": " an important part of the reinforcement learning algorithm that we'll build later."}, {"start": 501.88, "end": 508.12, "text": " Namely, if you have a way of computing Q of SA for every state and for every action,"}, {"start": 508.12, "end": 514.76, "text": " then when you're in some state S, all you have to do is look at the different actions A,"}, {"start": 514.76, "end": 518.76, "text": " and pick the action A that maximizes Q of SA."}, {"start": 518.76, "end": 524.92, "text": " So pi of S can just pick the action A that gives the largest value of Q of SA,"}, {"start": 524.92, "end": 527.92, "text": " and that will turn out to be a good action."}, {"start": 527.92, "end": 530.44, "text": " In fact, it will turn out to be the optimal action."}, {"start": 530.44, "end": 535.84, "text": " Another intuition about why this makes sense is Q of SA is returned if you"}, {"start": 535.84, "end": 540.6800000000001, "text": " start in state S and take the action A and then behave optimally after that."}, {"start": 540.6800000000001, "end": 544.5600000000001, "text": " So in order to earn the biggest possible return,"}, {"start": 544.5600000000001, "end": 552.6, "text": " what you really want is to take the action A that results in the biggest total return."}, {"start": 552.6, "end": 557.48, "text": " That's why if only we have a way of computing Q of SA for every state,"}, {"start": 557.48, "end": 560.72, "text": " taking the action A that maximizes return under"}, {"start": 560.72, "end": 565.0400000000001, "text": " these circumstances seems like it's the best action to take in that state."}, {"start": 565.04, "end": 568.28, "text": " Although this isn't something you need to know for this course,"}, {"start": 568.28, "end": 574.8399999999999, "text": " I want to mention also that if you look online or look at the reinforcement learning literature,"}, {"start": 574.8399999999999, "end": 580.64, "text": " sometimes you also see this Q function written as Q star instead of Q,"}, {"start": 580.64, "end": 585.7199999999999, "text": " and this Q function is sometimes also called the optimal Q function."}, {"start": 585.7199999999999, "end": 590.0799999999999, "text": " These terms just refer to the Q function exactly as we've defined it."}, {"start": 590.0799999999999, "end": 594.04, "text": " So if you look at the reinforcement learning literature and read about Q star or"}, {"start": 594.04, "end": 599.12, "text": " the optimal Q function, that just means the state action value function that we've been talking about."}, {"start": 599.12, "end": 602.48, "text": " But for the purposes of this course, you don't need to worry about this."}, {"start": 602.48, "end": 605.0799999999999, "text": " So to summarize,"}, {"start": 605.0799999999999, "end": 609.7199999999999, "text": " if you can compute Q of SA for every state and every action,"}, {"start": 609.7199999999999, "end": 615.28, "text": " then that gives us a good way to compute the optimal policy pi of S."}, {"start": 615.28, "end": 620.68, "text": " So that's the state action value function or the Q function."}, {"start": 620.68, "end": 624.68, "text": " We'll talk later about how to come up with an algorithm to compute them,"}, {"start": 624.68, "end": 628.88, "text": " despite the slightly circular aspect of the definition of the Q function."}, {"start": 628.88, "end": 651.28, "text": " But first, let's take a look at the next video as some specific examples of what these values QSA actually look like."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=kbIedKCb94I
10.7 State-action value function | State-action value function example -[Machine Learning|Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Using the Mars Rover example, you've seen what the values of QSA are like. In order to keep holding our intuition about reinforcement learning problems and how the values of QSA change depending on the problem, we've provided an optional lab that lets you play around, modify the Mars Rover example and see for yourself how Q of SA will change. Let's take a look. Here's a Jupyter notebook that I hope you play with. After watching this video, I'm going to run these helper functions. Now, notice here that this specifies the number of states, that are two actions, so I wouldn't change these. This specifies the terminal left and the terminal right rewards, which has been 140, and then zero was the rewards of the intermediate states. The discount factor gamma was 0.5, and let's ignore the misstep probability for now. We'll talk about that in a later video. With these values, if you run this code, this will compute and visualize the optimal policy, as well as the Q function, Q of SA. You learn later about how to develop a learning algorithm to estimate or compute Q of SA yourself. For now, don't worry about what code we had written to compute Q of SA, but you see that the values here, Q of SA are the values we saw in the lecture. Now, here's where the fun starts. Let's change around some of the values and see how these things change. I'm going to update the terminal right reward to a much smaller value, say it's only 10. If I now rerun the code, look at how Q of SA changes. It now thinks that if you're in state five, then if you go left and behave optimally, you get 6.25, whereas if you go right and behave optimally after that, you get a return of only five. Now, when the reward at the right is so small, it's only 10, even when you're so close to it, you rather go left all the way. In fact, the optimal policy is now to go left from every single state. Let's make some other changes. I'm going to change the terminal right reward back to 40, but let me change the discount factor to 0.9. With a discount factor that's closer to one, this makes the mouse rover less impatient. It's willing to take longer to hold out for a higher reward because rewards in the future are not multiplied by 0.5 to some high power, it's multiplied by 0.9 to some high power, and so it's willing to be more patient because rewards in the future are not discounted or multiplied by as small a number, as when the discount was 0.5. So let's rerun the code. And now you see this is Q of sA for the different states. And now for state five, going left actually gives you a higher reward of 65.61 compared to 36. Notice by the way that 36 is 0.9 times this terminal reward of 40. So these numbers make sense. But when it's more patient, it's willing to go to the left even when you're in state five. Now let's change gamma to a much smaller number like 0.3. So this very heavily discounts rewards in the future. This makes it incredibly impatient. So let me rerun this code. And now the behavior has changed. Notice that now in state four is not going to have the patience to go for the larger 100 reward because the discount factor gamma is now so small is 0.3, it would rather go for the reward of 40. Even though it's a much smaller reward is closer, and that's what it would choose to do. So I hope that you can get a sense by playing around with these numbers yourself and running this code how the values of Q of SA change as well as how the optimal return, which you notice is a larger of these two numbers QSA, how that changes as well as how the optimal policy also changes. So I hope you go and play with the optional lab and change the reward function and change the discount factor gamma, and try different values and see for yourself how the values of Q of SA change, how the optimal return from different states change, and how the optimal policy changes depending on these different values. And by doing so, I hope that will sharpen your intuition about how these different quantities are affected depending on the rewards and so on in reinforcement learning application. After you play through the lab, we then be ready to come back and talk about what's probably the single most important equation in reinforcement learning, which is something called the Bellman equation. So I hope you have fun playing with the optional lab. And after that, let's come back to talk about Bellman equations.
[{"start": 0.0, "end": 4.2, "text": " Using the Mars Rover example,"}, {"start": 4.2, "end": 7.88, "text": " you've seen what the values of QSA are like."}, {"start": 7.88, "end": 11.24, "text": " In order to keep holding our intuition about"}, {"start": 11.24, "end": 14.48, "text": " reinforcement learning problems and how the values of"}, {"start": 14.48, "end": 17.76, "text": " QSA change depending on the problem,"}, {"start": 17.76, "end": 21.28, "text": " we've provided an optional lab that lets you play around,"}, {"start": 21.28, "end": 23.84, "text": " modify the Mars Rover example and see for"}, {"start": 23.84, "end": 27.560000000000002, "text": " yourself how Q of SA will change. Let's take a look."}, {"start": 27.56, "end": 30.919999999999998, "text": " Here's a Jupyter notebook that I hope you play with."}, {"start": 30.919999999999998, "end": 32.58, "text": " After watching this video,"}, {"start": 32.58, "end": 35.1, "text": " I'm going to run these helper functions."}, {"start": 35.1, "end": 39.519999999999996, "text": " Now, notice here that this specifies the number of states,"}, {"start": 39.519999999999996, "end": 40.76, "text": " that are two actions,"}, {"start": 40.76, "end": 43.04, "text": " so I wouldn't change these."}, {"start": 43.04, "end": 46.58, "text": " This specifies the terminal left and the terminal right rewards,"}, {"start": 46.58, "end": 48.519999999999996, "text": " which has been 140,"}, {"start": 48.519999999999996, "end": 53.04, "text": " and then zero was the rewards of the intermediate states."}, {"start": 53.04, "end": 55.519999999999996, "text": " The discount factor gamma was 0.5,"}, {"start": 55.52, "end": 58.480000000000004, "text": " and let's ignore the misstep probability for now."}, {"start": 58.480000000000004, "end": 60.56, "text": " We'll talk about that in a later video."}, {"start": 60.56, "end": 62.580000000000005, "text": " With these values,"}, {"start": 62.580000000000005, "end": 65.22, "text": " if you run this code,"}, {"start": 65.22, "end": 69.32000000000001, "text": " this will compute and visualize the optimal policy,"}, {"start": 69.32000000000001, "end": 73.44, "text": " as well as the Q function, Q of SA."}, {"start": 73.44, "end": 75.96000000000001, "text": " You learn later about how to develop"}, {"start": 75.96000000000001, "end": 80.12, "text": " a learning algorithm to estimate or compute Q of SA yourself."}, {"start": 80.12, "end": 85.12, "text": " For now, don't worry about what code we had written to compute Q of SA,"}, {"start": 85.12, "end": 87.32000000000001, "text": " but you see that the values here,"}, {"start": 87.32000000000001, "end": 91.12, "text": " Q of SA are the values we saw in the lecture."}, {"start": 91.12, "end": 93.32000000000001, "text": " Now, here's where the fun starts."}, {"start": 93.32000000000001, "end": 97.64, "text": " Let's change around some of the values and see how these things change."}, {"start": 97.64, "end": 103.32000000000001, "text": " I'm going to update the terminal right reward to a much smaller value,"}, {"start": 103.32000000000001, "end": 104.92, "text": " say it's only 10."}, {"start": 104.92, "end": 107.16, "text": " If I now rerun the code,"}, {"start": 107.16, "end": 110.48, "text": " look at how Q of SA changes."}, {"start": 110.48, "end": 114.44, "text": " It now thinks that if you're in state five,"}, {"start": 114.44, "end": 117.2, "text": " then if you go left and behave optimally,"}, {"start": 117.2, "end": 118.96, "text": " you get 6.25,"}, {"start": 118.96, "end": 121.96, "text": " whereas if you go right and behave optimally after that,"}, {"start": 121.96, "end": 123.96, "text": " you get a return of only five."}, {"start": 123.96, "end": 126.96, "text": " Now, when the reward at the right is so small,"}, {"start": 126.96, "end": 129.8, "text": " it's only 10, even when you're so close to it,"}, {"start": 129.8, "end": 132.64, "text": " you rather go left all the way."}, {"start": 132.64, "end": 137.6, "text": " In fact, the optimal policy is now to go left from every single state."}, {"start": 137.6, "end": 139.26, "text": " Let's make some other changes."}, {"start": 139.26, "end": 142.78, "text": " I'm going to change the terminal right reward back to 40,"}, {"start": 142.78, "end": 148.48, "text": " but let me change the discount factor to 0.9."}, {"start": 148.48, "end": 152.04, "text": " With a discount factor that's closer to one,"}, {"start": 152.04, "end": 155.28, "text": " this makes the mouse rover less impatient."}, {"start": 155.28, "end": 160.84, "text": " It's willing to take longer to hold out for a higher reward because rewards in"}, {"start": 160.84, "end": 165.48, "text": " the future are not multiplied by 0.5 to some high power,"}, {"start": 165.48, "end": 168.6, "text": " it's multiplied by 0.9 to some high power,"}, {"start": 168.6, "end": 172.24, "text": " and so it's willing to be more patient because"}, {"start": 172.24, "end": 177.60000000000002, "text": " rewards in the future are not discounted or multiplied by as small a number,"}, {"start": 177.60000000000002, "end": 180.12, "text": " as when the discount was 0.5."}, {"start": 180.12, "end": 182.44, "text": " So let's rerun the code."}, {"start": 182.44, "end": 187.56, "text": " And now you see this is Q of sA for the different states."}, {"start": 187.56, "end": 190.42000000000002, "text": " And now for state five,"}, {"start": 190.42000000000002, "end": 198.56, "text": " going left actually gives you a higher reward of 65.61 compared to 36."}, {"start": 198.56, "end": 203.32, "text": " Notice by the way that 36 is 0.9 times this terminal reward of 40."}, {"start": 203.32, "end": 204.8, "text": " So these numbers make sense."}, {"start": 204.8, "end": 206.12, "text": " But when it's more patient,"}, {"start": 206.12, "end": 209.84, "text": " it's willing to go to the left even when you're in state five."}, {"start": 209.84, "end": 215.2, "text": " Now let's change gamma to a much smaller number like 0.3."}, {"start": 215.2, "end": 218.52, "text": " So this very heavily discounts rewards in the future."}, {"start": 218.52, "end": 221.02, "text": " This makes it incredibly impatient."}, {"start": 221.02, "end": 223.04, "text": " So let me rerun this code."}, {"start": 223.04, "end": 224.82, "text": " And now the behavior has changed."}, {"start": 224.82, "end": 230.84, "text": " Notice that now in state four is not going to have"}, {"start": 230.84, "end": 235.4, "text": " the patience to go for the larger 100 reward because"}, {"start": 235.4, "end": 238.76, "text": " the discount factor gamma is now so small is 0.3,"}, {"start": 238.76, "end": 242.0, "text": " it would rather go for the reward of 40."}, {"start": 242.0, "end": 244.76, "text": " Even though it's a much smaller reward is closer,"}, {"start": 244.76, "end": 246.51999999999998, "text": " and that's what it would choose to do."}, {"start": 246.51999999999998, "end": 250.6, "text": " So I hope that you can get a sense by playing around with"}, {"start": 250.6, "end": 253.72, "text": " these numbers yourself and running this code how"}, {"start": 253.72, "end": 259.36, "text": " the values of Q of SA change as well as how the optimal return,"}, {"start": 259.36, "end": 263.92, "text": " which you notice is a larger of these two numbers QSA,"}, {"start": 263.92, "end": 269.28, "text": " how that changes as well as how the optimal policy also changes."}, {"start": 269.28, "end": 273.32, "text": " So I hope you go and play with the optional lab and"}, {"start": 273.32, "end": 277.84, "text": " change the reward function and change the discount factor gamma,"}, {"start": 277.84, "end": 283.16, "text": " and try different values and see for yourself how the values of Q of SA change,"}, {"start": 283.16, "end": 285.96000000000004, "text": " how the optimal return from different states change,"}, {"start": 285.96000000000004, "end": 290.52000000000004, "text": " and how the optimal policy changes depending on these different values."}, {"start": 290.52000000000004, "end": 291.96000000000004, "text": " And by doing so,"}, {"start": 291.96000000000004, "end": 296.92, "text": " I hope that will sharpen your intuition about how these different quantities"}, {"start": 296.92, "end": 303.24, "text": " are affected depending on the rewards and so on in reinforcement learning application."}, {"start": 303.24, "end": 305.20000000000005, "text": " After you play through the lab,"}, {"start": 305.20000000000005, "end": 308.6, "text": " we then be ready to come back and talk about what's probably"}, {"start": 308.6, "end": 311.74, "text": " the single most important equation in reinforcement learning,"}, {"start": 311.74, "end": 314.48, "text": " which is something called the Bellman equation."}, {"start": 314.48, "end": 318.16, "text": " So I hope you have fun playing with the optional lab."}, {"start": 318.16, "end": 342.52000000000004, "text": " And after that, let's come back to talk about Bellman equations."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=Jv8cKma_yIs
10.8 State-action value function | Bellman Equations -[Machine Learning|Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Let me summarize where we are. If you can compute the state action value function q of s, a, then it gives you a way to pick a good action from every state. Just pick the action a that gives you the largest value of q of s, a. So the question is, how do you compute these values q of s, a? In reinforcement learning, there's a key equation called the Bellman equation that will help us to compute the state action value function. Let's take a look at what is this equation. As a reminder, this is the definition of q of s, a. It is returned if you start in state s, take the action a once, and then behave optimally after that. In order to describe the Bellman equation, I'm going to use the following notation. I'm going to use s to denote the current state. Next, I'm going to use r of s to denote the reward of the current state. So for our little MDP example, we would have that r of 1, state 1 is 100, the reward of state 2 is 0, and so on, and the reward of state 6 is 40. I'm going to use the alphabet a to denote the current action. So the action that you take in the state s, after you take the action a, you get to some new state. For example, if you're in state 4 and you take the action left, then you get to state 3. And so I'm going to use s prime to denote the state you get to after taking that action a from the current state s. I'm also going to use a prime to denote the action that you might take in state s prime, the new state that you got to. The notation convention, by the way, is that s a correspond to the current state and action, and when we add the prime, that's the next state and the next action. The Bellman equation is the following. It says that q of s a, that is, the return under this set of assumptions, that's equal to r of s, so the reward you get for being in that state, plus the discount factor gamma times max over all possible actions a prime of q of s prime, the new state you just got to, and then of a prime. There's a lot going on in this equation, so let's first take a look at some examples that will come back to see why this equation might make sense. Let's look at an example. Let's look at q of state 2 and action right and apply Bellman equation to this to see what value it gives us. So if the current state is state 2 and the action is to go right, then the next state you get to after going right, s prime, will be the state 3. So the Bellman equation says q of 2 right is r of s, so this r of state 2, which is just a reward 0, plus the discount factor gamma, which we've set to 0.5 in this example, times max of the q values in state s prime, in state 3. So this is going to be the max of 25 and 6.25, since this is max over a prime of q of s prime comma a prime, and this is taking the larger of 25 or 6.25, because those are the two choices for state 3. And this turns out to be equal to 0 plus 0.5 times 25, which is equal to 12.5, which fortunately is q of 2 and then the action right. Let's look at just one more example. Let me pick state 4 and see what is q of state 4 if you decide to go left. In this case, the current state is 4, current action is to go left, and so the next state, if you start from 4 and go left, you end up also at state 3. So s prime is 3 again. The equation will say this is equal to r of s, so r of state 4, which is 0 plus 0.5 to discount factor gamma of max over a prime of q of s prime, that is to say 3 again, comma a prime. So once again, the q values for state 3 are 25 and 6.25, and the larger of these is 25. And so this works out to be r of 4 is 0 plus 0.5 times 25, which is again equal to 12.5. So that's why q of 4 with the action left is also equal to 12.5. Just one note, if you're in a terminal state, then Bellman equation simplifies to q of sA equals to r of s because there's no state s prime, and so that second term would go away, which is why q of sA in the terminal states is just 100, 100, or 40, 40. If you wish, feel free to pause the video and apply the Bellman equation to any other state action in this NDP and check for yourself if this math works out. Just to recap, this is how we had defined q of sA, and we saw earlier that the best possible return from any state s is max over a q of sA. In fact, just to rename s and a, it turns out that the best possible return from a state s prime is max over s prime of a prime, right? I didn't really do anything other than rename s to s prime and a to a prime, but this will make some of the intuitions a little bit easier later. But for any state s prime, like state 3, the best possible return from, say, state 3 is the max of all possible actions of q of s prime a prime. So here again is the Bellman equation, and the intuition that this captures is if you're starting from state s and you're going to take action a and then act optimally after that, then you're going to see some sequence of rewards over time. In particular, the return will be computed from the reward at the first step plus gamma times the reward at the second step plus gamma squared times the reward at the third step and so on, plus dot dot dot until you get the terminal state. So what Bellman equation says is this sequence of rewards with the discount factors can be broken down into two components. First, this R of s, that's the reward you get right away. In the reinforcement learning literature, this is sometimes also called the immediate reward, but that's what R1 is, is the reward you get for starting out in some state s. The second term then is the following. After you start in state s and take action a, you get to some new state s prime. The definition of q of s a assumes we're going to behave optimally after that. So after we get to s prime, we're going to behave optimally and get the best possible return from the state s prime. And so what this is, max over a prime of q of s prime a prime, this is the return from behaving optimally starting from the state s prime. That's exactly what we had written up here, is the best possible return for when you start from state s prime. Another way of phrasing this is, this total return down here is also equal to R1 plus, and then I'm going to factor out gamma in the math, is gamma times R2 plus, and then instead of gamma squared, it's just gamma times R3 plus gamma squared times R4 plus dot dot dot. Notice that if you were starting from state s prime, the sequence of rewards you get will be R2, then R3, then R4, and so on. And that's why this expression here, that's the total return if you were to start from state s prime. And if you were to behave optimally, then this expression should be the best possible return for starting from state s prime, which is why this sequence of this counter was equals that max of a prime of q of s prime a prime, and there were also leftover with this extra discount factor gamma there, which is why q of s a is also equal to this expression over here. In case you think this is quite complicated and you aren't following all the details, don't worry about it. So long as you apply this equation, you will manage to get the right results. But the high level intuition I hope you take away is that the total return you get in the reinforcement learning problem has two parts. The first part is this reward that you get right away. And then the second part is gamma times the return you get starting from the next state s prime. And as these two components together, R of s plus gamma times return from the next state that is equal to the total return from the current state s. That is the essence of development equation. So just to relate this back to our earlier example, q of four left, that's the total return for starting state four and going left. So if you were to go left in state four, the rewards you get are zero in state four, zero in state three, zero in state two, and then 100, which is why the total return is this 0.5 squared plus 0.5 cubed, which was 12.5. And what Bellman equation is saying is that we can break this up into two pieces. There is this zero, which is R of the state four, and then plus 0.5 times this other sequence, zero plus 0.5, zero plus 0.5 squared times 100. But if you look at what this sequence is, this is really the optimal return from the next state s prime that you got to after taking the action left from state four. So that's why this is equal to the reward four plus 0.5 times the optimal return from state three, because if you were to start from state three, the rewards you get would be zero followed by zero followed by 100. So this is optimal return from state three. And that's why this is just R of four plus 0.5 max over a prime q of state three, a prime. I know the Bellman equation is a somewhat complicated equation, breaking down your total returns into the reward you get right away, the immediate reward plus gamma times the returns from the next state s prime. If it kind of makes sense to you, but not fully, it's okay. Don't worry about it. You can still apply Bellman's equations to get a reinforcement learning algorithm to work correctly. But I hope that at least a high level intuition of why breaking down the rewards into what you get right away plus what you get in the future. I hope that makes sense. Before moving on to develop a reinforcement learning algorithm, we have coming up next an optional video on stochastic Markov decision processes or on reinforcement learning applications where the actions that you take can have a slightly random effect. Take a look at the optional video if you wish. And then after that, we'll start to develop a reinforcement learning algorithm.
[{"start": 0.0, "end": 4.44, "text": " Let me summarize where we are."}, {"start": 4.44, "end": 8.4, "text": " If you can compute the state action value function q of s,"}, {"start": 8.4, "end": 13.0, "text": " a, then it gives you a way to pick a good action from every state."}, {"start": 13.0, "end": 17.0, "text": " Just pick the action a that gives you the largest value of q of s,"}, {"start": 17.0, "end": 18.0, "text": " a."}, {"start": 18.0, "end": 21.44, "text": " So the question is, how do you compute these values q of s,"}, {"start": 21.44, "end": 22.44, "text": " a?"}, {"start": 22.44, "end": 27.12, "text": " In reinforcement learning, there's a key equation called the Bellman equation that will help"}, {"start": 27.12, "end": 30.560000000000002, "text": " us to compute the state action value function."}, {"start": 30.560000000000002, "end": 33.760000000000005, "text": " Let's take a look at what is this equation."}, {"start": 33.760000000000005, "end": 38.08, "text": " As a reminder, this is the definition of q of s, a."}, {"start": 38.08, "end": 42.120000000000005, "text": " It is returned if you start in state s, take the action a once, and then behave optimally"}, {"start": 42.120000000000005, "end": 43.82, "text": " after that."}, {"start": 43.82, "end": 48.64, "text": " In order to describe the Bellman equation, I'm going to use the following notation."}, {"start": 48.64, "end": 52.120000000000005, "text": " I'm going to use s to denote the current state."}, {"start": 52.12, "end": 59.08, "text": " Next, I'm going to use r of s to denote the reward of the current state."}, {"start": 59.08, "end": 66.8, "text": " So for our little MDP example, we would have that r of 1, state 1 is 100, the reward of"}, {"start": 66.8, "end": 73.32, "text": " state 2 is 0, and so on, and the reward of state 6 is 40."}, {"start": 73.32, "end": 78.38, "text": " I'm going to use the alphabet a to denote the current action."}, {"start": 78.38, "end": 84.92, "text": " So the action that you take in the state s, after you take the action a, you get to some"}, {"start": 84.92, "end": 85.92, "text": " new state."}, {"start": 85.92, "end": 91.03999999999999, "text": " For example, if you're in state 4 and you take the action left, then you get to state"}, {"start": 91.03999999999999, "end": 92.36, "text": " 3."}, {"start": 92.36, "end": 98.16, "text": " And so I'm going to use s prime to denote the state you get to after taking that action"}, {"start": 98.16, "end": 100.82, "text": " a from the current state s."}, {"start": 100.82, "end": 108.38, "text": " I'm also going to use a prime to denote the action that you might take in state s prime,"}, {"start": 108.38, "end": 110.11999999999999, "text": " the new state that you got to."}, {"start": 110.11999999999999, "end": 115.94, "text": " The notation convention, by the way, is that s a correspond to the current state and action,"}, {"start": 115.94, "end": 120.39999999999999, "text": " and when we add the prime, that's the next state and the next action."}, {"start": 120.39999999999999, "end": 123.44, "text": " The Bellman equation is the following."}, {"start": 123.44, "end": 132.28, "text": " It says that q of s a, that is, the return under this set of assumptions, that's equal"}, {"start": 132.28, "end": 142.24, "text": " to r of s, so the reward you get for being in that state, plus the discount factor gamma"}, {"start": 142.24, "end": 150.72, "text": " times max over all possible actions a prime of q of s prime, the new state you just got"}, {"start": 150.72, "end": 154.92, "text": " to, and then of a prime."}, {"start": 154.92, "end": 159.48, "text": " There's a lot going on in this equation, so let's first take a look at some examples that"}, {"start": 159.48, "end": 163.24, "text": " will come back to see why this equation might make sense."}, {"start": 163.24, "end": 164.66, "text": " Let's look at an example."}, {"start": 164.66, "end": 172.64, "text": " Let's look at q of state 2 and action right and apply Bellman equation to this to see"}, {"start": 172.64, "end": 175.26, "text": " what value it gives us."}, {"start": 175.26, "end": 183.22, "text": " So if the current state is state 2 and the action is to go right, then the next state"}, {"start": 183.22, "end": 187.94, "text": " you get to after going right, s prime, will be the state 3."}, {"start": 187.94, "end": 198.0, "text": " So the Bellman equation says q of 2 right is r of s, so this r of state 2, which is"}, {"start": 198.0, "end": 206.64, "text": " just a reward 0, plus the discount factor gamma, which we've set to 0.5 in this example,"}, {"start": 206.64, "end": 214.62, "text": " times max of the q values in state s prime, in state 3."}, {"start": 214.62, "end": 224.4, "text": " So this is going to be the max of 25 and 6.25, since this is max over a prime of q of s prime"}, {"start": 224.4, "end": 233.72, "text": " comma a prime, and this is taking the larger of 25 or 6.25, because those are the two choices"}, {"start": 233.72, "end": 235.34, "text": " for state 3."}, {"start": 235.34, "end": 246.5, "text": " And this turns out to be equal to 0 plus 0.5 times 25, which is equal to 12.5, which fortunately"}, {"start": 246.5, "end": 250.28, "text": " is q of 2 and then the action right."}, {"start": 250.28, "end": 251.86, "text": " Let's look at just one more example."}, {"start": 251.86, "end": 259.78000000000003, "text": " Let me pick state 4 and see what is q of state 4 if you decide to go left."}, {"start": 259.78000000000003, "end": 265.62, "text": " In this case, the current state is 4, current action is to go left, and so the next state,"}, {"start": 265.62, "end": 270.24, "text": " if you start from 4 and go left, you end up also at state 3."}, {"start": 270.24, "end": 272.5, "text": " So s prime is 3 again."}, {"start": 272.5, "end": 282.26, "text": " The equation will say this is equal to r of s, so r of state 4, which is 0 plus 0.5 to"}, {"start": 282.26, "end": 290.56, "text": " discount factor gamma of max over a prime of q of s prime, that is to say 3 again, comma"}, {"start": 290.56, "end": 292.12, "text": " a prime."}, {"start": 292.12, "end": 300.12, "text": " So once again, the q values for state 3 are 25 and 6.25, and the larger of these is 25."}, {"start": 300.12, "end": 310.88, "text": " And so this works out to be r of 4 is 0 plus 0.5 times 25, which is again equal to 12.5."}, {"start": 310.88, "end": 317.48, "text": " So that's why q of 4 with the action left is also equal to 12.5."}, {"start": 317.48, "end": 324.64, "text": " Just one note, if you're in a terminal state, then Bellman equation simplifies to q of sA"}, {"start": 324.64, "end": 330.4, "text": " equals to r of s because there's no state s prime, and so that second term would go"}, {"start": 330.4, "end": 337.53999999999996, "text": " away, which is why q of sA in the terminal states is just 100, 100, or 40, 40."}, {"start": 337.53999999999996, "end": 341.82, "text": " If you wish, feel free to pause the video and apply the Bellman equation to any other"}, {"start": 341.82, "end": 348.65999999999997, "text": " state action in this NDP and check for yourself if this math works out."}, {"start": 348.66, "end": 356.20000000000005, "text": " Just to recap, this is how we had defined q of sA, and we saw earlier that the best"}, {"start": 356.20000000000005, "end": 362.28000000000003, "text": " possible return from any state s is max over a q of sA."}, {"start": 362.28000000000003, "end": 368.20000000000005, "text": " In fact, just to rename s and a, it turns out that the best possible return from a state"}, {"start": 368.20000000000005, "end": 374.04, "text": " s prime is max over s prime of a prime, right?"}, {"start": 374.04, "end": 379.28000000000003, "text": " I didn't really do anything other than rename s to s prime and a to a prime, but this will"}, {"start": 379.28000000000003, "end": 382.64000000000004, "text": " make some of the intuitions a little bit easier later."}, {"start": 382.64000000000004, "end": 388.16, "text": " But for any state s prime, like state 3, the best possible return from, say, state 3 is"}, {"start": 388.16, "end": 392.88, "text": " the max of all possible actions of q of s prime a prime."}, {"start": 392.88, "end": 400.32000000000005, "text": " So here again is the Bellman equation, and the intuition that this captures is if you're"}, {"start": 400.32, "end": 406.24, "text": " starting from state s and you're going to take action a and then act optimally after"}, {"start": 406.24, "end": 411.56, "text": " that, then you're going to see some sequence of rewards over time."}, {"start": 411.56, "end": 419.34, "text": " In particular, the return will be computed from the reward at the first step plus gamma"}, {"start": 419.34, "end": 424.98, "text": " times the reward at the second step plus gamma squared times the reward at the third step"}, {"start": 424.98, "end": 429.0, "text": " and so on, plus dot dot dot until you get the terminal state."}, {"start": 429.0, "end": 436.6, "text": " So what Bellman equation says is this sequence of rewards with the discount factors can be"}, {"start": 436.6, "end": 438.88, "text": " broken down into two components."}, {"start": 438.88, "end": 445.72, "text": " First, this R of s, that's the reward you get right away."}, {"start": 445.72, "end": 450.04, "text": " In the reinforcement learning literature, this is sometimes also called the immediate"}, {"start": 450.04, "end": 457.12, "text": " reward, but that's what R1 is, is the reward you get for starting out in some state s."}, {"start": 457.12, "end": 460.52, "text": " The second term then is the following."}, {"start": 460.52, "end": 467.52, "text": " After you start in state s and take action a, you get to some new state s prime."}, {"start": 467.52, "end": 472.8, "text": " The definition of q of s a assumes we're going to behave optimally after that."}, {"start": 472.8, "end": 477.72, "text": " So after we get to s prime, we're going to behave optimally and get the best possible"}, {"start": 477.72, "end": 480.68, "text": " return from the state s prime."}, {"start": 480.68, "end": 488.28000000000003, "text": " And so what this is, max over a prime of q of s prime a prime, this is the return from"}, {"start": 488.28000000000003, "end": 493.98, "text": " behaving optimally starting from the state s prime."}, {"start": 493.98, "end": 500.2, "text": " That's exactly what we had written up here, is the best possible return for when you start"}, {"start": 500.2, "end": 502.4, "text": " from state s prime."}, {"start": 502.4, "end": 511.35999999999996, "text": " Another way of phrasing this is, this total return down here is also equal to R1 plus,"}, {"start": 511.35999999999996, "end": 516.92, "text": " and then I'm going to factor out gamma in the math, is gamma times R2 plus, and then"}, {"start": 516.92, "end": 524.16, "text": " instead of gamma squared, it's just gamma times R3 plus gamma squared times R4 plus"}, {"start": 524.16, "end": 525.8, "text": " dot dot dot."}, {"start": 525.8, "end": 531.4399999999999, "text": " Notice that if you were starting from state s prime, the sequence of rewards you get will"}, {"start": 531.44, "end": 537.12, "text": " be R2, then R3, then R4, and so on."}, {"start": 537.12, "end": 545.08, "text": " And that's why this expression here, that's the total return if you were to start from"}, {"start": 545.08, "end": 547.36, "text": " state s prime."}, {"start": 547.36, "end": 552.5, "text": " And if you were to behave optimally, then this expression should be the best possible"}, {"start": 552.5, "end": 561.2, "text": " return for starting from state s prime, which is why this sequence of this counter was equals"}, {"start": 561.2, "end": 567.5200000000001, "text": " that max of a prime of q of s prime a prime, and there were also leftover with this extra"}, {"start": 567.5200000000001, "end": 574.32, "text": " discount factor gamma there, which is why q of s a is also equal to this expression"}, {"start": 574.32, "end": 576.48, "text": " over here."}, {"start": 576.48, "end": 580.6, "text": " In case you think this is quite complicated and you aren't following all the details,"}, {"start": 580.6, "end": 582.1, "text": " don't worry about it."}, {"start": 582.1, "end": 587.0, "text": " So long as you apply this equation, you will manage to get the right results."}, {"start": 587.0, "end": 593.44, "text": " But the high level intuition I hope you take away is that the total return you get in the"}, {"start": 593.44, "end": 596.88, "text": " reinforcement learning problem has two parts."}, {"start": 596.88, "end": 601.76, "text": " The first part is this reward that you get right away."}, {"start": 601.76, "end": 608.44, "text": " And then the second part is gamma times the return you get starting from the next state"}, {"start": 608.44, "end": 609.9, "text": " s prime."}, {"start": 609.9, "end": 616.6, "text": " And as these two components together, R of s plus gamma times return from the next state"}, {"start": 616.6, "end": 621.4, "text": " that is equal to the total return from the current state s."}, {"start": 621.4, "end": 624.72, "text": " That is the essence of development equation."}, {"start": 624.72, "end": 631.2, "text": " So just to relate this back to our earlier example, q of four left, that's the total"}, {"start": 631.2, "end": 635.0400000000001, "text": " return for starting state four and going left."}, {"start": 635.0400000000001, "end": 641.76, "text": " So if you were to go left in state four, the rewards you get are zero in state four, zero"}, {"start": 641.76, "end": 647.36, "text": " in state three, zero in state two, and then 100, which is why the total return is this"}, {"start": 647.36, "end": 652.64, "text": " 0.5 squared plus 0.5 cubed, which was 12.5."}, {"start": 652.64, "end": 657.0, "text": " And what Bellman equation is saying is that we can break this up into two pieces."}, {"start": 657.0, "end": 667.52, "text": " There is this zero, which is R of the state four, and then plus 0.5 times this other sequence,"}, {"start": 667.52, "end": 675.0, "text": " zero plus 0.5, zero plus 0.5 squared times 100."}, {"start": 675.0, "end": 679.92, "text": " But if you look at what this sequence is, this is really the optimal return from the"}, {"start": 679.92, "end": 685.76, "text": " next state s prime that you got to after taking the action left from state four."}, {"start": 685.76, "end": 694.0799999999999, "text": " So that's why this is equal to the reward four plus 0.5 times the optimal return from"}, {"start": 694.08, "end": 698.1600000000001, "text": " state three, because if you were to start from state three, the rewards you get would"}, {"start": 698.1600000000001, "end": 702.1600000000001, "text": " be zero followed by zero followed by 100."}, {"start": 702.1600000000001, "end": 706.6800000000001, "text": " So this is optimal return from state three."}, {"start": 706.6800000000001, "end": 715.94, "text": " And that's why this is just R of four plus 0.5 max over a prime q of state three, a prime."}, {"start": 715.94, "end": 720.4000000000001, "text": " I know the Bellman equation is a somewhat complicated equation, breaking down your total"}, {"start": 720.4, "end": 726.12, "text": " returns into the reward you get right away, the immediate reward plus gamma times the"}, {"start": 726.12, "end": 729.24, "text": " returns from the next state s prime."}, {"start": 729.24, "end": 732.36, "text": " If it kind of makes sense to you, but not fully, it's okay."}, {"start": 732.36, "end": 733.36, "text": " Don't worry about it."}, {"start": 733.36, "end": 737.72, "text": " You can still apply Bellman's equations to get a reinforcement learning algorithm to"}, {"start": 737.72, "end": 738.72, "text": " work correctly."}, {"start": 738.72, "end": 744.52, "text": " But I hope that at least a high level intuition of why breaking down the rewards into what"}, {"start": 744.52, "end": 747.4399999999999, "text": " you get right away plus what you get in the future."}, {"start": 747.4399999999999, "end": 750.16, "text": " I hope that makes sense."}, {"start": 750.16, "end": 754.92, "text": " Before moving on to develop a reinforcement learning algorithm, we have coming up next"}, {"start": 754.92, "end": 761.68, "text": " an optional video on stochastic Markov decision processes or on reinforcement learning applications"}, {"start": 761.68, "end": 767.0, "text": " where the actions that you take can have a slightly random effect."}, {"start": 767.0, "end": 768.8399999999999, "text": " Take a look at the optional video if you wish."}, {"start": 768.84, "end": 780.48, "text": " And then after that, we'll start to develop a reinforcement learning algorithm."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=-5W5c1ZrSZ8
10.9 State-action value function | Random (stochastic) environment (Optional) -[ML | Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
In some applications, when you take an action, the outcome is not always completely reliable. For example, if you command your Mars rover to go left, maybe there's a little bit of a rock slide or maybe the floor is really slippery and so it slips and goes in the wrong direction. In practice, many robots don't always manage to do exactly what you tell them because of wind blowing and off course or the wheel slipping or something else. So there's a generalization of the reinforcement learning framework we've talked about so far, which models random or stochastic environments. In this optional video, we'll talk about how these reinforcement learning problems work. Continuing with our simplified Mars rover example, let's say you take the action and command it to go left. Most of the time it will succeed, but what if 10% of the time or 0.1% of the time, it actually ends up accidentally slipping and going in the opposite direction. So if you command it to go left, it has a 90% chance or 0.9 chance of correctly going the left direction, but an 0.1 chance of actually heading to the right. So that it has a 9% chance of ending up in state 3 in this example and a 10% chance of ending up in state 5. Conversely, if you were to command it to go right and take the action right, it has a 0.9 chance of ending up in state 5 and a 0.1 chance of ending up in state 3. This would be an example of a stochastic environment. Let's see what happens in this reinforcement learning problem. Let's say you use this policy shown here, where you go left in states 2, 3, and 4 and go right or try to go right in state 5. If you were to start in state 4 and you were to follow this policy, then the actual sequence of states you visit may be random. For example, in state 4, you will go left and maybe you're a little bit lucky and it actually gets to state 3. And then you try to go left again and maybe it actually gets there, you try to go left again and it gets to that state. If this is what happens, you end up with the sequence of rewards 0, 0, 0, 100. But if you were to try this exact same policy a second time, maybe you're a little less lucky. The second time you start here, try to go left and say it succeeds. So 0 from state 4, 0 from state 3. Here you try to go left, but you got unlucky this time and the robot slips and ends up heading back to state 4 instead. And then you try to go left, then left, then left and eventually it gets to that reward of 100. In that case, this will be the sequence of rewards you observe. This went from 4 to 3, back to 4, 3, 2, then 1. Or it's even possible if you tell it from state 4 to go left following the policy, you may get unlucky even on the first step and you end up going to state 5 because it slipped. And then state 5, you command it to go right and it succeeds as you end up here. And in this case, the sequence of rewards you see will be 0, 0, 40 because it went from 4 to 5 and then state 6. We had previously written out the return as this sum of discounted rewards. But when the reinforcement learning problem is stochastic, there isn't one sequence of rewards that you see for sure. Instead, you see this sequence of different rewards. So in a stochastic reinforcement learning problem, what we're interested in is not maximizing the return because that's a random number. What we're interested in is maximizing the average value of the sum of discounted rewards. And by average value, I mean if you were to take your policy and try it out a thousand times or a hundred thousand times or a million times, you get lots of different reward sequences like that. And if you were to take the average over all of these different sequences of the sum of discounted rewards, then that's what we call the expected return. In statistics, the term expected is just another way of saying average. But what this means is we want to maximize what we expect to get on average in terms of the sum of discounted rewards. The mathematical notation for this is to write this as E. E stands for expected value of R1 plus gamma R2 plus and so on. So the job of reinforcement learning algorithm is to choose a policy pi to maximize the average or the expected sum of discounted rewards. So to summarize, when you have a stochastic reinforcement learning problem or a stochastic Markov decision process, the goal is to choose a policy that tells what action A to take in state S so as to maximize the expected return. The last way that this changes what we've talked about is it modifies Bellman equation a little bit. So here's Bellman equation exactly as we've written down. But the difference now is that when you take the action A in state S, the next state S prime you get to is random. When you're in state V and you try to go left, the next state S prime, it could be the state two, or it could be the state four. So S prime is now random, which is why we also put an average operator or an expected operator here. So we say that the total return from state S taking action A once and then behaving optimally is equal to the reward you get right away, also called the immediate reward, plus the discount factor gamma plus what you expect to get on average of the future returns. If you want to sharpen your intuition about what happens with these stochastic reinforcement learning problems, you go back to the optional lab that I had shown you just now, where this parameter misstep probability is the probability of your Mars rover going in the opposite direction than you had commanded it to. So if we set misstep prop to be 0.1 and we execute the notebook, and so these numbers up here are the optimal return if you were to take the best possible actions, take this optimal policy, but the robot were to step in the wrong direction 10% of the time. And these are the Q values for the stochastic MDP. Notice that these values are now a little bit lower because you can't control the robot as well as before. The Q values as well as the optimal returns have gone down a bit. And in fact, if you were to increase the misstep probability, say 40% of the time, the robot doesn't even go in the direction you had commanded it to, only 60% of the time it goes where you told it to, then these values end up even lower because your degree of control over the robot has decreased. So I encourage you to play with the optional lab and change the value of the misstep probability and see how that affects the optimal return or the optimal expected return as well as the Q values Q of SA. Now in everything we've done so far, we've been using this Markov decision process, this Mars rover with just six states. For many practical applications, the number of states will be much larger. In the next video, we'll take the reinforcement learning or Markov decision process framework we've talked about so far and generalize it to this much richer and maybe even more interesting set of problems with much larger and particular with continuous state spaces. Let's take a look at that in the next video.
[{"start": 0.0, "end": 9.040000000000001, "text": " In some applications, when you take an action, the outcome is not always completely reliable."}, {"start": 9.040000000000001, "end": 13.82, "text": " For example, if you command your Mars rover to go left, maybe there's a little bit of"}, {"start": 13.82, "end": 18.16, "text": " a rock slide or maybe the floor is really slippery and so it slips and goes in the wrong"}, {"start": 18.16, "end": 19.16, "text": " direction."}, {"start": 19.16, "end": 25.2, "text": " In practice, many robots don't always manage to do exactly what you tell them because of"}, {"start": 25.2, "end": 28.88, "text": " wind blowing and off course or the wheel slipping or something else."}, {"start": 28.88, "end": 34.12, "text": " So there's a generalization of the reinforcement learning framework we've talked about so far,"}, {"start": 34.12, "end": 38.12, "text": " which models random or stochastic environments."}, {"start": 38.12, "end": 43.08, "text": " In this optional video, we'll talk about how these reinforcement learning problems work."}, {"start": 43.08, "end": 48.58, "text": " Continuing with our simplified Mars rover example, let's say you take the action and"}, {"start": 48.58, "end": 51.239999999999995, "text": " command it to go left."}, {"start": 51.239999999999995, "end": 56.8, "text": " Most of the time it will succeed, but what if 10% of the time or 0.1% of the time, it"}, {"start": 56.8, "end": 63.12, "text": " actually ends up accidentally slipping and going in the opposite direction."}, {"start": 63.12, "end": 69.24, "text": " So if you command it to go left, it has a 90% chance or 0.9 chance of correctly going"}, {"start": 69.24, "end": 74.86, "text": " the left direction, but an 0.1 chance of actually heading to the right."}, {"start": 74.86, "end": 80.72, "text": " So that it has a 9% chance of ending up in state 3 in this example and a 10% chance of"}, {"start": 80.72, "end": 82.75999999999999, "text": " ending up in state 5."}, {"start": 82.76, "end": 88.48, "text": " Conversely, if you were to command it to go right and take the action right, it has a"}, {"start": 88.48, "end": 96.48, "text": " 0.9 chance of ending up in state 5 and a 0.1 chance of ending up in state 3."}, {"start": 96.48, "end": 101.08000000000001, "text": " This would be an example of a stochastic environment."}, {"start": 101.08000000000001, "end": 105.32000000000001, "text": " Let's see what happens in this reinforcement learning problem."}, {"start": 105.32, "end": 113.91999999999999, "text": " Let's say you use this policy shown here, where you go left in states 2, 3, and 4 and"}, {"start": 113.91999999999999, "end": 117.11999999999999, "text": " go right or try to go right in state 5."}, {"start": 117.11999999999999, "end": 124.11999999999999, "text": " If you were to start in state 4 and you were to follow this policy, then the actual sequence"}, {"start": 124.11999999999999, "end": 127.39999999999999, "text": " of states you visit may be random."}, {"start": 127.39999999999999, "end": 133.44, "text": " For example, in state 4, you will go left and maybe you're a little bit lucky and it"}, {"start": 133.44, "end": 136.04, "text": " actually gets to state 3."}, {"start": 136.04, "end": 140.68, "text": " And then you try to go left again and maybe it actually gets there, you try to go left"}, {"start": 140.68, "end": 143.76, "text": " again and it gets to that state."}, {"start": 143.76, "end": 151.12, "text": " If this is what happens, you end up with the sequence of rewards 0, 0, 0, 100."}, {"start": 151.12, "end": 156.48, "text": " But if you were to try this exact same policy a second time, maybe you're a little less"}, {"start": 156.48, "end": 157.48, "text": " lucky."}, {"start": 157.48, "end": 161.92, "text": " The second time you start here, try to go left and say it succeeds."}, {"start": 161.92, "end": 164.6, "text": " So 0 from state 4, 0 from state 3."}, {"start": 164.6, "end": 169.76, "text": " Here you try to go left, but you got unlucky this time and the robot slips and ends up"}, {"start": 169.76, "end": 172.32, "text": " heading back to state 4 instead."}, {"start": 172.32, "end": 177.07999999999998, "text": " And then you try to go left, then left, then left and eventually it gets to that reward"}, {"start": 177.07999999999998, "end": 178.07999999999998, "text": " of 100."}, {"start": 178.07999999999998, "end": 182.44, "text": " In that case, this will be the sequence of rewards you observe."}, {"start": 182.44, "end": 187.32, "text": " This went from 4 to 3, back to 4, 3, 2, then 1."}, {"start": 187.32, "end": 192.23999999999998, "text": " Or it's even possible if you tell it from state 4 to go left following the policy, you"}, {"start": 192.23999999999998, "end": 197.88, "text": " may get unlucky even on the first step and you end up going to state 5 because it slipped."}, {"start": 197.88, "end": 202.28, "text": " And then state 5, you command it to go right and it succeeds as you end up here."}, {"start": 202.28, "end": 207.51999999999998, "text": " And in this case, the sequence of rewards you see will be 0, 0, 40 because it went from"}, {"start": 207.51999999999998, "end": 210.79999999999998, "text": " 4 to 5 and then state 6."}, {"start": 210.8, "end": 218.56, "text": " We had previously written out the return as this sum of discounted rewards."}, {"start": 218.56, "end": 224.64000000000001, "text": " But when the reinforcement learning problem is stochastic, there isn't one sequence of"}, {"start": 224.64000000000001, "end": 226.24, "text": " rewards that you see for sure."}, {"start": 226.24, "end": 230.0, "text": " Instead, you see this sequence of different rewards."}, {"start": 230.0, "end": 237.32000000000002, "text": " So in a stochastic reinforcement learning problem, what we're interested in is not maximizing"}, {"start": 237.32000000000002, "end": 240.28, "text": " the return because that's a random number."}, {"start": 240.28, "end": 247.0, "text": " What we're interested in is maximizing the average value of the sum of discounted rewards."}, {"start": 247.0, "end": 252.56, "text": " And by average value, I mean if you were to take your policy and try it out a thousand"}, {"start": 252.56, "end": 257.36, "text": " times or a hundred thousand times or a million times, you get lots of different reward sequences"}, {"start": 257.36, "end": 258.36, "text": " like that."}, {"start": 258.36, "end": 263.4, "text": " And if you were to take the average over all of these different sequences of the sum of"}, {"start": 263.4, "end": 269.2, "text": " discounted rewards, then that's what we call the expected return."}, {"start": 269.2, "end": 275.08, "text": " In statistics, the term expected is just another way of saying average."}, {"start": 275.08, "end": 282.5, "text": " But what this means is we want to maximize what we expect to get on average in terms"}, {"start": 282.5, "end": 285.15999999999997, "text": " of the sum of discounted rewards."}, {"start": 285.15999999999997, "end": 293.36, "text": " The mathematical notation for this is to write this as E. E stands for expected value of"}, {"start": 293.36, "end": 297.48, "text": " R1 plus gamma R2 plus and so on."}, {"start": 297.48, "end": 304.6, "text": " So the job of reinforcement learning algorithm is to choose a policy pi to maximize the average"}, {"start": 304.6, "end": 307.92, "text": " or the expected sum of discounted rewards."}, {"start": 307.92, "end": 313.20000000000005, "text": " So to summarize, when you have a stochastic reinforcement learning problem or a stochastic"}, {"start": 313.20000000000005, "end": 318.52000000000004, "text": " Markov decision process, the goal is to choose a policy that tells what action A to take"}, {"start": 318.52000000000004, "end": 322.40000000000003, "text": " in state S so as to maximize the expected return."}, {"start": 322.4, "end": 328.35999999999996, "text": " The last way that this changes what we've talked about is it modifies Bellman equation"}, {"start": 328.35999999999996, "end": 329.78, "text": " a little bit."}, {"start": 329.78, "end": 333.15999999999997, "text": " So here's Bellman equation exactly as we've written down."}, {"start": 333.15999999999997, "end": 338.12, "text": " But the difference now is that when you take the action A in state S, the next state S"}, {"start": 338.12, "end": 340.67999999999995, "text": " prime you get to is random."}, {"start": 340.67999999999995, "end": 345.59999999999997, "text": " When you're in state V and you try to go left, the next state S prime, it could be the state"}, {"start": 345.59999999999997, "end": 349.2, "text": " two, or it could be the state four."}, {"start": 349.2, "end": 356.28, "text": " So S prime is now random, which is why we also put an average operator or an expected"}, {"start": 356.28, "end": 358.15999999999997, "text": " operator here."}, {"start": 358.15999999999997, "end": 365.2, "text": " So we say that the total return from state S taking action A once and then behaving optimally"}, {"start": 365.2, "end": 370.28, "text": " is equal to the reward you get right away, also called the immediate reward, plus the"}, {"start": 370.28, "end": 378.64, "text": " discount factor gamma plus what you expect to get on average of the future returns."}, {"start": 378.64, "end": 385.15999999999997, "text": " If you want to sharpen your intuition about what happens with these stochastic reinforcement"}, {"start": 385.15999999999997, "end": 391.52, "text": " learning problems, you go back to the optional lab that I had shown you just now, where this"}, {"start": 391.52, "end": 399.56, "text": " parameter misstep probability is the probability of your Mars rover going in the opposite direction"}, {"start": 399.56, "end": 401.4, "text": " than you had commanded it to."}, {"start": 401.4, "end": 408.96, "text": " So if we set misstep prop to be 0.1 and we execute the notebook, and so these numbers"}, {"start": 408.96, "end": 416.91999999999996, "text": " up here are the optimal return if you were to take the best possible actions, take this"}, {"start": 416.91999999999996, "end": 424.08, "text": " optimal policy, but the robot were to step in the wrong direction 10% of the time."}, {"start": 424.08, "end": 428.09999999999997, "text": " And these are the Q values for the stochastic MDP."}, {"start": 428.1, "end": 432.68, "text": " Notice that these values are now a little bit lower because you can't control the robot"}, {"start": 432.68, "end": 434.44, "text": " as well as before."}, {"start": 434.44, "end": 438.8, "text": " The Q values as well as the optimal returns have gone down a bit."}, {"start": 438.8, "end": 444.88, "text": " And in fact, if you were to increase the misstep probability, say 40% of the time, the robot"}, {"start": 444.88, "end": 449.84000000000003, "text": " doesn't even go in the direction you had commanded it to, only 60% of the time it goes where"}, {"start": 449.84000000000003, "end": 455.44, "text": " you told it to, then these values end up even lower because your degree of control over"}, {"start": 455.44, "end": 457.8, "text": " the robot has decreased."}, {"start": 457.8, "end": 463.92, "text": " So I encourage you to play with the optional lab and change the value of the misstep probability"}, {"start": 463.92, "end": 469.16, "text": " and see how that affects the optimal return or the optimal expected return as well as"}, {"start": 469.16, "end": 472.68, "text": " the Q values Q of SA."}, {"start": 472.68, "end": 478.72, "text": " Now in everything we've done so far, we've been using this Markov decision process, this"}, {"start": 478.72, "end": 481.8, "text": " Mars rover with just six states."}, {"start": 481.8, "end": 486.56, "text": " For many practical applications, the number of states will be much larger."}, {"start": 486.56, "end": 491.72, "text": " In the next video, we'll take the reinforcement learning or Markov decision process framework"}, {"start": 491.72, "end": 496.96, "text": " we've talked about so far and generalize it to this much richer and maybe even more interesting"}, {"start": 496.96, "end": 502.64, "text": " set of problems with much larger and particular with continuous state spaces."}, {"start": 502.64, "end": 517.3199999999999, "text": " Let's take a look at that in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=VjWHeGciHQE
10.10 Continuous State Spaces | Example of continuous state space applications -[ML | Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Many robotic control applications, including the lunar lander application that you work on in the practice lab, have continuous state spaces. Let's take a look at what that means and how to generalize the concepts we've talked about to these continuous state spaces. The simplified Mars rover example we use, had used a discrete set of states. What that means is that simplified Mars rover could only be in one of six possible positions. But most robots can be in more than one of six or any discrete number of positions. Instead, they can be in any of a very large number of continuous value positions. For example, if the Mars rover could be anywhere on a line, so its position was indicated by a number ranging from zero to six kilometers, where any number in between is valid, that would be an example of a continuous state space. Because the position would be represented by a number, such as that is 2.7 kilometers along or 4.8 kilometers, or any other number between zero and six. Let's look at another example. I'm going to use for this example, the application of controlling a car or a truck. Here's a toy car, right here, toy truck. This one belongs to my daughter. If you're building a self-driving car or a self-driving truck, and you want to control this to drive smoothly, then the state of this truck might include a few numbers, such as its x-position, its y-position, maybe its orientation, what way is it facing. Assuming the truck stays on the ground, you probably don't need to worry about how tall it is, how high up it is. This state would include x, y, and its angle theta, as well as maybe its speed in x direction, the speed in the y direction, and how quickly it is turning. Is it turning at one degree per second, or is it turning at 30 degrees per second, or is it turning really quickly at 90 degrees per second? For a truck or a car, the state might include not just one number, like how many kilometers it is along on this line, but it might include six numbers. Its x-position, its y-position, its orientation, which I'm going to denote using Greek alphabet theta, as well as its velocity in the x direction, which I'm going to denote using x dot. That means how quickly is this x coordinate changing, y dot, how quickly is the y coordinate changing, and then finally theta dot, which is how quickly is the angle of the car changing. Whereas for the 6th state Mars Rover example, the state was just one of six possible numbers. It could be 1, 2, 3, 4, 5, or 6. For the car, the state would comprise this vector of six numbers and any of these numbers can take on any value within this valid range. For example, theta should range between 0 and 360 degrees. Let's look at another example. What if you're building a reinforcement learning algorithm to control an autonomous helicopter? How would you characterize the position of a helicopter? To illustrate, I have with me here a small toy helicopter. The position of the helicopter would include its x-position, such as how far north or south is the helicopter, its y-position, maybe how far on the east-west axis is the helicopter, and then also z, the height of the helicopter above ground. But other than the position, the helicopter also has an orientation. Conventionally, one way to capture this orientation is with three additional numbers. One of which captures the row of the helicopter. Is it rolling to the left or the right? The pitch, is it pitching forward or pitching up, pitching back? Then finally, the yaw, which is what's the compass orientation is it facing? Is it facing north or east or south or west? To summarize, the state of the helicopter includes its position in the, say, north-south direction, its position in the east-west direction, y, its height above ground, and also the row, the pitch, and then also the yaw of the helicopter. To write this down, the state therefore includes the position x, y, z, and then the row, pitch, and yaw denoted with Greek alphabets, Phi, Theta, and Omega. But to control the helicopter, we also need to know its speed in the x direction, in the y direction, and in the z direction, as well as its rate of turning, also called the angular velocity. So how fast is this row changing, and how fast is this pitch changing, and how fast is this yaw changing? So this is actually the state used to control autonomous helicopters. It's this list of 12 numbers that is input to a policy, and the job of a policy is to look at these 12 numbers and decide what's an appropriate action to take in a helicopter. So in a continuous state reinforcement learning problem, or a continuous state Markov decision process, continuous state MTP, the state of the problem isn't just one of a small number of possible discrete values, like a number from one to six. Instead, it's a vector of numbers, any of which could take any of a large number of values. In the practice lab for this week, you get to implement for yourself a reinforcement learning algorithm applied to a simulated lunar lander application, landing something on the moon in simulation. Let's take a look in the next video at what that application entails, since that would be another continuous state application.
[{"start": 0.0, "end": 4.5600000000000005, "text": " Many robotic control applications,"}, {"start": 4.5600000000000005, "end": 9.16, "text": " including the lunar lander application that you work on in the practice lab,"}, {"start": 9.16, "end": 11.52, "text": " have continuous state spaces."}, {"start": 11.52, "end": 14.52, "text": " Let's take a look at what that means and how to generalize"}, {"start": 14.52, "end": 18.34, "text": " the concepts we've talked about to these continuous state spaces."}, {"start": 18.34, "end": 21.92, "text": " The simplified Mars rover example we use,"}, {"start": 21.92, "end": 24.8, "text": " had used a discrete set of states."}, {"start": 24.8, "end": 29.48, "text": " What that means is that simplified Mars rover could only be in"}, {"start": 29.48, "end": 32.36, "text": " one of six possible positions."}, {"start": 32.36, "end": 39.52, "text": " But most robots can be in more than one of six or any discrete number of positions."}, {"start": 39.52, "end": 41.68, "text": " Instead, they can be in any of"}, {"start": 41.68, "end": 46.32, "text": " a very large number of continuous value positions."}, {"start": 46.32, "end": 52.24, "text": " For example, if the Mars rover could be anywhere on a line,"}, {"start": 52.24, "end": 59.46, "text": " so its position was indicated by a number ranging from zero to six kilometers,"}, {"start": 59.46, "end": 61.74, "text": " where any number in between is valid,"}, {"start": 61.74, "end": 66.2, "text": " that would be an example of a continuous state space."}, {"start": 66.2, "end": 70.28, "text": " Because the position would be represented by a number,"}, {"start": 70.28, "end": 75.56, "text": " such as that is 2.7 kilometers along or 4.8 kilometers,"}, {"start": 75.56, "end": 78.06, "text": " or any other number between zero and six."}, {"start": 78.06, "end": 79.9, "text": " Let's look at another example."}, {"start": 79.9, "end": 82.28, "text": " I'm going to use for this example,"}, {"start": 82.28, "end": 85.52000000000001, "text": " the application of controlling a car or a truck."}, {"start": 85.52000000000001, "end": 87.7, "text": " Here's a toy car, right here, toy truck."}, {"start": 87.7, "end": 89.72, "text": " This one belongs to my daughter."}, {"start": 89.72, "end": 93.04, "text": " If you're building a self-driving car or a self-driving truck,"}, {"start": 93.04, "end": 95.76, "text": " and you want to control this to drive smoothly,"}, {"start": 95.76, "end": 99.72, "text": " then the state of this truck might include a few numbers,"}, {"start": 99.72, "end": 103.88, "text": " such as its x-position, its y-position,"}, {"start": 103.88, "end": 107.16, "text": " maybe its orientation, what way is it facing."}, {"start": 107.16, "end": 109.12, "text": " Assuming the truck stays on the ground,"}, {"start": 109.12, "end": 113.32000000000001, "text": " you probably don't need to worry about how tall it is, how high up it is."}, {"start": 113.32, "end": 119.52, "text": " This state would include x, y, and its angle theta,"}, {"start": 119.52, "end": 122.88, "text": " as well as maybe its speed in x direction,"}, {"start": 122.88, "end": 124.67999999999999, "text": " the speed in the y direction,"}, {"start": 124.67999999999999, "end": 126.19999999999999, "text": " and how quickly it is turning."}, {"start": 126.19999999999999, "end": 128.23999999999998, "text": " Is it turning at one degree per second,"}, {"start": 128.23999999999998, "end": 130.28, "text": " or is it turning at 30 degrees per second,"}, {"start": 130.28, "end": 133.6, "text": " or is it turning really quickly at 90 degrees per second?"}, {"start": 133.6, "end": 136.79999999999998, "text": " For a truck or a car,"}, {"start": 136.79999999999998, "end": 140.95999999999998, "text": " the state might include not just one number,"}, {"start": 140.96, "end": 144.0, "text": " like how many kilometers it is along on this line,"}, {"start": 144.0, "end": 146.28, "text": " but it might include six numbers."}, {"start": 146.28, "end": 149.20000000000002, "text": " Its x-position, its y-position,"}, {"start": 149.20000000000002, "end": 154.56, "text": " its orientation, which I'm going to denote using Greek alphabet theta,"}, {"start": 154.56, "end": 157.68, "text": " as well as its velocity in the x direction,"}, {"start": 157.68, "end": 160.0, "text": " which I'm going to denote using x dot."}, {"start": 160.0, "end": 163.84, "text": " That means how quickly is this x coordinate changing,"}, {"start": 163.84, "end": 167.60000000000002, "text": " y dot, how quickly is the y coordinate changing,"}, {"start": 167.60000000000002, "end": 169.38, "text": " and then finally theta dot,"}, {"start": 169.38, "end": 174.84, "text": " which is how quickly is the angle of the car changing."}, {"start": 174.84, "end": 178.14, "text": " Whereas for the 6th state Mars Rover example,"}, {"start": 178.14, "end": 181.72, "text": " the state was just one of six possible numbers."}, {"start": 181.72, "end": 185.56, "text": " It could be 1, 2, 3, 4, 5, or 6."}, {"start": 185.56, "end": 190.32, "text": " For the car, the state would comprise this vector of"}, {"start": 190.32, "end": 194.04, "text": " six numbers and any of these numbers can take on"}, {"start": 194.04, "end": 198.0, "text": " any value within this valid range."}, {"start": 198.0, "end": 203.4, "text": " For example, theta should range between 0 and 360 degrees."}, {"start": 203.4, "end": 205.56, "text": " Let's look at another example."}, {"start": 205.56, "end": 207.04, "text": " What if you're building"}, {"start": 207.04, "end": 211.2, "text": " a reinforcement learning algorithm to control an autonomous helicopter?"}, {"start": 211.2, "end": 214.36, "text": " How would you characterize the position of a helicopter?"}, {"start": 214.36, "end": 217.84, "text": " To illustrate, I have with me here a small toy helicopter."}, {"start": 217.84, "end": 222.08, "text": " The position of the helicopter would include its x-position,"}, {"start": 222.08, "end": 225.6, "text": " such as how far north or south is the helicopter,"}, {"start": 225.6, "end": 231.12, "text": " its y-position, maybe how far on the east-west axis is the helicopter,"}, {"start": 231.12, "end": 235.16, "text": " and then also z, the height of the helicopter above ground."}, {"start": 235.16, "end": 237.72, "text": " But other than the position,"}, {"start": 237.72, "end": 240.64, "text": " the helicopter also has an orientation."}, {"start": 240.64, "end": 245.57999999999998, "text": " Conventionally, one way to capture this orientation is with three additional numbers."}, {"start": 245.57999999999998, "end": 249.24, "text": " One of which captures the row of the helicopter."}, {"start": 249.24, "end": 251.28, "text": " Is it rolling to the left or the right?"}, {"start": 251.28, "end": 255.04, "text": " The pitch, is it pitching forward or pitching up, pitching back?"}, {"start": 255.04, "end": 256.96, "text": " Then finally, the yaw,"}, {"start": 256.96, "end": 260.0, "text": " which is what's the compass orientation is it facing?"}, {"start": 260.0, "end": 262.71999999999997, "text": " Is it facing north or east or south or west?"}, {"start": 262.71999999999997, "end": 268.4, "text": " To summarize, the state of the helicopter includes its position in the,"}, {"start": 268.4, "end": 270.2, "text": " say, north-south direction,"}, {"start": 270.2, "end": 273.32, "text": " its position in the east-west direction, y,"}, {"start": 273.32, "end": 276.92, "text": " its height above ground, and also the row,"}, {"start": 276.92, "end": 282.14, "text": " the pitch, and then also the yaw of the helicopter."}, {"start": 282.14, "end": 283.88, "text": " To write this down,"}, {"start": 283.88, "end": 287.71999999999997, "text": " the state therefore includes the position x, y,"}, {"start": 287.71999999999997, "end": 291.96, "text": " z, and then the row, pitch,"}, {"start": 291.96, "end": 296.32, "text": " and yaw denoted with Greek alphabets,"}, {"start": 296.32, "end": 299.52, "text": " Phi, Theta, and Omega."}, {"start": 299.52, "end": 300.96, "text": " But to control the helicopter,"}, {"start": 300.96, "end": 305.68, "text": " we also need to know its speed in the x direction,"}, {"start": 305.68, "end": 307.2, "text": " in the y direction,"}, {"start": 307.2, "end": 308.92, "text": " and in the z direction,"}, {"start": 308.92, "end": 311.32, "text": " as well as its rate of turning,"}, {"start": 311.32, "end": 313.24, "text": " also called the angular velocity."}, {"start": 313.24, "end": 315.92, "text": " So how fast is this row changing,"}, {"start": 315.92, "end": 318.28000000000003, "text": " and how fast is this pitch changing,"}, {"start": 318.28000000000003, "end": 321.40000000000003, "text": " and how fast is this yaw changing?"}, {"start": 321.40000000000003, "end": 326.12, "text": " So this is actually the state used to control autonomous helicopters."}, {"start": 326.12, "end": 331.64, "text": " It's this list of 12 numbers that is input to a policy,"}, {"start": 331.64, "end": 335.36, "text": " and the job of a policy is to look at these 12 numbers and"}, {"start": 335.36, "end": 338.6, "text": " decide what's an appropriate action to take in a helicopter."}, {"start": 338.6, "end": 342.2, "text": " So in a continuous state reinforcement learning problem,"}, {"start": 342.2, "end": 345.32, "text": " or a continuous state Markov decision process,"}, {"start": 345.32, "end": 346.96, "text": " continuous state MTP,"}, {"start": 346.96, "end": 350.28, "text": " the state of the problem isn't just one of"}, {"start": 350.28, "end": 352.96, "text": " a small number of possible discrete values,"}, {"start": 352.96, "end": 354.76, "text": " like a number from one to six."}, {"start": 354.76, "end": 358.59999999999997, "text": " Instead, it's a vector of numbers,"}, {"start": 358.59999999999997, "end": 363.12, "text": " any of which could take any of a large number of values."}, {"start": 363.12, "end": 365.68, "text": " In the practice lab for this week,"}, {"start": 365.68, "end": 367.64, "text": " you get to implement for yourself"}, {"start": 367.64, "end": 370.0, "text": " a reinforcement learning algorithm applied to"}, {"start": 370.0, "end": 373.44, "text": " a simulated lunar lander application,"}, {"start": 373.44, "end": 376.76, "text": " landing something on the moon in simulation."}, {"start": 376.76, "end": 381.0, "text": " Let's take a look in the next video at what that application entails,"}, {"start": 381.0, "end": 400.84, "text": " since that would be another continuous state application."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=d8w5g94Wa9E
10.11 Continuous State Spaces | Lunar lander -[Machine Learning | Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
The lunar lander lets you land a simulated vehicle on the moon. It's like a fun little video game that's been used by a lot of reinforcement learning researchers. Let's take a look at what it is. In this application, you're in command of a lunar lander that is rapidly approaching the surface of the moon, and your job is to fire thrusters at the appropriate times to land it safely on a landing pad. To give you a sense of what it looks like, this is the lunar lander landing successfully, and it's firing thrusters downward and to the left and right to position itself to land between these two yellow flags. Or if the reinforcement learning algorithms policy does not do well, then this is what it might look like where the lander unfortunately has crashed on the surface of the moon. In this application, you have four possible actions on every time step. You could either do nothing, in which case the forces of inertia and gravity pull you toward the surface of the moon, or you can fire a left thruster. When you see a little red dot come out on the left that's firing the left thruster, they'll tend to push the lunar lander to the right, or you can fire the main engine that's thrusting down the bottom here, or you can fire the right thruster, and that's firing the right thruster which will push you to the left. And your job is to keep on picking actions over time so as to land the lunar lander safely between these two flags here on the landing pad. In order to give the actions a shorter name, I'm sometimes going to call the actions nothing, meaning do nothing, or left, meaning fire the left thruster, or main, meaning fire the main engine downward, or right. So I'm going to call the actions nothing, left, main, and right for short later in this video. How about the state space of this MTP? The states are this position x and y, so how far to the left or right and how high up is it, as well as velocity x dot y dot. How fast is it moving in the horizontal and vertical directions? And then also, is angle, so how far is the lunar lander tilted to the left or tilted to the right? Is angular velocity theta dot? And then finally, because a small difference in positioning makes a big difference in whether or not it's landed, we're going to have two other variables in the state vector, which we'll call L and R, which corresponds to whether the left leg is grounded, meaning whether or not the left leg is sitting on the ground, as well as R, which corresponds to whether or not the right leg is sitting on the ground. So whereas x, y, x dot y dot theta, theta dot are numbers, L and R will be binary valued and can take on only values 0 or 1, depending on whether the left and right legs are touching the ground. Finally, here's the reward function for the lunar lander. If it manages to get to the landing pad, then it receives a reward between 100 and 140, depending on how well it's flown and gotten to the center of the landing pad. We also give it an additional reward for moving toward or away from the pad. So if it moves closer to the pad, it receives a positive reward. If it moves away and drifts away, it receives a negative reward. If it crashes, it gets a large negative 100 reward. If it achieves a soft landing, that is a landing that's not a crash, it gets a plus 100 reward. For each leg, the left leg or the right leg that gets grounded, it receives a plus 10 reward. And then finally, to encourage it not to waste too much fuel and fire thrusters unnecessarily, every time it fires the main engine, we give it a negative 0.3 reward. And every time it fires the left or the right side thrusters, we give it a negative 0.03 reward. Notice that this is a moderately complex reward function. The designers of the lunar lander application actually put some thought into exactly what behavior you want and codified it in the reward function to incentivize more of the behaviors you want and fewer of the behaviors like crashing that you don't want. You find when you're building your own reinforcement learning application, it usually takes some thought to specify exactly what you want or don't want and to codify that in the reward function. But specifying the reward function should still turn out to be much easier than specifying the exact right action to take from every single state, which is much harder for this and many other reinforcement learning applications. So the lunar lander problem is as follows. Our goal is to learn a policy pi that when given a state s as written here, takes an action a equals pi of s so as to maximize the return, the sum of discounted rewards. And usually for the lunar lander, we use a fairly large value for gamma. We'll use the value of gamma that's equal to 0.985. So pretty close to one. And if you can learn a policy pi that does this, then you successfully land this lunar lander. Exciting application and we're now finally ready to develop a learning algorithm, which will turn out to use deep learning on neural networks to come up with a policy to land the lunar lander. Let's go on to the next video where we'll start to learn about deep reinforcement learning.
[{"start": 0.0, "end": 6.88, "text": " The lunar lander lets you land a simulated vehicle on the moon."}, {"start": 6.88, "end": 12.200000000000001, "text": " It's like a fun little video game that's been used by a lot of reinforcement learning researchers."}, {"start": 12.200000000000001, "end": 14.76, "text": " Let's take a look at what it is."}, {"start": 14.76, "end": 20.16, "text": " In this application, you're in command of a lunar lander that is rapidly approaching"}, {"start": 20.16, "end": 25.400000000000002, "text": " the surface of the moon, and your job is to fire thrusters at the appropriate times to"}, {"start": 25.400000000000002, "end": 28.400000000000002, "text": " land it safely on a landing pad."}, {"start": 28.4, "end": 33.9, "text": " To give you a sense of what it looks like, this is the lunar lander landing successfully,"}, {"start": 33.9, "end": 38.9, "text": " and it's firing thrusters downward and to the left and right to position itself to land"}, {"start": 38.9, "end": 41.68, "text": " between these two yellow flags."}, {"start": 41.68, "end": 46.76, "text": " Or if the reinforcement learning algorithms policy does not do well, then this is what"}, {"start": 46.76, "end": 52.04, "text": " it might look like where the lander unfortunately has crashed on the surface of the moon."}, {"start": 52.04, "end": 58.5, "text": " In this application, you have four possible actions on every time step."}, {"start": 58.5, "end": 63.879999999999995, "text": " You could either do nothing, in which case the forces of inertia and gravity pull you"}, {"start": 63.879999999999995, "end": 69.84, "text": " toward the surface of the moon, or you can fire a left thruster."}, {"start": 69.84, "end": 73.6, "text": " When you see a little red dot come out on the left that's firing the left thruster,"}, {"start": 73.6, "end": 80.28, "text": " they'll tend to push the lunar lander to the right, or you can fire the main engine that's"}, {"start": 80.28, "end": 87.28, "text": " thrusting down the bottom here, or you can fire the right thruster, and that's firing"}, {"start": 87.28, "end": 90.72, "text": " the right thruster which will push you to the left."}, {"start": 90.72, "end": 97.96000000000001, "text": " And your job is to keep on picking actions over time so as to land the lunar lander safely"}, {"start": 97.96000000000001, "end": 101.52000000000001, "text": " between these two flags here on the landing pad."}, {"start": 101.52000000000001, "end": 106.44, "text": " In order to give the actions a shorter name, I'm sometimes going to call the actions nothing,"}, {"start": 106.44, "end": 112.03999999999999, "text": " meaning do nothing, or left, meaning fire the left thruster, or main, meaning fire the"}, {"start": 112.03999999999999, "end": 114.36, "text": " main engine downward, or right."}, {"start": 114.36, "end": 119.24, "text": " So I'm going to call the actions nothing, left, main, and right for short later in this"}, {"start": 119.24, "end": 120.24, "text": " video."}, {"start": 120.24, "end": 122.56, "text": " How about the state space of this MTP?"}, {"start": 122.56, "end": 129.28, "text": " The states are this position x and y, so how far to the left or right and how high up is"}, {"start": 129.28, "end": 134.1, "text": " it, as well as velocity x dot y dot."}, {"start": 134.1, "end": 137.28, "text": " How fast is it moving in the horizontal and vertical directions?"}, {"start": 137.28, "end": 142.74, "text": " And then also, is angle, so how far is the lunar lander tilted to the left or tilted"}, {"start": 142.74, "end": 143.74, "text": " to the right?"}, {"start": 143.74, "end": 146.16, "text": " Is angular velocity theta dot?"}, {"start": 146.16, "end": 151.64, "text": " And then finally, because a small difference in positioning makes a big difference in whether"}, {"start": 151.64, "end": 157.79999999999998, "text": " or not it's landed, we're going to have two other variables in the state vector, which"}, {"start": 157.79999999999998, "end": 162.92, "text": " we'll call L and R, which corresponds to whether the left leg is grounded, meaning whether"}, {"start": 162.92, "end": 168.67999999999998, "text": " or not the left leg is sitting on the ground, as well as R, which corresponds to whether"}, {"start": 168.67999999999998, "end": 171.83999999999997, "text": " or not the right leg is sitting on the ground."}, {"start": 171.83999999999997, "end": 180.04, "text": " So whereas x, y, x dot y dot theta, theta dot are numbers, L and R will be binary valued"}, {"start": 180.04, "end": 185.23999999999998, "text": " and can take on only values 0 or 1, depending on whether the left and right legs are touching"}, {"start": 185.23999999999998, "end": 186.23999999999998, "text": " the ground."}, {"start": 186.23999999999998, "end": 189.48, "text": " Finally, here's the reward function for the lunar lander."}, {"start": 189.48, "end": 195.51999999999998, "text": " If it manages to get to the landing pad, then it receives a reward between 100 and 140,"}, {"start": 195.51999999999998, "end": 200.48, "text": " depending on how well it's flown and gotten to the center of the landing pad."}, {"start": 200.48, "end": 205.17999999999998, "text": " We also give it an additional reward for moving toward or away from the pad."}, {"start": 205.17999999999998, "end": 208.88, "text": " So if it moves closer to the pad, it receives a positive reward."}, {"start": 208.88, "end": 213.2, "text": " If it moves away and drifts away, it receives a negative reward."}, {"start": 213.2, "end": 217.6, "text": " If it crashes, it gets a large negative 100 reward."}, {"start": 217.6, "end": 224.88, "text": " If it achieves a soft landing, that is a landing that's not a crash, it gets a plus 100 reward."}, {"start": 224.88, "end": 229.76, "text": " For each leg, the left leg or the right leg that gets grounded, it receives a plus 10"}, {"start": 229.76, "end": 230.76, "text": " reward."}, {"start": 230.76, "end": 236.48, "text": " And then finally, to encourage it not to waste too much fuel and fire thrusters unnecessarily,"}, {"start": 236.48, "end": 241.88, "text": " every time it fires the main engine, we give it a negative 0.3 reward."}, {"start": 241.88, "end": 247.76, "text": " And every time it fires the left or the right side thrusters, we give it a negative 0.03"}, {"start": 247.76, "end": 248.76, "text": " reward."}, {"start": 248.76, "end": 253.2, "text": " Notice that this is a moderately complex reward function."}, {"start": 253.2, "end": 258.04, "text": " The designers of the lunar lander application actually put some thought into exactly what"}, {"start": 258.04, "end": 265.32, "text": " behavior you want and codified it in the reward function to incentivize more of the behaviors"}, {"start": 265.32, "end": 270.28, "text": " you want and fewer of the behaviors like crashing that you don't want."}, {"start": 270.28, "end": 275.08, "text": " You find when you're building your own reinforcement learning application, it usually takes some"}, {"start": 275.08, "end": 280.67999999999995, "text": " thought to specify exactly what you want or don't want and to codify that in the reward"}, {"start": 280.67999999999995, "end": 281.88, "text": " function."}, {"start": 281.88, "end": 287.64, "text": " But specifying the reward function should still turn out to be much easier than specifying"}, {"start": 287.64, "end": 291.91999999999996, "text": " the exact right action to take from every single state, which is much harder for this"}, {"start": 291.91999999999996, "end": 294.67999999999995, "text": " and many other reinforcement learning applications."}, {"start": 294.67999999999995, "end": 299.44, "text": " So the lunar lander problem is as follows."}, {"start": 299.44, "end": 308.16, "text": " Our goal is to learn a policy pi that when given a state s as written here, takes an"}, {"start": 308.16, "end": 318.44, "text": " action a equals pi of s so as to maximize the return, the sum of discounted rewards."}, {"start": 318.44, "end": 323.36, "text": " And usually for the lunar lander, we use a fairly large value for gamma."}, {"start": 323.36, "end": 327.44, "text": " We'll use the value of gamma that's equal to 0.985."}, {"start": 327.44, "end": 329.8, "text": " So pretty close to one."}, {"start": 329.8, "end": 336.28, "text": " And if you can learn a policy pi that does this, then you successfully land this lunar"}, {"start": 336.28, "end": 337.28, "text": " lander."}, {"start": 337.28, "end": 341.96, "text": " Exciting application and we're now finally ready to develop a learning algorithm, which"}, {"start": 341.96, "end": 347.56, "text": " will turn out to use deep learning on neural networks to come up with a policy to land"}, {"start": 347.56, "end": 349.12, "text": " the lunar lander."}, {"start": 349.12, "end": 357.8, "text": " Let's go on to the next video where we'll start to learn about deep reinforcement learning."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=02vHFyFzhqw
10.12 Continuous State Spaces | Learning the state-value function -[Machine Learning | Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Let's see how we can use reinforcement learning to control the lunar lander or for other reinforcement learning problems. The key idea is that we're going to train a neural network to compute or to approximate the state action value function Q of s, a, and that in turn will let us pick good actions. Let's see how this works. The heart of the learning algorithm is we're going to train a neural network that inputs the current state and the current action and computes or approximates Q of s, a. In particular, for the lunar lander, we're going to take the state s and any action a and put them together. Concretely, the state was that list of eight numbers that we saw previously. So you have x, y, x dot, y dot, theta, theta dot, and then LR for whether the lakes are grounded. So that's a list of eight numbers to describe the state. Then finally, we have four possible actions, nothing left, main or main engine and right. And we can encode any of those four actions using a one heart feature vector. So if action were the first action, we may encode it using 1, 0, 0, 0. Or if it was the second action to find the left thruster, we may encode it as 0, 1, 0, 0. So this list of 12 numbers, eight numbers for the state and then four numbers, a one heart encoding of the action is the inputs we'll have to the neural network. And I'm going to call this x. We'll then take these 12 numbers and feed them to a neural network with say 64 units in the first hidden layer, 64 units in the second hidden layer, and then a single output in the output layer. And the job of the neural network is an output Q of SA, the state action value function for the lunar lander given the input S and A. And because we'll be using neural network training algorithms in a little bit, I'm also going to refer to this value Q of SA as the target value y that we'll train the neural network to approximate. Notice that I did say reinforcement learning is different from supervised learning. But what we're going to do is not input a state and have it output an action. What we're going to do is input a state action pair and have it try to output Q of SA and using a neural network inside your reinforcement learning algorithm this way will turn out to work pretty well. We'll see the details in a little bit. So don't worry about it if it doesn't make sense yet. But if you can train a neural network with appropriate choices of parameters in hidden layers and in the output layer to give you a good estimates of Q of SA, then whenever your lunar lander is in some state S, you can then use the neural network to compute Q of SA for all four actions. You can compute Q of S nothing, Q of S left, Q of S main, Q of S right. And then finally, whichever of these has the highest value, you pick the corresponding action A. So for example, if out of these four values, Q of S main is largest, then you would decide to go and fire the main engine of the lunar lander. So the question becomes, how do you train a neural network to output Q of SA? It turns out the approach will be to use Bellman's equations to create a training set with lots of examples X and Y, and then we'll use supervised learning exactly as you learned in the second course when we talked about neural networks to learn using supervised learning, a mapping from X to Y. That is a mapping from the state action pair to this target value Q of SA. But how do you get a training set with values for X and Y that you can then train a neural network on? Let's take a look. So here's the Bellman equation. Q of SA equals R of S plus gamma max of A prime Q of S prime A prime. So the right hand side is what you want Q of SA to be equal to. So I'm going to call this value on the right hand side Y and the input to the neural network is a state and an action. So I'm going to call that X. And the job of a neural network is to input X, that is input the state action pair and try to accurately predict what will be the value on the right. So in supervised learning, we were training a neural network to learn a function F, which depends on a bunch of parameters W and B, the parameters of the various layers of the neural network. And it was a job of the neural network to input X and hopefully output something close to the target value Y. So the question is, how can we come up with a training set with values X and Y for a neural network to learn from? Here's what we're going to do. We're going to use the lunar lander and just try taking different actions in it. If we don't have a good policy yet, we'll take actions randomly. By the left fester, by the right fester, by the main engine, do nothing. And by just trying out different things in the lunar lander simulator, we'll observe a lot of examples of when we're in some state and we took some action, maybe a good action, maybe a terrible action, either way. And then we got some rewards R of S for being in that state. And as a result of our action, we got to some new state S prime. As you take different actions in the lunar lander, you see these S, A, R of S, S prime, and we call them tuples in Python codes many times. For example, maybe one time you're in some state S and just to give this an index, I'm going to call this S1. And you happen to take some action A1, this could be nothing left main fester or right. As a result of which you got some reward, and you wound up at some state S prime one. And maybe a different time you're in some other state S2, you took some other action, could be a good action, could be a bad action, could be any of the four actions, and you got the reward and then you wound up with S prime two, and so on, multiple times. And maybe you've done this 10,000 times or even more than 10,000 times. So you would have to save the way with not just S1, A1, and so on, but up to S10,000, A10,000. It turns out that each of these lists of four elements, each of these tuples will be enough to create a single training example, X1, Y1. In particular, here's how you do it. There are four elements in this first tuple. The first two will be used to compute X1, and the second two will be used to compute Y1. In particular, X1 is just going to be S1, A1 put together. S1 would be eight numbers, the state of the lunar lander, A1 would be four numbers, the one-hot encoding of whatever action this was, and Y1 would be computed using the right hand side of the Bellman equation. In particular, the Bellman equation says when you input S1, A1, you want Q of S1, A1 to be this right hand side, to be equal to R of S1 plus gamma max over A prime of Q of S1 prime A prime. And notice that these two elements of the tuple on the right give you enough information to compute this. You know what is R of S1, that's the reward you've saved away here, plus the discount factor gamma times max over all actions A prime of Q of S prime 1, that's the state you got to in this example, and then take the max over all possible actions A prime. And so I'm going to call this Y1, and when you compute this, this will be some number like 12.5 or 17 or 0.5 or some other number, and we'll save that number here as Y1 so that this pair, X1, Y1, becomes the first trading example in this little data set we're computing. Now you may be wondering, wait, where does Q of S prime A prime or Q of S prime 1 A prime come from? Well, initially, we don't know what is the Q function, but it turns out that when you don't know what is the Q function, you can start off with taking a totally random guess for what is the Q function. And we'll see on the next slide that the algorithm will work nonetheless. And every step, Q here is just going to be some guess that will get better over time, it turns out, of what is the actual Q function. Let's look at the second example. If you had a second experience where you state S2, took action A2, got that reward, and then got to that state, then we would create a second training example in this data set, X2, where the input is now S2, A2, so the first two elements go to computing the input X, and then Y2 will be equal to R of S2 plus gamma max over A prime Q of S prime 2 A prime. And whatever this number is, Y2, we put this over here in our small but growing training set. And so on and so forth until maybe you end up with 10,000 training examples with these X, Y pairs. And what we'll see later is that we'll actually take this training set where the Xs are inputs with 12 features and the Ys are just numbers, and we train a neural network with, say, the mean squared error loss to try to predict Y as a function of the input X. So what I describe here is just one piece of the learning algorithm we'll use. Let's put it all together on the next slide and see how it all comes together into a single algorithm. So let's take a look at what the full algorithm for learning the Q function is like. First, we're going to take our neural network and initialize all the parameters of the neural network randomly. Initially, we have no idea what is the Q function. So let's just pick totally random values of the weights and we'll pretend that this neural network is our initial random guess for the Q function. This is a little bit like when you are training linear regression and you initialize all the parameters randomly and then use gradient descent to improve the parameters. Initializing randomly for now is fine. What's important is whether the algorithm can slowly improve the parameters to get to a better estimate. Next, we will repeatedly do the following. We will take actions in the lunar land. So fly around randomly, take some good actions, take some bad actions. It's okay either way. But you get lots of these tuples of when it was in some state, you took some action A, got a reward R of S, and you got to some state S prime. And what we will do is store the 10,000 most recent examples of these tuples. As you run this algorithm, you will see many, many steps in the lunar lander, maybe hundreds of thousands of steps. But to make sure we don't end up using excessive computer memory, common practice is to just remember the 10,000 most recent such tuples that we saw taking actions in the MDP. This technique of storing the most recent examples only is sometimes called the replay buffer in a reinforcement learning algorithm. So for now, we just find the lunar lander randomly, sometimes crashing, sometimes not, and getting these tuples as experience for our learning algorithm. Occasionally, then, we will train the neural network. In order to train the neural network, here's what we'll do, we'll look at these 10,000 most recent tuples we had saved and create a training set of 10,000 examples. So training set needs lots of pairs of X and Y. And for our training examples, X will be the SA from this part of the tuple. So it'll be a list of 12 numbers, the eight numbers for the state and the four numbers for the one-hot encoding of the action. And the target value that we want a neural network to try to predict will be Y equals R of S plus gamma max of A prime Q of S prime A prime. How do we get this value of Q? Well, initially, is this neural network that we had randomly initialized. So it may not be a very good guess, but it's a guess. Simply creating these 10,000 training examples will have training examples X1, Y1 through X 10,000, Y 10,000. And so we'll train a neural network. And I'm going to call the new neural network Q new, such that Q new of SA learns to approximate Y. So this is exactly training that neural network to output F with parameters W and B to input X to try to approximate the target value Y. Now, this neural network should be a slightly better estimate of what the Q function or the state action value function should be. And so what we'll do is we're going to take Q and set it to this new neural network that we had just learned. Many of the ideas in this algorithm are due to min at all. And it turns out that if you run this algorithm where you start with a really random guess of the Q function, but use Bellman's equations to repeatedly try to improve the estimates of the Q function, then by doing this over and over, taking lots of actions, training a model that will improve your guess for the Q function. And so for the next model you train, you now have a slightly better estimate of what is the Q function. And then the next model you train will be even better. And when you update Q equals Q new, then for the next time you train a model, Q of S prime A prime will be an even better estimate. And so as you run this algorithm on every iteration, Q of S prime A prime hopefully becomes an even better estimate of the Q function. So that when you run the algorithm long enough, this will actually become a pretty good estimate of the true value of Q of S A so that you can then use this to pick hopefully good actions for the MDP. The algorithm you just saw is sometimes called the DQN algorithm, which stands for deep Q network, because you're using deep learning and neural network to train a model to learn the Q function. So hence DQN or deep Q network, DQ using a neural network. And if you use the algorithm as I described it, it will kind of work okay on the lunar lander. Maybe it'll take a long time to converge. Maybe it won't land perfectly, but it'll sort of work. But it turns out that with a couple refinements to the algorithm, it can work much better. So in the next few videos, let's take a look at some refinements to the algorithm that you just saw.
[{"start": 0.0, "end": 8.64, "text": " Let's see how we can use reinforcement learning to control the lunar lander or for other reinforcement"}, {"start": 8.64, "end": 10.200000000000001, "text": " learning problems."}, {"start": 10.200000000000001, "end": 16.86, "text": " The key idea is that we're going to train a neural network to compute or to approximate"}, {"start": 16.86, "end": 23.64, "text": " the state action value function Q of s, a, and that in turn will let us pick good actions."}, {"start": 23.64, "end": 25.2, "text": " Let's see how this works."}, {"start": 25.2, "end": 30.48, "text": " The heart of the learning algorithm is we're going to train a neural network that inputs"}, {"start": 30.48, "end": 38.879999999999995, "text": " the current state and the current action and computes or approximates Q of s, a."}, {"start": 38.879999999999995, "end": 45.8, "text": " In particular, for the lunar lander, we're going to take the state s and any action a"}, {"start": 45.8, "end": 47.519999999999996, "text": " and put them together."}, {"start": 47.519999999999996, "end": 53.519999999999996, "text": " Concretely, the state was that list of eight numbers that we saw previously."}, {"start": 53.52, "end": 61.52, "text": " So you have x, y, x dot, y dot, theta, theta dot, and then LR for whether the lakes are"}, {"start": 61.52, "end": 62.84, "text": " grounded."}, {"start": 62.84, "end": 66.2, "text": " So that's a list of eight numbers to describe the state."}, {"start": 66.2, "end": 73.0, "text": " Then finally, we have four possible actions, nothing left, main or main engine and right."}, {"start": 73.0, "end": 78.66, "text": " And we can encode any of those four actions using a one heart feature vector."}, {"start": 78.66, "end": 86.24, "text": " So if action were the first action, we may encode it using 1, 0, 0, 0."}, {"start": 86.24, "end": 92.84, "text": " Or if it was the second action to find the left thruster, we may encode it as 0, 1, 0,"}, {"start": 92.84, "end": 93.84, "text": " 0."}, {"start": 93.84, "end": 99.96, "text": " So this list of 12 numbers, eight numbers for the state and then four numbers, a one"}, {"start": 99.96, "end": 105.24, "text": " heart encoding of the action is the inputs we'll have to the neural network."}, {"start": 105.24, "end": 107.52, "text": " And I'm going to call this x."}, {"start": 107.52, "end": 113.84, "text": " We'll then take these 12 numbers and feed them to a neural network with say 64 units"}, {"start": 113.84, "end": 119.16, "text": " in the first hidden layer, 64 units in the second hidden layer, and then a single output"}, {"start": 119.16, "end": 121.32, "text": " in the output layer."}, {"start": 121.32, "end": 128.07999999999998, "text": " And the job of the neural network is an output Q of SA, the state action value function for"}, {"start": 128.07999999999998, "end": 134.6, "text": " the lunar lander given the input S and A. And because we'll be using neural network"}, {"start": 134.6, "end": 140.84, "text": " training algorithms in a little bit, I'm also going to refer to this value Q of SA as the"}, {"start": 140.84, "end": 146.16, "text": " target value y that we'll train the neural network to approximate."}, {"start": 146.16, "end": 150.95999999999998, "text": " Notice that I did say reinforcement learning is different from supervised learning."}, {"start": 150.95999999999998, "end": 155.88, "text": " But what we're going to do is not input a state and have it output an action."}, {"start": 155.88, "end": 162.4, "text": " What we're going to do is input a state action pair and have it try to output Q of SA and"}, {"start": 162.4, "end": 167.36, "text": " using a neural network inside your reinforcement learning algorithm this way will turn out"}, {"start": 167.36, "end": 169.0, "text": " to work pretty well."}, {"start": 169.0, "end": 171.56, "text": " We'll see the details in a little bit."}, {"start": 171.56, "end": 174.8, "text": " So don't worry about it if it doesn't make sense yet."}, {"start": 174.8, "end": 180.0, "text": " But if you can train a neural network with appropriate choices of parameters in hidden"}, {"start": 180.0, "end": 187.36, "text": " layers and in the output layer to give you a good estimates of Q of SA, then whenever"}, {"start": 187.36, "end": 193.96, "text": " your lunar lander is in some state S, you can then use the neural network to compute"}, {"start": 193.96, "end": 197.08, "text": " Q of SA for all four actions."}, {"start": 197.08, "end": 202.84, "text": " You can compute Q of S nothing, Q of S left, Q of S main, Q of S right."}, {"start": 202.84, "end": 208.72000000000003, "text": " And then finally, whichever of these has the highest value, you pick the corresponding"}, {"start": 208.72000000000003, "end": 216.68, "text": " action A. So for example, if out of these four values, Q of S main is largest, then"}, {"start": 216.68, "end": 221.64000000000001, "text": " you would decide to go and fire the main engine of the lunar lander."}, {"start": 221.64000000000001, "end": 228.44, "text": " So the question becomes, how do you train a neural network to output Q of SA?"}, {"start": 228.44, "end": 234.32, "text": " It turns out the approach will be to use Bellman's equations to create a training set with lots"}, {"start": 234.32, "end": 241.64000000000001, "text": " of examples X and Y, and then we'll use supervised learning exactly as you learned in the second"}, {"start": 241.64, "end": 247.67999999999998, "text": " course when we talked about neural networks to learn using supervised learning, a mapping"}, {"start": 247.67999999999998, "end": 248.92, "text": " from X to Y."}, {"start": 248.92, "end": 255.79999999999998, "text": " That is a mapping from the state action pair to this target value Q of SA."}, {"start": 255.79999999999998, "end": 262.47999999999996, "text": " But how do you get a training set with values for X and Y that you can then train a neural"}, {"start": 262.47999999999996, "end": 263.47999999999996, "text": " network on?"}, {"start": 263.47999999999996, "end": 265.2, "text": " Let's take a look."}, {"start": 265.2, "end": 267.36, "text": " So here's the Bellman equation."}, {"start": 267.36, "end": 273.28000000000003, "text": " Q of SA equals R of S plus gamma max of A prime Q of S prime A prime."}, {"start": 273.28000000000003, "end": 278.90000000000003, "text": " So the right hand side is what you want Q of SA to be equal to."}, {"start": 278.90000000000003, "end": 285.32, "text": " So I'm going to call this value on the right hand side Y and the input to the neural network"}, {"start": 285.32, "end": 287.86, "text": " is a state and an action."}, {"start": 287.86, "end": 294.56, "text": " So I'm going to call that X. And the job of a neural network is to input X, that is input"}, {"start": 294.56, "end": 301.4, "text": " the state action pair and try to accurately predict what will be the value on the right."}, {"start": 301.4, "end": 307.38, "text": " So in supervised learning, we were training a neural network to learn a function F, which"}, {"start": 307.38, "end": 312.36, "text": " depends on a bunch of parameters W and B, the parameters of the various layers of the"}, {"start": 312.36, "end": 313.92, "text": " neural network."}, {"start": 313.92, "end": 323.04, "text": " And it was a job of the neural network to input X and hopefully output something close"}, {"start": 323.04, "end": 325.36, "text": " to the target value Y."}, {"start": 325.36, "end": 334.48, "text": " So the question is, how can we come up with a training set with values X and Y for a neural"}, {"start": 334.48, "end": 336.40000000000003, "text": " network to learn from?"}, {"start": 336.40000000000003, "end": 337.68, "text": " Here's what we're going to do."}, {"start": 337.68, "end": 343.26, "text": " We're going to use the lunar lander and just try taking different actions in it."}, {"start": 343.26, "end": 347.20000000000005, "text": " If we don't have a good policy yet, we'll take actions randomly."}, {"start": 347.2, "end": 353.08, "text": " By the left fester, by the right fester, by the main engine, do nothing."}, {"start": 353.08, "end": 360.15999999999997, "text": " And by just trying out different things in the lunar lander simulator, we'll observe"}, {"start": 360.15999999999997, "end": 365.64, "text": " a lot of examples of when we're in some state and we took some action, maybe a good action,"}, {"start": 365.64, "end": 368.32, "text": " maybe a terrible action, either way."}, {"start": 368.32, "end": 373.32, "text": " And then we got some rewards R of S for being in that state."}, {"start": 373.32, "end": 378.58, "text": " And as a result of our action, we got to some new state S prime."}, {"start": 378.58, "end": 384.84, "text": " As you take different actions in the lunar lander, you see these S, A, R of S, S prime,"}, {"start": 384.84, "end": 388.15999999999997, "text": " and we call them tuples in Python codes many times."}, {"start": 388.15999999999997, "end": 393.32, "text": " For example, maybe one time you're in some state S and just to give this an index, I'm"}, {"start": 393.32, "end": 395.53999999999996, "text": " going to call this S1."}, {"start": 395.53999999999996, "end": 401.48, "text": " And you happen to take some action A1, this could be nothing left main fester or right."}, {"start": 401.48, "end": 408.8, "text": " As a result of which you got some reward, and you wound up at some state S prime one."}, {"start": 408.8, "end": 413.40000000000003, "text": " And maybe a different time you're in some other state S2, you took some other action,"}, {"start": 413.40000000000003, "end": 418.08000000000004, "text": " could be a good action, could be a bad action, could be any of the four actions, and you"}, {"start": 418.08000000000004, "end": 424.40000000000003, "text": " got the reward and then you wound up with S prime two, and so on, multiple times."}, {"start": 424.40000000000003, "end": 428.8, "text": " And maybe you've done this 10,000 times or even more than 10,000 times."}, {"start": 428.8, "end": 435.48, "text": " So you would have to save the way with not just S1, A1, and so on, but up to S10,000,"}, {"start": 435.48, "end": 436.88, "text": " A10,000."}, {"start": 436.88, "end": 443.6, "text": " It turns out that each of these lists of four elements, each of these tuples will be enough"}, {"start": 443.6, "end": 449.32, "text": " to create a single training example, X1, Y1."}, {"start": 449.32, "end": 451.56, "text": " In particular, here's how you do it."}, {"start": 451.56, "end": 454.96000000000004, "text": " There are four elements in this first tuple."}, {"start": 454.96, "end": 461.32, "text": " The first two will be used to compute X1, and the second two will be used to compute"}, {"start": 461.32, "end": 463.08, "text": " Y1."}, {"start": 463.08, "end": 471.32, "text": " In particular, X1 is just going to be S1, A1 put together."}, {"start": 471.32, "end": 477.88, "text": " S1 would be eight numbers, the state of the lunar lander, A1 would be four numbers, the"}, {"start": 477.88, "end": 483.4, "text": " one-hot encoding of whatever action this was, and Y1 would be computed using the right hand"}, {"start": 483.4, "end": 486.23999999999995, "text": " side of the Bellman equation."}, {"start": 486.23999999999995, "end": 494.64, "text": " In particular, the Bellman equation says when you input S1, A1, you want Q of S1, A1 to"}, {"start": 494.64, "end": 504.88, "text": " be this right hand side, to be equal to R of S1 plus gamma max over A prime of Q of"}, {"start": 504.88, "end": 508.32, "text": " S1 prime A prime."}, {"start": 508.32, "end": 513.72, "text": " And notice that these two elements of the tuple on the right give you enough information"}, {"start": 513.72, "end": 515.12, "text": " to compute this."}, {"start": 515.12, "end": 520.98, "text": " You know what is R of S1, that's the reward you've saved away here, plus the discount"}, {"start": 520.98, "end": 527.96, "text": " factor gamma times max over all actions A prime of Q of S prime 1, that's the state"}, {"start": 527.96, "end": 533.42, "text": " you got to in this example, and then take the max over all possible actions A prime."}, {"start": 533.42, "end": 540.86, "text": " And so I'm going to call this Y1, and when you compute this, this will be some number"}, {"start": 540.86, "end": 550.9599999999999, "text": " like 12.5 or 17 or 0.5 or some other number, and we'll save that number here as Y1 so that"}, {"start": 550.9599999999999, "end": 558.38, "text": " this pair, X1, Y1, becomes the first trading example in this little data set we're computing."}, {"start": 558.38, "end": 566.32, "text": " Now you may be wondering, wait, where does Q of S prime A prime or Q of S prime 1 A prime"}, {"start": 566.32, "end": 567.32, "text": " come from?"}, {"start": 567.32, "end": 572.84, "text": " Well, initially, we don't know what is the Q function, but it turns out that when you"}, {"start": 572.84, "end": 577.04, "text": " don't know what is the Q function, you can start off with taking a totally random guess"}, {"start": 577.04, "end": 579.16, "text": " for what is the Q function."}, {"start": 579.16, "end": 583.48, "text": " And we'll see on the next slide that the algorithm will work nonetheless."}, {"start": 583.48, "end": 589.16, "text": " And every step, Q here is just going to be some guess that will get better over time,"}, {"start": 589.16, "end": 592.72, "text": " it turns out, of what is the actual Q function."}, {"start": 592.72, "end": 594.24, "text": " Let's look at the second example."}, {"start": 594.24, "end": 600.04, "text": " If you had a second experience where you state S2, took action A2, got that reward, and then"}, {"start": 600.04, "end": 605.48, "text": " got to that state, then we would create a second training example in this data set,"}, {"start": 605.48, "end": 613.6800000000001, "text": " X2, where the input is now S2, A2, so the first two elements go to computing the input"}, {"start": 613.6800000000001, "end": 628.24, "text": " X, and then Y2 will be equal to R of S2 plus gamma max over A prime Q of S prime 2 A prime."}, {"start": 628.24, "end": 634.48, "text": " And whatever this number is, Y2, we put this over here in our small but growing training"}, {"start": 634.48, "end": 635.64, "text": " set."}, {"start": 635.64, "end": 643.48, "text": " And so on and so forth until maybe you end up with 10,000 training examples with these"}, {"start": 643.48, "end": 646.72, "text": " X, Y pairs."}, {"start": 646.72, "end": 653.32, "text": " And what we'll see later is that we'll actually take this training set where the Xs are inputs"}, {"start": 653.32, "end": 661.5600000000001, "text": " with 12 features and the Ys are just numbers, and we train a neural network with, say, the"}, {"start": 661.56, "end": 668.8399999999999, "text": " mean squared error loss to try to predict Y as a function of the input X."}, {"start": 668.8399999999999, "end": 674.3199999999999, "text": " So what I describe here is just one piece of the learning algorithm we'll use."}, {"start": 674.3199999999999, "end": 678.1199999999999, "text": " Let's put it all together on the next slide and see how it all comes together into a single"}, {"start": 678.1199999999999, "end": 679.76, "text": " algorithm."}, {"start": 679.76, "end": 685.4799999999999, "text": " So let's take a look at what the full algorithm for learning the Q function is like."}, {"start": 685.4799999999999, "end": 690.92, "text": " First, we're going to take our neural network and initialize all the parameters of the neural"}, {"start": 690.92, "end": 692.36, "text": " network randomly."}, {"start": 692.36, "end": 696.28, "text": " Initially, we have no idea what is the Q function."}, {"start": 696.28, "end": 700.92, "text": " So let's just pick totally random values of the weights and we'll pretend that this neural"}, {"start": 700.92, "end": 705.24, "text": " network is our initial random guess for the Q function."}, {"start": 705.24, "end": 710.24, "text": " This is a little bit like when you are training linear regression and you initialize all the"}, {"start": 710.24, "end": 715.5999999999999, "text": " parameters randomly and then use gradient descent to improve the parameters."}, {"start": 715.5999999999999, "end": 717.56, "text": " Initializing randomly for now is fine."}, {"start": 717.56, "end": 722.3599999999999, "text": " What's important is whether the algorithm can slowly improve the parameters to get to"}, {"start": 722.3599999999999, "end": 723.64, "text": " a better estimate."}, {"start": 723.64, "end": 726.88, "text": " Next, we will repeatedly do the following."}, {"start": 726.88, "end": 729.7199999999999, "text": " We will take actions in the lunar land."}, {"start": 729.7199999999999, "end": 733.1999999999999, "text": " So fly around randomly, take some good actions, take some bad actions."}, {"start": 733.1999999999999, "end": 734.8599999999999, "text": " It's okay either way."}, {"start": 734.8599999999999, "end": 740.16, "text": " But you get lots of these tuples of when it was in some state, you took some action A,"}, {"start": 740.16, "end": 743.7199999999999, "text": " got a reward R of S, and you got to some state S prime."}, {"start": 743.72, "end": 751.2, "text": " And what we will do is store the 10,000 most recent examples of these tuples."}, {"start": 751.2, "end": 757.76, "text": " As you run this algorithm, you will see many, many steps in the lunar lander, maybe hundreds"}, {"start": 757.76, "end": 759.36, "text": " of thousands of steps."}, {"start": 759.36, "end": 765.0400000000001, "text": " But to make sure we don't end up using excessive computer memory, common practice is to just"}, {"start": 765.0400000000001, "end": 772.5600000000001, "text": " remember the 10,000 most recent such tuples that we saw taking actions in the MDP."}, {"start": 772.56, "end": 779.56, "text": " This technique of storing the most recent examples only is sometimes called the replay"}, {"start": 779.56, "end": 782.88, "text": " buffer in a reinforcement learning algorithm."}, {"start": 782.88, "end": 789.1999999999999, "text": " So for now, we just find the lunar lander randomly, sometimes crashing, sometimes not,"}, {"start": 789.1999999999999, "end": 793.52, "text": " and getting these tuples as experience for our learning algorithm."}, {"start": 793.52, "end": 797.5999999999999, "text": " Occasionally, then, we will train the neural network."}, {"start": 797.6, "end": 803.08, "text": " In order to train the neural network, here's what we'll do, we'll look at these 10,000"}, {"start": 803.08, "end": 810.9200000000001, "text": " most recent tuples we had saved and create a training set of 10,000 examples."}, {"start": 810.9200000000001, "end": 814.6800000000001, "text": " So training set needs lots of pairs of X and Y."}, {"start": 814.6800000000001, "end": 822.1600000000001, "text": " And for our training examples, X will be the SA from this part of the tuple."}, {"start": 822.1600000000001, "end": 826.1600000000001, "text": " So it'll be a list of 12 numbers, the eight numbers for the state and the four numbers"}, {"start": 826.16, "end": 828.7199999999999, "text": " for the one-hot encoding of the action."}, {"start": 828.7199999999999, "end": 835.12, "text": " And the target value that we want a neural network to try to predict will be Y equals"}, {"start": 835.12, "end": 840.6, "text": " R of S plus gamma max of A prime Q of S prime A prime."}, {"start": 840.6, "end": 842.16, "text": " How do we get this value of Q?"}, {"start": 842.16, "end": 846.4, "text": " Well, initially, is this neural network that we had randomly initialized."}, {"start": 846.4, "end": 849.64, "text": " So it may not be a very good guess, but it's a guess."}, {"start": 849.64, "end": 856.72, "text": " Simply creating these 10,000 training examples will have training examples X1, Y1 through"}, {"start": 856.72, "end": 861.92, "text": " X 10,000, Y 10,000."}, {"start": 861.92, "end": 864.8, "text": " And so we'll train a neural network."}, {"start": 864.8, "end": 872.4, "text": " And I'm going to call the new neural network Q new, such that Q new of SA learns to approximate"}, {"start": 872.4, "end": 873.4, "text": " Y."}, {"start": 873.4, "end": 880.88, "text": " So this is exactly training that neural network to output F with parameters W and B to input"}, {"start": 880.88, "end": 885.68, "text": " X to try to approximate the target value Y."}, {"start": 885.68, "end": 892.0, "text": " Now, this neural network should be a slightly better estimate of what the Q function or"}, {"start": 892.0, "end": 894.52, "text": " the state action value function should be."}, {"start": 894.52, "end": 900.56, "text": " And so what we'll do is we're going to take Q and set it to this new neural network that"}, {"start": 900.56, "end": 902.52, "text": " we had just learned."}, {"start": 902.52, "end": 906.96, "text": " Many of the ideas in this algorithm are due to min at all."}, {"start": 906.96, "end": 913.1999999999999, "text": " And it turns out that if you run this algorithm where you start with a really random guess"}, {"start": 913.1999999999999, "end": 918.96, "text": " of the Q function, but use Bellman's equations to repeatedly try to improve the estimates"}, {"start": 918.96, "end": 923.96, "text": " of the Q function, then by doing this over and over, taking lots of actions, training"}, {"start": 923.96, "end": 928.3199999999999, "text": " a model that will improve your guess for the Q function."}, {"start": 928.32, "end": 933.1600000000001, "text": " And so for the next model you train, you now have a slightly better estimate of what is"}, {"start": 933.1600000000001, "end": 934.6400000000001, "text": " the Q function."}, {"start": 934.6400000000001, "end": 937.7600000000001, "text": " And then the next model you train will be even better."}, {"start": 937.7600000000001, "end": 943.1600000000001, "text": " And when you update Q equals Q new, then for the next time you train a model, Q of S prime"}, {"start": 943.1600000000001, "end": 945.84, "text": " A prime will be an even better estimate."}, {"start": 945.84, "end": 951.0, "text": " And so as you run this algorithm on every iteration, Q of S prime A prime hopefully"}, {"start": 951.0, "end": 955.5400000000001, "text": " becomes an even better estimate of the Q function."}, {"start": 955.54, "end": 960.4399999999999, "text": " So that when you run the algorithm long enough, this will actually become a pretty good estimate"}, {"start": 960.4399999999999, "end": 968.68, "text": " of the true value of Q of S A so that you can then use this to pick hopefully good actions"}, {"start": 968.68, "end": 970.36, "text": " for the MDP."}, {"start": 970.36, "end": 975.64, "text": " The algorithm you just saw is sometimes called the DQN algorithm, which stands for deep Q"}, {"start": 975.64, "end": 982.4, "text": " network, because you're using deep learning and neural network to train a model to learn"}, {"start": 982.4, "end": 983.68, "text": " the Q function."}, {"start": 983.68, "end": 989.0799999999999, "text": " So hence DQN or deep Q network, DQ using a neural network."}, {"start": 989.0799999999999, "end": 994.8, "text": " And if you use the algorithm as I described it, it will kind of work okay on the lunar"}, {"start": 994.8, "end": 995.8, "text": " lander."}, {"start": 995.8, "end": 997.64, "text": " Maybe it'll take a long time to converge."}, {"start": 997.64, "end": 1001.0799999999999, "text": " Maybe it won't land perfectly, but it'll sort of work."}, {"start": 1001.0799999999999, "end": 1005.92, "text": " But it turns out that with a couple refinements to the algorithm, it can work much better."}, {"start": 1005.92, "end": 1009.88, "text": " So in the next few videos, let's take a look at some refinements to the algorithm that"}, {"start": 1009.88, "end": 1016.88, "text": " you just saw."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=4hlH4TXtNms
10.13 Continuous State Spaces|Algorithm refinement Improved neural network architecture-ML Andrew Ng
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
In the last video, we saw a neural network architecture that would input the state in action and attempt to output the Q function, Q of sA. It turns out that there's a change to neural network architecture that makes this algorithm much more efficient. So most implementations of DQN actually use this more efficient architecture that we'll see in this video. Let's take a look. This was the neural network architecture we saw previously where it would input 12 numbers and output Q of sA. Whenever we are in some state s, we would have to carry out inference in the neural network separately four times to compute these four values so as to pick the action A that gives us the largest Q value. This is inefficient because we have to carry out inference four times from every single state. Instead, it turns out to be more efficient to train a single neural network to output all four of these values simultaneously. This is what it looks like. Here's a modified neural network architecture where the input is eight numbers corresponding to the state of the lunar lander. It then goes through the neural network with 64 units in the first hidden layer, 64 units in the second hidden layer. Now the output unit has four output units. The job of the neural network is to have the four output units output Q of s nothing, Q of s left, Q of s main, and Q of s right. The job of the neural network is to compute simultaneously the Q value for all four possible actions for when we are in the state s. This turns out to be more efficient because given a state s, we can run inference just once and get all four of these values and then very quickly pick the action A that maximizes Q of sA. You notice also in Bellman's equations, there's a step in which we have to compute max over A prime, Q of s prime A prime, this is multiplied by gamma and then there was plus R of s up here. This neural network also makes it much more efficient to compute this because we're getting Q of s prime A prime for all actions A prime at the same time. So you then just pick the max to compute this value for the right hand side of Bellman's equations. This change to the neural network architecture makes the algorithm much more efficient and so we will be using this architecture in the practice lab. Next, there's one other idea that will help the algorithm a lot, which is something called an Epsilon greedy policy, which affects how you choose actions even while you're still learning. Let's take a look at the next video at what that means.
[{"start": 0.0, "end": 7.640000000000001, "text": " In the last video, we saw a neural network architecture that would input the state in"}, {"start": 7.640000000000001, "end": 12.82, "text": " action and attempt to output the Q function, Q of sA."}, {"start": 12.82, "end": 17.12, "text": " It turns out that there's a change to neural network architecture that makes this algorithm"}, {"start": 17.12, "end": 18.92, "text": " much more efficient."}, {"start": 18.92, "end": 24.28, "text": " So most implementations of DQN actually use this more efficient architecture that we'll"}, {"start": 24.28, "end": 25.28, "text": " see in this video."}, {"start": 25.28, "end": 27.240000000000002, "text": " Let's take a look."}, {"start": 27.24, "end": 33.36, "text": " This was the neural network architecture we saw previously where it would input 12 numbers"}, {"start": 33.36, "end": 35.64, "text": " and output Q of sA."}, {"start": 35.64, "end": 41.56, "text": " Whenever we are in some state s, we would have to carry out inference in the neural"}, {"start": 41.56, "end": 48.84, "text": " network separately four times to compute these four values so as to pick the action A that"}, {"start": 48.84, "end": 52.16, "text": " gives us the largest Q value."}, {"start": 52.16, "end": 58.12, "text": " This is inefficient because we have to carry out inference four times from every single"}, {"start": 58.12, "end": 59.12, "text": " state."}, {"start": 59.12, "end": 64.36, "text": " Instead, it turns out to be more efficient to train a single neural network to output"}, {"start": 64.36, "end": 69.22, "text": " all four of these values simultaneously."}, {"start": 69.22, "end": 70.75999999999999, "text": " This is what it looks like."}, {"start": 70.75999999999999, "end": 78.19999999999999, "text": " Here's a modified neural network architecture where the input is eight numbers corresponding"}, {"start": 78.19999999999999, "end": 81.36, "text": " to the state of the lunar lander."}, {"start": 81.36, "end": 86.03999999999999, "text": " It then goes through the neural network with 64 units in the first hidden layer, 64 units"}, {"start": 86.03999999999999, "end": 87.92, "text": " in the second hidden layer."}, {"start": 87.92, "end": 92.68, "text": " Now the output unit has four output units."}, {"start": 92.68, "end": 99.56, "text": " The job of the neural network is to have the four output units output Q of s nothing, Q"}, {"start": 99.56, "end": 104.4, "text": " of s left, Q of s main, and Q of s right."}, {"start": 104.4, "end": 110.44, "text": " The job of the neural network is to compute simultaneously the Q value for all four possible"}, {"start": 110.44, "end": 115.52, "text": " actions for when we are in the state s."}, {"start": 115.52, "end": 120.72, "text": " This turns out to be more efficient because given a state s, we can run inference just"}, {"start": 120.72, "end": 128.04, "text": " once and get all four of these values and then very quickly pick the action A that maximizes"}, {"start": 128.04, "end": 129.88, "text": " Q of sA."}, {"start": 129.88, "end": 135.96, "text": " You notice also in Bellman's equations, there's a step in which we have to compute max over"}, {"start": 135.96, "end": 142.96, "text": " A prime, Q of s prime A prime, this is multiplied by gamma and then there was plus R of s up"}, {"start": 142.96, "end": 144.28, "text": " here."}, {"start": 144.28, "end": 149.54000000000002, "text": " This neural network also makes it much more efficient to compute this because we're getting"}, {"start": 149.54000000000002, "end": 153.56, "text": " Q of s prime A prime for all actions A prime at the same time."}, {"start": 153.56, "end": 159.08, "text": " So you then just pick the max to compute this value for the right hand side of Bellman's"}, {"start": 159.08, "end": 160.08, "text": " equations."}, {"start": 160.08, "end": 163.76000000000002, "text": " This change to the neural network architecture makes the algorithm much more efficient and"}, {"start": 163.76, "end": 168.2, "text": " so we will be using this architecture in the practice lab."}, {"start": 168.2, "end": 172.76, "text": " Next, there's one other idea that will help the algorithm a lot, which is something called"}, {"start": 172.76, "end": 177.35999999999999, "text": " an Epsilon greedy policy, which affects how you choose actions even while you're still"}, {"start": 177.35999999999999, "end": 178.35999999999999, "text": " learning."}, {"start": 178.36, "end": 194.72000000000003, "text": " Let's take a look at the next video at what that means."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=tX7L_441Jlo
10.14 Continuous State Spaces | Algorithm refinement ϵ greedy policy -[Machine Learning | Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
In the learning algorithm that we developed, even while you're still learning how to approximate QFSA, you need to take some actions in the lunar lander. So how do you pick those actions while you're still learning? The most common way to do so is to use something called an epsilon greedy policy. Let's take a look at how that works. Here's the algorithm that you saw earlier. One of the steps in the algorithm is to take actions in the lunar lander. So when the learning algorithm is still running, we don't really know what's the best action to take in every state. If we did, we'd already be done learning. But even while we're still learning and don't have a very good estimate of QFSA yet, how do we take actions in this step of the learning algorithm? Let's look at some options. When you're in some state S, we might not want to take actions totally at random because that will often be a bad action. So one natural option would be to pick, whenever we're in state S, pick an action A that maximizes QFSA. So we may say, even if QFSA is not a great estimate of the Q function, let's just do our best and use our current guess of QFSA and pick the action A that maximizes it. It turns out this may work okay, but isn't the best option. Instead, here's what is commonly done. Here's option two, which is most of the time, let's say with probability 0.95, pick the action that maximizes QFSA. So most of the time, we'll try to pick a good action using our current guess of QFSA. But a small fraction of the time, let's say 5% of the time, we'll pick an action A randomly. Why do we want to occasionally pick an action randomly? Well here's why. Suppose there's some strange reason that QFSA was initialized randomly so that the learning algorithm thinks that firing the main thrusters is never a good idea, maybe the neural network parameters were initialized so that Q of S main is always very low. If that's the case, then the neural network, because it's trying to pick the action A that maximizes QFSA, it will never ever try firing the main thruster. And because it never ever tries firing the main thruster, it will never learn that firing the main thruster is actually sometimes a good idea. And so because of the random initialization, if the neural network somehow initially gets stuck in its mind that something's a bad idea just by chance, then option one means that it will never try out those actions and discover that maybe it's actually a good idea to take that action, like firing the main thruster sometimes. So under option two, on every step, we have some small probability of trying out different actions so that the neural network can learn to overcome its own possible preconceptions about what might be a bad idea that turns out not to be the case. This idea of picking actions randomly is sometimes called an exploration step because we're going to try out something that may not be the best idea, but we're going to just try out some action in some circumstance to explore and learn more about an action in a circumstance where we may not have had as much experience before. Taking an action that maximizes QFSA, sometimes this is called a greedy action because we're trying to actually maximize our return by picking this. Or in the reinforcement learning literature, sometimes you also hear this as an exploitation step. I know that exploitation is not a good thing. Nobody should ever exploit anyone else, but historically this was the term user-enforced learning to say, let's exploit everything we've learned to do the best we can. So in the reinforcement learning literature, sometimes you hear people talk about the exploration versus exploitation trade-off, which refers to how often do you take actions randomly or take actions that may not be the best in order to learn more versus trying to maximize your return by say taking the action that maximizes QFSA. This approach that is option two has a name is called an epsilon greedy policy where here epsilon is 0.05, is the probability of picking an action randomly. And this is the most common way to make your reinforcement learning algorithm explore a little bit, even while occasionally or maybe most of the time taking greedy actions. Oh, and by the way, a lot of people have commented that the name epsilon greedy policy is confusing because you're actually being greedy 95% of the time, not 5% of the time. So maybe one minus epsilon greedy policy, because it's 95% greedy, 5% exploring. That's actually a more accurate description of the algorithm. But for historical reasons, the name epsilon greedy policy is what has stuck. And so this is the name that people use to refer to the policy that explores actually epsilon fraction of the time rather than this greedy epsilon fraction of the time. Lastly, one of the trick that sometimes user enforcement learning is to start off epsilon high. So initially, you are taking random actions a lot of the time, and then gradually decrease it so that over time, you are less likely to take actions randomly and more likely to use your improving estimates of the Q function to pick good actions. So for example, in the lunar lander exercise, you might start off with epsilon very, very high, maybe even epsilon equals 1.0. So you're just picking actions completely at random initially, and then gradually decrease it all the way down to say, 0.01. So that eventually you're taking greedy actions 99% of the time and acting randomly only a very small 1% of the time. If this seems complicated, don't worry about it. We'll provide the code in the practice lab, in the Jupyter lab that shows you how to do this. If you were to implement the algorithm as we've described it with the more efficient neural network architecture and with an epsilon greedy exploration policy, you find that it will work pretty well on the lunar lander. One of the things that I've noticed for reinforcement learning algorithm is that compared to supervised learning, they're more finicky in terms of the choice of hyperparameters. So for example, in supervised learning, if you set the learning rate a little bit too small, then maybe the algorithm will take longer to learn. Maybe it takes three times as long to train, which is annoying, but maybe not that bad. Whereas in reinforcement learning, find that if you set the value of epsilon not quite as well or set other parameters not quite as well, it doesn't take three times as long to learn. It may take 10 times or 100 times as long to learn. Reinforcement learning algorithms, I think because they're less mature than supervised learning algorithms, are much more finicky to little choices of parameters like that. It actually sometimes is frankly more frustrating to tune these parameters for reinforcement learning algorithm compared to a supervised learning algorithm. But again, if you're worried about the practice lab, the programming exercise, we'll give you a sense of good parameters to use in the programming exercise so that you should be able to do that and successfully land the lunar lander, hopefully without too many problems. In the next optional video, I want to describe a couple more algorithm refinements, mini batching and also using soft updates, even without these additional refinements, the algorithm will work okay. But these are additional refinements that make the algorithm run much faster. And it's okay if you skip this video, we've provided everything you need in the practice lab to hopefully successfully complete it. But if you're interested in learning about more of these details of tuning reinforcement learning algorithms, then come with me and let's see in the next video, mini batching and soft updates.
[{"start": 0.0, "end": 8.32, "text": " In the learning algorithm that we developed, even while you're still learning how to approximate"}, {"start": 8.32, "end": 13.88, "text": " QFSA, you need to take some actions in the lunar lander."}, {"start": 13.88, "end": 17.080000000000002, "text": " So how do you pick those actions while you're still learning?"}, {"start": 17.080000000000002, "end": 21.56, "text": " The most common way to do so is to use something called an epsilon greedy policy."}, {"start": 21.56, "end": 24.48, "text": " Let's take a look at how that works."}, {"start": 24.48, "end": 27.88, "text": " Here's the algorithm that you saw earlier."}, {"start": 27.88, "end": 33.6, "text": " One of the steps in the algorithm is to take actions in the lunar lander."}, {"start": 33.6, "end": 38.72, "text": " So when the learning algorithm is still running, we don't really know what's the best action"}, {"start": 38.72, "end": 39.96, "text": " to take in every state."}, {"start": 39.96, "end": 42.8, "text": " If we did, we'd already be done learning."}, {"start": 42.8, "end": 48.4, "text": " But even while we're still learning and don't have a very good estimate of QFSA yet, how"}, {"start": 48.4, "end": 52.84, "text": " do we take actions in this step of the learning algorithm?"}, {"start": 52.84, "end": 54.64, "text": " Let's look at some options."}, {"start": 54.64, "end": 60.76, "text": " When you're in some state S, we might not want to take actions totally at random because"}, {"start": 60.76, "end": 64.12, "text": " that will often be a bad action."}, {"start": 64.12, "end": 72.44, "text": " So one natural option would be to pick, whenever we're in state S, pick an action A that maximizes"}, {"start": 72.44, "end": 74.36, "text": " QFSA."}, {"start": 74.36, "end": 80.66, "text": " So we may say, even if QFSA is not a great estimate of the Q function, let's just do"}, {"start": 80.66, "end": 87.24, "text": " our best and use our current guess of QFSA and pick the action A that maximizes it."}, {"start": 87.24, "end": 92.96, "text": " It turns out this may work okay, but isn't the best option."}, {"start": 92.96, "end": 96.52, "text": " Instead, here's what is commonly done."}, {"start": 96.52, "end": 102.94, "text": " Here's option two, which is most of the time, let's say with probability 0.95, pick the"}, {"start": 102.94, "end": 107.08, "text": " action that maximizes QFSA."}, {"start": 107.08, "end": 113.0, "text": " So most of the time, we'll try to pick a good action using our current guess of QFSA."}, {"start": 113.0, "end": 119.84, "text": " But a small fraction of the time, let's say 5% of the time, we'll pick an action A randomly."}, {"start": 119.84, "end": 123.2, "text": " Why do we want to occasionally pick an action randomly?"}, {"start": 123.2, "end": 125.0, "text": " Well here's why."}, {"start": 125.0, "end": 131.12, "text": " Suppose there's some strange reason that QFSA was initialized randomly so that the learning"}, {"start": 131.12, "end": 137.56, "text": " algorithm thinks that firing the main thrusters is never a good idea, maybe the neural network"}, {"start": 137.56, "end": 145.5, "text": " parameters were initialized so that Q of S main is always very low."}, {"start": 145.5, "end": 150.36, "text": " If that's the case, then the neural network, because it's trying to pick the action A that"}, {"start": 150.36, "end": 155.92000000000002, "text": " maximizes QFSA, it will never ever try firing the main thruster."}, {"start": 155.92000000000002, "end": 161.04000000000002, "text": " And because it never ever tries firing the main thruster, it will never learn that firing"}, {"start": 161.04, "end": 164.4, "text": " the main thruster is actually sometimes a good idea."}, {"start": 164.4, "end": 170.6, "text": " And so because of the random initialization, if the neural network somehow initially gets"}, {"start": 170.6, "end": 177.35999999999999, "text": " stuck in its mind that something's a bad idea just by chance, then option one means that"}, {"start": 177.35999999999999, "end": 183.35999999999999, "text": " it will never try out those actions and discover that maybe it's actually a good idea to take"}, {"start": 183.35999999999999, "end": 186.6, "text": " that action, like firing the main thruster sometimes."}, {"start": 186.6, "end": 192.32, "text": " So under option two, on every step, we have some small probability of trying out different"}, {"start": 192.32, "end": 200.22, "text": " actions so that the neural network can learn to overcome its own possible preconceptions"}, {"start": 200.22, "end": 204.92, "text": " about what might be a bad idea that turns out not to be the case."}, {"start": 204.92, "end": 212.57999999999998, "text": " This idea of picking actions randomly is sometimes called an exploration step because we're going"}, {"start": 212.58, "end": 217.52, "text": " to try out something that may not be the best idea, but we're going to just try out some"}, {"start": 217.52, "end": 222.64000000000001, "text": " action in some circumstance to explore and learn more about an action in a circumstance"}, {"start": 222.64000000000001, "end": 226.36, "text": " where we may not have had as much experience before."}, {"start": 226.36, "end": 234.60000000000002, "text": " Taking an action that maximizes QFSA, sometimes this is called a greedy action because we're"}, {"start": 234.60000000000002, "end": 240.64000000000001, "text": " trying to actually maximize our return by picking this."}, {"start": 240.64, "end": 246.55999999999997, "text": " Or in the reinforcement learning literature, sometimes you also hear this as an exploitation"}, {"start": 246.55999999999997, "end": 247.55999999999997, "text": " step."}, {"start": 247.55999999999997, "end": 250.95999999999998, "text": " I know that exploitation is not a good thing."}, {"start": 250.95999999999998, "end": 256.03999999999996, "text": " Nobody should ever exploit anyone else, but historically this was the term user-enforced"}, {"start": 256.03999999999996, "end": 260.68, "text": " learning to say, let's exploit everything we've learned to do the best we can."}, {"start": 260.68, "end": 266.03999999999996, "text": " So in the reinforcement learning literature, sometimes you hear people talk about the exploration"}, {"start": 266.04, "end": 271.92, "text": " versus exploitation trade-off, which refers to how often do you take actions randomly"}, {"start": 271.92, "end": 279.0, "text": " or take actions that may not be the best in order to learn more versus trying to maximize"}, {"start": 279.0, "end": 283.70000000000005, "text": " your return by say taking the action that maximizes QFSA."}, {"start": 283.70000000000005, "end": 292.12, "text": " This approach that is option two has a name is called an epsilon greedy policy where here"}, {"start": 292.12, "end": 299.04, "text": " epsilon is 0.05, is the probability of picking an action randomly."}, {"start": 299.04, "end": 306.16, "text": " And this is the most common way to make your reinforcement learning algorithm explore a"}, {"start": 306.16, "end": 311.6, "text": " little bit, even while occasionally or maybe most of the time taking greedy actions."}, {"start": 311.6, "end": 317.76, "text": " Oh, and by the way, a lot of people have commented that the name epsilon greedy policy is confusing"}, {"start": 317.76, "end": 323.44, "text": " because you're actually being greedy 95% of the time, not 5% of the time."}, {"start": 323.44, "end": 330.48, "text": " So maybe one minus epsilon greedy policy, because it's 95% greedy, 5% exploring."}, {"start": 330.48, "end": 333.48, "text": " That's actually a more accurate description of the algorithm."}, {"start": 333.48, "end": 338.96, "text": " But for historical reasons, the name epsilon greedy policy is what has stuck."}, {"start": 338.96, "end": 344.48, "text": " And so this is the name that people use to refer to the policy that explores actually"}, {"start": 344.48, "end": 349.92, "text": " epsilon fraction of the time rather than this greedy epsilon fraction of the time."}, {"start": 349.92, "end": 355.12, "text": " Lastly, one of the trick that sometimes user enforcement learning is to start off epsilon"}, {"start": 355.12, "end": 356.46000000000004, "text": " high."}, {"start": 356.46000000000004, "end": 364.12, "text": " So initially, you are taking random actions a lot of the time, and then gradually decrease"}, {"start": 364.12, "end": 371.48, "text": " it so that over time, you are less likely to take actions randomly and more likely to"}, {"start": 371.48, "end": 377.16, "text": " use your improving estimates of the Q function to pick good actions."}, {"start": 377.16, "end": 383.04, "text": " So for example, in the lunar lander exercise, you might start off with epsilon very, very"}, {"start": 383.04, "end": 386.24, "text": " high, maybe even epsilon equals 1.0."}, {"start": 386.24, "end": 390.72, "text": " So you're just picking actions completely at random initially, and then gradually decrease"}, {"start": 390.72, "end": 394.62, "text": " it all the way down to say, 0.01."}, {"start": 394.62, "end": 401.44, "text": " So that eventually you're taking greedy actions 99% of the time and acting randomly only"}, {"start": 401.44, "end": 404.16, "text": " a very small 1% of the time."}, {"start": 404.16, "end": 406.71999999999997, "text": " If this seems complicated, don't worry about it."}, {"start": 406.71999999999997, "end": 411.92, "text": " We'll provide the code in the practice lab, in the Jupyter lab that shows you how to do"}, {"start": 411.92, "end": 413.24, "text": " this."}, {"start": 413.24, "end": 417.96, "text": " If you were to implement the algorithm as we've described it with the more efficient"}, {"start": 417.96, "end": 423.6, "text": " neural network architecture and with an epsilon greedy exploration policy, you find that it"}, {"start": 423.6, "end": 427.08, "text": " will work pretty well on the lunar lander."}, {"start": 427.08, "end": 432.44, "text": " One of the things that I've noticed for reinforcement learning algorithm is that compared to supervised"}, {"start": 432.44, "end": 437.0, "text": " learning, they're more finicky in terms of the choice of hyperparameters."}, {"start": 437.0, "end": 442.03999999999996, "text": " So for example, in supervised learning, if you set the learning rate a little bit too"}, {"start": 442.03999999999996, "end": 445.76, "text": " small, then maybe the algorithm will take longer to learn."}, {"start": 445.76, "end": 451.24, "text": " Maybe it takes three times as long to train, which is annoying, but maybe not that bad."}, {"start": 451.24, "end": 456.76, "text": " Whereas in reinforcement learning, find that if you set the value of epsilon not quite"}, {"start": 456.76, "end": 461.71999999999997, "text": " as well or set other parameters not quite as well, it doesn't take three times as long"}, {"start": 461.71999999999997, "end": 462.71999999999997, "text": " to learn."}, {"start": 462.71999999999997, "end": 467.52, "text": " It may take 10 times or 100 times as long to learn."}, {"start": 467.52, "end": 471.28, "text": " Reinforcement learning algorithms, I think because they're less mature than supervised"}, {"start": 471.28, "end": 476.88, "text": " learning algorithms, are much more finicky to little choices of parameters like that."}, {"start": 476.88, "end": 482.84, "text": " It actually sometimes is frankly more frustrating to tune these parameters for reinforcement"}, {"start": 482.84, "end": 487.28, "text": " learning algorithm compared to a supervised learning algorithm."}, {"start": 487.28, "end": 492.11999999999995, "text": " But again, if you're worried about the practice lab, the programming exercise, we'll give"}, {"start": 492.11999999999995, "end": 496.44, "text": " you a sense of good parameters to use in the programming exercise so that you should be"}, {"start": 496.44, "end": 502.59999999999997, "text": " able to do that and successfully land the lunar lander, hopefully without too many problems."}, {"start": 502.59999999999997, "end": 508.79999999999995, "text": " In the next optional video, I want to describe a couple more algorithm refinements, mini"}, {"start": 508.8, "end": 514.72, "text": " batching and also using soft updates, even without these additional refinements, the"}, {"start": 514.72, "end": 516.52, "text": " algorithm will work okay."}, {"start": 516.52, "end": 521.12, "text": " But these are additional refinements that make the algorithm run much faster."}, {"start": 521.12, "end": 525.64, "text": " And it's okay if you skip this video, we've provided everything you need in the practice"}, {"start": 525.64, "end": 528.52, "text": " lab to hopefully successfully complete it."}, {"start": 528.52, "end": 532.52, "text": " But if you're interested in learning about more of these details of tuning reinforcement"}, {"start": 532.52, "end": 537.8, "text": " learning algorithms, then come with me and let's see in the next video, mini batching"}, {"start": 537.8, "end": 539.04, "text": " and soft updates."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=3FkPgerAhXo
10.15 Continuous State Spaces | Algorithm refinement Mini-batch and soft updates (optional)-[ML-Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
In this video, we'll look at two further refinements to the reinforcement learning algorithm you've seen. The first idea is called using mini-batches, and this turns out to be an idea that can both speed up your reinforcement learning algorithm and is also applicable to supervised learning and can help you speed up your supervised learning algorithm as well, like training a neural network or training a linear regression or logistic regression model. The second idea we'll look at is soft updates, which it turns out will help your reinforcement learning algorithm do a better job to converge to a good solution. Let's take a look at mini-batches and soft updates. To understand mini-batches, let's just look at supervised learning to start. Here's a data set of housing sizes and prices that you had seen way back in the first course of this specialization on using linear regression to predict housing prices. There we had come up with this cost function for the parameters w and b. It was 1 over 2m sum of the prediction minus the actual value y squared. And the gradient descent algorithm was to repeatedly update w as w minus the learning rate alpha times the partial derivative with respect to w of the cost j of wb. And similarly to update b as follows. Let me just take this definition of j of wb and substitute it in here. Now when we looked at this example way back when we're starting to talk about linear regression and supervised learning, the training set size m was pretty small. I think we had 47 training examples. But what if you have a very, very large training set, say m equals 100 million. There are many countries, including the United States, with over 100 million housing units. And so a national census will give you a data set that is this order of magnitude of size. The problem with this algorithm when your data set is this big is that every single step of gradient descent requires computing this average over 100 million examples. And this turns out to be very slow. Every step of gradient descent means you would compute this sum or this average over 100 million examples. Then you take one tiny gradient descent step and you go back and have to scan over your entire 100 million example data set again to compute the derivative on the next step. Then you take another tiny gradient descent step and so on and so on. So when the training set size is very large, this gradient descent algorithm turns out to be quite slow. The idea of mini-batch gradient descent is to not use all 100 million training examples on every single iteration through this loop. Instead, we may pick a smaller number, let me call it m prime equals say 1000. And on every step, instead of using all 100 million examples, we would pick some subset of 1000 or m prime examples. And so this inner term becomes 1 over 2 m prime sum over some m prime examples. And so now each iteration through gradient descent requires looking only at 1000 rather than 100 million examples. And every step takes much less time and just leads to a more efficient algorithm. What mini-batch gradient descent does is on the first iteration through the algorithm, maybe it looks at that subset of the data. On the next iteration, maybe it looks at that subset of the data and so on. For the third iteration and so on. So that every iteration is looking at just a subset of the data. So each iteration runs much more quickly. To see why this might be a reasonable algorithm, here's the housing data set. And if on the first iteration, we were to look at just say five examples, this is not the whole data set, but it's slightly representative of the straight line you might want to fit in the end. So taking one gradient descent step to make the algorithm better fit these five examples is okay. But then on the next iteration, you take a different five examples like that shown here. You take one gradient descent step using these five examples. And on the next iteration, you use a different five examples and so on and so forth. You can scan through this list of examples from top to bottom. That would be one way. Another way would be if on every single iteration, you just pick a totally different five examples to use. So you might remember with bash gradient descent, if these are the contours of the cost function J, then bash gradient descent would say start here and take a step, take a step, take a step, take a step, take a step. So every step of gradient descent causes the parameters to reliably get closer to the global minimum of the cost function here in the middle. In contrast, mini-bash gradient descent or a mini-bash learning algorithm will do something like this. If you start here, then the first iteration uses just five examples. So it'll kind of head in the right direction, but maybe not the best gradient descent direction. Then the next iteration, it may do that. The next iteration, that and that, that. And sometimes just by chance, the five examples you chose may be an unlucky choice. They even head in the wrong direction, away from the global minimum and so on and so forth. But on average, mini-bash gradient descent will tend towards the global minimum, not reliably and somewhat noisily, but every iteration is much more computationally inexpensive. And so mini-bash learning or mini-bash gradient descent turns out to be a much faster algorithm when you have a very large training set. So in fact, for supervised learning, when you have a very large training set, mini-bash learning or mini-bash gradient descent or a mini-bash version with other optimization algorithms like Adam is used more common than bash gradient descent. Going back to our reinforcement learning algorithm, this is the algorithm that we had seen previously. So the mini-bash version of this would be, even if you have stored the 10,000 most recent tuples in the replay buffer, what you might choose to do is not to use all 10,000 every time you train a model. Instead, what you might do is just take a subset. So you might choose just 1,000 examples of these SAR of S as prime tuples and use it to create just 1,000 training examples to train the neural network. And it turns out that this will make each iteration of training a model a little bit more noisy, but much faster. And this will overall tend to speed up this reinforcement learning algorithm. So that's how mini-bashing can speed up both a supervised learning algorithm like linear regression as well as this reinforcement learning algorithm where you may use a mini-bash size of say, 1,000 examples, even if you've stored away 10,000 of these tuples in your replay buffer. Finally, there's one other refinement to the algorithm that can make it converge more reliably, which is I've written out this step here of set Q equals Q new, but it turns out that this can make a very abrupt change to Q. If you train a new neural network, Q new, maybe just by chance is not a very good neural network, maybe it's even a little bit worse than the old one, then you just overwrote your Q function with a potentially worse, noisy neural network. So the soft update method helps to prevent Q new from through just one unlucky step getting worse. In particular, the neural network Q will have some parameters W and B, all the parameters for all the layers of the neural network. And when you train the new neural network, you get some parameters W new and B new. So in the original algorithm as described on that slide, you would set W to be equal to W new and B equals B new, right? That's what set Q equals Q new means. With the soft update, what we do is instead set W equals 0.01 times W new plus 0.99 times W. In other words, we're going to make W to be 99% the old version of W plus 1% of the new version W new. So this is called a soft update because whenever we train a new neural network W new, we're only going to accept a little bit of the new value. And similarly, B equals 0.01 times B new plus 0.99 times B. These numbers 0.01 and 0.99, these are hyperparameters that you could set, but it controls how aggressively you move W to W new. And these two numbers are expected to add up to one. One extreme would be if you were to set W equals 1 times W new plus 0 times W, in which case you're back to the original algorithm up here where you're just copying W new onto W. But the soft update allows you to make a more gradual change to Q or to the new network parameters W and B that affect your current guess for the Q function Q of S A. And it turns out that using the soft update method causes the reinforcement learning algorithm to converge more reliably. It makes it less likely that the reinforcement learning algorithm will oscillate or diverge or have other undesirable properties. And so with these two final refinements to the algorithm, mini-batching, which actually applies very well to supervised learning as well, not just reinforcement learning, as well as the idea of soft updates, you should be able to get your learning algorithm to work really well on the lunar lander. The lunar lander is actually a decently complex, decently challenging application and so that you can get it to work and land safely on the moon. I think that's actually really cool and I hope you enjoy playing with the practice lab. Now we've talked a lot about reinforcement learning. Before we wrap up, I'd like to share with you my thoughts on the state of reinforcement learning so that as you go out and build applications using different machine learning techniques, be it supervised and unsupervised reinforcement learning techniques, that you have a framework for understanding where reinforcement learning fits in to the world of machine learning today. So let's go take a look at that in the next video.
[{"start": 0.0, "end": 7.44, "text": " In this video, we'll look at two further refinements to the reinforcement learning algorithm you've"}, {"start": 7.44, "end": 8.44, "text": " seen."}, {"start": 8.44, "end": 13.92, "text": " The first idea is called using mini-batches, and this turns out to be an idea that can"}, {"start": 13.92, "end": 19.240000000000002, "text": " both speed up your reinforcement learning algorithm and is also applicable to supervised"}, {"start": 19.240000000000002, "end": 23.2, "text": " learning and can help you speed up your supervised learning algorithm as well, like training"}, {"start": 23.2, "end": 28.32, "text": " a neural network or training a linear regression or logistic regression model."}, {"start": 28.32, "end": 33.32, "text": " The second idea we'll look at is soft updates, which it turns out will help your reinforcement"}, {"start": 33.32, "end": 37.36, "text": " learning algorithm do a better job to converge to a good solution."}, {"start": 37.36, "end": 41.2, "text": " Let's take a look at mini-batches and soft updates."}, {"start": 41.2, "end": 47.879999999999995, "text": " To understand mini-batches, let's just look at supervised learning to start."}, {"start": 47.879999999999995, "end": 55.04, "text": " Here's a data set of housing sizes and prices that you had seen way back in the first course"}, {"start": 55.04, "end": 60.48, "text": " of this specialization on using linear regression to predict housing prices."}, {"start": 60.48, "end": 66.48, "text": " There we had come up with this cost function for the parameters w and b."}, {"start": 66.48, "end": 73.48, "text": " It was 1 over 2m sum of the prediction minus the actual value y squared."}, {"start": 73.48, "end": 78.88, "text": " And the gradient descent algorithm was to repeatedly update w as w minus the learning"}, {"start": 78.88, "end": 87.52, "text": " rate alpha times the partial derivative with respect to w of the cost j of wb."}, {"start": 87.52, "end": 90.64, "text": " And similarly to update b as follows."}, {"start": 90.64, "end": 98.67999999999999, "text": " Let me just take this definition of j of wb and substitute it in here."}, {"start": 98.67999999999999, "end": 104.91999999999999, "text": " Now when we looked at this example way back when we're starting to talk about linear regression"}, {"start": 104.92, "end": 109.16, "text": " and supervised learning, the training set size m was pretty small."}, {"start": 109.16, "end": 112.04, "text": " I think we had 47 training examples."}, {"start": 112.04, "end": 118.68, "text": " But what if you have a very, very large training set, say m equals 100 million."}, {"start": 118.68, "end": 124.8, "text": " There are many countries, including the United States, with over 100 million housing units."}, {"start": 124.8, "end": 131.64, "text": " And so a national census will give you a data set that is this order of magnitude of size."}, {"start": 131.64, "end": 136.92, "text": " The problem with this algorithm when your data set is this big is that every single"}, {"start": 136.92, "end": 146.07999999999998, "text": " step of gradient descent requires computing this average over 100 million examples."}, {"start": 146.07999999999998, "end": 148.67999999999998, "text": " And this turns out to be very slow."}, {"start": 148.67999999999998, "end": 154.48, "text": " Every step of gradient descent means you would compute this sum or this average over 100"}, {"start": 154.48, "end": 155.95999999999998, "text": " million examples."}, {"start": 155.95999999999998, "end": 161.35999999999999, "text": " Then you take one tiny gradient descent step and you go back and have to scan over your"}, {"start": 161.36, "end": 167.24, "text": " entire 100 million example data set again to compute the derivative on the next step."}, {"start": 167.24, "end": 171.60000000000002, "text": " Then you take another tiny gradient descent step and so on and so on."}, {"start": 171.60000000000002, "end": 177.88000000000002, "text": " So when the training set size is very large, this gradient descent algorithm turns out"}, {"start": 177.88000000000002, "end": 179.32000000000002, "text": " to be quite slow."}, {"start": 179.32000000000002, "end": 186.06, "text": " The idea of mini-batch gradient descent is to not use all 100 million training examples"}, {"start": 186.06, "end": 188.56, "text": " on every single iteration through this loop."}, {"start": 188.56, "end": 195.52, "text": " Instead, we may pick a smaller number, let me call it m prime equals say 1000."}, {"start": 195.52, "end": 203.8, "text": " And on every step, instead of using all 100 million examples, we would pick some subset"}, {"start": 203.8, "end": 207.86, "text": " of 1000 or m prime examples."}, {"start": 207.86, "end": 214.56, "text": " And so this inner term becomes 1 over 2 m prime sum over some m prime examples."}, {"start": 214.56, "end": 221.24, "text": " And so now each iteration through gradient descent requires looking only at 1000 rather"}, {"start": 221.24, "end": 223.44, "text": " than 100 million examples."}, {"start": 223.44, "end": 228.96, "text": " And every step takes much less time and just leads to a more efficient algorithm."}, {"start": 228.96, "end": 234.36, "text": " What mini-batch gradient descent does is on the first iteration through the algorithm,"}, {"start": 234.36, "end": 237.58, "text": " maybe it looks at that subset of the data."}, {"start": 237.58, "end": 242.84, "text": " On the next iteration, maybe it looks at that subset of the data and so on."}, {"start": 242.84, "end": 245.84, "text": " For the third iteration and so on."}, {"start": 245.84, "end": 250.6, "text": " So that every iteration is looking at just a subset of the data."}, {"start": 250.6, "end": 253.52, "text": " So each iteration runs much more quickly."}, {"start": 253.52, "end": 259.62, "text": " To see why this might be a reasonable algorithm, here's the housing data set."}, {"start": 259.62, "end": 266.56, "text": " And if on the first iteration, we were to look at just say five examples, this is not"}, {"start": 266.56, "end": 270.9, "text": " the whole data set, but it's slightly representative of the straight line you might want to fit"}, {"start": 270.9, "end": 271.9, "text": " in the end."}, {"start": 271.9, "end": 276.35999999999996, "text": " So taking one gradient descent step to make the algorithm better fit these five examples"}, {"start": 276.35999999999996, "end": 277.64, "text": " is okay."}, {"start": 277.64, "end": 283.32, "text": " But then on the next iteration, you take a different five examples like that shown here."}, {"start": 283.32, "end": 286.59999999999997, "text": " You take one gradient descent step using these five examples."}, {"start": 286.59999999999997, "end": 291.28, "text": " And on the next iteration, you use a different five examples and so on and so forth."}, {"start": 291.28, "end": 296.44, "text": " You can scan through this list of examples from top to bottom."}, {"start": 296.44, "end": 298.32, "text": " That would be one way."}, {"start": 298.32, "end": 303.84, "text": " Another way would be if on every single iteration, you just pick a totally different five examples"}, {"start": 303.84, "end": 305.36, "text": " to use."}, {"start": 305.36, "end": 311.68, "text": " So you might remember with bash gradient descent, if these are the contours of the cost function"}, {"start": 311.68, "end": 318.48, "text": " J, then bash gradient descent would say start here and take a step, take a step, take a"}, {"start": 318.48, "end": 321.08, "text": " step, take a step, take a step."}, {"start": 321.08, "end": 327.12, "text": " So every step of gradient descent causes the parameters to reliably get closer to the global"}, {"start": 327.12, "end": 330.12, "text": " minimum of the cost function here in the middle."}, {"start": 330.12, "end": 335.44, "text": " In contrast, mini-bash gradient descent or a mini-bash learning algorithm will do something"}, {"start": 335.44, "end": 336.68, "text": " like this."}, {"start": 336.68, "end": 340.96, "text": " If you start here, then the first iteration uses just five examples."}, {"start": 340.96, "end": 345.76, "text": " So it'll kind of head in the right direction, but maybe not the best gradient descent direction."}, {"start": 345.76, "end": 348.4, "text": " Then the next iteration, it may do that."}, {"start": 348.4, "end": 352.16, "text": " The next iteration, that and that, that."}, {"start": 352.16, "end": 356.68, "text": " And sometimes just by chance, the five examples you chose may be an unlucky choice."}, {"start": 356.68, "end": 362.36, "text": " They even head in the wrong direction, away from the global minimum and so on and so forth."}, {"start": 362.36, "end": 368.64, "text": " But on average, mini-bash gradient descent will tend towards the global minimum, not"}, {"start": 368.64, "end": 375.8, "text": " reliably and somewhat noisily, but every iteration is much more computationally inexpensive."}, {"start": 375.8, "end": 381.72, "text": " And so mini-bash learning or mini-bash gradient descent turns out to be a much faster algorithm"}, {"start": 381.72, "end": 384.24, "text": " when you have a very large training set."}, {"start": 384.24, "end": 390.08, "text": " So in fact, for supervised learning, when you have a very large training set, mini-bash"}, {"start": 390.08, "end": 395.84000000000003, "text": " learning or mini-bash gradient descent or a mini-bash version with other optimization"}, {"start": 395.84000000000003, "end": 400.88, "text": " algorithms like Adam is used more common than bash gradient descent."}, {"start": 400.88, "end": 408.92, "text": " Going back to our reinforcement learning algorithm, this is the algorithm that we had seen previously."}, {"start": 408.92, "end": 417.44, "text": " So the mini-bash version of this would be, even if you have stored the 10,000 most recent"}, {"start": 417.44, "end": 424.56, "text": " tuples in the replay buffer, what you might choose to do is not to use all 10,000 every"}, {"start": 424.56, "end": 426.44, "text": " time you train a model."}, {"start": 426.44, "end": 430.16, "text": " Instead, what you might do is just take a subset."}, {"start": 430.16, "end": 438.8, "text": " So you might choose just 1,000 examples of these SAR of S as prime tuples and use it"}, {"start": 438.8, "end": 444.76000000000005, "text": " to create just 1,000 training examples to train the neural network."}, {"start": 444.76000000000005, "end": 449.64000000000004, "text": " And it turns out that this will make each iteration of training a model a little bit"}, {"start": 449.64000000000004, "end": 451.8, "text": " more noisy, but much faster."}, {"start": 451.8, "end": 456.84000000000003, "text": " And this will overall tend to speed up this reinforcement learning algorithm."}, {"start": 456.84, "end": 462.23999999999995, "text": " So that's how mini-bashing can speed up both a supervised learning algorithm like linear"}, {"start": 462.23999999999995, "end": 469.32, "text": " regression as well as this reinforcement learning algorithm where you may use a mini-bash size"}, {"start": 469.32, "end": 475.71999999999997, "text": " of say, 1,000 examples, even if you've stored away 10,000 of these tuples in your replay"}, {"start": 475.71999999999997, "end": 476.71999999999997, "text": " buffer."}, {"start": 476.71999999999997, "end": 482.67999999999995, "text": " Finally, there's one other refinement to the algorithm that can make it converge more reliably,"}, {"start": 482.68, "end": 489.68, "text": " which is I've written out this step here of set Q equals Q new, but it turns out that"}, {"start": 489.68, "end": 497.04, "text": " this can make a very abrupt change to Q. If you train a new neural network, Q new, maybe"}, {"start": 497.04, "end": 501.64, "text": " just by chance is not a very good neural network, maybe it's even a little bit worse than the"}, {"start": 501.64, "end": 510.84000000000003, "text": " old one, then you just overwrote your Q function with a potentially worse, noisy neural network."}, {"start": 510.84, "end": 519.12, "text": " So the soft update method helps to prevent Q new from through just one unlucky step getting"}, {"start": 519.12, "end": 520.12, "text": " worse."}, {"start": 520.12, "end": 526.16, "text": " In particular, the neural network Q will have some parameters W and B, all the parameters"}, {"start": 526.16, "end": 528.76, "text": " for all the layers of the neural network."}, {"start": 528.76, "end": 537.0799999999999, "text": " And when you train the new neural network, you get some parameters W new and B new."}, {"start": 537.08, "end": 542.84, "text": " So in the original algorithm as described on that slide, you would set W to be equal"}, {"start": 542.84, "end": 548.5600000000001, "text": " to W new and B equals B new, right?"}, {"start": 548.5600000000001, "end": 551.2800000000001, "text": " That's what set Q equals Q new means."}, {"start": 551.2800000000001, "end": 562.76, "text": " With the soft update, what we do is instead set W equals 0.01 times W new plus 0.99 times"}, {"start": 562.76, "end": 563.76, "text": " W."}, {"start": 563.76, "end": 571.28, "text": " In other words, we're going to make W to be 99% the old version of W plus 1% of the new"}, {"start": 571.28, "end": 573.4399999999999, "text": " version W new."}, {"start": 573.4399999999999, "end": 579.48, "text": " So this is called a soft update because whenever we train a new neural network W new, we're"}, {"start": 579.48, "end": 582.88, "text": " only going to accept a little bit of the new value."}, {"start": 582.88, "end": 592.28, "text": " And similarly, B equals 0.01 times B new plus 0.99 times B. These numbers 0.01 and 0.99,"}, {"start": 592.28, "end": 598.88, "text": " these are hyperparameters that you could set, but it controls how aggressively you move"}, {"start": 598.88, "end": 601.28, "text": " W to W new."}, {"start": 601.28, "end": 604.88, "text": " And these two numbers are expected to add up to one."}, {"start": 604.88, "end": 611.24, "text": " One extreme would be if you were to set W equals 1 times W new plus 0 times W, in which"}, {"start": 611.24, "end": 616.3199999999999, "text": " case you're back to the original algorithm up here where you're just copying W new onto"}, {"start": 616.3199999999999, "end": 617.3199999999999, "text": " W."}, {"start": 617.32, "end": 624.1600000000001, "text": " But the soft update allows you to make a more gradual change to Q or to the new network"}, {"start": 624.1600000000001, "end": 631.6, "text": " parameters W and B that affect your current guess for the Q function Q of S A."}, {"start": 631.6, "end": 637.5600000000001, "text": " And it turns out that using the soft update method causes the reinforcement learning algorithm"}, {"start": 637.5600000000001, "end": 638.84, "text": " to converge more reliably."}, {"start": 638.84, "end": 643.74, "text": " It makes it less likely that the reinforcement learning algorithm will oscillate or diverge"}, {"start": 643.74, "end": 646.36, "text": " or have other undesirable properties."}, {"start": 646.36, "end": 651.32, "text": " And so with these two final refinements to the algorithm, mini-batching, which actually"}, {"start": 651.32, "end": 655.6, "text": " applies very well to supervised learning as well, not just reinforcement learning, as"}, {"start": 655.6, "end": 659.5600000000001, "text": " well as the idea of soft updates, you should be able to get your learning algorithm to"}, {"start": 659.5600000000001, "end": 662.5600000000001, "text": " work really well on the lunar lander."}, {"start": 662.5600000000001, "end": 668.88, "text": " The lunar lander is actually a decently complex, decently challenging application and so that"}, {"start": 668.88, "end": 671.8000000000001, "text": " you can get it to work and land safely on the moon."}, {"start": 671.8, "end": 677.9599999999999, "text": " I think that's actually really cool and I hope you enjoy playing with the practice lab."}, {"start": 677.9599999999999, "end": 681.24, "text": " Now we've talked a lot about reinforcement learning."}, {"start": 681.24, "end": 685.4799999999999, "text": " Before we wrap up, I'd like to share with you my thoughts on the state of reinforcement"}, {"start": 685.4799999999999, "end": 690.5999999999999, "text": " learning so that as you go out and build applications using different machine learning techniques,"}, {"start": 690.5999999999999, "end": 694.76, "text": " be it supervised and unsupervised reinforcement learning techniques, that you have a framework"}, {"start": 694.76, "end": 699.5999999999999, "text": " for understanding where reinforcement learning fits in to the world of machine learning"}, {"start": 699.5999999999999, "end": 700.5999999999999, "text": " today."}, {"start": 700.6, "end": 702.84, "text": " So let's go take a look at that in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=pdeGAhJ5pbE
10.16 Continuous State Spaces |The state of reinforcement learning -[Machine Learning-Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Reinforcement learning is an exciting set of technologies. In fact, when I was working on my PhD thesis, reinforcement learning was the subject of my thesis. So I was and still am excited about these ideas. Despite all the research momentum and excitement behind reinforcement learning though, I think there is a bit or maybe sometimes a lot of hype around it. So what I hope to do is share with you a practical sense of where reinforcement learning is today in terms of its utility for applications. One of the reasons for some of the hype about reinforcement learning is it turns out many of the research publications have been on simulated environments. And having worked in both simulations and on real robots myself, I can tell you that it's much easier to get a reinforcement learning algorithm to work in a simulation or in a video game than in a real robot. So a lot of developers have commented that even after they got it to work in simulation, it turned out to be surprisingly challenging to get something to work in the real world or in a real robot. And so if you apply these algorithms to a real application, this is one limitation that I hope you pay attention to, to make sure what you do does work on the real application. Second, despite all the media coverage about reinforcement learning, today there are far fewer applications of reinforcement learning than supervised and unsupervised learning. So if you are building a practical application, the odds that you will find supervised learning or unsupervised learning useful or the right tool for the job is much higher than the odds that you would end up using reinforcement learning. I have used reinforcement learning a few times myself, especially on robotic control applications, but in my day-to-day applied work, I end up using supervised and unsupervised learning much more. There is a lot of exciting research in reinforcement learning right now, and I think the potential of reinforcement learning for future applications is very large. And reinforcement learning still remains one of the major pillars of machine learning. And so having it as a framework as you develop your own machine learning algorithms, I hope will make you more effective at building working machine learning systems as well. So I hope you've enjoyed this week's materials on reinforcement learning, and specifically I hope you have fun getting the lunar lander to land for yourself. I hope it will be a satisfying experience when you implement an algorithm and see that lunar lander land safely on the moon because of code that you wrote. That brings us to the end of this specialization. Let's go on to the last video where we'll wrap up.
[{"start": 0.0, "end": 6.0, "text": " Reinforcement learning is an exciting set of technologies."}, {"start": 6.0, "end": 9.0, "text": " In fact, when I was working on my PhD thesis,"}, {"start": 9.0, "end": 12.0, "text": " reinforcement learning was the subject of my thesis."}, {"start": 12.0, "end": 16.0, "text": " So I was and still am excited about these ideas."}, {"start": 16.0, "end": 21.0, "text": " Despite all the research momentum and excitement behind reinforcement learning though,"}, {"start": 21.0, "end": 26.0, "text": " I think there is a bit or maybe sometimes a lot of hype around it."}, {"start": 26.0, "end": 32.0, "text": " So what I hope to do is share with you a practical sense of where reinforcement learning is today"}, {"start": 32.0, "end": 36.0, "text": " in terms of its utility for applications."}, {"start": 36.0, "end": 40.0, "text": " One of the reasons for some of the hype about reinforcement learning is"}, {"start": 40.0, "end": 46.0, "text": " it turns out many of the research publications have been on simulated environments."}, {"start": 46.0, "end": 50.0, "text": " And having worked in both simulations and on real robots myself,"}, {"start": 50.0, "end": 55.0, "text": " I can tell you that it's much easier to get a reinforcement learning algorithm to work"}, {"start": 55.0, "end": 60.0, "text": " in a simulation or in a video game than in a real robot."}, {"start": 60.0, "end": 67.0, "text": " So a lot of developers have commented that even after they got it to work in simulation,"}, {"start": 67.0, "end": 73.0, "text": " it turned out to be surprisingly challenging to get something to work in the real world or in a real robot."}, {"start": 73.0, "end": 77.0, "text": " And so if you apply these algorithms to a real application,"}, {"start": 77.0, "end": 81.0, "text": " this is one limitation that I hope you pay attention to,"}, {"start": 81.0, "end": 84.0, "text": " to make sure what you do does work on the real application."}, {"start": 84.0, "end": 88.0, "text": " Second, despite all the media coverage about reinforcement learning,"}, {"start": 88.0, "end": 94.0, "text": " today there are far fewer applications of reinforcement learning than supervised and unsupervised learning."}, {"start": 94.0, "end": 97.0, "text": " So if you are building a practical application,"}, {"start": 97.0, "end": 102.0, "text": " the odds that you will find supervised learning or unsupervised learning useful"}, {"start": 102.0, "end": 108.0, "text": " or the right tool for the job is much higher than the odds that you would end up using reinforcement learning."}, {"start": 108.0, "end": 111.0, "text": " I have used reinforcement learning a few times myself,"}, {"start": 111.0, "end": 114.0, "text": " especially on robotic control applications,"}, {"start": 114.0, "end": 120.0, "text": " but in my day-to-day applied work, I end up using supervised and unsupervised learning much more."}, {"start": 120.0, "end": 124.0, "text": " There is a lot of exciting research in reinforcement learning right now,"}, {"start": 124.0, "end": 130.0, "text": " and I think the potential of reinforcement learning for future applications is very large."}, {"start": 130.0, "end": 136.0, "text": " And reinforcement learning still remains one of the major pillars of machine learning."}, {"start": 136.0, "end": 142.0, "text": " And so having it as a framework as you develop your own machine learning algorithms,"}, {"start": 142.0, "end": 148.0, "text": " I hope will make you more effective at building working machine learning systems as well."}, {"start": 148.0, "end": 152.0, "text": " So I hope you've enjoyed this week's materials on reinforcement learning,"}, {"start": 152.0, "end": 158.0, "text": " and specifically I hope you have fun getting the lunar lander to land for yourself."}, {"start": 158.0, "end": 162.0, "text": " I hope it will be a satisfying experience when you implement an algorithm"}, {"start": 162.0, "end": 167.0, "text": " and see that lunar lander land safely on the moon because of code that you wrote."}, {"start": 167.0, "end": 171.0, "text": " That brings us to the end of this specialization."}, {"start": 171.0, "end": 199.0, "text": " Let's go on to the last video where we'll wrap up."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=GF1oHP5uDR8
10.17 Conclusion | Summary and Thank you --[Machine Learning-Andrew Ng]
Third and final Course: Unsupervised Learning, Recommenders, Reinforcement Learning. If you liked the content please subscribe and put a little blue thumb. Take heart!
Welcome to the final video of this machine learning specialization. We've been through a lot of videos together and this is the last one. Let's summarize the main topics we've gone over and then I'd like to say a few words and then we'll wrap up the class. Looking back, I think we've been through a lot together. The first course we went through was on supervised machine learning, including regression and regression, where you learned about linear regression, logistic regression, cost functions and the gradient descent algorithm. And the second course, we then looked at more advanced learning algorithms, including neural networks, decision tree, tree ensembles, and also went through advice for machine learning such as bias and variance and how to use a trained holdout cross-validation and test sets and how to improve your learning algorithm efficiently. And then the third course was on unsupervised learning, recommenders and reinforcement learning, where we talked about clustering algorithms, anomaly detection algorithms, collaborative filtering and content-based filtering, and then in this past week, reinforcement learning. With this broad set of tools, you're now well qualified to build a huge range of possible machine learning applications. Congratulations on making it all the way to this last video. If you've worked all the way through this specialization, you now have a really solid foundation in machine learning. And I think you've made a great start to becoming an expert in machine learning. As you know, machine learning is having a huge impact on society. It's a powerful tool used by billions of people every day through web search, product recommendations, speech recognition, and many other applications. It's even improving human knowledge through helping with scientific discovery. It's driving billions of dollars of value and is enabling new applications that were unimaginable just a few years ago. But I think the best applications of machine learning are still yet to be invented. And that brings us to you. You're now well qualified to wield the tools of machine learning to build applications yourself and do great things. I hope that you will use these skills to make other people's lives better. Before wrapping up this class, I want to say just one last thing to you. This class has been fun for me to teach, but not so long ago, I was a student myself. And so I know how time consuming it is to learn this stuff. I know you're a busy person with many other things going on in your life, so that you took the time to watch the videos, go through the quizzes and labs. I know you've put a lot of time and put a lot of yourself into this class. And so I just want to say thank you very much for having been a student in this class. I'm very grateful to you and appreciate all the time you spent with me and with the specialization. So thank you.
[{"start": 0.0, "end": 6.32, "text": " Welcome to the final video of this machine learning specialization."}, {"start": 6.32, "end": 10.28, "text": " We've been through a lot of videos together and this is the last one."}, {"start": 10.28, "end": 15.0, "text": " Let's summarize the main topics we've gone over and then I'd like to say a few words"}, {"start": 15.0, "end": 17.34, "text": " and then we'll wrap up the class."}, {"start": 17.34, "end": 21.8, "text": " Looking back, I think we've been through a lot together."}, {"start": 21.8, "end": 26.52, "text": " The first course we went through was on supervised machine learning, including regression and"}, {"start": 26.52, "end": 31.56, "text": " regression, where you learned about linear regression, logistic regression, cost functions"}, {"start": 31.56, "end": 34.0, "text": " and the gradient descent algorithm."}, {"start": 34.0, "end": 39.14, "text": " And the second course, we then looked at more advanced learning algorithms, including neural"}, {"start": 39.14, "end": 44.72, "text": " networks, decision tree, tree ensembles, and also went through advice for machine learning"}, {"start": 44.72, "end": 49.84, "text": " such as bias and variance and how to use a trained holdout cross-validation and test"}, {"start": 49.84, "end": 54.44, "text": " sets and how to improve your learning algorithm efficiently."}, {"start": 54.44, "end": 60.16, "text": " And then the third course was on unsupervised learning, recommenders and reinforcement learning,"}, {"start": 60.16, "end": 64.67999999999999, "text": " where we talked about clustering algorithms, anomaly detection algorithms, collaborative"}, {"start": 64.67999999999999, "end": 69.96, "text": " filtering and content-based filtering, and then in this past week, reinforcement learning."}, {"start": 69.96, "end": 76.84, "text": " With this broad set of tools, you're now well qualified to build a huge range of possible"}, {"start": 76.84, "end": 79.52, "text": " machine learning applications."}, {"start": 79.52, "end": 83.8, "text": " Congratulations on making it all the way to this last video."}, {"start": 83.8, "end": 87.8, "text": " If you've worked all the way through this specialization, you now have a really solid"}, {"start": 87.8, "end": 90.12, "text": " foundation in machine learning."}, {"start": 90.12, "end": 95.16, "text": " And I think you've made a great start to becoming an expert in machine learning."}, {"start": 95.16, "end": 99.0, "text": " As you know, machine learning is having a huge impact on society."}, {"start": 99.0, "end": 103.75999999999999, "text": " It's a powerful tool used by billions of people every day through web search, product"}, {"start": 103.75999999999999, "end": 107.64, "text": " recommendations, speech recognition, and many other applications."}, {"start": 107.64, "end": 112.03999999999999, "text": " It's even improving human knowledge through helping with scientific discovery."}, {"start": 112.04, "end": 116.92, "text": " It's driving billions of dollars of value and is enabling new applications that were"}, {"start": 116.92, "end": 120.32000000000001, "text": " unimaginable just a few years ago."}, {"start": 120.32000000000001, "end": 126.60000000000001, "text": " But I think the best applications of machine learning are still yet to be invented."}, {"start": 126.60000000000001, "end": 129.68, "text": " And that brings us to you."}, {"start": 129.68, "end": 134.56, "text": " You're now well qualified to wield the tools of machine learning to build applications"}, {"start": 134.56, "end": 137.12, "text": " yourself and do great things."}, {"start": 137.12, "end": 143.6, "text": " I hope that you will use these skills to make other people's lives better."}, {"start": 143.6, "end": 148.92000000000002, "text": " Before wrapping up this class, I want to say just one last thing to you."}, {"start": 148.92000000000002, "end": 155.08, "text": " This class has been fun for me to teach, but not so long ago, I was a student myself."}, {"start": 155.08, "end": 159.94, "text": " And so I know how time consuming it is to learn this stuff."}, {"start": 159.94, "end": 165.26, "text": " I know you're a busy person with many other things going on in your life, so that you"}, {"start": 165.26, "end": 170.67999999999998, "text": " took the time to watch the videos, go through the quizzes and labs."}, {"start": 170.67999999999998, "end": 175.35999999999999, "text": " I know you've put a lot of time and put a lot of yourself into this class."}, {"start": 175.35999999999999, "end": 182.39999999999998, "text": " And so I just want to say thank you very much for having been a student in this class."}, {"start": 182.39999999999998, "end": 188.68, "text": " I'm very grateful to you and appreciate all the time you spent with me and with the specialization."}, {"start": 188.68, "end": 195.68, "text": " So thank you."}]