Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=y8JgiWcUnU8
1.1 Machine Learning Overview | Welcome to machine learning!--[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning: Regression and Classification. If you liked the content, please subscribe and give the video a thumbs up!
Welcome to machine learning. What is machine learning? You probably use it many times a day without even knowing it. Anytime you want to find out something like, how do I make a sushi roll, you can do a web search on Google, Bing, or Baidu to find out. And that works so well because their machine learning software has figured out how to rank web pages. Or when you upload pictures to Instagram or Snapchat and think to yourself, I want to tag my friends so they can see their pictures. Well these apps can recognize your friends in your pictures and label them as well. That's also machine learning. Or if you've just finished watching a Star Wars movie on the video streaming service and you think, what other similar movies could I watch? Well the streaming service will likely use machine learning to recommend something that you might like. Each time you use voice-to-text on your phone to write a text message, hey Andrew, how's it going? Or tell your phone, hey Siri, play a song by Rihanna. Or ask your other phone, okay Google, show me Indian restaurants near me. That's also machine learning. Each time you receive an email titled, congratulations, you've won a million dollars. Well maybe you're rich, congratulations. Or more likely your email service will probably flag it as spam. That too is an application of machine learning. Beyond consumer applications that you might use, AI is also rapidly making its way into big companies and into industrial applications. For example, I'm deeply concerned about climate change and I'm glad to see that machine learning is already helping to optimize wind turbine power generation. Or in healthcare, it's starting to make its way into hospitals to help doctors make accurate diagnoses. Or recently, at Landing AI, I've been doing a lot of work putting computer vision into factories to help inspect if something coming off the assembly line has any defects. That's machine learning.
It's the science of getting computers to learn without being explicitly programmed. In this class, you learn about machine learning and get to implement machine learning in code yourself. Hundreds of others have taken the earlier version of this course, which is a course that led to the founding of Coursera. And many learners ended up building exciting machine learning systems or even pursuing very successful careers in AI. I'm excited that you're on this journey with me. Welcome and let's get started.
[{"start": 0.0, "end": 6.04, "text": " Welcome to machine learning."}, {"start": 6.04, "end": 8.16, "text": " What is machine learning?"}, {"start": 8.16, "end": 12.280000000000001, "text": " You probably use it many times a day without even knowing it."}, {"start": 12.280000000000001, "end": 17.16, "text": " Anytime you want to find out something like, how do I make a sushi roll, you can do a web"}, {"start": 17.16, "end": 21.080000000000002, "text": " search on Google, Bing, or Baidu to find out."}, {"start": 21.080000000000002, "end": 25.76, "text": " And that works so well because their machine learning software has figured out how to rank"}, {"start": 25.76, "end": 27.76, "text": " web pages."}, {"start": 27.76, "end": 32.44, "text": " Or when you upload pictures to Instagram or Snapchat and think to yourself, I want to"}, {"start": 32.44, "end": 35.480000000000004, "text": " tag my friends so they can see their pictures."}, {"start": 35.480000000000004, "end": 40.480000000000004, "text": " Well these apps can recognize your friends in your pictures and label them as well."}, {"start": 40.480000000000004, "end": 43.160000000000004, "text": " That's also machine learning."}, {"start": 43.160000000000004, "end": 47.480000000000004, "text": " Or if you've just finished watching a Star Wars movie on the video streaming service"}, {"start": 47.480000000000004, "end": 50.96, "text": " and you think, what other similar movies could I watch?"}, {"start": 50.96, "end": 55.24, "text": " Well the streaming service will likely use machine learning to recommend something that"}, {"start": 55.24, "end": 56.72, "text": " you might like."}, {"start": 56.72, "end": 61.64, "text": " Each time you use voice-to-text on your phone to write a text message, hey Andrew, how's"}, {"start": 61.64, "end": 62.64, "text": " it going?"}, {"start": 62.64, "end": 66.03999999999999, "text": " Or tell your phone, hey Siri, play a song by Rihanna."}, {"start": 66.03999999999999, "end": 71.92, 
"text": " Or ask your other phone, okay Google, show me Indian restaurants near me."}, {"start": 71.92, "end": 73.92, "text": " That's also machine learning."}, {"start": 73.92, "end": 79.32, "text": " Each time you receive an email titled, congratulations, you've won a million dollars."}, {"start": 79.32, "end": 81.44, "text": " Well maybe you're rich, congratulations."}, {"start": 81.44, "end": 86.24, "text": " Or more likely your email service will probably flag it as spam."}, {"start": 86.24, "end": 89.88, "text": " Snapchat too is an application of machine learning."}, {"start": 89.88, "end": 95.11999999999999, "text": " Beyond consumer applications that you might use, AI is also rapidly making its way into"}, {"start": 95.11999999999999, "end": 99.0, "text": " big companies and into industrial applications."}, {"start": 99.0, "end": 105.32, "text": " For example, I'm deeply concerned about climate change and I'm glad to see that machine learning"}, {"start": 105.32, "end": 110.28, "text": " is already hoping to optimize wind turbine power generation."}, {"start": 110.28, "end": 115.75999999999999, "text": " Or in healthcare, it's starting to make its way into hospitals to help doctors make accurate"}, {"start": 115.76, "end": 116.76, "text": " diagnoses."}, {"start": 116.76, "end": 122.52000000000001, "text": " Or recently, at Landing AI, I've been doing a lot of work putting computer vision into"}, {"start": 122.52000000000001, "end": 129.6, "text": " factories to help inspect if something coming off the assembly line has any defects."}, {"start": 129.6, "end": 131.52, "text": " That's machine learning."}, {"start": 131.52, "end": 137.20000000000002, "text": " It's a science of getting computers to learn without being explicitly programmed."}, {"start": 137.20000000000002, "end": 142.44, "text": " In this class, you learn about machine learning and get to implement machine learning in code"}, {"start": 142.44, "end": 144.52, "text": " yourself."}, {"start": 
144.52, "end": 148.32000000000002, "text": " Hundreds of others have taken the earlier version of this course, which is a course"}, {"start": 148.32000000000002, "end": 150.88000000000002, "text": " that led to the founding of Coursera."}, {"start": 150.88000000000002, "end": 155.84, "text": " And many learners ended up building exciting machine learning systems or even pursuing"}, {"start": 155.84, "end": 158.4, "text": " very successful careers in AI."}, {"start": 158.4, "end": 161.32000000000002, "text": " I'm excited that you're on this journey with me."}, {"start": 161.32, "end": 174.84, "text": " Welcome and let's get started."}]
https://www.youtube.com/watch?v=AISftYVyS50
1.2 Machine Learning Overview | What is machine learning? --[Machine Learning | Andrew Ng]
So, what is machine learning? In this video you learn the definition of what it is, and also get a sense of when you might want to apply it. Let's take a look together. Here's the definition of what is machine learning that is attributed to Arthur Samuel. He defined machine learning as the field of study that gives computers the ability to learn without being explicitly programmed. Arthur's claim to fame was that back in the 1950s he wrote a checkers playing program. And the amazing thing about this program was that Arthur Samuel himself wasn't a very good checkers player. What he did was he had programmed a computer to play maybe tens of thousands of games against itself. And by watching what sorts of board positions tend to lead to wins, and what positions tend to lead to losses, the checkers playing program learned over time what a good or bad board position is. By trying to get to good and avoid bad positions, his program learned to get better and better at playing checkers. Because the computer had the patience to play tens of thousands of games against itself, it was able to get so much checkers playing experience that eventually it became a better checkers player than Arthur Samuel himself. Now throughout these videos, besides me trying to talk about stuff, I'll occasionally ask you a question to help make sure you understand the content. Here's one about what happens if the computer had played far fewer games. Please take a look and pick whichever you think is a better answer. Thanks for looking at the quiz. And so if you had selected the answer "would have made it worse," then you got it right. In general, the more opportunities you give a learning algorithm to learn, the better it will perform. If you didn't select the correct answer the first time, that's totally okay too. The point of these quiz questions isn't to see if you can get them all correct on the first try. These questions are here just to help you practice the concepts you're learning.
Arthur Samuel's definition was a rather informal one. But in the next two videos, we'll dive deeper together into what are the major types of machine learning algorithms. In this class, you learn about many different learning algorithms. The two main types of machine learning are supervised learning and unsupervised learning. We'll define what these terms mean more in the next couple videos. Of these two, supervised learning is the type of machine learning that is used most in many real world applications and that has seen the most rapid advancement and innovation. In this specialization, which has three courses in total, the first and second courses will focus on supervised learning and the third will focus on unsupervised learning, recommender systems and reinforcement learning. By far, the most used types of learning algorithms today are supervised learning, unsupervised learning and recommender systems. The other thing we're going to spend a lot of time on in this specialization is practical advice for applying learning algorithms. This is something I feel pretty strongly about. Teaching about learning algorithms is like giving someone a set of tools, and equally important, or even more important than making sure you have great tools, is making sure you know how to apply them. Because you know, what good is it if someone were to give you a state of the art hammer or a state of the art hand drill and say, good luck, now you have all the tools you need to build a three story house. It doesn't really work like that. And so too in machine learning, making sure you have the tools is really important, and so is making sure that you know how to apply the tools of machine learning effectively. So that's what you get in this class, the tools as well as the skills of applying them effectively. I regularly visit with friends and teams in some of the top tech companies.
And even today, I see experienced machine learning teams apply machine learning algorithms to some problems. And sometimes they've been going at it for six months without much success. And when I look at what they're doing, I sometimes feel like I could have told them six months ago that the current approach won't work. And there's a different way of using these tools that will give them a much better chance of success. So in this class, one of the relatively unique things you learn is you learn a lot about the best practices for how to actually develop a practical, valuable machine learning system. This way, you're less likely to end up in one of those teams that end up losing six months going in the wrong direction. In this class, you gain a sense of how the most skilled machine learning engineers build systems. I hope you finish this class as one of those very rare people in today's world that know how to design and build serious machine learning systems. So that's machine learning. In the next video, let's look more deeply at what is supervised learning and also what is unsupervised learning. In addition, you learn when you might want to use each of them, supervised and unsupervised learning. I'll see you in the next video.
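Samuel's approach — play many games against itself and rate board positions by how often they lead to wins — can be sketched in a few lines. This is a toy illustration, not his actual checkers program: the "game" below is a made-up random walk, and a position's learned value is simply the fraction of self-play games passing through it that ended in a win.

```python
import random
from collections import defaultdict

def play_random_game(rng):
    """Play a toy 'game': starting at position 0, take 5 random steps of +1
    or +2, recording every position visited; an even final position wins."""
    position, visited = 0, []
    for _ in range(5):
        position += rng.choice([1, 2])
        visited.append(position)
    return visited, position % 2 == 0

def learn_position_values(num_games, seed=0):
    """Estimate each position's value purely from self-play outcomes,
    as the fraction of games through that position that were won."""
    rng = random.Random(seed)
    wins, visits = defaultdict(int), defaultdict(int)
    for _ in range(num_games):
        visited, won = play_random_game(rng)
        for pos in visited:
            visits[pos] += 1
            wins[pos] += int(won)
    return {pos: wins[pos] / visits[pos] for pos in visits}

values = learn_position_values(10_000)
```

With more self-play games, the win-rate estimates get less noisy — which is exactly the point of the quiz: far fewer games would have made the program worse.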
[{"start": 0.0, "end": 3.0, "text": " So, what is machine learning?"}, {"start": 3.0, "end": 8.08, "text": " In this video you learn the definition of what it is, and also get a sense of when you might"}, {"start": 8.08, "end": 9.08, "text": " want to apply it."}, {"start": 9.08, "end": 10.68, "text": " Let's take a look together."}, {"start": 10.68, "end": 17.44, "text": " Here's the definition of what is machine learning that is attributed to Arthur Samuel."}, {"start": 17.44, "end": 21.48, "text": " He defined machine learning as the feeble study that gives computers the ability to"}, {"start": 21.48, "end": 25.080000000000002, "text": " learn without being explicitly programmed."}, {"start": 25.08, "end": 30.919999999999998, "text": " Arthur's claim to fame was that back in the 1950s he wrote a checkers playing program."}, {"start": 30.919999999999998, "end": 36.04, "text": " And the amazing thing about this program was that Arthur Samuel himself wasn't a very good"}, {"start": 36.04, "end": 38.2, "text": " checkers player."}, {"start": 38.2, "end": 43.32, "text": " What he did was he had programmed a computer to play maybe tens of thousands of games against"}, {"start": 43.32, "end": 44.4, "text": " itself."}, {"start": 44.4, "end": 49.04, "text": " And by watching what sorts of board positions tend to lead to wins, and what positions tend"}, {"start": 49.04, "end": 54.76, "text": " to lead to losses, the checkers playing program learned over time what a good or bad board"}, {"start": 54.76, "end": 56.16, "text": " position is."}, {"start": 56.16, "end": 62.0, "text": " By trying to get to good and avoid bad positions, his program learned to get better and better"}, {"start": 62.0, "end": 64.08, "text": " at playing checkers."}, {"start": 64.08, "end": 68.12, "text": " Because the computer had the patience to play tens of thousands of games against itself,"}, {"start": 68.12, "end": 73.88, "text": " it was able to get so much checkers playing experience that 
eventually it became a better"}, {"start": 73.88, "end": 77.75999999999999, "text": " checkers player than Arthur Samuel himself."}, {"start": 77.75999999999999, "end": 82.6, "text": " Now throughout these videos, besides me trying to talk about stuff, I'll occasionally ask"}, {"start": 82.6, "end": 86.36, "text": " you a question to help make sure you understand the content."}, {"start": 86.36, "end": 91.08, "text": " Here's one about what happens if the computer had played far fewer games."}, {"start": 91.08, "end": 99.03999999999999, "text": " Please take a look and pick whichever you think is a better answer."}, {"start": 99.03999999999999, "end": 101.03999999999999, "text": " Thanks for looking at the quiz."}, {"start": 101.03999999999999, "end": 109.11999999999999, "text": " And so if you had selected this answer, whether made it worse, then you got it right."}, {"start": 109.12, "end": 113.48, "text": " In general, the more opportunities you give a learning algorithm to learn, the better"}, {"start": 113.48, "end": 115.16000000000001, "text": " it will perform."}, {"start": 115.16000000000001, "end": 119.66000000000001, "text": " If you didn't select the correct answer the first time, that's totally okay too."}, {"start": 119.66000000000001, "end": 123.64, "text": " The point of these quiz questions isn't to see if you can get them all correct on the"}, {"start": 123.64, "end": 125.0, "text": " first try."}, {"start": 125.0, "end": 129.56, "text": " These questions are here just to help you practice the concepts you're learning."}, {"start": 129.56, "end": 132.88, "text": " Arthur Samuel's definition was a rather informal one."}, {"start": 132.88, "end": 137.68, "text": " But in the next two videos, we'll dive deeper together into what are the major types of"}, {"start": 137.68, "end": 141.16, "text": " machine learning algorithms."}, {"start": 141.16, "end": 145.12, "text": " In this class, you learn about many different learning algorithms."}, {"start": 
145.12, "end": 151.78, "text": " The two main types of machine learning are supervised learning and unsupervised learning."}, {"start": 151.78, "end": 156.8, "text": " We'll define what these terms mean more in the next couple videos."}, {"start": 156.8, "end": 162.48000000000002, "text": " Of these two, supervised learning is the type of machine learning that is used most in many"}, {"start": 162.48, "end": 169.32, "text": " real world applications and that has seen the most rapid advancement and innovation."}, {"start": 169.32, "end": 174.98, "text": " In this specialization, which has three causes in total, the first and second causes will"}, {"start": 174.98, "end": 180.35999999999999, "text": " focus on supervised learning and the third will focus on unsupervised learning, recommender"}, {"start": 180.35999999999999, "end": 183.6, "text": " systems and reinforcement learning."}, {"start": 183.6, "end": 189.44, "text": " By far, the most used types of learning algorithms today are supervised learning, unsupervised"}, {"start": 189.44, "end": 192.52, "text": " learning and recommender systems."}, {"start": 192.52, "end": 197.2, "text": " The other thing we're going to spend a lot of time on in this specialization is practical"}, {"start": 197.2, "end": 200.68, "text": " advice for applying learning algorithms."}, {"start": 200.68, "end": 203.28, "text": " This is something I feel pretty strongly about."}, {"start": 203.28, "end": 208.52, "text": " Teaching about learning algorithms is like giving someone a set of tools and equally"}, {"start": 208.52, "end": 214.48, "text": " important or even more important than making sure you have great tools is making sure you"}, {"start": 214.48, "end": 216.56, "text": " know how to apply them."}, {"start": 216.56, "end": 221.6, "text": " Because you know, what good is it if someone were to give you a state of the art hammer"}, {"start": 221.6, "end": 225.32, "text": " or a state of the art hand drill and say, good luck, 
now you have all the tools you"}, {"start": 225.32, "end": 227.12, "text": " need to build a three story house."}, {"start": 227.12, "end": 229.68, "text": " It doesn't really work like that."}, {"start": 229.68, "end": 235.28, "text": " And so too in machine learning, making sure you have the tools is really important."}, {"start": 235.28, "end": 241.02, "text": " And so it's making sure that you know how to apply the tools of machine learning effectively."}, {"start": 241.02, "end": 245.2, "text": " So that's what you get in this class, the tools as well as the skills of applying them"}, {"start": 245.2, "end": 247.11999999999998, "text": " effectively."}, {"start": 247.11999999999998, "end": 252.07999999999998, "text": " I regularly visit with friends and teams in some of the top tech companies."}, {"start": 252.07999999999998, "end": 257.64, "text": " And even today, I see experienced machine learning teams apply machine learning algorithms"}, {"start": 257.64, "end": 259.2, "text": " to some problems."}, {"start": 259.2, "end": 263.96, "text": " And sometimes they've been going at it for six months without much success."}, {"start": 263.96, "end": 268.12, "text": " And when I look at what they're doing, I sometimes feel like I could have told them six months"}, {"start": 268.12, "end": 270.65999999999997, "text": " ago that the current approach won't work."}, {"start": 270.65999999999997, "end": 274.53999999999996, "text": " And there's a different way of using these tools that will give them a much better chance"}, {"start": 274.54, "end": 276.08000000000004, "text": " of success."}, {"start": 276.08000000000004, "end": 281.16, "text": " So in this class, one of the relatively unique things you learn is you learn a lot about"}, {"start": 281.16, "end": 287.6, "text": " the best practices for how to actually develop a practical, valuable machine learning system."}, {"start": 287.6, "end": 291.64000000000004, "text": " This way, you're less likely to end up in 
one of those teams that end up losing six"}, {"start": 291.64000000000004, "end": 294.68, "text": " months going in the wrong direction."}, {"start": 294.68, "end": 299.32000000000005, "text": " In this class, you gain a sense of how the most skilled machine learning engineers build"}, {"start": 299.32000000000005, "end": 300.32000000000005, "text": " systems."}, {"start": 300.32, "end": 306.04, "text": " I hope you finish this class as one of those very rare people in today's world that know"}, {"start": 306.04, "end": 309.88, "text": " how to design and build serious machine learning systems."}, {"start": 309.88, "end": 312.04, "text": " So that's machine learning."}, {"start": 312.04, "end": 317.8, "text": " In the next video, let's look more deeply at what is supervised learning and also what"}, {"start": 317.8, "end": 320.03999999999996, "text": " is unsupervised learning."}, {"start": 320.03999999999996, "end": 325.12, "text": " In addition, you learn when you might want to use each of them, supervised and unsupervised"}, {"start": 325.12, "end": 326.12, "text": " learning."}, {"start": 326.12, "end": 330.68, "text": " I'll see you in the next video."}]
https://www.youtube.com/watch?v=hHYcNPfbBXQ
1.3 Machine Learning Overview | Applications of machine learning -- [Machine Learning | Andrew Ng]
In this class, you learn about the state of the art and also practice implementing machine learning algorithms yourself. You learn about the most important machine learning algorithms, some of which are exactly what's being used in large AI or large tech companies today, and you get a sense of what is the state of the art in AI. Beyond learning the algorithms though, in this class, you also learn all the important practical tips and tricks for making them perform well, and you get to implement them and see how they work for yourself. So why is machine learning so widely used today? Machine learning had grown up as a subfield of AI or artificial intelligence. We wanted to build intelligent machines, and it turns out that there are a few basic things that we could program a machine to do, such as how to find the shortest path from A to B like in your GPS. But for the most part, we just did not know how to write an explicit program to do many of the more interesting things, such as perform web search, recognize human speech, diagnose diseases from x-rays, or build a self-driving car. The only way we knew how to do these things was to have a machine learn to do it by itself. For me, when I founded and was leading the Google Brain team, I worked on problems like speech recognition, computer vision for Google Maps Street View images, and advertising. Or while leading AI at Baidu, I worked on everything from AI for augmented reality to combating payment fraud to leading a self-driving car team. Most recently, at Landing AI, AI Fund, and Stanford University, I've been getting to work on AI applications in manufacturing, large-scale agriculture, healthcare, e-commerce, and other problems. Today, there are hundreds of thousands, perhaps millions of people, working on machine learning applications who could tell you similar stories about their work with machine learning.
When you've learned these skills, I hope that you too will find it great fun to dabble in exciting different applications and maybe even different industries. In fact, I find it hard to think of any industry that machine learning is unlikely to touch in a significant way now or in the near future. Moving even further into the future, many people, including me, are excited about the AI dream of someday building machines as intelligent as you or me. This is sometimes called artificial general intelligence, or AGI. I think AGI has been overhyped and we're still a long way away from that goal. I don't know if it'll take 50 years or 500 years or longer to get there, but most AI researchers believe that the best way to get closer to that goal is by using learning algorithms, maybe ones that take some inspiration from how the human brain works. You also hear a little more about this quest for AGI later in this course. According to a study by McKinsey, AI and machine learning is estimated to create an additional 13 trillion US dollars of value annually by the year 2030. Even though machine learning is already creating tremendous amounts of value in the software industry, I think there could be even vastly greater value that has yet to be created outside the software industry in sectors such as retail, travel, transportation, automotive, materials, manufacturing, and so on. Because of the massive untapped opportunities across so many different sectors, today there is a vast unfulfilled demand for this skill set. That's why this is such a great time to be learning about machine learning. If you find machine learning applications exciting, I hope you stick with me through this course. I can almost guarantee that you'll find mastering these skills worthwhile. In the next video, we'll look at a more formal definition of what is machine learning and we'll begin to talk about the main types of machine learning problems and algorithms.
You pick up some of the main machine learning terminology and start to get a sense of what are the different algorithms and when each one might be appropriate. So let's go on to the next video.
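The "shortest path from A to B" mentioned above is one of those few basic things we do know how to program explicitly, in contrast to tasks like speech recognition that require learning. A minimal sketch of Dijkstra's algorithm, with a made-up road graph for illustration:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: explicit, hand-written rules — no learning needed.
    graph maps each node to a list of (neighbor, edge_cost) pairs."""
    queue = [(0, start, [start])]  # (cost so far, node, path taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []  # goal unreachable

roads = {
    "A": [("B", 5), ("C", 2)],
    "C": [("B", 1), ("D", 7)],
    "B": [("D", 3)],
}
print(shortest_path(roads, "A", "D"))  # → (6, ['A', 'C', 'B', 'D'])
```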
[{"start": 0.0, "end": 6.88, "text": " In this class, you learn about the state of the art and also practice implementing machine"}, {"start": 6.88, "end": 9.4, "text": " learning algorithms yourself."}, {"start": 9.4, "end": 13.88, "text": " You learn about the most important machine learning algorithms, some of which are exactly"}, {"start": 13.88, "end": 19.44, "text": " what's being used in large AI or large tech companies today, and you get a sense of what"}, {"start": 19.44, "end": 22.48, "text": " is the state of the art in AI."}, {"start": 22.48, "end": 27.68, "text": " Beyond learning the algorithms though, in this class, you also learn all the important"}, {"start": 27.68, "end": 32.8, "text": " practical tips and tricks for making them perform well, and you get to implement them"}, {"start": 32.8, "end": 36.12, "text": " and see how they work for yourself."}, {"start": 36.12, "end": 40.4, "text": " So why is machine learning so widely used today?"}, {"start": 40.4, "end": 45.4, "text": " Machine learning had grown up as a subfield of AI or artificial intelligence."}, {"start": 45.4, "end": 49.8, "text": " We wanted to build intelligent machines, and it turns out that there are a few basic things"}, {"start": 49.8, "end": 54.120000000000005, "text": " that we could program a machine to do, such as how to find the shortest path from A to"}, {"start": 54.120000000000005, "end": 56.379999999999995, "text": " B like in your GPS."}, {"start": 56.38, "end": 61.64, "text": " But for the most part, we just did not know how to write an explicit program to do many"}, {"start": 61.64, "end": 67.0, "text": " of the more interesting things, such as perform web search, recognize human speech, diagnose"}, {"start": 67.0, "end": 70.74000000000001, "text": " diseases from x-rays, or build a self-driving car."}, {"start": 70.74000000000001, "end": 78.24000000000001, "text": " The only way we knew how to do these things was to have a machine learn to do it by itself."}, 
For me, when I founded and was leading the Google Brain team, I worked on problems like speech recognition, computer vision for Google Maps' Street View images, and advertising. Or leading AI at Baidu, I worked on everything from AI for augmented reality to combating payment fraud to leading a self-driving car team. Most recently, at Landing AI, AI Fund, and Stanford University, I've been getting to work on AI applications in manufacturing, large-scale agriculture, healthcare, e-commerce, and other problems. Today, there are hundreds of thousands, perhaps millions of people, working on machine learning applications who could tell you similar stories about their work with machine learning. When you've learned these skills, I hope that you too will find it great fun to dabble in exciting different applications and maybe even different industries. In fact, I find it hard to think of any industry that machine learning is unlikely to touch in a significant way now or in the near future. Moving even further into the future, many people, including me, are excited about the AI dream of someday building machines as intelligent as you or me. This is sometimes called artificial general intelligence, or AGI. I think AGI has been overhyped and we're still a long way away from that goal. I don't know if it'll take 50 years or 500 years or longer to get there, but most AI researchers believe that the best way to get closer to that goal is by using learning algorithms, maybe ones that take some inspiration from how the human brain works. You'll also hear a little more about this quest for AGI later in this course. According to a study by McKinsey, AI and machine learning is estimated to create an additional 13 trillion US dollars of value annually by the year 2030. Even though machine learning is already creating tremendous amounts of value in the software industry, I think there could be even vastly greater value that has yet to be created outside the software industry, in sectors such as retail, travel, transportation, automotive, materials, manufacturing, and so on. Because of the massive untapped opportunities across so many different sectors, today there is a vast unfulfilled demand for this skill set. That's why this is such a great time to be learning about machine learning. If you find machine learning applications exciting, I hope you stick with me through this course. I can almost guarantee that you'll find mastering these skills worthwhile. In the next video, we'll look at a more formal definition of what machine learning is, and we'll begin to talk about the main types of machine learning problems and algorithms. You'll pick up some of the main machine learning terminology and start to get a sense of what the different algorithms are and when each one might be appropriate. So let's go on to the next video.
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=EZN_uM3J3kI
1.4 Machine Learning Overview | Supervised learning part 1 --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
Machine learning is creating tremendous economic value today. I think 99% of the economic value created by machine learning today is through one type of machine learning, which is called supervised learning. Let's take a look at what that means. Supervised machine learning, or more commonly supervised learning, refers to algorithms that learn x to y or input to output mappings. The key characteristic of supervised learning is that you give your learning algorithm examples to learn from that include the right answers, where by right answer I mean the correct label y for a given input x. And it's by seeing correct pairs of input x and desired output label y that the learning algorithm eventually learns to take just the input alone, without the output label, and give a reasonably accurate prediction or guess of the output. Let's look at some examples. If the input x is an email and the output y is whether this email is spam or not spam, this gives you your spam filter. Or if the input is an audio clip and the algorithm's job is to output the text transcript, then this is speech recognition. Or if you want to input English and have it output the corresponding translation in Spanish, Arabic, Hindi, Chinese, Japanese, or another language, then that's machine translation. Or the most lucrative form of supervised learning today is probably used in online advertising. Nearly all the large online ad platforms have a learning algorithm that inputs some information about an ad and some information about you, and then tries to figure out if you will click on that ad or not. Because for these large online ad platforms, by showing you ads that you're slightly more likely to click on, every click is revenue. This actually drives a lot of revenue for these companies. This is something I've done a lot of work on, maybe not the most inspiring application, but it certainly has a significant economic impact in some companies today.
Or if you want to build a self-driving car, the learning algorithm would take as input an image and some information from other sensors, such as a radar or other things, and then try to output the position of, say, other cars so that your self-driving car can safely drive around the other cars. Or take manufacturing. I've actually done a lot of work in this sector at Landing AI. You can have a learning algorithm take as input a picture of a manufactured product, say a cell phone that just rolled off the production line, and have the learning algorithm output whether or not there is a scratch, dent, or other defect in the product. This is called visual inspection and is helping manufacturers reduce or prevent defects in their products. In all of these applications, you would first train your model with examples of inputs x and the right answers, that is, the labels y. After the model has learned from these input-output or x and y pairs, it can then take a brand new input x, something it's never seen before, and try to produce the appropriate corresponding output y. Let's dive more deeply into one specific example. Say you want to predict housing prices based on the size of a house. You've collected some data and say you plot the data and it looks like this. Here on the horizontal axis is the size of the house in square feet. And yes, I live in the United States where we still use square feet. I know most of the world uses square meters. And here on the vertical axis is the price of the house in, say, thousands of dollars. So with this data, let's say a friend wants to know what's the price for their 750 square foot house. How can a learning algorithm help you? One thing a learning algorithm might be able to do is, say, fit a straight line to the data, and reading off the straight line, it looks like your friend's house could be sold for maybe about, I don't know, $150,000. But fitting a straight line isn't the only learning algorithm you can use.
There are others that could work better for this application. For example, rather than fitting a straight line, you might decide that it's better to fit a curve, a function that's slightly more complicated or more complex than a straight line. If you do that and make a prediction here, then it looks like, well, your friend's house could be sold for closer to $200,000. One of the things you'll see later in this class is how you can decide whether to fit a straight line, a curve, or another function that is even more complex to the data. Now, it wouldn't be appropriate to just pick whichever one gives your friend the best price, but one thing you will see is how to get an algorithm to systematically choose the most appropriate line, curve, or other function to fit to this data. What you're seeing in this slide is an example of supervised learning, because the algorithm is learning from a data set in which the so-called right answer, that is, the label or the correct price y, is given for every house on the plot. And the task of the learning algorithm is to produce more of these right answers, specifically predicting what is the likely price for other houses like your friend's house. That's why this is supervised learning. To define a little bit more terminology, this housing price prediction is a particular type of supervised learning called regression. And by regression, I mean we're trying to predict a number from infinitely many possible numbers, such as the house prices in our example, which could be 150,000 or 70,000 or 183,000 or any other number in between. So that's supervised learning: learning input-output or x-to-y mappings. And you saw in this video an example of regression, where the task is to predict a number. But there's also a second major type of supervised learning problem called classification. Let's take a look at what that means in the next video.
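The line-versus-curve comparison described above can be sketched in a few lines of code. The house sizes and prices below are made-up numbers for illustration, not the data from the lecture's plot:

```python
import numpy as np

# Hypothetical (size in square feet, price in $1000s) pairs, invented for
# illustration only.
sizes = np.array([600, 800, 1000, 1200, 1500, 2000], dtype=float)
prices = np.array([150, 180, 200, 230, 260, 300], dtype=float)

# Fit a straight line (degree 1) and a curve (degree 2) to the same data.
line = np.polyfit(sizes, prices, deg=1)
curve = np.polyfit(sizes, prices, deg=2)

# Predict the price of a friend's 750-square-foot house with each model.
x = 750.0
line_pred = np.polyval(line, x)
curve_pred = np.polyval(curve, x)
print(f"straight line: ${line_pred:.0f}k, curve: ${curve_pred:.0f}k")
```

With data this close to a line, the two fits give similar predictions; on more curved data they can differ noticeably, which is why choosing among models systematically matters.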
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=l16C3PKiHKg
1.5 Machine Learning Overview | Supervised learning part 2 --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
So, supervised learning algorithms learn to predict input-output or x-to-y mappings, and in the last video you saw that regression algorithms, which are a type of supervised learning algorithm, learn to predict numbers out of infinitely many possible numbers. There's a second major type of supervised learning algorithm called a classification algorithm. Let's take a look at what this means. Take breast cancer detection as an example of a classification problem. Say you're building a machine learning system so that doctors can have a diagnostic tool to detect breast cancer. This is important because early detection could potentially save a patient's life. Using a patient's medical records, your machine learning system tries to figure out if a tumor, that is, a lump, is malignant, meaning cancerous or dangerous, or if that tumor, that lump, is benign, meaning that it's just a lump that isn't cancerous and isn't that dangerous. Some of my friends have actually been working on this specific problem. So maybe your dataset has tumors of various sizes, and these tumors are labeled as either benign, which I will designate in this example with a zero, or malignant, which I'll designate in this example with a one. You can then plot your data on a graph like this, where the horizontal axis represents the size of the tumor and the vertical axis takes on only two values, zero or one, depending on whether the tumor is benign, zero, or malignant, one. One reason that this is different from regression is that we're trying to predict only a small number of possible outputs or categories. In this case, two possible outputs, zero or one, benign or malignant. This is different from regression, which tries to predict any number out of infinitely many possible numbers. And so the fact that there are only two possible outputs is what makes this classification.
Because there are only two possible outputs, or two possible categories, in this example, you can also plot this dataset on a line like this, where now I'm going to use two different symbols to denote the category, using a circle or an O to denote the benign examples and a cross to denote the malignant examples. And if a new patient walks in for a diagnosis and they have a lump that is this size, then the question is, will your system classify this tumor as benign or malignant? It turns out that in classification problems, you can also have more than two possible output categories. Maybe your learning algorithm can output multiple types of cancer diagnoses if it turns out to be malignant. So let's call two different types of cancer type one and type two. In this case, the algorithm would have three possible output categories it could predict. And by the way, in classification, the terms output classes and output categories are often used interchangeably, so when I say class or category when referring to the output, it means the same thing. So to summarize, classification algorithms predict categories. Categories don't have to be numbers; they can be non-numeric. For example, an algorithm can predict whether a picture is that of a cat or a dog, and it can predict if a tumor is benign or malignant. Categories can also be numbers, like zero or one, or zero, one, or two. But what makes classification different from regression, when you're interpreting the numbers, is that classification predicts a small, finite, limited set of possible output categories, such as zero, one, and two, but not all possible numbers in between, like 0.5 or 1.7. In the example of supervised learning that we've been looking at, we had only one input value, the size of the tumor. But you can also use more than one input value to predict an output. Here's an example: instead of just knowing the tumor size, say you also have each patient's age in years. Your new data set now has two inputs, age and tumor size.
Plotting this new data set, we're going to use circles to show patients whose tumors are benign and crosses to show the patients with a tumor that was malignant. So when a new patient comes in, the doctor can measure the patient's tumor size and also record the patient's age. And so given this, how can we predict if this patient's tumor is benign or malignant? Well, given a data set like this, what the learning algorithm might do is find some boundary that separates out the malignant tumors from the benign ones. So the learning algorithm has to decide how to fit a boundary line to this data. The boundary line found by the learning algorithm will help the doctor with the diagnosis. In this case, the tumor is more likely to be benign. From this example, we've seen how two inputs, the patient's age and tumor size, can be used. In other machine learning problems, often many more input values are required. My friends who worked on breast cancer detection use many additional inputs, like the thickness of the tumor clump, uniformity of the cell size, uniformity of the cell shape, and so on. So to recap, supervised learning maps input x to output y, where the learning algorithm learns from the "right answers". The two major types of supervised learning are regression and classification. In a regression application, like predicting prices of houses, the learning algorithm has to predict numbers from infinitely many possible output numbers. Whereas in classification, the learning algorithm has to make a prediction of a category out of a small set of possible outputs. So you now know what supervised learning is, including both regression and classification. I hope you're having fun. Next, there's a second major type of machine learning called unsupervised learning. Let's go on to the next video to see what that is.
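The boundary-fitting idea described above can be sketched with a small logistic regression trained by gradient descent. The ages, tumor sizes, and labels below are invented toy values, and logistic regression is just one of several ways to fit such a boundary:

```python
import numpy as np

# Invented toy data: columns are [age in years, tumor size in cm];
# label 1 = malignant, 0 = benign.
X = np.array([[25, 1.0], [38, 1.5], [45, 2.0], [30, 1.2],
              [52, 4.5], [60, 5.0], [48, 4.0], [70, 5.5]], dtype=float)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)

# Standardize the two inputs so gradient descent behaves well.
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xb = np.hstack([np.ones((len(X), 1)), (X - mu) / sigma])  # add intercept column

# Logistic regression fit by plain gradient descent; the learned weights
# define a straight-line decision boundary in the (age, size) plane.
w = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))   # predicted P(malignant) per patient
    w -= 0.1 * Xb.T @ (p - y) / len(y)  # gradient step on the log loss

def predict(age, size):
    """Probability that a new patient's tumor is malignant."""
    z = w[0] + (np.array([age, size]) - mu) / sigma @ w[1:]
    return 1.0 / (1.0 + np.exp(-z))

print(f"P(malignant | age=40, size=1.3) = {predict(40, 1.3):.2f}")
print(f"P(malignant | age=55, size=4.8) = {predict(55, 4.8):.2f}")
```

A patient on one side of the learned boundary gets a probability near zero, a patient on the other side a probability near one, mirroring how the boundary line in the lecture's plot helps with the diagnosis.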
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=yzAFnfHYH9E
1.6 Machine Learning Overview | Unsupervised learning part 1 --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
After supervised learning, the most widely used form of machine learning is unsupervised learning. Let's take a look at what that means. We've talked about supervised learning, and this video is about unsupervised learning. But don't let the name unsupervised fool you. Unsupervised learning is, I think, just as super as supervised learning. When we were looking at supervised learning in the last video, recall that it looked something like this. In the case of a classification problem, each example was associated with an output label Y, such as benign or malignant, designated by the O's and crosses. In unsupervised learning, we're given data that isn't associated with any output labels Y. Say you're given data on patients, with their tumor size and the patient's age, but not whether the tumor was benign or malignant. So the data set looks like this on the right. We're not asked to diagnose whether the tumor is benign or malignant, because we're not given any labels Y in the data set. Instead, our job is to find some structure, some pattern, or just something interesting in the data. This is unsupervised learning. We call it unsupervised because we're not trying to supervise the algorithm to give some quote right answer for every input. Instead, we ask the algorithm to figure out all by itself what's interesting, or what patterns or structures there might be, in this data. With this particular data set, an unsupervised learning algorithm might decide that the data can be assigned to two different groups, or two different clusters. And so it might decide that there's one cluster or group over here, and there's another cluster or group over here. This is a particular type of unsupervised learning called a clustering algorithm, because it places the unlabeled data into different clusters. And this turns out to be used in many applications. For example, clustering is used in Google News.
What Google News does is every day it goes and looks at hundreds of thousands of news articles on the internet and groups related stories together. For example, here's a sample from Google News where the headline of the top article is Giant Panda Gives Birth to Rare Twin Cubs at Japan's Oldest Zoo. This article actually caught my eye because my daughter loves pandas, and so there are a lot of stuffed panda toys and a lot of panda-video watching in my house. And looking at this, you might notice that below this are other related articles. Maybe from the headlines alone, you can start to guess what clustering might be doing. Notice that the word panda appears here, here, here, here, and here. And notice that the word twin also appears in all five articles. And the word zoo also appears in all of these articles. So the clustering algorithm is going through all the hundreds of thousands of news articles on the internet that day, finding the articles that mention similar words, and grouping them into clusters. Now what's cool is that this clustering algorithm figures out on its own which words suggest that certain articles are in the same group. What I mean is, there isn't an employee at Google News who's telling the algorithm to find articles with the words panda and twins and zoo and put them into the same cluster. The news topics change every day, and there are so many news stories that it just isn't feasible to have people doing this every single day for all the topics the news covers. Instead, the algorithm has to figure out on its own, without supervision, what are the clusters of news articles today. So that's why this clustering algorithm is a type of unsupervised learning algorithm. Let's look at a second example of unsupervised learning, applied to clustering genetic or DNA data. This image shows a picture of DNA microarray data. These look like tiny grids of a spreadsheet, and each tiny column represents the genetic or DNA activity of one person.
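The word-overlap intuition behind the Google News example can be sketched in a few lines. The headlines and the two-shared-words threshold below are invented for illustration; a real clustering algorithm discovers groupings on its own rather than applying a hard-coded rule like this.

```python
# Five hypothetical headlines standing in for the panda example.
headlines = [
    "giant panda gives birth to rare twin cubs at zoo",
    "zoo welcomes twin panda cubs",
    "panda twins born at city zoo",
    "stock markets close higher after rate decision",
    "markets rally after rate decision by central bank",
]

def shared_words(a, b):
    # Count how many distinct words two headlines have in common.
    return len(set(a.split()) & set(b.split()))

# Greedy grouping: a headline joins an existing cluster if it shares
# at least two words with any member; otherwise it starts a new cluster.
clusters = []
for h in headlines:
    for cluster in clusters:
        if any(shared_words(h, other) >= 2 for other in cluster):
            cluster.append(h)
            break
    else:
        clusters.append([h])

print(len(clusters))      # 2 clusters: panda/zoo stories and market stories
print(len(clusters[0]))   # the 3 panda stories end up grouped together
```

Words like "panda", "twin", and "zoo" pull the related stories into one group without anyone telling the code those particular words matter.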
So for example, this entire column here is from one person's DNA, and this other column is of another person. Each row represents a particular gene. So just as an example, perhaps this row here might represent a gene that affects eye color, or this row here is a gene that affects how tall someone is. Researchers have even found a genetic link to whether someone dislikes certain vegetables, such as broccoli or Brussels sprouts or asparagus. So next time someone asks you, why didn't you finish your salad? You can tell them, oh, maybe it's genetic. For DNA microarrays, the idea is to measure how much certain genes are expressed for each individual person. So these colors, red, green, gray, and so on, show the degree to which different individuals do or do not have a specific gene active. And what you can do is then run a clustering algorithm to group individuals into different categories, or different types of people. Like maybe these individuals are grouped together, and let's just call this type one. And these people are grouped into type two, and these people are grouped as type three. This is unsupervised learning, because we're not telling the algorithm in advance that there is a type one person with certain characteristics or a type two person with certain characteristics. Instead, what we're saying is, here's a bunch of data. I don't know what the different types of people are, but can you automatically find structure in the data and automatically figure out what are the major types of individuals? Since we're not giving the algorithm the right answer for the examples in advance, this is unsupervised learning. Here's a third example. Many companies have huge databases of customer information. Given this data, can you automatically group your customers into different market segments so that you can more efficiently serve your customers?
Concretely, the deeplearning.ai team did some research to better understand the deeplearning.ai community and why different individuals take these classes, subscribe to The Batch weekly newsletter, or attend our Pie & AI events. Let's visualize the deeplearning.ai community as this collection of people. Running clustering, that is, market segmentation, found a few distinct groups of individuals. One group's primary motivation is seeking knowledge to grow their skills. Perhaps this is you. And if so, that's great. A second group's primary motivation is looking for a way to develop their career. Maybe you want to get a promotion or a new job, or make some career progression. If this describes you, that's great too. And yet another group wants to stay updated on how AI impacts their field of work. Perhaps this is you. That's great too. This is the clustering that our team used to try to better serve our community, as we were trying to figure out what are the major categories of learners in the deeplearning.ai community. So if any of these is your top motivation for learning, that's great. And I hope I'll be able to help you on your journey. Or in case this is you and you want something totally different than the other three categories, that's fine too. And I want you to know I love you all the same. So to summarize, a clustering algorithm, which is a type of unsupervised learning algorithm, takes data without labels and tries to automatically group them into clusters. And so maybe the next time you see or think of a panda, maybe you'll think of clustering as well. And besides clustering, there are other types of unsupervised learning as well. Let's go on to the next video to take a look at some other types of unsupervised learning algorithms.
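The grouping described throughout this section can be sketched with a minimal k-means loop. The 2-D points, the naive initialization, and the fixed iteration count below are illustrative assumptions, not the course's implementation; note that the data carries no labels anywhere.

```python
# Made-up unlabeled 2-D points with two natural groups.
points = [(1, 1), (1.5, 2), (2, 1.5),      # one natural group
          (8, 8), (8.5, 9), (9, 8.5)]      # another natural group

def dist2(a, b):
    # Squared Euclidean distance between two 2-D points.
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def kmeans(points, k, iters=10):
    centroids = points[:k]                 # naive init: first k points
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centroids[c]))
            groups[i].append(p)
        # Update step: move each centroid to the mean of its group.
        centroids = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            for g in groups
        ]
    return groups

groups = kmeans(points, k=2)
print([len(g) for g in groups])  # the two natural groups of 3 are recovered
```

The algorithm was never told which point belongs to which group, or even what the groups mean; it discovered the structure from the inputs alone, which is exactly the unsupervised setting.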
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=u7Y_b04upmQ
1.7 Machine Learning Overview | Unsupervised learning part 2 --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
In the last video, you saw what is unsupervised learning and one type of unsupervised learning called clustering. Let's give a slightly more formal definition of unsupervised learning and take a quick look at some other types of unsupervised learning other than clustering. Whereas in supervised learning, the data comes with both inputs x and output labels y, in unsupervised learning, the data comes only with inputs x but not output labels y, and the algorithm has to find some structure or some pattern or something interesting in the data. We've seen just one example of unsupervised learning called a clustering algorithm, which groups similar data points together. In this specialization, you learn about clustering as well as two other types of unsupervised learning. One is called anomaly detection, which is used to detect unusual events. This turns out to be really important for fraud detection in the financial system, where unusual events, unusual transactions could be a sign of fraud and for many other applications. And you also learn about dimensionality reduction. This lets you take a big data set and almost magically compress it to a much smaller data set while losing as little information as possible. In case anomaly detection and dimensionality reduction don't seem to make too much sense to you yet, don't worry about it. We'll get to this later in this specialization. Now, I'd like to ask you another question to help you check your understanding. And no pressure, if you don't get it right on the first try, it's totally fine. Please select any of the following that you think are examples of unsupervised learning. Two are unsupervised examples and two are supervised learning examples. So please take a look. Maybe you remember the spam filtering problem. If you have labeled data, you know, labeled as spam or non spam email, you can treat this as a supervised learning problem. The second example, the news story example. 
That's exactly the Google News and panda example that you saw in the last video. And so you can approach that using a clustering algorithm to group news articles together. So that would use unsupervised learning. The market segmentation example that I talked about a little bit earlier, you can do that as an unsupervised learning problem as well, because you can give your algorithm some data and ask it to discover market segments automatically. And the final example on diagnosing diabetes. Well, actually, that's a lot like our breast cancer example from the supervised learning videos. Only instead of benign or malignant tumors, we instead have diabetes or not diabetes. And so you can approach this as a supervised learning problem, just like we did for the breast tumor classification problem. Even though in the last video we've talked mainly about clustering, in later videos in this specialization we'll dive much more deeply into anomaly detection and dimensionality reduction as well. So that's unsupervised learning. Before we wrap up this section, I want to share with you something that I find really exciting and useful, which is the use of Jupyter notebooks in machine learning. Let's take a look at that in the next video.
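The anomaly-detection idea mentioned above, where an unusual transaction could be a sign of fraud, can be sketched with a simple statistical rule. The transaction amounts and the two-standard-deviation threshold are made-up illustrative choices, not a production fraud detector.

```python
import statistics

# Hypothetical daily transaction amounts for one account; the last one is unusual.
amounts = [23.5, 19.9, 30.0, 25.4, 22.1, 27.8, 21.3, 24.6, 980.0]

mu = statistics.mean(amounts)
sigma = statistics.stdev(amounts)

# Flag any transaction more than 2 standard deviations from the mean
# as a possible anomaly worth a closer look.
anomalies = [a for a in amounts if abs(a - mu) > 2 * sigma]
print(anomalies)  # [980.0]
```

Real anomaly-detection systems model what "normal" looks like far more carefully, but the core idea is the same: learn the typical pattern from unlabeled data and flag whatever deviates from it.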
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=SRDUSIcnW8M
1.8 Machine Learning Overview | Jupyter Notebooks --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
So far in the videos, you've seen supervised learning and unsupervised learning, and also examples of both. For you to more deeply understand these concepts, I'd like to invite you in this class to see, run, and maybe later write code yourself to implement these concepts. The most widely used tool by machine learning and data science practitioners today is the Jupyter Notebook. This is the default environment that a lot of us use to code up and experiment and try things out. And so in this class, right here in your web browser, you'll be able to use a Jupyter Notebook environment to test out some of these ideas for yourself as well. This is not some made up, simplified environment. This is the exact same environment, the exact same tool, the Jupyter Notebook that developers are using in many large companies right now. One type of lab that you'll see throughout this class is the optional lab, which you can open and run one line at a time, usually without needing to write any code yourself. So optional labs are designed to be very easy, and I can guarantee you will get full marks for every single one of them, because there are no marks. All you need to do is open it up and just run the code we've provided. And by reading through and running the code in the optional labs, you see how machine learning code runs. And you should be able to complete them relatively quickly, just by running it one line at a time from top to bottom. Optional labs are completely optional, so you don't have to do them at all if you don't want to. But I hope you will take a look, because running through them will give you a deeper feel, give you a little bit more experience with what machine learning algorithms, what machine learning code actually looks like. Starting next week, there will also be some practice labs, which will give you an opportunity to write some of that code yourself. But we'll get to that next week, so don't worry about it for now.
And I hope you just go through the next optional lab and get through the rest of the content for this week. So let's take a look at an example of a notebook. This is what you see when you go to the first optional lab. Feel free to scroll up and down and browse and mouse over the different menus and take a look at the different options here. You might notice that there are two types of these blocks, also called cells, in the notebook. And there are two types of cells. One is what's called a markdown cell, which means basically a bunch of text. So here you can actually edit the text if you don't like the text that we wrote. But this is text that describes the code. Then there's a second type of block or cell, which looks like this, which is a code cell. So here we've already provided the code. And if you want to run this code cell, hitting shift enter will run the code in this code cell. Oh, and by the way, if you click on a markdown cell, so it's showing all this formatting, go ahead and hit shift enter on your keyboard as well. And that will also convert it back to this nicely formatted text. This optional lab shows some common Python code. So you can go ahead and run this afterwards in your own Jupyter notebook. When you jump into this notebook yourself, what I'd like you to do is select the cells and hit shift enter. Well, read through the code, see if it makes sense, you know, try to make a prediction about what you think this code will do, and then hit shift enter and then see what the code actually does. And if you feel like it, feel free to go in and edit the code, you know, change the code and then run it and see what happens. And if you haven't played in a Jupyter notebook environment before, I hope you become more familiar with Python in a Jupyter notebook. I spend a lot of hours playing around in Jupyter notebooks, and so I hope you have fun with them too. 
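The walkthrough above describes running notebook cells one at a time with Shift+Enter. As a minimal, hypothetical sketch of the kind of Python you might find in a code cell (the variable names here are illustrative, not taken from the actual optional labs):

```python
# A sketch of a Jupyter code cell: in a notebook you would put this in a
# code cell and press Shift+Enter to run it.
message = "Hello from a Jupyter code cell!"
print(message)

# Cells share one Python session, so a later cell can reuse variables
# defined in an earlier cell.
word_count = len(message.split())
print(f"The message above has {word_count} words.")
```

Running a markdown cell with Shift+Enter, by contrast, just renders the text rather than executing anything.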
And after that, I look forward to seeing you in the next video, where we'll take the supervised learning problem and start to flesh out our first supervised learning algorithm. I hope that'll be fun too, and look forward to seeing you there.
[{"start": 0.0, "end": 8.6, "text": " So far in the videos, you've seen supervised learning and unsupervised learning, and also"}, {"start": 8.6, "end": 10.6, "text": " examples of both."}, {"start": 10.6, "end": 16.72, "text": " For you to more deeply understand these concepts, I'd like to invite you in this class to see,"}, {"start": 16.72, "end": 21.98, "text": " run, and maybe later write code yourself to implement these concepts."}, {"start": 21.98, "end": 26.740000000000002, "text": " The most widely used tool by machine learning and data science practitioners today is the"}, {"start": 26.740000000000002, "end": 28.46, "text": " Jupyter Notebook."}, {"start": 28.46, "end": 33.44, "text": " This is the default environment that a lot of us use to code up and experiment and try"}, {"start": 33.44, "end": 34.92, "text": " things out."}, {"start": 34.92, "end": 40.28, "text": " And so in this class, right here in your web browser, you'll be able to use a Jupyter Notebook"}, {"start": 40.28, "end": 45.08, "text": " environment to test out some of these ideas for yourself as well."}, {"start": 45.08, "end": 48.72, "text": " This is not some made up, simplified environment."}, {"start": 48.72, "end": 54.24, "text": " This is the exact same environment, the exact same tool, the Jupyter Notebook that developers"}, {"start": 54.24, "end": 57.64, "text": " are using in many large companies right now."}, {"start": 57.64, "end": 62.36, "text": " One type of lab that you see throughout this class are optional labs, which are ones you"}, {"start": 62.36, "end": 70.04, "text": " can open and run one line at a time with usually not needing to write any code yourself."}, {"start": 70.04, "end": 76.12, "text": " So optional labs are designed to be very easy, and I can guarantee you will get full marks"}, {"start": 76.12, "end": 80.02, "text": " for every single one of them, because there are no marks."}, {"start": 80.02, "end": 84.68, "text": " All you need to do is open it up 
and just run the code we've provided."}, {"start": 84.68, "end": 89.2, "text": " And by reading through and running the code in the optional labs, you see how machine"}, {"start": 89.2, "end": 91.44000000000001, "text": " learning code runs."}, {"start": 91.44000000000001, "end": 97.88000000000001, "text": " And you should be able to complete them relatively quickly, just by running it one line at a"}, {"start": 97.88000000000001, "end": 100.68, "text": " time from top to bottom."}, {"start": 100.68, "end": 104.76, "text": " Optional labs are completely optional, so you don't have to do them at all if you don't"}, {"start": 104.76, "end": 105.76, "text": " want to."}, {"start": 105.76, "end": 111.4, "text": " But I hope you will take a look, because running through them will give you a deeper feel,"}, {"start": 111.4, "end": 115.64, "text": " give you a little bit more experience with what machine learning algorithms, what machine"}, {"start": 115.64, "end": 119.12, "text": " learning code actually looks like."}, {"start": 119.12, "end": 123.36000000000001, "text": " Starting next week, there will also be some practice labs, which will give you an opportunity"}, {"start": 123.36000000000001, "end": 126.0, "text": " to write some of that code yourself."}, {"start": 126.0, "end": 129.08, "text": " But we'll get to that next week, so don't worry about it for now."}, {"start": 129.08, "end": 133.76, "text": " And I hope you just go through the next optional lab and get through the rest of the content"}, {"start": 133.76, "end": 134.76, "text": " for this week."}, {"start": 134.76, "end": 138.84, "text": " So let's take a look at an example of a notebook."}, {"start": 138.84, "end": 142.36, "text": " This is what you see when you go to the first optional lab."}, {"start": 142.36, "end": 147.6, "text": " Feel free to scroll up and down and browse and mouse over the different menus and take"}, {"start": 147.6, "end": 151.28, "text": " a look at the different options 
here."}, {"start": 151.28, "end": 156.24, "text": " You might notice that there are two types of these blocks, also called cells, in the"}, {"start": 156.24, "end": 157.72, "text": " notebook."}, {"start": 157.72, "end": 160.24, "text": " And there are two types of cells."}, {"start": 160.24, "end": 166.36, "text": " One is what's called a markdown cell, which means basically a bunch of text."}, {"start": 166.36, "end": 171.4, "text": " So here you can actually edit the text if you don't like the text that we wrote."}, {"start": 171.4, "end": 175.48000000000002, "text": " But this is text that describes the code."}, {"start": 175.48000000000002, "end": 182.76000000000002, "text": " Then there's a second type of block or cell, which looks like this, which is a code cell."}, {"start": 182.76000000000002, "end": 185.16000000000003, "text": " So here we've already provided the code."}, {"start": 185.16000000000003, "end": 190.64000000000001, "text": " And if you want to run this code cell, hitting shift enter will run the code in this code"}, {"start": 190.64000000000001, "end": 191.64000000000001, "text": " cell."}, {"start": 191.64, "end": 197.64, "text": " Oh, and by the way, if you click on a markdown cell, so it's showing all this formatting,"}, {"start": 197.64, "end": 201.67999999999998, "text": " go ahead and hit shift enter on your keyboard as well."}, {"start": 201.67999999999998, "end": 206.48, "text": " And that will also convert it back to this nicely formatted text."}, {"start": 206.48, "end": 209.77999999999997, "text": " This optional lab shows some common Python code."}, {"start": 209.77999999999997, "end": 214.5, "text": " So you can go ahead and run this afterwards in your own Jupyter notebook."}, {"start": 214.5, "end": 219.33999999999997, "text": " When you jump into this notebook yourself, what I'd like you to do is select the cells"}, {"start": 219.33999999999997, "end": 221.2, "text": " and hit shift enter."}, {"start": 221.2, "end": 225.48, 
"text": " Well, read through the code, see if it makes sense, you know, try to make a prediction"}, {"start": 225.48, "end": 232.07999999999998, "text": " about what you think this code will do, and then hit shift enter and then see what the"}, {"start": 232.07999999999998, "end": 234.07999999999998, "text": " code actually does."}, {"start": 234.07999999999998, "end": 238.16, "text": " And if you feel like it, feel free to go in and edit the code, you know, change the code"}, {"start": 238.16, "end": 240.95999999999998, "text": " and then run it and see what happens."}, {"start": 240.95999999999998, "end": 245.28, "text": " And if you haven't played in a Jupyter notebook environment before, I hope you become more"}, {"start": 245.28, "end": 248.76, "text": " familiar with Python in a Jupyter notebook."}, {"start": 248.76, "end": 253.72, "text": " I spend a lot of hours playing around in Jupyter notebooks, and so I hope you have fun with"}, {"start": 253.72, "end": 255.12, "text": " them too."}, {"start": 255.12, "end": 259.92, "text": " And after that, I look forward to seeing you in the next video, where we'll take the supervised"}, {"start": 259.92, "end": 265.86, "text": " learning problem and start to flesh out our first supervised learning algorithm."}, {"start": 265.86, "end": 279.36, "text": " I hope that'll be fun too, and look forward to seeing you there."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=isx7QB_j4jY
1.9 Machine Learning Overview | Linear regression model part 1 --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
In this video, we'll look at what the overall process of supervised learning is like. Specifically, you'll see the first model of this course, a linear regression model. That just means fitting a straight line to your data. It's probably the most widely used learning algorithm in the world today. And as you get familiar with linear regression, many of the concepts you see here will also apply to other machine learning models, models that you'll see later in this specialization. Let's start with a problem that you can address using linear regression. Say you want to predict the price of a house based on the size of the house. This is an example we saw earlier this week. We're going to use a data set on house sizes and prices from Portland, a city in the United States. Here's a graph where the horizontal axis is the size of the house in square feet and the vertical axis is the price of the house in thousands of dollars. Let's go ahead and plot the data points for various houses in the data set. Here each data point, each of these little crosses, is a house with a size and the price that it most recently sold for. Now let's say you're a real estate agent in Portland and you're helping a client sell her house. And she's asking you, how much do you think you can get for this house? This data set might help you estimate the price she could get for it. You start by measuring the size of the house, and it turns out that her house is 1250 square feet. How much do you think this house could sell for? One thing you could do is build a linear regression model from this data set. Your model will fit a straight line to the data, which might look like this. And based on this straight line fit to the data, you can see that if a house is 1250 square feet, it will intersect the best fit line over here. And if you trace that to the vertical axis on the left, you can see the price may be around here, say about $220,000. 
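The idea just described, fitting a straight line to size/price data and reading off a prediction for a 1250 square foot house, can be sketched in a few lines of pure Python. The four data points below are made up for illustration (they are NOT the Portland data set from the course), chosen so the closed-form least-squares fit predicts about $220k, matching the ballpark estimate in the lecture:

```python
# Hypothetical house data: sizes in sq ft (input x), prices in $1000s (target y).
sizes = [1000.0, 1500.0, 2000.0, 2500.0]
prices = [180.0, 260.0, 340.0, 420.0]

# Closed-form least-squares fit of a straight line f(x) = w*x + b.
m = len(sizes)
mean_x = sum(sizes) / m
mean_y = sum(prices) / m
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, prices)) \
    / sum((x - mean_x) ** 2 for x in sizes)
b = mean_y - w * mean_x

# Predict the price for the client's 1250 sq ft house.
predicted = w * 1250 + b
print(f"predicted price: ${predicted:.0f}k")  # → predicted price: $220k
```

This is only a sketch of the concept; the course itself builds up to fitting w and b with gradient descent rather than a closed-form formula.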
So this is an example of what's called a supervised learning model. We call this supervised learning because you are first training your model by giving it data that has the right answers. You give the model examples of houses with both the size of the house as well as the price that the model should predict for each house. Here the prices, that is, the right answers, are given for every house in the data set. This linear regression model is a particular type of supervised learning model. It's called a regression model because it predicts numbers as the output, like prices in dollars. Any supervised learning model that predicts a number, such as 220,000 or 1.5 or negative 33.2, is addressing what's called a regression problem. So linear regression is one example of a regression model, but there are other models for addressing regression problems too. We'll see some of those later in course two of this specialization. And just to remind you, in contrast with the regression model, the other most common type of supervised learning model is called a classification model. A classification model predicts categories, that is, discrete categories, such as predicting if a picture is of a cat, meow, or a dog, woof. Or given a medical record, it has to predict if a patient has a particular disease. You'll see more about classification models later in this course as well. So as a reminder about the difference between classification and regression: in classification, there are only a small number of possible outputs. If your model is recognizing cats versus dogs, that's two possible outputs. Or maybe you're trying to recognize any of 10 possible medical conditions in a patient. When there's a discrete, finite set of possible outputs, we call it a classification problem. Whereas in regression, there are infinitely many possible numbers that the model could output. 
In addition to visualizing this data as a plot here on the left, there's one other way of looking at the data that would be useful, and that's a data table here on the right. The data comprises a set of inputs. This would be the size of the house, which is this column here. It also has outputs. You're trying to predict the price, which is this column here. Notice that the horizontal and vertical axes correspond to these two columns, the size and the price. And so if you have, say, 47 rows in this data table, then there are 47 of these little crosses on the plot on the left, each cross corresponding to one row of the table. For example, the first row of the table is a house with size 2104 square feet. So that's around here. And this house sold for $400,000, which is around here. So this first row of the table is plotted as this data point over here. Now let's look at some notation for describing the data. This is notation that you'll find useful throughout your journey in machine learning. As you increasingly get familiar with machine learning terminology, this will be terminology that you can use to talk about machine learning concepts with others as well, since a lot of this is quite standard across AI. You'll be seeing this notation multiple times in this specialization, so it's okay if you don't remember everything the first time through; it will naturally become more familiar over time. The data set that you just saw and that is used to train the model is called a training set. Note that your client's house is not in this data set, because it's not yet sold, so no one knows what its price is. So to predict the price of your client's house, you first train your model to learn from the training set, and that model can then predict your client's house's price. In machine learning, the standard notation to denote the input here is lowercase x, and we call this the input variable. It's also called a feature or an input feature. 
For example, for the first house in your training set, x is the size of the house, so x equals 2104. The standard notation to denote the output variable, which you're trying to predict, which is also sometimes called the target variable, is lowercase y. And so here, y is the price of the house, and for the first training example, this is equal to 400. So y equals 400. The data set has one row for each house, and in this particular training set there are 47 rows, with each row representing a different training example. We're going to use lowercase m to refer to the total number of training examples, and so here, m is equal to 47. To indicate a single training example, we're going to use the notation parentheses x comma y. So for the first training example, x comma y, this pair of numbers is 2104 comma 400. Now we have a lot of different training examples. We have 47 of them, in fact. So to refer to a specific training example, which will correspond to a specific row in this table on the left, I'm going to use the notation x superscript in parentheses i comma y superscript in parentheses i. The superscript tells us that this is the i-th training example, such as the first, second, or third, up to the 47th training example. i here refers to a specific row in the table. So for instance, here is the first example, when i equals 1, in the training set. And so x superscript 1 is equal to 2104 and y superscript 1 is equal to 400. And let's add the superscript 1 here as well. Just a note: this superscript i in parentheses is not exponentiation. So when I write x superscript 2, this is not x squared; it's not x to the power of 2. It just refers to the second training example. This i is just an index into the training set and refers to row i in the table. In this video, you saw what a training set is like, as well as standard notation for describing this training set. 
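The notation above maps naturally onto code. Here's a small Python sketch of a training set; note that only the first pair, (2104, 400), comes from the lecture — the remaining values are made-up stand-ins:

```python
# Training set: only the first pair (2104, 400) is from the lecture;
# the other entries are hypothetical.
x_train = [2104, 1416, 1534, 852]   # input feature x: size in sq ft
y_train = [400, 232, 315, 178]      # target y: price in $1000s

m = len(x_train)                    # m = number of training examples

# Python lists are 0-indexed, so the i-th training example (x^(i), y^(i))
# from the lecture corresponds to index i-1 here.
i = 1
x_i, y_i = x_train[i - 1], y_train[i - 1]
print(f"m = {m}")                               # → m = 4
print(f"(x^({i}), y^({i})) = ({x_i}, {y_i})")   # → (x^(1), y^(1)) = (2104, 400)
```

The off-by-one between the lecture's 1-based superscripts and Python's 0-based indexing is worth keeping in mind when you read the lab code.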
In the next video, let's look at what it'll take to take this training set that you just saw and feed it to a learning algorithm so that the algorithm can learn from this data. We'll see that in the next video.
[{"start": 0.0, "end": 8.0, "text": " In this video, we'll look at what the overall process of supervised learning is like."}, {"start": 8.0, "end": 14.24, "text": " Specifically, you see the first model of this course, a linear regression model."}, {"start": 14.24, "end": 17.52, "text": " That just means filling a straight line to your data."}, {"start": 17.52, "end": 21.92, "text": " It's probably the most widely used learning algorithm in the world today."}, {"start": 21.92, "end": 27.32, "text": " And as you get familiar with linear regression, many of the concepts you see here will also"}, {"start": 27.32, "end": 34.4, "text": " apply to other machine learning models, models that you'll see later in this specialization."}, {"start": 34.4, "end": 38.44, "text": " Let's start with a problem that you can address using linear regression."}, {"start": 38.44, "end": 42.28, "text": " Say you want to predict the price of a house based on the size of a house."}, {"start": 42.28, "end": 45.56, "text": " This is example we see in earlier this week."}, {"start": 45.56, "end": 51.72, "text": " We're going to use a data set on house sizes and prices from Portland, a city in the United"}, {"start": 51.72, "end": 53.120000000000005, "text": " States."}, {"start": 53.12, "end": 58.16, "text": " Here with a graph where the horizontal axis is the size of the house in square feet and"}, {"start": 58.16, "end": 63.76, "text": " the vertical axis is the price of the house in thousands of dollars."}, {"start": 63.76, "end": 68.36, "text": " Let's go ahead and plot the data points for various houses in the data set."}, {"start": 68.36, "end": 74.44, "text": " Here each data point, each of these little crosses is a house with a size and a price"}, {"start": 74.44, "end": 77.88, "text": " that it most recently was sold for."}, {"start": 77.88, "end": 83.39999999999999, "text": " Now let's say you're a real estate agent in Portland and you're helping a client sell"}, {"start": 
83.39999999999999, "end": 84.39999999999999, "text": " her house."}, {"start": 84.39999999999999, "end": 87.8, "text": " And she's asking you, how much do you think you're going to get for this house?"}, {"start": 87.8, "end": 92.52, "text": " This data set might help you estimate the price she could get for it."}, {"start": 92.52, "end": 98.28, "text": " You start by measuring the size of the house and it turns out that her house is 1250 square"}, {"start": 98.28, "end": 99.39999999999999, "text": " feet."}, {"start": 99.39999999999999, "end": 102.52, "text": " How much do you think this house could sell for?"}, {"start": 102.52, "end": 108.0, "text": " One thing you could do is you can build a linear regression model from this data set."}, {"start": 108.0, "end": 113.36, "text": " Your model will fit a straight line to the data, which might look like this."}, {"start": 113.36, "end": 117.6, "text": " And based on this straight line fit to the data, you can kind of see that if a house"}, {"start": 117.6, "end": 124.64, "text": " is 1250 square feet, it will intersect the best fit line over here."}, {"start": 124.64, "end": 129.28, "text": " And if you trace that to the vertical axis on the left, you can see the prices may be"}, {"start": 129.28, "end": 133.16, "text": " around here, say about $220,000."}, {"start": 133.16, "end": 138.04, "text": " So this is an example of what's called a supervised learning model."}, {"start": 138.04, "end": 142.2, "text": " We call this supervised learning because you are first training your model by giving it"}, {"start": 142.2, "end": 144.72, "text": " data that has the right answers."}, {"start": 144.72, "end": 148.76, "text": " Because you give the model examples of houses with both the size of the house as well as"}, {"start": 148.76, "end": 152.36, "text": " the price that the model should predict for each house."}, {"start": 152.36, "end": 157.6, "text": " Where here are the prices, that is the right answers are given for 
every house in the data"}, {"start": 157.6, "end": 159.08, "text": " set."}, {"start": 159.08, "end": 163.56, "text": " This linear regression model is a particular type of supervised learning model."}, {"start": 163.56, "end": 168.16000000000003, "text": " It's called a regression model because it predicts numbers as the output, like prices"}, {"start": 168.16000000000003, "end": 169.88000000000002, "text": " and dollars."}, {"start": 169.88000000000002, "end": 177.96, "text": " Any supervised learning model that predicts a number, such as 220,000 or 1.5 or negative"}, {"start": 177.96, "end": 183.64000000000001, "text": " 33.2 is addressing what's called a regression problem."}, {"start": 183.64, "end": 190.0, "text": " So linear regression is one example of a regression model, but there are other models for addressing"}, {"start": 190.0, "end": 192.55999999999997, "text": " regression problems too."}, {"start": 192.55999999999997, "end": 198.2, "text": " And we'll see some of those later in course two of this specialization."}, {"start": 198.2, "end": 203.88, "text": " And just to remind you, in contrast with the regression model, the other most common type"}, {"start": 203.88, "end": 208.72, "text": " of supervised learning model is called a classification model."}, {"start": 208.72, "end": 214.68, "text": " A classification model predicts categories or discrete categories, such as predicting"}, {"start": 214.68, "end": 219.72, "text": " if a picture is of a cat meow or a dog woof."}, {"start": 219.72, "end": 225.52, "text": " Or if given a medical record, it has to predict if a patient has a particular disease."}, {"start": 225.52, "end": 230.24, "text": " You see more about classification models later in this course as well."}, {"start": 230.24, "end": 235.76, "text": " So as a reminder about the difference between classification and regression, in classification,"}, {"start": 235.76, "end": 239.23999999999998, "text": " there are only a small number of 
possible outputs."}, {"start": 239.23999999999998, "end": 245.48, "text": " If your model is recognizing cats versus dogs, that's two possible outputs."}, {"start": 245.48, "end": 252.82, "text": " Or maybe you're trying to recognize any of 10 possible medical conditions in a patient."}, {"start": 252.82, "end": 256.52, "text": " So there's a discrete, finite set of possible outputs."}, {"start": 256.52, "end": 259.03999999999996, "text": " We call it a classification problem."}, {"start": 259.03999999999996, "end": 263.12, "text": " Whereas in regression, there are infinitely many possible numbers that the model could"}, {"start": 263.12, "end": 264.62, "text": " output."}, {"start": 264.62, "end": 270.36, "text": " In addition to visualizing this data as a plot here on the left, there's one other way"}, {"start": 270.36, "end": 274.4, "text": " of looking at the data that would be useful."}, {"start": 274.4, "end": 278.88, "text": " And that's a data table here on the right."}, {"start": 278.88, "end": 282.16, "text": " The data comprises a set of inputs."}, {"start": 282.16, "end": 286.68, "text": " This would be the size of the house, which is this column here."}, {"start": 286.68, "end": 295.12, "text": " It also has outputs, you're trying to predict the price, which is this column here."}, {"start": 295.12, "end": 302.8, "text": " Notice that the horizontal and vertical axes correspond to these two columns, the size"}, {"start": 302.8, "end": 305.2, "text": " and the price."}, {"start": 305.2, "end": 315.08, "text": " And so if you have, say, 47 rows in this data table, then there are 47 of these little crosses"}, {"start": 315.08, "end": 322.84, "text": " on the plot of the left, each cross corresponding to one row of the table."}, {"start": 322.84, "end": 331.4, "text": " For example, the first row of the table is a house with size 2104 square feet."}, {"start": 331.4, "end": 335.36, "text": " So that's around here."}, {"start": 335.36, "end": 
342.41999999999996, "text": " And this house, so for $400,000, which is around here."}, {"start": 342.42, "end": 349.56, "text": " So this first row of the table is plotted as this data point over here."}, {"start": 349.56, "end": 353.98, "text": " Now let's look at some notation for describing the data."}, {"start": 353.98, "end": 359.0, "text": " This is notation that you find useful throughout your journey in machine learning."}, {"start": 359.0, "end": 364.36, "text": " As you increasingly get familiar with machine learning terminology, this would be terminology"}, {"start": 364.36, "end": 369.44, "text": " that you can use to talk about machine learning concepts with others as well, since a lot"}, {"start": 369.44, "end": 373.2, "text": " of this is quite standard across AI."}, {"start": 373.2, "end": 377.04, "text": " You'll be seeing this notation multiple times in this specialization."}, {"start": 377.04, "end": 381.56, "text": " So it's okay if you don't remember everything the first time through, it will naturally"}, {"start": 381.56, "end": 385.08, "text": " become more familiar over time."}, {"start": 385.08, "end": 392.08, "text": " The data set that you just saw and that is used to train the model is called a training"}, {"start": 392.08, "end": 393.08, "text": " set."}, {"start": 393.08, "end": 398.76, "text": " Note that your client's house is not in this data set because it's not yet sold."}, {"start": 398.76, "end": 401.56, "text": " So no one knows what his price is."}, {"start": 401.56, "end": 406.88, "text": " So to predict the price of your client's house, you first train your model to learn from the"}, {"start": 406.88, "end": 414.03999999999996, "text": " training set and that model can then predict your client's house's price."}, {"start": 414.03999999999996, "end": 420.96, "text": " In machine learning, the standard notation to denote the input here is lowercase x and"}, {"start": 420.96, "end": 423.76, "text": " we call this the input 
variable."}, {"start": 423.76, "end": 429.59999999999997, "text": " It's also called a feature or an input feature."}, {"start": 429.59999999999997, "end": 436.52, "text": " For example, for the first house in your training set, x is the size of the house, so x equals"}, {"start": 436.52, "end": 440.28, "text": " 2104."}, {"start": 440.28, "end": 446.92, "text": " The standard notation to denote the output variable, which you're trying to predict,"}, {"start": 446.92, "end": 455.64000000000004, "text": " which is also sometimes called the target variable, is lowercase y."}, {"start": 455.64000000000004, "end": 462.6, "text": " And so here, y is the price of the house and for the first training example, this is equal"}, {"start": 462.6, "end": 464.5, "text": " to 400."}, {"start": 464.5, "end": 468.20000000000005, "text": " So y equals 400."}, {"start": 468.2, "end": 477.4, "text": " So the data set has one row for each house and in this particular training set, there"}, {"start": 477.4, "end": 484.2, "text": " are 47 rows with each row representing a different training example."}, {"start": 484.2, "end": 489.59999999999997, "text": " We're going to use lowercase m to refer to the total number of training examples and"}, {"start": 489.59999999999997, "end": 494.03999999999996, "text": " so here, m is equal to 47."}, {"start": 494.04, "end": 500.76000000000005, "text": " To indicate a single training example, we're going to use the notation parentheses x comma"}, {"start": 500.76000000000005, "end": 502.28000000000003, "text": " y."}, {"start": 502.28000000000003, "end": 514.28, "text": " So for the first training example, x comma y, this pair of numbers is 2104 comma 400."}, {"start": 514.28, "end": 517.44, "text": " Now we have a lot of different training examples."}, {"start": 517.44, "end": 519.46, "text": " We have 47 of them in fact."}, {"start": 519.46, "end": 526.0, "text": " So to refer to a specific training example, this will correspond to a specific row 
in"}, {"start": 526.0, "end": 527.72, "text": " this table on the left."}, {"start": 527.72, "end": 536.24, "text": " I'm going to use the notation x superscript in parentheses i comma y superscript in parentheses"}, {"start": 536.24, "end": 538.12, "text": " i."}, {"start": 538.12, "end": 545.6800000000001, "text": " The superscript tells us that this is the i-th training example, such as the first,"}, {"start": 545.68, "end": 549.52, "text": " second or third up to the 47th training example."}, {"start": 549.52, "end": 556.04, "text": " i here refers to a specific row in the table."}, {"start": 556.04, "end": 566.12, "text": " So for instance, here is the first example when i equals 1 in the training set."}, {"start": 566.12, "end": 577.32, "text": " And so x superscript 1 is equal to 2104 and y superscript 1 is equal to 400."}, {"start": 577.32, "end": 582.52, "text": " And let's add the superscript 1 here as well."}, {"start": 582.52, "end": 588.36, "text": " Just a note, this superscript i in parentheses is not exponentiation."}, {"start": 588.36, "end": 593.6800000000001, "text": " So when I write this, this is not x squared."}, {"start": 593.6800000000001, "end": 596.04, "text": " This is not x to the power of 2."}, {"start": 596.04, "end": 599.7199999999999, "text": " It just refers to the second training example."}, {"start": 599.7199999999999, "end": 606.8, "text": " So this i is just an index in the training set and refers to row i in the table."}, {"start": 606.8, "end": 612.48, "text": " In this video, you saw what a training set is like as well as standard notation for describing"}, {"start": 612.48, "end": 613.48, "text": " this training set."}, {"start": 613.48, "end": 617.64, "text": " In the next video, let's look at what it'll take to take this training set that you just"}, {"start": 617.64, "end": 624.1999999999999, "text": " saw and feed it to a learning algorithm so that the algorithm can learn from this data."}, {"start": 624.2, "end": 626.6, 
"text": " We'll see that in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=vrTHO5zRq6s
1.10 Machine Learning Overview | Linear regression model part 2 --[Machine Learning | Andrew Ng]
Let's look in this video at the process of how supervised learning works. A supervised learning algorithm will input a data set, and then what exactly does it do, and what does it output? Let's find out in this video. Recall that a training set in supervised learning includes both the input features, such as the size of the house, and also the output targets, such as the price of the house. The output targets are the right answers that the model will learn from. To train the model, you feed the training set, both the input features and the output targets, to your learning algorithm. Then your supervised learning algorithm will produce some function. We'll write this function as lowercase f, where f stands for function. Historically, this function used to be called a hypothesis, but I'm just going to call it a function f in this class. The job of f is to take a new input x and output an estimate or prediction, which I'm going to call y hat, and it's written like the variable y with this little hat symbol on top. In machine learning, the convention is that y hat is the estimate or the prediction for y. The function f is called the model. x is called the input or the input feature, and the output of the model is the prediction y hat. The model's prediction is the estimated value of y. When the symbol is just the letter y, that refers to the target, which is the actual true value in the training set. In contrast, y hat is an estimate; it may or may not be the actual true value. If you're helping your client to sell their house, the true price of the house is unknown until they sell it. So your model f, given the size, outputs a price which is the estimate, that is, the prediction of what the true price will be. Now, when we design a learning algorithm, a key question is, how are we going to represent the function f? Or in other words, what is the math formula we're going to use to compute f? For now, let's stick with f being a straight line.
So your function can be written as f subscript w comma b of x equals, I'm going to use w times x plus b. I'll define w and b soon, but for now, just know that w and b are numbers, and the values chosen for w and b will determine the prediction y hat based on the input feature x. So this f w b of x means f is a function that takes x as input, and depending on the values of w and b, f will output some value of a prediction y hat. As an alternative to writing this f w comma b of x, I'll sometimes just write f of x without explicitly including w and b in the subscript. It's just a simpler notation, but it means exactly the same thing as f w b of x. Let's plot the training set on a graph where the input feature x is on the horizontal axis, and the output target y is on the vertical axis. Remember, the algorithm learns from this data and generates a best fit line, like maybe this one here. This straight line is the linear function f w b of x equals w times x plus b. Or more simply, we can drop w and b and just write f of x equals w x plus b. Here's what this function is doing: it's making predictions for the value of y using a straight line function of x. So you may ask, why are we choosing a linear function, where linear function is just a fancy term for a straight line, instead of some nonlinear function like a curve or a parabola? Well, sometimes you want to fit more complex nonlinear functions as well, like a curve like this, but since this linear function is relatively simple and easy to work with, let's use a line as a foundation that will eventually help you get to more complex models that are nonlinear. This particular model has a name: it's called linear regression. More specifically, this is linear regression with one variable, where the phrase one variable means that there's a single input variable or feature x, namely the size of the house.
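The straight-line model f w b of x equals w times x plus b can be sketched in a few lines of Python. The particular values of w and b below are arbitrary placeholders, not fitted parameters:

```python
def f_wb(x, w, b):
    """Linear model f_wb(x) = w*x + b: predicts y-hat from the input feature x."""
    return w * x + b

# With w = 200 and b = 100 (illustrative numbers only), a house of size 1.2
# (in thousands of square feet, say) gets the prediction:
y_hat = f_wb(1.2, w=200, b=100)
print(y_hat)  # 340.0
```

Changing w and b changes the line, and therefore changes every prediction y hat the model makes.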
Another name for a linear model with one input variable is univariate linear regression, where uni means one in Latin and variate means variable. So univariate is just a fancy way of saying one variable. In a later video, you'll also see a variation of regression where you want to make a prediction based not just on the size of a house, but on a bunch of other things that you may know about the house, such as the number of bedrooms and other features. And by the way, when you're done with this video, there is another optional lab. You don't need to write any code, just review it, run the code and see what it does. It will show you how to define in Python a straight line function. And the lab will let you choose the values of w and b to try to fit the training data. You don't have to do the lab if you don't want to, but I hope you play with it when you're done watching this video. So that's linear regression. In order for you to make this work, one of the most important things you have to do is construct a cost function. The idea of a cost function is one of the most universal and important ideas in machine learning, and it is used in both linear regression and in training many of the most advanced AI models in the world. So let's go on to the next video and take a look at how you can construct a cost function.
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=ZzeDtSmrRoU
1.11 Machine Learning Overview | Cost function formula --[Machine Learning | Andrew Ng]
In order to implement linear regression, the first key step is for us to define something called a cost function. This is something we'll build in this video. And the cost function will tell us how well the model is doing so that we can try to get it to do better. Let's look at what this means. Recall that you have a training set that contains input features x and output targets y. And the model you're going to use to fit this training set is this linear function fwb of x equals w times x plus b. To introduce a little bit more terminology, the w and b are called the parameters of the model. In machine learning, parameters of a model are the variables you can adjust during training in order to improve the model. Sometimes, you also hear the parameters w and b referred to as coefficients or as weights. Now, let's take a look at what these parameters w and b do. Depending on the values you've chosen for w and b, you get a different function f of x, which generates a different line on the graph. And remember that we can write f of x as a shorthand for fwb of x. We're going to take a look at some plots of f of x on a chart. Maybe you're already familiar with drawing lines on charts, but even if this is a review for you, I hope this will help you build intuition on how w and b, the parameters, determine f. When w is equal to 0 and b is equal to 1.5, then f looks like this horizontal line. In this case, the function f of x is 0 times x plus 1.5, so f is always a constant value. It always predicts 1.5 for the estimated value of y. So y hat is always equal to b. And here, b is also called the y-intercept, because that's where it crosses the vertical axis or the y-axis on this graph. As a second example, if w is 0.5 and b is equal to 0, then f of x is 0.5 times x. When x is 0, the prediction is also 0. And when x is 2, then the prediction is 0.5 times 2, which is 1. So you get a line that looks like this. 
And notice that the slope is 0.5 divided by 1, so the value of w gives you the slope of the line, which is 0.5. And finally, if w equals 0.5 and b equals 1, then f of x is 0.5 times x plus 1. And when x is 0, then f of x equals b, which is 1, so the line intersects the vertical axis at b, the y-intercept. Also, when x is 2, then f of x is 2, so the line looks like this. Again, the slope is 0.5 divided by 1, so the value of w gives you the slope, which is 0.5. Recall that you have a training set, like the one shown here. With linear regression, what you want to do is choose values for the parameters w and b so that the straight line you get from the function f somehow fits the data well, like maybe this line shown here. And when I say that the line fits the data, visually you can think of this to mean that the line defined by f is roughly passing through or somewhat close to the training examples, as compared to other possible lines that are not as close to these points. And just to remind you of some notation, a training example like this point here is defined by x superscript i, y superscript i, where y is the target. For a given input x i, the function f also makes a predicted value for y, and the value that it predicts for y is y hat i, shown here. For our choice of a model, f of x i is w times x i plus b. Stated differently, the prediction y hat i is f w b of x i, where, for the model we're using, f of x i is equal to w x i plus b. So now the question is, how do you find values for w and b so that the prediction y hat i is close to the true target y i for many or maybe all training examples x i, y i? To answer that question, let's first take a look at how to measure how well a line fits the training data. To do that, we're going to construct our cost function. The cost function takes the prediction y hat and compares it to the target y by taking y hat minus y. This difference is called the error. We're measuring how far off the prediction is from the target.
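The three parameter choices walked through above, plus the per-example error y hat minus y, can be sketched as follows (the training pair used for the error at the end is a hypothetical example, not from the lecture's data set):

```python
def f_wb(x, w, b):
    """Linear model f_wb(x) = w*x + b."""
    return w * x + b

# w = 0, b = 1.5: a horizontal line that always predicts 1.5 (y-intercept b).
print(f_wb(0, 0, 1.5), f_wb(2, 0, 1.5))   # 1.5 1.5

# w = 0.5, b = 0: slope 0.5 through the origin, so f(2) = 1.
print(f_wb(0, 0.5, 0), f_wb(2, 0.5, 0))   # 0.0 1.0

# w = 0.5, b = 1: slope 0.5, y-intercept 1, so f(0) = 1 and f(2) = 2.
print(f_wb(0, 0.5, 1), f_wb(2, 0.5, 1))   # 1.0 2.0

# The error for one training example (x_i, y_i) is y_hat_i - y_i:
x_i, y_i = 2.0, 2.5                 # hypothetical training example
error = f_wb(x_i, 0.5, 1) - y_i     # 2.0 - 2.5 = -0.5
print(error)
```

So b shifts the line up and down (the y-intercept), w tilts it (the slope), and the error measures how far a prediction lands from its target.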
Next, let's compute the square of this error. Also, we're going to want to compute this term for different training examples i in the training set. So when measuring the error for example i, we'll compute this squared error term. Finally, we want to measure the error across the entire training set. In particular, let's sum up the squared errors like this. We'll sum from i equals 1, 2, 3, and so on, all the way up to m. And remember that m is the number of training examples, which is 47 for this data set. Notice that if we have more training examples, m is larger and your cost function will calculate a bigger number, since it's summing over more examples. So to build a cost function that doesn't automatically get bigger as the training set size gets larger, by convention, we will compute the average squared error instead of the total squared error. And we do that by dividing by m like this. Okay, we're nearly there. Just one last thing. By convention, the cost function that machine learning people use actually divides by two times m. The extra division by two is just meant to make some of our later calculations a little bit neater. But the cost function still works whether you include this division by two or not. So this expression right here is the cost function. And we're going to write J of w,b to refer to the cost function. This is also called the squared error cost function, and it's called this because you're taking the square of these error terms. In machine learning, different people will use different cost functions for different applications. But the squared error cost function is by far the most commonly used one for linear regression, and for that matter, for all regression problems, where it seems to give good results for many applications. So just as a reminder, the prediction y hat is equal to the output of the model f at x. So we can rewrite the cost function J of w,b as one over two m times the sum from i equals one to m of f of x i minus y i, the quantity squared.
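The squared error cost J of w,b equals one over two m times the sum of (f w b of x i minus y i) squared can be computed directly. The data below is a toy set, chosen so the cost is exactly zero for one choice of w and b, since the 47-example housing table isn't reproduced here:

```python
def compute_cost(x, y, w, b):
    """Squared error cost J(w,b) = (1/(2m)) * sum over i of (w*x_i + b - y_i)^2."""
    m = len(x)
    total = 0.0
    for x_i, y_i in zip(x, y):
        y_hat_i = w * x_i + b            # model prediction f_wb(x_i)
        total += (y_hat_i - y_i) ** 2    # squared error for example i
    return total / (2 * m)               # average, with the conventional extra 1/2

# Toy data lying exactly on the line y = 2x: the cost at w=2, b=0 is zero.
x_train = [1.0, 2.0, 3.0]
y_train = [2.0, 4.0, 6.0]
print(compute_cost(x_train, y_train, w=2.0, b=0.0))  # 0.0
print(compute_cost(x_train, y_train, w=1.0, b=0.0))  # larger: this fit is worse
```

A small J means the line's predictions are close to the targets; a large J means the line fits the training data poorly. And as the text notes, dividing by 2m instead of m doesn't change which w and b are best, it only scales J.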
Eventually, we're going to want to find values of W and B that make the cost function small. But before going there, let's first gain more intuition about what J of WB is really computing. At this point, you might be thinking we've done a whole lot of math to define the cost function. But what exactly is it doing? Let's go on to the next video, where we'll step through one example of what the cost function is really computing. That I hope will help you build intuition about what it means if J of WB is large versus if the cost J is small. Let's go on to the next video.
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=33VvmIZof0E
1.12 Machine Learning Overview | Cost function intuition --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
We've seen the mathematical definition of the cost function. Now let's build some intuition about what the cost function is really doing. In this video, we'll walk through one example to see how the cost function can be used to find the best parameters for your model. I know this video is a little bit longer than the others, but bear with me, I think it'll be worth it. To recap, here's what we've seen about the cost function so far. You want to fit a straight line to the training data, so you have this model, f w b of x is w times x plus b. And here the model's parameters are w and b. Now depending on the values chosen for these parameters, you get different straight lines like this. And you want to find values for w and b so that the straight line fits the training data well. To measure how well a choice of w and b fits the training data, you have a cost function j. And what the cost function j does is it measures the difference between the model's predictions and the actual true values for y. What you'll see later is that linear regression will try to find values for w and b that make j of w, b as small as possible. In math, we write it like this: we want to minimize j as a function of w and b. So now, in order for us to better visualize the cost function j, let's work with a simplified version of the linear regression model. We're going to use the model f w of x is w times x. You can think of this as taking the original model on the left and getting rid of the parameter b, or setting the parameter b equal to zero, so it just goes away from the equation. So f is now just w times x. You now have just one parameter w, and your cost function j looks similar to what it was before, taking the difference and squaring it, except that now f is equal to w times x i, and j is now a function of just w. And the goal becomes a little bit different as well, because you have just one parameter w, not w and b.
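The simplified one-parameter setup described above can be sketched as follows; the function name is illustrative, not from the course.

```python
# A sketch of the simplified setup above: b is fixed at zero, so the model
# is f_w(x) = w * x and the cost J depends on the single parameter w.

def cost_j(w, x, y):
    """J(w) = 1/(2m) * sum over i of (w * x_i - y_i)^2."""
    m = len(x)
    return sum((w * xi - yi) ** 2 for xi, yi in zip(x, y)) / (2 * m)

# The goal is then to choose the w that makes J(w) as small as possible.
print(cost_j(2.0, [1.0, 2.0], [2.0, 4.0]))  # prints 0.0 (perfect fit at w = 2)
```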
So with this simplified model, the goal is to find a value for w that minimizes j of w. To see this visually, what this means is that if b is set to zero, then f defines a line that looks like this. And you see that the line passes through the origin here, because when x is zero, well, f of x is zero too. Now using this simplified model, let's see how the cost function changes as you choose different values for the parameter w. In particular, let's look at graphs of the model f of x and the cost function j. I'm going to plot these side by side, and you'll be able to see how the two are related. First, notice that for f subscript w, when the parameter w is fixed, that is, it is always a constant value, then f w is only a function of x, which means that the estimated value of y depends on the value of the input x. In contrast, looking to the right, the cost function j is a function of w, where w controls the slope of the line defined by f w, so the cost defined by j depends on a parameter, in this case, the parameter w. So let's go ahead and plot these functions, f w of x and j of w, side by side so you can see how they are related. We'll start with the model, that is, the function f w of x, on the left. Here the input feature x is on the horizontal axis and the output value y is on the vertical axis. Here's a plot of three points representing the training set, at positions (1, 1), (2, 2), and (3, 3). Let's pick a value for w, say w is 1. So for this choice of w, the function f w looks like this straight line with a slope of 1. Now what you can do next is calculate the cost j when w equals 1. So you may recall that the cost function is defined as follows: it's the squared error cost function. So if you substitute f w of x i with w times x i, the cost function looks like this, where this expression is now w times x i minus y i. So for this value of w, it turns out that the error term inside the cost function, this w times x i minus y i, is equal to zero for each of the three data points.
Notice for this data set, when x is 1, then y is 1. When w is also 1, then f of x equals 1. So f of x equals y for this first training example, and the difference is zero. Plugging this into the cost function j, you get zero squared. Similarly, when x is 2, then y is 2, and f of x is also 2. So again, f of x equals y for the second training example. In the cost function, the squared error for the second example is also zero squared. Finally, when x is 3, then y is 3, and f of 3 is also 3. In the cost function, the third squared error term is also zero squared. So for all three examples in this training set, f of x i equals y i for each training example i, so f of x i minus y i is zero. So for this particular data set, when w is 1, the cost j is equal to zero. Now what you can do on the right is plot the cost function j. And notice that because the cost function is a function of the parameter w, the horizontal axis is now labeled w and not x, and the vertical axis is now j and not y. So you have j of 1 equal to zero. In other words, when w equals 1, j of w is zero. So let me go ahead and plot that. Now let's look at how f and j change for different values of w. w can take on a range of values, right? So w can take on negative values, w can be zero, and it can take on positive values too. So what if w is equal to 0.5 instead of 1? What would these graphs look like then? Let's go ahead and plot that. So let's set w to be equal to 0.5. And in this case, the function f of x now looks like this; it's a line with a slope equal to 0.5. And let's also compute the cost j when w is 0.5. Recall that the cost function is measuring the squared error, or difference, between the estimated value, that is y hat i, which is f of x i, and the true value, that is y i, for each example i.
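The claim above can be checked directly in a few lines; the variable names are illustrative.

```python
# Checking the walkthrough above: on the training set (1,1), (2,2), (3,3),
# every error term w * x_i - y_i is zero when w = 1, so the cost J(1) is zero.
x_train = [1.0, 2.0, 3.0]
y_train = [1.0, 2.0, 3.0]
w = 1.0

errors = [w * xi - yi for xi, yi in zip(x_train, y_train)]
cost = sum(e ** 2 for e in errors) / (2 * len(x_train))
print(errors)  # [0.0, 0.0, 0.0]
print(cost)    # 0.0
```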
So visually you can see that the error or difference is equal to the height of this vertical line here when x is equal to 1, because this little line is the gap between the actual value of y and the value that the function f predicted, which is a bit further down here. So for this first example, when x is 1, f of x is 0.5, so the squared error on the first example is 0.5 minus 1 squared. Remember the cost function will sum over all the training examples in the training set. So let's go on to the second training example. When x is 2, the model is predicting f of x is 1, and the actual value of y is 2. So the error for the second example is equal to the height of this little line segment here, and the squared error is the square of the length of this line segment, so you get 1 minus 2 squared. Let's do the third example. Repeating this process, the error here, also shown by this line segment, is 1.5 minus 3 squared. Next we sum up all of these terms, which turns out to be equal to 3.5. Then we multiply this term by 1 over 2m, where m is the number of training examples. Since there are three training examples, m equals 3, so this is equal to 1 over 2 times 3, where this m here is 3. If we work out the math, this turns out to be 3.5 divided by 6, so the cost j is about 0.58. Let's go ahead and plot that over there on the right. Now let's try one more value for w. How about if w equals 0? What do the graphs for f and j look like when w is equal to 0? It turns out that if w is equal to 0, then f of x is just this horizontal line that is exactly on the x-axis, and so the error for each example is a line that goes from each point down to the horizontal line that represents f of x equals 0. So the cost j when w equals 0 is 1 over 2m times the quantity 1 squared plus 2 squared plus 3 squared, and that's equal to 1 over 6 times 14, which is about 2.33. So let's plot this point where w is 0 and j of 0 is 2.33 over here. And you can keep doing this for other values of w. 
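The arithmetic above for w equals 0.5 and w equals 0 can be reproduced in code; this sketch uses the same three-example training set.

```python
# Reproducing the arithmetic above on the training set (1,1), (2,2), (3,3).
x_train = [1.0, 2.0, 3.0]
y_train = [1.0, 2.0, 3.0]

def cost_j(w):
    m = len(x_train)
    return sum((w * xi - yi) ** 2 for xi, yi in zip(x_train, y_train)) / (2 * m)

print(round(cost_j(0.5), 2))  # 0.58, i.e. 3.5 / 6
print(round(cost_j(0.0), 2))  # 2.33, i.e. 14 / 6
```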
Since w can be any number, it can also be a negative value. So if w is negative 0.5, then the line f is a downward sloping line like this. It turns out that when w is negative 0.5, you end up with an even higher cost, around 5.25, which is this point up here. And you can continue computing the cost function for different values of w, and so on, and plot these, right? So it turns out that by computing a range of values, you can slowly trace out what the cost function j looks like. And that's what j is. To recap, each value of the parameter w corresponds to a different straight line fit, f of x, on the graph to the left. And for the given training set, that choice for a value of w corresponds to a single point on the graph on the right, because for each value of w, you can calculate the cost j of w. For example, when w equals 1, this corresponds to this straight line fit through the data. And it also corresponds to this point on the graph of j, where w equals 1 and the cost j of 1 equals 0. Whereas when w equals 0.5, this gives you this line, which has a smaller slope. And this line, in combination with the training set, corresponds to this point on the cost function graph, at w equals 0.5. So for each value of w, you wind up with a different line and its corresponding cost j of w, and you can use these points to trace out this plot on the right. Given this, how can you choose the value of w that results in the function f fitting the data well? Well, as you can imagine, choosing a value of w that causes j of w to be as small as possible seems like a good bet. j is the cost function that measures how big the squared errors are, so choosing a w that minimizes these squared errors, making them as small as possible, would give us a good model. In this example, if you were to choose the value of w that results in the smallest possible value of j of w, you'd end up picking w equals 1. And as you can see, that's actually a pretty good choice.
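Tracing out j over a sweep of w values, as described above, can be sketched like this; the sweep values are just the ones mentioned in the lecture plus one extra.

```python
# Tracing out J(w) for a range of w values and picking the w in the sweep
# with the smallest cost, on the training set (1,1), (2,2), (3,3).
x_train = [1.0, 2.0, 3.0]
y_train = [1.0, 2.0, 3.0]

def cost_j(w):
    m = len(x_train)
    return sum((w * xi - yi) ** 2 for xi, yi in zip(x_train, y_train)) / (2 * m)

sweep = [-0.5, 0.0, 0.5, 1.0, 1.5]
for w in sweep:
    print(w, round(cost_j(w), 2))  # w = -0.5 gives the highest cost, 5.25

best_w = min(sweep, key=cost_j)
print(best_w)  # 1.0
```

Plotting the (w, cost) pairs from such a sweep is exactly how the u-shaped curve of j gets traced out point by point.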
This results in a line that fits the training data very well. So that's how, in linear regression, you use the cost function to find the value of w that minimizes j. Or in the more general case, when we have parameters w and b rather than just w, you find the values of w and b that minimize j. So to summarize, you saw plots of both f and j and worked through how the two are related. As you vary w, or vary w and b, you end up with different straight lines. And when that straight line passes close to the data, the cost j is small. So the goal of linear regression is to find the parameters w, or w and b, that result in the smallest possible value for the cost function j. Now in this video, we worked through our example with a simplified problem, using only w. In the next video, let's visualize what the cost function looks like for the full version of linear regression, using both w and b. And you'll see some cool 3D plots. Let's go to the next video.
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=75eDNh4A07Q
1.13 Machine Learning Overview | Visualizing the cost function --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
In the last video, you saw one visualization of the cost function J of w, or J of w, b. Let's look at some further, richer visualizations, so you can get an even better intuition about what the cost function is doing. Here is what we've seen so far: there's the model, the model's parameters w and b, the cost function J of w and b, as well as the goal of linear regression, which is to minimize the cost function J of w and b over the parameters w and b. In the last video, we had temporarily set b to zero in order to simplify the visualizations. But now, let's go back to the original model with both parameters w and b, without setting b to be equal to zero. Same as last time, we want to get a visual understanding of the model function f of x, shown here on the left, and how it relates to the cost function J of w, b, shown here on the right. Here's a training set of house sizes and prices. Let's pick one possible function f of x, like this one. Here I've set w to 0.06 and b to 50, so f of x is 0.06 times x plus 50. Note that this is not a particularly good model for this training set; it's actually a pretty bad model. It seems to consistently underestimate housing prices. Given these values for w and b, let's look at what the cost function J of w and b may look like. Recall what we saw last time, when we had only w, because we temporarily set b to zero to simplify things. Back then, we had come up with a plot of the cost function that looked like this as a function of w only. So when we had only one parameter w, the cost function had this u-shaped curve, shaped a bit like a soup bowl. That sounds delicious. Now, in this housing price example that we have on this slide, we have two parameters, w and b, and so the plot becomes a little more complex. It turns out that the cost function also has a similar shape, like a soup bowl, except in three dimensions instead of two. In fact, depending on your training set, the cost function will look something like this.
To me, this looks like a soup bowl, maybe because I'm a little bit hungry. Or maybe to you, it looks like a curved dinner plate or a hammock. Actually, that sounds relaxing too. And there's your coconut drink. Maybe when you're done with this course, you should treat yourself to a vacation and relax in a hammock like this. What you see here is a 3D surface plot where the axes are labeled w and b. So as you vary w and b, which are the two parameters of the model, you get different values for the cost function J of w and b. This is a lot like the u-shaped curve you saw in the last video, except instead of having one parameter, w, as input to J, you now have two parameters, w and b, as inputs into this soup-bowl or hammock-shaped function J. And I just want to point out that any single point on the surface represents a particular choice of w and b. For example, if w was minus 10 and b was minus 15, then the height of the surface above this point is the value of J when w is minus 10 and b is minus 15. Now, in order to look even more closely at specific points, there's another way of plotting the cost function J that would be useful for visualization: rather than using these 3D surface plots, I'd like to take this exact same function J, so I'm not changing the function J at all, and plot it using something called a contour plot. If you've ever seen a topographical map showing how high different mountains are, the contours in a topographical map are basically horizontal slices of the landscape of, say, a mountain. This image is of Mount Fuji in Japan. I still remember my family visiting Mount Fuji when I was a teenager. It was a beautiful sight. And if you fly directly above the mountain, that's what this contour map looks like. It shows all the points that are at the same height, for different heights. At the bottom of this slide is a 3D surface plot of the cost function J.
I know it doesn't look very bowl-shaped, but it is actually a bowl, just very stretched out, which is why it looks like that. In an optional lab that is shortly to follow, you'll be able to see this in 3D and spin around the surface yourself, and it'll look more obviously bowl-shaped there. Next, here on the upper right is a contour plot of this exact same cost function as that shown at the bottom. The two axes on this contour plot are b on the vertical axis and w on the horizontal axis. What each of these ovals, also called ellipses, shows is the set of points on the 3D surface which are at the exact same height. In other words, the set of points which have the same value for the cost function J. So, to get a contour plot, you take the 3D surface at the bottom and you use a knife to slice it horizontally. You take horizontal slices of that 3D surface and get all the points that are at the same height. Therefore, each horizontal slice ends up being shown as one of these ellipses, or one of these ovals. So concretely, if you take that point, and that point, and that point, all three of these points have the same value for the cost function J, even though they have different values for w and b. And in the figure on the upper left, you see also that these three points correspond to different functions f, all three of which are actually pretty bad for predicting housing prices in this case. Now, the bottom of the bowl, where the cost function J is at a minimum, is this point right here, at the center of these concentric ovals. If you haven't seen contour plots much before, I'd like you to imagine, if you will, that you are flying high up above the bowl in an airplane or in a rocket ship, and you're looking straight down at it. So that is as if you set your computer monitor flat on your desk facing up, and the bowl shape is coming directly out of your screen, rising above your desk.
Imagine that the bowl shape grows out of your computer screen lying flat, like that, so that each of these ovals has the same height above your screen, and the minimum of the bowl is right down there in the center of the smallest oval. So it turns out that contour plots are a convenient way to visualize the 3D cost function J, but in a way that's plotted in just 2D. In this video, you saw how the 3D bowl-shaped surface plot can also be visualized as a contour plot. Using this visualization tool, in the next video, let's visualize some specific choices of w and b in the linear regression model so that we can see how these different choices affect the straight line you're fitting to the data. Let's go on to the next video.
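The surface plot and contour plot described above can be reproduced with a short script. This is a hedged sketch using matplotlib rather than the course's own plotting code, and the training set is hypothetical; the only point is that slicing the bowl horizontally gives the nested ellipses of the contour plot.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Hypothetical training set (sizes in 1000 sqft, prices in $1000s).
x_train = np.array([1.0, 1.5, 2.0, 2.5])
y_train = np.array([300.0, 350.0, 480.0, 540.0])

def cost(w, b):
    """Squared-error cost J(w, b) for the linear model f(x) = w*x + b."""
    return np.mean((w * x_train + b - y_train) ** 2) / 2.0

# Evaluate J over a grid of (w, b) values.
w_vals = np.linspace(-100.0, 400.0, 100)
b_vals = np.linspace(-200.0, 400.0, 100)
W, B = np.meshgrid(w_vals, b_vals)
J = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        J[i, j] = cost(W[i, j], B[i, j])

fig = plt.figure(figsize=(10, 4))
# Left: the 3D bowl-shaped surface plot.
ax1 = fig.add_subplot(1, 2, 1, projection="3d")
ax1.plot_surface(W, B, J, cmap="viridis")
ax1.set_xlabel("w"); ax1.set_ylabel("b"); ax1.set_zlabel("J(w, b)")
# Right: the same J as a contour plot; each ellipse is a horizontal
# slice of the bowl, i.e. a set of (w, b) points with equal cost.
ax2 = fig.add_subplot(1, 2, 2)
ax2.contour(W, B, J, levels=20)
ax2.set_xlabel("w"); ax2.set_ylabel("b")
fig.savefig("cost_surface_and_contour.png")
```

The smallest ellipse in the contour panel surrounds the grid point where J is lowest, matching the bottom of the bowl in the surface panel.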
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=GY0KyF3h8hA
1.14 Machine Learning Overview | Visualizing examples --[Machine Learning | Andrew Ng]
Let's look at some more visualizations of w and b. Here's one example. Over here, you have a particular point on the graph of J. For this point, w equals about negative 0.15 and b equals about 800. So this point corresponds to one pair of values for w and b that yield a particular cost J. And in fact, this particular pair of values for w and b corresponds to this function f of x, which is this line that you can see on the left. This line intersects the vertical axis at 800 because b equals 800, and the slope of the line is negative 0.15 because w equals negative 0.15. Now, if you look at the data points in the training set, you may notice that this line is not a good fit to the data. For this function f of x, with these values of w and b, many of the predictions for the value of y are quite far from the actual target value of y that is in the training data. Since this line is not a good fit, if you look at the graph of J, the cost of this line is out here, which is pretty far from the minimum. This is a pretty high cost because this choice of w and b is just not that good a fit to the training set. Now let's look at another example with a different choice of w and b. Here is another function that is still not a great fit for the data, but maybe slightly less bad. This point here represents the cost for this particular pair of w and b that creates that line. The value of w is equal to 0 and the value of b is about 360. This pair of parameters corresponds to this function, which is a flat line, because f of x equals 0 times x plus 360. I hope that makes sense. Let's look at yet another example. Here's one more choice for w and b, and with these values, you end up with this line f of x. Again, not a great fit to the data; it is actually further away from the minimum compared to the previous example. And remember that the minimum is at the center of that smallest ellipse.
Now, in this example, if you look at f of x on the left, it looks like a pretty good fit to the training set. You can see on the right that this point representing the cost is very close to the center of the small ellipse. It's not quite exactly the minimum, but it's pretty close. For this value of w and b, you get this line f of x. You can see that if you measure the vertical distances between the data points and the predicted values on the straight line, you get the error for each data point. The sum of squared errors for all of these data points is pretty close to the minimum possible sum of squared errors among all possible straight-line fits. I hope that by looking at these figures, you can get a better sense of how different choices of the parameters affect the line f of x, and how this corresponds to different values for the cost J. And hopefully, you can see how the better-fit lines correspond to points on the graph of J that are closer to the minimum possible cost for this cost function J of w and b. In the optional lab that follows this video, you get to run some code. And remember, all of the code is given, so you just need to hit Shift+Enter to run it and take a look at it. The lab will show you how the cost function is implemented in code. And given a small training set and different choices for the parameters, you'll be able to see how the cost varies depending on how well the model fits the data. In the optional lab, you can also play with an interactive contour plot. Check this out. You can use your mouse cursor to click anywhere on the contour plot, and you'll see the straight line defined by the values you chose for the parameters w and b. You'll also see a dot appear on the 3D surface plot showing the cost. Finally, the optional lab also has a 3D surface plot that you can manually rotate and spin around using your mouse cursor to take a better look at what the cost function looks like. I hope you enjoyed playing with the optional lab.
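The comparison made above, that better-fitting lines land closer to the bottom of the bowl, can be checked numerically. The sketch below uses a hypothetical training set and, instead of reading the contour plot by eye, finds a near-optimal line with ordinary least squares (np.polyfit); the lecture's flat line f(x) = 0x + 360 serves as the poor fit.

```python
import numpy as np

# Hypothetical training set (sizes in 1000 sqft, prices in $1000s).
x = np.array([1.0, 1.5, 2.0, 2.5])
y = np.array([300.0, 350.0, 480.0, 540.0])

def J(w, b):
    """Squared-error cost for the line f(x) = w*x + b."""
    return np.mean((w * x + b - y) ** 2) / 2.0

# The flat line from the lecture's second example: f(x) = 0*x + 360.
cost_flat = J(0.0, 360.0)

# A near-optimal line, found here by ordinary least squares.
w_best, b_best = np.polyfit(x, y, 1)
cost_best = J(w_best, b_best)

print(cost_flat, cost_best)
# The better-fitting line sits much closer to the bottom of the bowl:
assert cost_best < cost_flat
```

On the contour plot, (0, 360) would sit on a large outer ellipse while (w_best, b_best) sits near the center of the smallest one.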
Now, in linear regression, rather than having to manually try to read the contour plot for the best values of w and b, which isn't really a good procedure and also won't work once we get to more complex machine learning models, what you really want is an efficient algorithm that you can write in code for automatically finding the values of the parameters w and b that give you the best-fit line that minimizes the cost function J. There is an algorithm for doing this called gradient descent. This algorithm is one of the most important algorithms in machine learning. Gradient descent and variations on gradient descent are used to train not just linear regression, but some of the biggest and most complex models in all of AI. So, let's go to the next video to dive into this really important algorithm called gradient descent.
https://www.youtube.com/watch?v=6W3tzcOnWfQ
1.15 Machine Learning Overview | Gradient descent --[Machine Learning | Andrew Ng]
Welcome back. In the last video, we saw visualizations of the cost function J and how you can try different choices of the parameters w and b and see what cost value that gets you. It would be nice if we had a more systematic way to find the values of w and b that result in the smallest possible cost J of w, b. It turns out there's an algorithm called gradient descent that you can use to do that. Gradient descent is used all over the place in machine learning, not just for linear regression, but for training, for example, some of the most advanced neural network models, also called deep learning models. Deep learning models are something you'll learn about in the second course. So learning this tool of gradient descent will set you up with one of the most important building blocks in machine learning. Here's an overview of what we'll do with gradient descent. You have the cost function J of w, b right here that you want to minimize. In the example we've seen so far, this is a cost function for linear regression, but it turns out that gradient descent is an algorithm that you can use to try to minimize any function, not just a cost function for linear regression. Just to make this discussion on gradient descent more general, it turns out that gradient descent applies to more general functions, including other cost functions that work with models that have more than two parameters. So for instance, if you have a cost function J as a function of w1, w2, up to wn, and b, your objective is to minimize J over the parameters w1 to wn and b. In other words, you want to pick values for w1 through wn and b that give you the smallest possible value of J. It turns out that gradient descent is an algorithm that you can apply to try to minimize this cost function J as well. What you're going to do is just start off with some initial guesses for w and b. In linear regression, it won't matter too much what the initial values are, so a common choice is to set them both to zero.
So for example, you can set w to zero and b to zero as the initial guess. With the gradient descent algorithm, what you're going to do is you'll keep on changing the parameters w and b a bit every time to try to reduce the cost j of w, b until hopefully j settles at or near a minimum. One thing I should note is that for some functions j that may not be a bowl shape or a hammock shape, it is possible for there to be more than one possible minimum. Let's take a look at an example of a more complex surface plot j to see what gradient descent is doing. This function is not a squared error cost function. For linear regression with the squared error cost function, you always end up with a bowl shape or a hammock shape. But this is a type of cost function you might get if you're training a neural network model. Notice the axes. That is w and b on the bottom axes. For different values of w and b, you get different points on the surface j of w, b, where the height of the surface at some point is the value of the cost function. Now, let's imagine that this surface plot is actually a view of a slightly hilly outdoor park or a golf course, where the high points are hills and the low points are valleys like so. And I'd like you to imagine, if you will, that you're physically standing at this point on the hill. And if it helps you to relax, imagine that this hill has lots of really nice green grass and butterflies and flowers; it's a really nice hill. And your goal is to start up here and get to the bottom of one of these valleys as efficiently as possible. So what the gradient descent algorithm does is you're going to spin around 360 degrees and look around and ask yourself, if I were to take a tiny little baby step in one direction, and I want to go downhill as quickly as possible toward one of these valleys, what direction do I choose to take that baby step?
Well, if you want to walk down this hill as efficiently as possible, it turns out that if you're standing at this point in the hill and you look around, you may notice that the best direction to take your next step downhill is roughly that direction. Mathematically, this is the direction of steepest descent. And it means that when you take a tiny baby little step, this takes you downhill faster than a tiny little baby step you could have taken in any other direction. So after taking this first step, you're now at this point on the hill over here. Now, let's repeat the process. Standing at this new point, you're going to again spin around 360 degrees and ask yourself, in what direction would I take the next little baby step in order to move downhill? And if you do that and take another step, you end up moving a bit in that direction and you can keep going. From this new point, you can again look around and decide what direction would take you downhill most quickly. Take another step, another step, and so on until you find yourself at the bottom of this valley at this local minimum right here. What you just did was go through multiple steps of gradient descent. It turns out gradient descent has an interesting property. Remember that you can choose a starting point at the surface by choosing starting values for the parameters w and b. When you perform gradient descent a moment ago, you had started at this point over here, right? Now, imagine if you try gradient descent again, but this time you choose a different starting point by choosing parameters that place your starting point just a couple steps to the right over here. If you then repeat the gradient descent process, which means you look around, take a little step in the direction of the steepest descent, so you end up here. Then you again look around, take another step and so on. 
And if you were to run gradient descent the second time, starting just a couple steps to the right of where we did it the first time, then you will end up in a totally different valley, this different minimum over here on the right. The bottoms of both the first and the second valleys are called local minima, because if you start going down the first valley, gradient descent won't lead you to the second valley. And the same is true if you started going down the second valley, you will stay in that second minimum and not find your way into the first local minimum. So this is an interesting property of the gradient descent algorithm, and you'll see more about this later. So in this video, you saw how gradient descent helps you go downhill. In the next video, let's look at the mathematical expressions that you can implement to make gradient descent work. Let's go on to the next video.
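The two-valleys behavior described above is easy to reproduce numerically. Here's a minimal sketch (the function is an illustrative toy chosen for this example, not one from the lecture): gradient descent on a non-convex function of a single parameter w, run from two nearby starting points that settle in different local minima.

```python
# Minimal sketch of gradient descent on a non-convex function.
# The toy function j(w) = w**4 - 4*w**2 + w has two valleys
# (two local minima), so different starting points can settle
# in different minima -- the local-minimum property from the video.

def dj_dw(w):
    # derivative of j(w) = w**4 - 4*w**2 + w
    return 4 * w**3 - 8 * w + 1

def gradient_descent(w, alpha=0.01, steps=1000):
    for _ in range(steps):
        w = w - alpha * dj_dw(w)  # take a small step downhill
    return w

# Two starting points on opposite sides of the "hilltop" near w = 0.13:
left_min = gradient_descent(w=-0.5)   # settles in the left valley
right_min = gradient_descent(w=0.5)   # settles in the right valley
print(left_min, right_min)            # one negative, one positive minimum
```

Neither run can reach the other valley: once you start walking down one slope, every step of steepest descent keeps you in that valley, which is exactly the property the video describes.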
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=Dz4JZzgn-hg
1.16 Machine Learning Overview | Implementing gradient descent --[Machine Learning | Andrew Ng]
Let's take a look at how you can actually implement the gradient descent algorithm. Let me write down the gradient descent algorithm. Here it is. On each step, w, the parameter, is updated to the old value of w minus alpha times this term d over dw of the cost function j of w, b. So what this expression is saying is update your parameter w by taking the current value of w and adjusting it a small amount, which is this expression on the right, minus alpha times this term over here. So if you feel like there's a lot going on in this equation, it's okay. Don't worry about it. We'll unpack it together. First, this equal notation here. Notice I said we're assigning w a value using this equal sign. So in this context, this equal sign is the assignment operator. Specifically, in this context, if you write code that says a equals c, it means take the value of c and store it in your computer in the variable a. If you write a equals a plus one, it means set the value of a to be equal to a plus one, or increment the value of a by one. So the assignment operator in coding is different from truth assertions in mathematics, where if I write a equals c, I'm asserting, that is, I am claiming, that the values of a and c are equal to each other. And hopefully I will never write a truth assertion a equals a plus one because, you know, that just can't possibly be true. So in Python and in other programming languages, truth assertions are sometimes written as equals equals. So you may see code that says a equals equals c if you're testing whether a is equal to c. But in math notation, as we conventionally use it, like in these videos, the equal sign can be used for either assignments or for truth assertions. And so I try to make sure it's clear when I write an equal sign whether we're assigning a value to a variable or whether we're asserting the truth of the equality of two values. Now let's dive more deeply into what the symbols in this equation mean.
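In Python, the assignment-versus-assertion distinction just described looks like this (a tiny illustration, not code from the course):

```python
a = 5          # assignment: store the value 5 in the variable a
a = a + 1      # assignment: increment a by one; a is now 6
print(a == 6)  # == is the equality test (truth assertion); prints True
```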
The symbol here is the Greek letter alpha, and in this equation, alpha is also called the learning rate. The learning rate is usually a small positive number between zero and one, and it might be, say, 0.01. What alpha does is it basically controls how big of a step you take downhill. So if alpha is very large, then that corresponds to a very aggressive gradient descent procedure where you're trying to take huge steps downhill. And if alpha is very small, then you'd be taking small baby steps downhill. We'll come back later to delve more deeply into how to choose a good learning rate alpha. And finally, this term here, that's the derivative term of the cost function J. Let's not worry about the details of this derivative right now, but later on you'll get to see more about the derivative term. But for now, you can think of this derivative term that I drew a magenta box around as telling you in which direction you want to take your baby step. And in combination with the learning rate alpha, it also determines the size of the steps you want to take downhill. Now I do want to mention that derivatives come from calculus, and even if you aren't familiar with calculus, don't worry about it. Even without knowing any calculus, you'll be able to figure out all you need to know about this derivative term in this video and the next. One more thing. Remember your model has two parameters, not just W, but also B. So you also have an assignment operation to update the parameter B that looks very similar. B is assigned the old value of B minus the learning rate alpha times this slightly different derivative term D over dB of J of W, B. So remember in the graph of the surface plot where you're taking baby steps until you get to the bottom of the valley? Well, for the gradient descent algorithm, you're going to repeat these two update steps until the algorithm converges.
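Putting the pieces together, the update rule w = w - alpha * dj_dw can be sketched for a single parameter. The cost function here is an assumed toy example, j(w) = (w - 3)**2, not one from the lecture; its derivative is 2*(w - 3).

```python
alpha = 0.01  # learning rate: a small positive number

def dj_dw(w):
    # derivative of the assumed toy cost j(w) = (w - 3)**2
    return 2 * (w - 3)

w = 0.0  # initial guess
for _ in range(500):
    w = w - alpha * dj_dw(w)  # repeat the update until convergence

print(w)  # w approaches 3, the minimum of the toy cost
```

With a larger alpha the steps are bigger and the loop converges in fewer iterations, up to the point where the steps overshoot the minimum; choosing a good alpha is discussed later in the course.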
And by converges, I mean that you reach the point at a local minimum where the parameters W and B no longer change much with each additional step that you take. Now there's one more subtle detail about how to correctly implement gradient descent. You're going to update two parameters W and B, right? So this update takes place for both parameters W and B. One important detail is that for gradient descent, you want to simultaneously update W and B, meaning you want to update both parameters at the same time. What I mean by that is that in this expression, you're going to update W from the old W to a new W, and you're also updating B from its old value to a new value B. And the way to implement this is to compute the right side, computing this thing for W and B and simultaneously at the same time, update W and B to the new values. So let's take a look at what this means. Here's the correct way to implement gradient descent, which does a simultaneous update. This sets a variable tempW equal to that expression, which is W minus that term here. Let's also set another variable tempB to that, which is B minus that term. So you compute both right-hand sides, both updates, and store them into variables tempW and tempB. Then you copy the value of tempW into W, and you also copy the value of tempB into B. Now one thing you may notice is that this value of W is from before W gets updated. Here notice that the pre-update W is what goes into the derivative term over here, okay? In contrast, here's an incorrect implementation of gradient descent that does not do a simultaneous update. In this incorrect implementation, we compute tempW same as before, so far that's okay. And now here's where things start to differ. We then update W with the value in tempW before calculating the new value for the other parameter B. Next, we calculate tempB as B minus that term here. And finally, we update B with the value in tempB. 
The difference between the right-hand side and the left-hand side implementations is that if you look over here, this W has already been updated to this new value, and it's this updated W that actually goes into the cost function J of W, B. It means that this term here on the right is not the same as this term over here that you see on the left. And that also means this tempB term on the right is not quite the same as the tempB term on the left, and thus this updated value for B on the right is not the same as this updated value for variable B on the left. The way that gradient descent is implemented in code, it actually turns out to be more natural to implement it the correct way with simultaneous updates. When you hear someone talk about gradient descent, they always mean the gradient descent where you perform a simultaneous update of the parameters. If, however, you were to implement non-simultaneous updates, it turns out it will probably work more or less anyway, but doing it this way isn't really the correct way to implement it; it's actually some other algorithm with different properties. So I would advise you to just stick to the correct simultaneous update and not use the incorrect version on the right. So that's gradient descent. In the next video, we'll go into the details of the derivative term, which you saw in this video but that we didn't really talk about in detail. Derivatives are part of calculus, and again, if you're not familiar with calculus, don't worry about it. You won't need to know calculus at all in order to complete this course or this specialization, and you'll have all the information you need in order to implement gradient descent. Coming up in the next video, we'll go over derivatives together, and you'll come away with the intuition and knowledge you need to be able to implement and apply gradient descent yourself. I think that'll be an exciting thing for you to know how to implement. So let's go on to the next video to see how to do that.
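The correct (simultaneous) update for both parameters can be sketched end to end. The dataset and the squared error cost below are assumed for illustration; they are not from the lecture.

```python
# Simultaneous update of w and b for linear regression with squared
# error cost j(w, b) = (1 / (2 * m)) * sum((w * x + b - y)**2).
x = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0]  # this toy data is fit exactly by w = 2, b = 0
m = len(x)
alpha = 0.05

def dj_dw(w, b):
    return sum((w * xi + b - yi) * xi for xi, yi in zip(x, y)) / m

def dj_db(w, b):
    return sum(w * xi + b - yi for xi, yi in zip(x, y)) / m

w, b = 0.0, 0.0
for _ in range(5000):
    # Correct: both derivatives are computed from the OLD (w, b)...
    temp_w = w - alpha * dj_dw(w, b)
    temp_b = b - alpha * dj_db(w, b)
    # ...and only then are both parameters updated, simultaneously.
    w, b = temp_w, temp_b

print(w, b)  # approaches w = 2, b = 0
```

The incorrect version would assign w = temp_w before computing dj_db, so b's update would see the new w instead of the old one, which is the subtle difference the video warns about.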
If however, you were to implement non-simultaneous updates, it turns out it will probably work"}, {"start": 535.88, "end": 540.6, "text": " more or less anyway, but doing it this way isn't really the correct way to implement"}, {"start": 540.6, "end": 544.68, "text": " it, it's actually some other algorithm with different properties."}, {"start": 544.68, "end": 550.06, "text": " So I would advise you to just stick to the correct simultaneous update and not use this"}, {"start": 550.06, "end": 553.08, "text": " incorrect version on the right."}, {"start": 553.08, "end": 555.24, "text": " So that's gradient descent."}, {"start": 555.24, "end": 559.68, "text": " In the next video, we'll go into details of the derivative term, which you saw in this"}, {"start": 559.68, "end": 563.96, "text": " video, but that we didn't really talk about in detail."}, {"start": 563.96, "end": 568.24, "text": " Derivatives are part of calculus, and again, if you're not familiar with calculus, don't"}, {"start": 568.24, "end": 569.5400000000001, "text": " worry about it."}, {"start": 569.5400000000001, "end": 575.0400000000001, "text": " You won't need to know calculus at all in order to complete this course or this specialization,"}, {"start": 575.0400000000001, "end": 580.36, "text": " and you'll have all the information you need in order to implement gradient descent."}, {"start": 580.36, "end": 585.2800000000001, "text": " Coming up in the next video, we'll go over derivatives together, and you come away with"}, {"start": 585.2800000000001, "end": 591.0, "text": " the intuition and knowledge you need to be able to implement and apply gradient descent"}, {"start": 591.0, "end": 592.32, "text": " yourself."}, {"start": 592.32, "end": 595.9200000000001, "text": " I think that'll be an exciting thing for you to know how to implement."}, {"start": 595.92, "end": 622.92, "text": " So let's go on to the next video to see how to do that."}]
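The simultaneous update described in the transcript above (compute both right-hand sides into tempW and tempB using the old parameter values, then copy both at once) can be sketched in code. This is an illustrative sketch, not code from the course: the tiny dataset and the mean-squared-error cost for a linear model are assumptions chosen to make the update rule concrete.

```python
# Sketch of one gradient descent step with a simultaneous update of w and b.
# The cost is assumed to be the mean squared error of a linear model
# f(x) = w * x + b over a small illustrative dataset.

def gradient_step(w, b, alpha, xs, ys):
    """Return (w, b) after one simultaneous gradient descent step."""
    m = len(xs)
    # Partial derivatives of the mean-squared-error cost J(w, b).
    dj_dw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / m
    dj_db = sum((w * x + b - y) for x, y in zip(xs, ys)) / m
    # Compute BOTH right-hand sides first, storing them in temp variables,
    # so the pre-update w is what goes into each derivative term...
    temp_w = w - alpha * dj_dw
    temp_b = b - alpha * dj_db
    # ...then copy temp_w into w and temp_b into b at the same time.
    return temp_w, temp_b
```

Repeating this step until w and b stop changing much with each additional step is what the transcript calls convergence.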
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=rGvoO8U2Ozc
1.17 Machine Learning Overview | Gradient descent intuition --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
Now, let's dive more deeply into gradient descent to gain better intuition about what it's doing and why it might make sense. Here's the gradient descent algorithm that you saw in the previous video. And as a reminder, this variable, this Greek symbol alpha, is the learning rate. And the learning rate controls how big of a step you take when updating the model's parameters w and b. So this term here, this d over dw, this is a derivative term. And by convention in math, this d is written with this funny font here. And in case anyone watching this has a PhD in math or is an expert in multivariate calculus, they may be wondering, that's not the derivative, that's the partial derivative. And yes, they'd be right. But for the purposes of implementing a machine learning algorithm, I'm just going to call it a derivative, and don't worry about these little distinctions. So what we're going to focus on now is getting more intuition about what this learning rate and this derivative are doing, and why, when multiplied together like this, they result in updates to parameters w and b that make sense. In order to do this, let's use a slightly simpler example, where we work on minimizing just one parameter. So let's say that you have a cost function j of just one parameter w, where w is a number. This means that gradient descent now looks like this: w is updated to w minus the learning rate alpha times d over dw of j of w. And you're trying to minimize the cost by adjusting the parameter w. So this is like our previous example, where we had temporarily set b equal to zero. With one parameter w instead of two, you can look at two-dimensional graphs of the cost function j instead of three-dimensional graphs. Let's look at what gradient descent does on this function j of w. Here, the horizontal axis is the parameter w, and the vertical axis is the cost j of w. Now, let's initialize gradient descent with some starting value for w. Let's initialize it at this location.
So imagine that you start off at this point right here on the function j. What gradient descent will do is update w to be w minus the learning rate alpha times d over dw of j of w. Let's look at what this derivative term here means. A way to think about the derivative at this point on the curve is to draw a tangent line, which is a straight line that touches the curve at that point. In math, the slope of this line is the derivative of the function j at this point. And to get the slope, you can draw a little triangle like this. If you compute the height divided by the width of this triangle, that is the slope. So for example, the slope might be 2 over 1, for instance. And when the tangent line is pointing up and to the right, the slope is positive, which means that this derivative is a positive number, so it's greater than zero. And so the updated w is going to be w minus the learning rate times some positive number. The learning rate is always a positive number. So if you take w minus a positive number, you end up with a new value for w that is smaller. So on the graph, we are moving to the left; we are decreasing the value of w. And you may notice that this is the right thing to do if your goal is to decrease the cost j, because when we move toward the left on this curve, the cost j decreases and you're getting closer to the minimum for j, which is over here. So, so far, gradient descent seems to be doing the right thing. Now, let's look at another example. Let's take the same function j of w as above. And now let's say that you initialize gradient descent at a different location, say by choosing a starting value for w that's over here on the left. So that's this point of the function j. Now the derivative term, remember, is d over dw of j of w. And when we look at the tangent line at this point over here, the slope of this line is the derivative of j at this point. But this tangent line is sloping down and to the right.
And so this line sloping down and to the right has a negative slope. In other words, the derivative of j at this point is a negative number. For instance, if you draw a triangle, then the height like this is negative two and the width is one. So the slope is negative two divided by one, which is negative two, which is a negative number. So when you update w, you get w minus the learning rate times a negative number. And so this means you subtract from w a negative number. But subtracting a negative number means adding a positive number. And so you end up increasing w, because subtracting a negative number is the same as adding a positive number to w. So this step of gradient descent causes w to increase, which means you're moving to the right of the graph, and your cost j has decreased down to here. And again, it looks like gradient descent is doing something reasonable; it's getting you closer to the minimum. So hopefully, these last two examples show some of the intuition behind what the derivative term is doing, and why this helps gradient descent change w to get you closer to the minimum. I hope this video gave you some sense of why the derivative term in gradient descent makes sense. One other key quantity in the gradient descent algorithm is the learning rate alpha. How do you choose alpha? What happens if it's too small? What happens if it's too big? In the next video, let's take a deeper look at the parameter alpha to help build intuitions about what it does, as well as how to make a good choice for a good value of alpha for your implementation of gradient descent.
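The two tangent-line examples in the transcript above can be checked numerically. A minimal sketch, assuming a bowl-shaped stand-in cost J(w) = (w - 3)^2 with its minimum at w = 3 (this particular function is an assumption, not one from the lecture): a positive derivative moves w to the left, a negative derivative moves w to the right, and either way w heads toward the minimum.

```python
# One-parameter gradient descent on the assumed cost J(w) = (w - 3)**2.

def dj_dw(w):
    # Derivative of J(w) = (w - 3)**2.
    return 2 * (w - 3)

def descend(w, alpha=0.1, steps=50):
    for _ in range(steps):
        # Positive slope -> w decreases; negative slope -> w increases.
        w = w - alpha * dj_dw(w)
    return w
```

Starting at w = 10 (slope positive) the updates decrease w; starting at w = -4 (slope negative) they increase it; both runs end near the minimum at w = 3.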
[{"start": 0.0, "end": 7.88, "text": " Now, let's dive more deeply into gradient descent to gain better intuition about what"}, {"start": 7.88, "end": 10.84, "text": " it's doing and why it might make sense."}, {"start": 10.84, "end": 15.52, "text": " Here's the gradient descent algorithm that you saw in the previous video."}, {"start": 15.52, "end": 21.56, "text": " And as a reminder, this variable, this Greek symbol alpha is the learning rate."}, {"start": 21.56, "end": 25.84, "text": " And the learning rate controls how big of a step you take when updating the model's"}, {"start": 25.84, "end": 28.8, "text": " parameters w and b."}, {"start": 28.8, "end": 35.0, "text": " So this term here, this d over dw, this is a derivative term."}, {"start": 35.0, "end": 40.72, "text": " And by convention in math, this d is written with this funny font here."}, {"start": 40.72, "end": 45.519999999999996, "text": " And in case anyone watching this has a PhD in math or is an expert in multivariate calculus,"}, {"start": 45.519999999999996, "end": 49.400000000000006, "text": " they may be wondering, that's not the derivative, that's the partial derivative."}, {"start": 49.400000000000006, "end": 50.88, "text": " And yes, they'd be right."}, {"start": 50.88, "end": 55.040000000000006, "text": " But for the purposes of implementing a machine learning algorithm, I'm just going to call"}, {"start": 55.04, "end": 60.2, "text": " it derivative, and don't worry about these little distinctions."}, {"start": 60.2, "end": 66.16, "text": " So what we're going to focus on now is get more intuition about what this learning rate"}, {"start": 66.16, "end": 72.72, "text": " and what this derivative are doing, and why when multiplied together like this, it results"}, {"start": 72.72, "end": 77.74, "text": " in updates to parameters w and b that make sense."}, {"start": 77.74, "end": 83.8, "text": " In order to do this, let's use a slightly simpler example, where we work on minimizing"}, 
{"start": 83.8, "end": 86.56, "text": " just one parameter."}, {"start": 86.56, "end": 94.2, "text": " So let's say that you have a cost function j of just one parameter w, where w is a number."}, {"start": 94.2, "end": 97.88, "text": " This means that gradient descent now looks like this."}, {"start": 97.88, "end": 107.16, "text": " w is updated to w minus the learning rate alpha times d over dw of j of w."}, {"start": 107.16, "end": 112.74, "text": " And you're trying to minimize the cost by adjusting the parameter w."}, {"start": 112.74, "end": 119.44, "text": " So this is like our previous example, where we had temporarily set b equal to zero."}, {"start": 119.44, "end": 124.6, "text": " With one parameter w instead of two, you can look at two dimensional graphs of the cost"}, {"start": 124.6, "end": 128.4, "text": " function j instead of three dimensional graphs."}, {"start": 128.4, "end": 135.4, "text": " Let's look at what gradient descent does on this function j of w."}, {"start": 135.4, "end": 142.4, "text": " Here on the horizontal axis is parameter w, and on the vertical axis is the cost j of"}, {"start": 142.4, "end": 143.4, "text": " w."}, {"start": 143.4, "end": 148.04, "text": " Now, let's initialize gradient descent with some starting value for w."}, {"start": 148.04, "end": 151.04000000000002, "text": " Let's initialize it at this location."}, {"start": 151.04000000000002, "end": 156.56, "text": " So imagine that you start off at this point right here on the function j."}, {"start": 156.56, "end": 164.8, "text": " What gradient descent will do is it will update w to be w minus learning rate alpha times"}, {"start": 164.8, "end": 168.32, "text": " d over dw of j of w."}, {"start": 168.32, "end": 172.08, "text": " Let's look at what this derivative term here means."}, {"start": 172.08, "end": 179.8, "text": " A way to think about the derivative at this point on the line is to draw a tangent line,"}, {"start": 179.8, "end": 184.16000000000003, "text": 
" which is a straight line that touches this curve at that point."}, {"start": 184.16000000000003, "end": 190.64000000000001, "text": " In math, the slope of this line is the derivative of the function j at this point."}, {"start": 190.64000000000001, "end": 195.08, "text": " And to get the slope, you can draw a little triangle like this."}, {"start": 195.08, "end": 201.46, "text": " And if you compute the height divided by the width of this triangle, that is the slope."}, {"start": 201.46, "end": 207.56, "text": " So for example, the slope might be, you know, 2 over 1, for instance."}, {"start": 207.56, "end": 212.96, "text": " And when the tangent line is pointing up into the right, the slope is positive, which means"}, {"start": 212.96, "end": 219.48000000000002, "text": " that this derivative is a positive number, so it's greater than zero."}, {"start": 219.48000000000002, "end": 227.44, "text": " And so the updated w is going to be w minus the learning rate times some positive number."}, {"start": 227.44, "end": 230.54000000000002, "text": " The learning rate is always a positive number."}, {"start": 230.54, "end": 237.04, "text": " So if you take w minus a positive number, you end up with a new value for w that is"}, {"start": 237.04, "end": 238.7, "text": " smaller."}, {"start": 238.7, "end": 245.84, "text": " So on the graph, we are moving to the left, we are decreasing the value of w."}, {"start": 245.84, "end": 249.79999999999998, "text": " And you may notice that this is the right thing to do if your goal is to decrease the"}, {"start": 249.79999999999998, "end": 256.03999999999996, "text": " cost j, because when we move toward the left on this curve, the cost j decreases and you're"}, {"start": 256.04, "end": 261.04, "text": " getting closer to the minimum for j, which is over here."}, {"start": 261.04, "end": 265.32, "text": " So so far, gradient descent seems to be doing the right thing."}, {"start": 265.32, "end": 268.96000000000004, "text": " Now, 
let's look at another example."}, {"start": 268.96000000000004, "end": 271.90000000000003, "text": " Let's take the same function j of w as above."}, {"start": 271.90000000000003, "end": 277.72, "text": " And now let's say that you initialize gradient descent at a different location, say by choosing"}, {"start": 277.72, "end": 282.48, "text": " a starting value for w that's over here on the left."}, {"start": 282.48, "end": 285.76, "text": " So that's this point of the function j."}, {"start": 285.76, "end": 293.76, "text": " Now the derivative term, remember is d over dw of j of w."}, {"start": 293.76, "end": 298.44, "text": " And when we look at the tangent line at this point over here, the slope of this line is"}, {"start": 298.44, "end": 301.03999999999996, "text": " the derivative of j at this point."}, {"start": 301.03999999999996, "end": 305.09999999999997, "text": " But this tangent line is sloping down into the right."}, {"start": 305.09999999999997, "end": 309.82, "text": " And so this line sloping down into the right has a negative slope."}, {"start": 309.82, "end": 314.52, "text": " In other words, the derivative j at this point is a negative number."}, {"start": 314.52, "end": 320.24, "text": " For instance, if you draw a triangle, then the height like this is negative two and the"}, {"start": 320.24, "end": 322.03999999999996, "text": " width is one."}, {"start": 322.03999999999996, "end": 327.84, "text": " So the slope is negative two divided by one, which is negative two, which is a negative"}, {"start": 327.84, "end": 329.64, "text": " number."}, {"start": 329.64, "end": 336.47999999999996, "text": " So when you update w, you get w minus the learning rate times a negative number."}, {"start": 336.47999999999996, "end": 341.68, "text": " And so this means you subtract from w a negative number."}, {"start": 341.68, "end": 347.44, "text": " But subtracting a negative number means adding a positive number."}, {"start": 347.44, "end": 354.68, "text": " 
And so you end up increasing w, because subtracting a negative number is the same as adding a"}, {"start": 354.68, "end": 357.9, "text": " positive number to w."}, {"start": 357.9, "end": 363.68, "text": " So this step of gradient descent causes w to increase, which means you're moving to"}, {"start": 363.68, "end": 369.24, "text": " the right of the graph, and your cos j has decreased down to here."}, {"start": 369.24, "end": 374.56, "text": " And again, it looks like gradient descent is doing something reasonable, it's getting"}, {"start": 374.56, "end": 377.32, "text": " you closer to the minimum."}, {"start": 377.32, "end": 383.68, "text": " So hopefully, these last two examples show some of the intuition behind what the derivative"}, {"start": 383.68, "end": 390.32, "text": " term is doing, and why this helps gradient descent change w to get you closer to the"}, {"start": 390.32, "end": 391.32, "text": " minimum."}, {"start": 391.32, "end": 397.04, "text": " I hope this video gave you some sense for why the derivative term in gradient descent"}, {"start": 397.04, "end": 398.54, "text": " makes sense."}, {"start": 398.54, "end": 403.52000000000004, "text": " One other key quantity in the gradient descent algorithm is the learning rate alpha."}, {"start": 403.52000000000004, "end": 404.68, "text": " How do you choose alpha?"}, {"start": 404.68, "end": 406.16, "text": " What happens if it's too small?"}, {"start": 406.16, "end": 407.88, "text": " What happens if it's too big?"}, {"start": 407.88, "end": 413.22, "text": " In the next video, let's take a deeper look at the parameter alpha to help build intuitions"}, {"start": 413.22, "end": 418.28000000000003, "text": " about what it does, as well as how to make a good choice for a good value of alpha for"}, {"start": 418.28, "end": 428.96, "text": " your implementation of gradient descent."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=Eu8lt4j9xiU
1.18 Machine Learning Overview | Learning rate --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
The choice of the learning rate alpha will have a huge impact on the efficiency of your implementation of gradient descent, and if alpha, the learning rate, is chosen poorly, gradient descent may not even work at all. In this video, let's take a deeper look at the learning rate. This will also help you choose better learning rates for your implementations of gradient descent. So, here again is the gradient descent rule. W is updated to be W minus the learning rate alpha times the derivative term. To learn more about what the learning rate alpha is doing, let's see what could happen if the learning rate alpha is either too small or too large. For the case where the learning rate is too small, here's a graph where the horizontal axis is W and the vertical axis is the cost j, and here's a graph of the function j of W. Let's start gradient descent at this point here. If the learning rate is too small, then what happens is that you multiply your derivative term by some really, really, really small number. So, you're going to be multiplying by a number alpha that's really small, like 0.0000001, and so you end up taking a very small baby step like that. Then from this point, you're going to take another tiny, tiny little baby step, but because the learning rate is so small, the second step is also just minuscule. The outcome of this process is that you do end up decreasing the cost j, but incredibly slowly. So, here's another step, and another step, another tiny step, until you finally approach the minimum. But, as you may notice, you're going to need a lot of steps to get to the minimum. So, to summarize, if the learning rate is too small, then gradient descent will work, but it will be slow. It will take a very long time, because it's going to take these tiny, tiny baby steps, and it's going to need a lot of steps before it gets anywhere close to the minimum. Now, let's look at a different case. What happens if the learning rate is too large?
Here's another graph of the cost function, and let's say we start gradient descent with W at this value here. So, it's actually already pretty close to the minimum. The derivative points to the right, but if the learning rate is too large, then you update W via a giant step to be all the way over here, and that's this point here on the function j. So, you move from this point on the left all the way to this point on the right, and now the cost has actually gotten worse. It has increased, because it started out at this value here, and after one step, it actually increased to this value here. Now, the derivative at this new point says to decrease W, but when the learning rate is too big, then you may take a huge step, going from here all the way out here. So, now you've gotten to this point here, and again, if the learning rate is too big, then you take another huge step at the next iteration and way overshoot the minimum again. So, now you're at this point on the right, and one more time, you do another update and end up all the way here, and so you're now at this point here. So, as you may notice, you're actually getting further and further away from the minimum. So, if the learning rate is too large, then gradient descent may overshoot and may never reach the minimum. Another way to say that is that gradient descent may fail to converge and may even diverge. So, here's another question you may be wondering. What if your parameter W is already at this point here, so that your cost J is already at a local minimum? What do you think one step of gradient descent will do if you've already reached a minimum? So, this is a tricky one. When I was first learning this stuff, it actually took me a long time to figure it out, but let's work through this together. Let's suppose you have some cost function J, and the one you see here isn't a squared error cost function, and this cost function has two local minima corresponding to the two values that you see here.
Now, let's suppose that after some number of steps of gradient descent, your parameter W is over here, say equal to 5, and so this is the current value of W. This means that you're at this point on the cost function J, and that happens to be a local minimum. It turns out that if you draw a tangent to the function at this point, the slope of this line is zero, and thus the derivative term here is equal to zero for the current value of W. And so your gradient descent update becomes W is updated to W minus the learning rate times zero, where that's because the derivative term is equal to zero, and this is the same as saying let's set W to be equal to W. So, this means that if you're already at a local minimum, gradient descent leaves W unchanged, because it just updates the new value of W to be the exact same old value of W. So, concretely, let's say the current value of W is 5 and alpha is 0.1; after one iteration, you update W as W minus alpha times zero, and it is still equal to 5. So, if your parameters have already brought you to a local minimum, then further gradient descent steps do absolutely nothing. They don't change the parameters, which is what you want, because it keeps the solution at that local minimum. This also explains why gradient descent can reach a local minimum even with a fixed learning rate alpha. Here's what I mean. To illustrate this, let's look at another example. Here's the cost function J of W that we want to minimize. Let's initialize gradient descent up here at this point. If we take one update step, maybe it'll take us to that point. And because this derivative is pretty large, gradient descent takes a relatively big step, right? Now, we're at this second point, where we take another step. And you may notice that the slope is not as steep as it was at the first point, so the derivative isn't as large. And so, the next update step will not be as large as that first step.
Now, we're at this third point here, and the derivative is smaller than it was at the previous step, so we'll take an even smaller step. As we approach the minimum, the derivative gets closer and closer to zero. So as we run gradient descent, eventually we're taking very small steps until we finally reach a local minimum. So just to recap, as we get nearer to a local minimum, gradient descent will automatically take smaller steps. And that's because as we approach the local minimum, the derivative automatically gets smaller, and that means the update steps also automatically get smaller, even if the learning rate alpha is kept at some fixed value. So that's the gradient descent algorithm. You can use it to try to minimize any cost function j, not just the mean squared error cost function that we're using for linear regression. In the next video, we're going to take the function j and set it back to be exactly the linear regression model's cost function, the mean squared error cost function that we came up with earlier. And putting together gradient descent with this cost function will give you your first learning algorithm, the linear regression algorithm.
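The three behaviors described in this transcript, a too-small alpha crawling toward the minimum, a too-large alpha overshooting and diverging, and the steps shrinking automatically as the slope flattens, can all be reproduced on a toy cost. The cost J(w) = w^2 with its minimum at w = 0 is an assumed stand-in, not a function from the lecture.

```python
# Gradient descent on the assumed cost J(w) = w**2, whose derivative is 2w.

def run_gradient_descent(w, alpha, steps):
    for _ in range(steps):
        # The update shrinks on its own as w (and hence the slope 2w) shrinks,
        # even though alpha stays fixed.
        w = w - alpha * 2 * w
    return w
```

With alpha = 0.0001 the iterate barely moves in 100 steps; with alpha = 0.4 it converges close to the minimum; with alpha = 1.1 every update overshoots zero and the iterates grow without bound, i.e., gradient descent diverges.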
[{"start": 0.0, "end": 7.12, "text": " The choice of the learning rate alpha will have a huge impact on the efficiency of your"}, {"start": 7.12, "end": 13.280000000000001, "text": " implementation of gradient descent, and if alpha, the learning rate, is chosen poorly,"}, {"start": 13.280000000000001, "end": 15.94, "text": " gradient descent may not even work at all."}, {"start": 15.94, "end": 19.8, "text": " In this video, let's take a deeper look at the learning rate."}, {"start": 19.8, "end": 25.04, "text": " This will also help you choose better learning rates for your implementations of gradient"}, {"start": 25.04, "end": 26.04, "text": " descent."}, {"start": 26.04, "end": 30.04, "text": " So, here again is the gradient descent rule."}, {"start": 30.04, "end": 35.92, "text": " W is updated to be W minus the learning rate alpha times the derivative term."}, {"start": 35.92, "end": 39.96, "text": " To learn more about what the learning rate alpha is doing, let's see what could happen"}, {"start": 39.96, "end": 46.36, "text": " if the learning rate alpha is either too small, or if it is too large."}, {"start": 46.36, "end": 51.2, "text": " For the case where the learning rate is too small, here's a graph where the horizontal"}, {"start": 51.2, "end": 59.800000000000004, "text": " axis is W and the vertical axis is the cos j, and here's a graph of the function j of"}, {"start": 59.800000000000004, "end": 65.28, "text": " W. 
Let's start gradient descent at this point here."}, {"start": 65.28, "end": 71.32000000000001, "text": " If the learning rate is too small, then what happens is that you multiply your derivative"}, {"start": 71.32000000000001, "end": 74.64, "text": " term by some really, really, really small number."}, {"start": 74.64, "end": 83.52, "text": " So, you're going to be multiplying by a number alpha that's really small, like 0.0000001,"}, {"start": 83.52, "end": 88.56, "text": " and so you end up taking a very small baby step like that."}, {"start": 88.56, "end": 92.76, "text": " Then from this point, you're going to take another, you know, tiny, tiny little baby"}, {"start": 92.76, "end": 99.52, "text": " step, but because the learning rate is so small, the second step is also just"}, {"start": 99.52, "end": 100.52, "text": " minuscule."}, {"start": 100.52, "end": 106.24, "text": " The outcome of this process is that you do end up decreasing the cost j, but incredibly"}, {"start": 106.24, "end": 107.24, "text": " slowly."}, {"start": 107.24, "end": 113.28, "text": " So, here's another step, and another step, another tiny step, until you finally approach"}, {"start": 113.28, "end": 114.28, "text": " the minimum."}, {"start": 114.28, "end": 119.08, "text": " But, as you may notice, you're going to need a lot of steps to get to the minimum."}, {"start": 119.08, "end": 126.36, "text": " So, to summarize, if the learning rate is too small, then gradient descent will work,"}, {"start": 126.36, "end": 127.92, "text": " but it will be slow."}, {"start": 127.92, "end": 132.92000000000002, "text": " It will take a very long time, because it's going to take these tiny, tiny baby steps,"}, {"start": 132.92000000000002, "end": 136.92000000000002, "text": " and it's going to need a lot of steps before it gets anywhere close to the minimum."}, {"start": 136.92000000000002, "end": 141.0, "text": " Now, let's look at a different case."}, {"start": 141.0, "end": 144.44, "text": " What 
happens if the learning rate is too large?"}, {"start": 144.44, "end": 148.88, "text": " Here's another graph of the cos function, and let's say we start gradient descent with"}, {"start": 148.88, "end": 152.04, "text": " W at this value here."}, {"start": 152.04, "end": 156.68, "text": " So, it's actually already pretty close to the minimum."}, {"start": 156.68, "end": 165.20000000000002, "text": " So, the derivative points to the right, but if the learning rate is too large, then you"}, {"start": 165.20000000000002, "end": 174.56, "text": " update W via a giant step to be all the way over here, and that's this point here on the"}, {"start": 174.56, "end": 175.56, "text": " function j."}, {"start": 175.56, "end": 182.28, "text": " So, you move from this point on the left all the way to this point on the right, and now"}, {"start": 182.28, "end": 184.4, "text": " the cost has actually gotten worse."}, {"start": 184.4, "end": 190.24, "text": " It has increased because it has started out at this value here, and after one step, it"}, {"start": 190.24, "end": 193.28, "text": " actually increased to this value here."}, {"start": 193.28, "end": 200.88, "text": " Now, the derivative at this new point says to decrease W, but when the learning rate"}, {"start": 200.88, "end": 207.24, "text": " is too big, then you may take a huge step going from here all the way out here."}, {"start": 207.24, "end": 213.64000000000001, "text": " So, now you've gotten to this point here, and again, if the learning rate is too big,"}, {"start": 213.64, "end": 218.6, "text": " then you take another huge step at the next iteration and way overshoot the minimum again."}, {"start": 218.6, "end": 224.32, "text": " So, now you're at this point on the right, and one more time, you do another update and"}, {"start": 224.32, "end": 230.39999999999998, "text": " end up all the way here, and so you're now at this point here."}, {"start": 230.39999999999998, "end": 236.64, "text": " So, as you may 
notice, you're actually getting further and further away from the minimum."}, {"start": 236.64, "end": 243.83999999999997, "text": " So, if the learning rate is too large, then gradient descent may overshoot and may never"}, {"start": 243.83999999999997, "end": 246.07999999999998, "text": " reach the minimum."}, {"start": 246.07999999999998, "end": 252.83999999999997, "text": " And another way to say that is that gradient descent may fail to converge and may even"}, {"start": 252.83999999999997, "end": 253.83999999999997, "text": " diverge."}, {"start": 253.83999999999997, "end": 259.32, "text": " So, here's another question you may be wondering."}, {"start": 259.32, "end": 266.64, "text": " What if your parameter W is already at this point here, so that your cost J is already"}, {"start": 266.64, "end": 269.59999999999997, "text": " at a local minimum?"}, {"start": 269.59999999999997, "end": 275.71999999999997, "text": " What do you think one step of gradient descent will do if you've already reached a minimum?"}, {"start": 275.71999999999997, "end": 279.12, "text": " So, this is a tricky one."}, {"start": 279.12, "end": 283.92, "text": " When I was first learning this stuff, it actually took me a long time to figure it out, but"}, {"start": 283.92, "end": 286.12, "text": " let's work through this together."}, {"start": 286.12, "end": 293.08, "text": " Let's suppose you have some cost function J, and the one you see here isn't a square"}, {"start": 293.08, "end": 299.12, "text": " error cost function, and this cost function has two local minima corresponding to the"}, {"start": 299.12, "end": 302.52, "text": " two values that you see here."}, {"start": 302.52, "end": 308.6, "text": " Now, let's suppose that after some number of steps of gradient descent, your parameter"}, {"start": 308.6, "end": 317.36, "text": " W is over here, say equal to 5, and so this is the current value of W. 
This means that"}, {"start": 317.36, "end": 323.92, "text": " you're at this point on the cost function J, and that happens to be a local minimum."}, {"start": 323.92, "end": 330.84000000000003, "text": " Turns out, if you draw a tangent to the function at this point, the slope of this line is zero,"}, {"start": 330.84000000000003, "end": 337.8, "text": " and thus, the derivative term here is equal to zero for the current value of W, and so"}, {"start": 337.8, "end": 344.72, "text": " your gradient descent update becomes W is updated to W minus the learning rate times"}, {"start": 344.72, "end": 351.68, "text": " zero, where here, that's because the derivative term is equal to zero, and this is the same"}, {"start": 351.68, "end": 360.32, "text": " as saying let's set W to be equal to W. So, this means that if you're already at a local"}, {"start": 360.32, "end": 366.52, "text": " minimum, gradient descent leaves W unchanged because it just updates the new value of W"}, {"start": 366.52, "end": 375.79999999999995, "text": " to be the exact same old value of W. 
So, concretely, let's say if the current value of W is 5,"}, {"start": 375.79999999999995, "end": 385.59999999999997, "text": " and alpha is 0.1, after one iteration, you update W as W minus alpha times zero, and"}, {"start": 385.59999999999997, "end": 388.4, "text": " it is still equal to 5."}, {"start": 388.4, "end": 395.12, "text": " So, if your parameters have already brought you to a local minimum, then further gradient"}, {"start": 395.12, "end": 397.8, "text": " descent steps do absolutely nothing."}, {"start": 397.8, "end": 401.84000000000003, "text": " It doesn't change the parameters, which is what you want, because it keeps the solution"}, {"start": 401.84000000000003, "end": 404.34000000000003, "text": " at that local minimum."}, {"start": 404.34000000000003, "end": 409.92, "text": " This also explains why gradient descent can reach a local minimum even with a fixed learning"}, {"start": 409.92, "end": 410.92, "text": " rate alpha."}, {"start": 410.92, "end": 413.32, "text": " Here's what I mean."}, {"start": 413.32, "end": 417.44, "text": " To illustrate this, let's look at another example."}, {"start": 417.44, "end": 422.64, "text": " Here's the cost function J of W that we want to minimize."}, {"start": 422.64, "end": 427.52, "text": " Let's initialize gradient descent up here at this point."}, {"start": 427.52, "end": 434.24, "text": " If we take one update step, maybe it'll take us to that point."}, {"start": 434.24, "end": 440.68, "text": " And because this derivative is pretty large, gradient descent takes a relatively big step,"}, {"start": 440.68, "end": 441.68, "text": " right?"}, {"start": 441.68, "end": 446.91999999999996, "text": " Now, we're at this second point, where we take another step."}, {"start": 446.91999999999996, "end": 451.76, "text": " And you may notice that the slope is not as steep as it was at the first point, so the"}, {"start": 451.76, "end": 454.03999999999996, "text": " derivative isn't as large."}, {"start": 
454.03999999999996, "end": 459.64, "text": " And so, the next update step will not be as large as that first step."}, {"start": 459.64, "end": 466.88, "text": " Now, we're at this third point here, and the derivative is smaller than it was at the previous"}, {"start": 466.88, "end": 471.12, "text": " step and we'll take an even smaller step."}, {"start": 471.12, "end": 476.88, "text": " As we approach the minimum, the derivative gets closer and closer to zero."}, {"start": 476.88, "end": 483.2, "text": " So as we run gradient descent, eventually we're taking very small steps until you finally"}, {"start": 483.2, "end": 485.28, "text": " reach a local minimum."}, {"start": 485.28, "end": 491.48, "text": " So just to recap, as we get nearer a local minimum, gradient descent will automatically"}, {"start": 491.48, "end": 493.56, "text": " take smaller steps."}, {"start": 493.56, "end": 498.52, "text": " And that's because as we approach the local minimum, the derivative automatically gets"}, {"start": 498.52, "end": 505.44, "text": " smaller and that means the update steps also automatically get smaller, even if the learning"}, {"start": 505.44, "end": 508.88, "text": " rate alpha is kept at some fixed value."}, {"start": 508.88, "end": 511.9, "text": " So that's the gradient descent algorithm."}, {"start": 511.9, "end": 517.24, "text": " You can use it to try to minimize any cost function j, not just the mean squared error"}, {"start": 517.24, "end": 521.72, "text": " cost function that we're using for linear regression."}, {"start": 521.72, "end": 527.64, "text": " In the next video, we're going to take the function j and set that back to be exactly"}, {"start": 527.64, "end": 532.36, "text": " the linear regression model's cost function, the mean squared error cost function that"}, {"start": 532.36, "end": 534.76, "text": " we had come up with earlier."}, {"start": 534.76, "end": 539.16, "text": " And putting together gradient descent with this cost function that 
will give you your"}, {"start": 539.16, "end": 566.16, "text": " first learning algorithm, the linear regression algorithm."}]
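The four behaviors walked through in this video (a too-small alpha crawling toward the minimum, a too-large alpha overshooting and diverging, a zero derivative leaving the parameter unchanged at a minimum, and steps shrinking automatically as the slope flattens) can be sketched with a one-dimensional cost. This is an illustrative sketch, not code from the course labs; the cost J(w) = w² and the alpha values are assumptions chosen to make each case visible.

```python
# Sketch (not from the course labs): 1-D gradient descent on J(w) = w**2,
# illustrating how the choice of learning rate alpha changes the behavior.

def gradient_descent(w0, alpha, steps):
    """Repeat the update w := w - alpha * dJ/dw, where dJ/dw = 2*w."""
    w = w0
    for _ in range(steps):
        w = w - alpha * 2 * w
    return w

# Too small: after 100 steps, w has barely moved from its starting point.
slow = gradient_descent(w0=10.0, alpha=0.001, steps=100)

# Reasonable: converges very close to the minimum at w = 0.
good = gradient_descent(w0=10.0, alpha=0.1, steps=100)

# Too large: each step overshoots, |w| grows, and gradient descent diverges.
diverged = gradient_descent(w0=10.0, alpha=1.1, steps=100)

# Already at the minimum: the derivative is zero, so w never changes,
# even though alpha is kept fixed.
stuck = gradient_descent(w0=0.0, alpha=0.1, steps=10)
```

Note that the "reasonable" run never changes alpha: the steps shrink on their own because the derivative 2w shrinks as w approaches the minimum, exactly as the lecture describes.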
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=ZSg_NglG3aA
1.19 Machine Learning Overview| Gradient descent for Linear Regression--[Machine Learning|Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
So, previously you took a look at the linear regression model and then the cost function and then the gradient descent algorithm. In this video, we're going to put it all together and use the square error cost function for the linear regression model with gradient descent. This will allow us to train the linear regression model to fit a straight line to our training data. Let's get to it. This is the linear regression model and to the right is the squared error cost function and below is the gradient descent algorithm. It turns out if you calculate these derivatives, these are the terms you would get. The derivative with respect to w is this. One over m summed from i equals one through m. Then the error term, that is the difference between the predicted and the actual values, times the input feature x i. And the derivative with respect to b is this formula over here, which looks the same as the equation above except that it doesn't have that x i term at the end. And if you use these formulas to compute these two derivatives and implement gradient descent this way, it will work. Now you may be wondering, where did I get these formulas from? They're derived using calculus. If you want to see the full derivation, I'll quickly run through the derivation on the next slide. But if you don't remember or aren't interested in the calculus, don't worry about it. You can skip the materials on the next slide entirely and still be able to implement gradient descent and finish this class and everything will work just fine. So in this slide, which is one of the most mathematical slides of the entire specialization and again is completely optional, we'll show you how to calculate the derivative terms. Let's start with the first term. The derivative of the cost function j with respect to w. We'll start by plugging in the definition of the cost function j. So j of w b is this. So one over two m times this sum of the squared error terms. 
And now remember also that f of w b of x i is equal to this term over here, which is w x i plus b. And so what we would like to do is compute the derivative, also called the partial derivative, with respect to w of this equation right here on the right. If you've taken a calculus class before, and again, it's totally fine if you haven't, you may know that by the rules of calculus, the derivative is equal to this term over here, which is why the two here and the two here cancel out, leaving us with this equation that you saw on the previous slide. So this, by the way, is why we had defined the cost function with the one half earlier this week: it makes the partial derivative neater. It cancels out the two that appears from computing the derivative. For the other derivative with respect to b, this is quite similar. I can write it out like this and once again plug in the definition of f of x i, giving this equation. And by the rules of calculus, this is equal to this, where there's no x i anymore at the end. And so the twos cancel once more and you end up with this expression for the derivative with respect to b. And so now you have these two expressions for the derivatives and you can plug them into the gradient descent algorithm. So here's the gradient descent algorithm for linear regression. You repeatedly carry out these updates to w and b until convergence. Remember that this f of x is the linear regression model, so it's equal to w times x plus b. This expression here is the derivative of the cost function with respect to w, and this expression is the derivative of the cost function with respect to b. And just as a reminder, you want to update w and b simultaneously on each step. Now let's get familiar with how gradient descent works.
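The two derivative formulas derived above can be sanity-checked numerically: a finite-difference approximation of the cost J(w, b) = (1/2m) Σ (w·xᵢ + b − yᵢ)² should match the analytic expressions (1/m) Σ (f(xᵢ) − yᵢ)·xᵢ and (1/m) Σ (f(xᵢ) − yᵢ). A minimal sketch; the tiny dataset below is made up for illustration, not from the lecture.

```python
# Sketch (illustrative data): verify the lecture's analytic derivatives of the
# squared error cost against central-difference numerical derivatives.

x = [1.0, 2.0, 3.0]
y = [2.0, 2.5, 3.5]
m = len(x)

def cost(w, b):
    """J(w, b) = (1/2m) * sum of squared errors, with f(x) = w*x + b."""
    return sum((w * xi + b - yi) ** 2 for xi, yi in zip(x, y)) / (2 * m)

def analytic_grads(w, b):
    """The derivatives from the lecture: note dJ/dw has the extra x_i factor."""
    dj_dw = sum((w * xi + b - yi) * xi for xi, yi in zip(x, y)) / m
    dj_db = sum((w * xi + b - yi) for xi, yi in zip(x, y)) / m
    return dj_dw, dj_db

w, b, eps = 0.5, 1.0, 1e-6
dj_dw, dj_db = analytic_grads(w, b)
num_dw = (cost(w + eps, b) - cost(w - eps, b)) / (2 * eps)
num_db = (cost(w, b + eps) - cost(w, b - eps)) / (2 * eps)
# The analytic and numerical values should agree to many decimal places,
# which is exactly the "twos cancel" result of the derivation.
```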
One issue we saw with gradient descent is that it can lead to a local minimum instead of a global minimum, where the global minimum means the point that has the lowest possible value for the cost function j out of all possible points. You may recall this surface plot that looks like an outdoor park with a few hills, with the grass and the birds, as a relaxing outdoor hike. This function has more than one local minimum. Remember, depending on where you initialize the parameters w and b, you can end up at different local minima. You can end up here or you can end up here. But it turns out that when you're using a squared error cost function with linear regression, the cost function does not and will never have multiple local minima. It has a single global minimum because of this bowl shape. The technical term for this is that this cost function is a convex function. Informally, a convex function is a bowl-shaped function, and it cannot have any local minima other than the single global minimum. So when you implement gradient descent on a convex function, one nice property is that so long as your learning rate is chosen appropriately, it will always converge to the global minimum. Congratulations, you now know how to implement gradient descent for linear regression. We have just one last video for this week. In that video, we'll see this algorithm in action. Let's go to that last video.
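Putting the pieces together, gradient descent for linear regression with simultaneous updates looks roughly like the following. This is a sketch rather than the course's lab code: the training data is made up (it lies exactly on y = 2x + 1, so the global minimum of the convex squared error cost sits at w = 2, b = 1), and alpha = 0.05 is just a value that happens to converge here.

```python
# Sketch (made-up data): batch gradient descent for linear regression,
# using the derivatives derived in this video.

x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]   # exactly y = 2x + 1
m = len(x)

def step(w, b, alpha):
    """One gradient descent update with the lecture's derivative formulas."""
    err = [w * xi + b - yi for xi, yi in zip(x, y)]
    dj_dw = sum(e * xi for e, xi in zip(err, x)) / m
    dj_db = sum(err) / m
    # Simultaneous update: both derivatives were computed at the old (w, b).
    return w - alpha * dj_dw, b - alpha * dj_db

w, b = 0.0, 0.0   # for a convex cost, any initialization reaches the same minimum
for _ in range(10000):
    w, b = step(w, b, alpha=0.05)
```

Because the cost is convex (bowl-shaped), the loop ends at the single global minimum near w = 2, b = 1 regardless of where w and b start.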
[{"start": 0.0, "end": 7.5200000000000005, "text": " So, previously you took a look at the linear regression model and then the cost function"}, {"start": 7.5200000000000005, "end": 10.68, "text": " and then the gradient descent algorithm."}, {"start": 10.68, "end": 15.8, "text": " In this video, we're going to put it all together and use the square error cost function for"}, {"start": 15.8, "end": 19.48, "text": " the linear regression model with gradient descent."}, {"start": 19.48, "end": 24.76, "text": " This will allow us to train the linear regression model to fit a straight line to our training"}, {"start": 24.76, "end": 25.76, "text": " data."}, {"start": 25.76, "end": 27.48, "text": " Let's get to it."}, {"start": 27.48, "end": 34.88, "text": " This is the linear regression model and to the right is the squared error cost function"}, {"start": 34.88, "end": 38.480000000000004, "text": " and below is the gradient descent algorithm."}, {"start": 38.480000000000004, "end": 45.760000000000005, "text": " It turns out if you calculate these derivatives, these are the terms you would get."}, {"start": 45.760000000000005, "end": 50.68, "text": " The derivative with respect to w is this."}, {"start": 50.68, "end": 54.2, "text": " One over m summed from i equals one through m."}, {"start": 54.2, "end": 59.88, "text": " Then the error term, that is the difference between the predicted and the actual values,"}, {"start": 59.88, "end": 63.480000000000004, "text": " times the input feature x i."}, {"start": 63.480000000000004, "end": 69.88, "text": " And the derivative with respect to b is this formula over here, which looks the same as"}, {"start": 69.88, "end": 76.28, "text": " the equation above except that it doesn't have that x i term at the end."}, {"start": 76.28, "end": 82.28, "text": " And if you use these formulas to compute these two derivatives and implement gradient descent"}, {"start": 82.28, "end": 84.8, "text": " this way, it will work."}, {"start": 
84.8, "end": 89.16, "text": " Now you may be wondering, where did I get these formulas from?"}, {"start": 89.16, "end": 91.56, "text": " They're derived using calculus."}, {"start": 91.56, "end": 95.88, "text": " If you want to see the full derivation, I'll quickly run through the derivation on the"}, {"start": 95.88, "end": 96.88, "text": " next slide."}, {"start": 96.88, "end": 101.76, "text": " But if you don't remember or aren't interested in the calculus, don't worry about it."}, {"start": 101.76, "end": 106.32, "text": " You can skip the materials on the next slide entirely and still be able to implement gradient"}, {"start": 106.32, "end": 110.6, "text": " descent and finish this class and everything will work just fine."}, {"start": 110.6, "end": 115.64, "text": " So in this slide, which is one of the most mathematical slides of the entire specialization"}, {"start": 115.64, "end": 122.08, "text": " and again is completely optional, we'll show you how to calculate the derivative terms."}, {"start": 122.08, "end": 124.28, "text": " Let's start with the first term."}, {"start": 124.28, "end": 129.28, "text": " The derivative of the cost function j with respect to w."}, {"start": 129.28, "end": 134.2, "text": " We'll start by plugging in the definition of the cost function j."}, {"start": 134.2, "end": 139.04, "text": " So j of w b is this."}, {"start": 139.04, "end": 145.84, "text": " So one over two m times this sum of the squared error terms."}, {"start": 145.84, "end": 158.51999999999998, "text": " And now remember also that f of w b of x i is equal to this term over here, which is"}, {"start": 158.51999999999998, "end": 161.89999999999998, "text": " w x i plus b."}, {"start": 161.89999999999998, "end": 168.2, "text": " And so what we would like to do is compute the derivative also called the partial derivative"}, {"start": 168.2, "end": 175.28, "text": " with respect to w of this equation right here on the right."}, {"start": 175.28, "end": 180.32, 
"text": " If you've taken a calculus class before, and again is totally fine if you haven't, you"}, {"start": 180.32, "end": 187.6, "text": " may know that by the rules of calculus, the derivative is equal to this term over here,"}, {"start": 187.6, "end": 195.48, "text": " which is why the two here and the two here cancel out, leaving us with this equation"}, {"start": 195.48, "end": 199.16, "text": " that you saw on the previous slide."}, {"start": 199.16, "end": 205.12, "text": " So this, by the way, is why we had to find the cost function with the one half earlier"}, {"start": 205.12, "end": 209.48, "text": " this week is because it makes the partial derivative meter."}, {"start": 209.48, "end": 214.92, "text": " It cancels out the two that appears from computing the derivative."}, {"start": 214.92, "end": 220.07999999999998, "text": " For the other derivative with respect to b, this is quite similar."}, {"start": 220.08, "end": 227.72000000000003, "text": " I can write it out like this and once again, plug in the definition of f of x i giving"}, {"start": 227.72000000000003, "end": 230.20000000000002, "text": " this equation."}, {"start": 230.20000000000002, "end": 237.64000000000001, "text": " And by the rules of calculus, this is equal to this, where there's no x i anymore at the"}, {"start": 237.64000000000001, "end": 238.72000000000003, "text": " end."}, {"start": 238.72000000000003, "end": 245.0, "text": " And so the twos cancel once more and you end up with this expression for the derivative"}, {"start": 245.0, "end": 247.24, "text": " with respect to b."}, {"start": 247.24, "end": 252.44, "text": " And so now you have these two expressions for the derivatives and you can plug them"}, {"start": 252.44, "end": 256.36, "text": " into the gradient descent algorithm."}, {"start": 256.36, "end": 260.64, "text": " So here's the gradient descent algorithm for linear regression."}, {"start": 260.64, "end": 266.96000000000004, "text": " You repeatedly carry out 
these updates to w and b until convergence."}, {"start": 266.96000000000004, "end": 276.2, "text": " Remember that this f of x is a linear regression model, so it's equal to w times x plus b."}, {"start": 276.2, "end": 282.2, "text": " This expression here is the derivative of the cost function with respect to w and this"}, {"start": 282.2, "end": 287.84, "text": " expression is the derivative of the cost function with respect to b."}, {"start": 287.84, "end": 294.71999999999997, "text": " And just as a reminder, you want to update w and b simultaneously on each step."}, {"start": 294.71999999999997, "end": 298.56, "text": " Now let's get familiar with how gradient descent works."}, {"start": 298.56, "end": 303.64, "text": " One issue we saw with gradient descent is that it can lead to a local minimum instead"}, {"start": 303.64, "end": 309.12, "text": " of a global minimum, where the global minimum means the point that has the lowest possible"}, {"start": 309.12, "end": 313.52, "text": " value for the cost function j out of all possible points."}, {"start": 313.52, "end": 318.76, "text": " You may recall this surface plot that looks like an outdoor park with a few hills with"}, {"start": 318.76, "end": 322.15999999999997, "text": " the grass and the birds is a relaxing outdoor hill."}, {"start": 322.15999999999997, "end": 325.56, "text": " This function has more than one local minimum."}, {"start": 325.56, "end": 330.8, "text": " Remember depending on where you initialize the parameters w and b, you can end up at"}, {"start": 330.8, "end": 332.64, "text": " different local minima."}, {"start": 332.64, "end": 337.12, "text": " You can end up here or you can end up here."}, {"start": 337.12, "end": 342.36, "text": " But it turns out when you're using a squared error cost function with linear regression,"}, {"start": 342.36, "end": 347.68, "text": " the cost function does not and will never have multiple local minima."}, {"start": 347.68, "end": 352.56, "text": " It 
has a single global minimum because of this bow shape."}, {"start": 352.56, "end": 358.47999999999996, "text": " The technical term for this is that this cost function is a convex function."}, {"start": 358.48, "end": 365.8, "text": " Informally, a convex function is a bow shaped function and it cannot have any local minima"}, {"start": 365.8, "end": 369.48, "text": " other than the single global minimum."}, {"start": 369.48, "end": 376.12, "text": " So when you implement gradient descent on a convex function, one nice property is that"}, {"start": 376.12, "end": 382.68, "text": " so long as your learning rate is chosen appropriately, it will always converge to the global minimum."}, {"start": 382.68, "end": 388.3, "text": " Congratulations, you now know how to implement gradient descent for linear regression."}, {"start": 388.3, "end": 391.16, "text": " We have just one last video for this week."}, {"start": 391.16, "end": 394.16, "text": " In that video, we'll see this algorithm in action."}, {"start": 394.16, "end": 419.16, "text": " Let's go to that last video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=3xyYI4wPuTs
1.20 Machine Learning Overview | Running gradient descent --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
Let's see what happens when you run gradient descent for linear regression. Let's go see the algorithm in action. Here's a plot of the model and data on the upper left, and a contour plot of the cost function on the upper right. And at the bottom is the surface plot of the same cost function. Often W and B will both be initialized to zero, but for this demonstration, let's initialize W to be equal to negative 0.1 and B to be 900. So this corresponds to f of x equals negative 0.1 x plus 900. Now if we take one step using gradient descent, we end up going from this point of the cost function out here to this point just down and to the right. And notice that the straight line fit has also changed a bit. Let's take another step. The cost function has now moved to this third point, and again the function f of x has also changed a bit. As you take more of these steps, the cost is decreasing at each update, so the parameters W and B are following this trajectory. And if you look on the left, you get this corresponding straight line fit that, you know, fits the data better and better until we've reached the global minimum. The global minimum corresponds to this straight line fit, which is a relatively good fit to the data. I mean, isn't that cool? And so that's gradient descent, and we're going to use this to fit a model to the housing data. And you can now use this f of x model to predict the price of your client's house or anyone else's house. For instance, if your friend's house size is 1250 square feet, you can now read off the value and predict that maybe they could get, I don't know, $250,000 for the house. To be more precise, this gradient descent process is called batch gradient descent. The term batch gradient descent refers to the fact that on every step of gradient descent, we're looking at all of the training examples instead of just a subset of the training data. 
So in computing gradient descent, when computing derivatives, we're computing the sum from i equals one to m, and batch gradient descent is looking at the entire batch of training examples at each update. I know that batch gradient descent may not be the most intuitive name, but this is what people in the machine learning community call it. If you've heard of the newsletter The Batch, that's published by deeplearning.ai, the newsletter The Batch was also named for this concept in machine learning. And it turns out that there are other versions of gradient descent that do not look at the entire training set, but instead look at smaller subsets of the training data at each update step. But we'll use batch gradient descent for linear regression. So that's it for linear regression. Congratulations on getting through your first machine learning model. I hope you go and celebrate or, I don't know, maybe take a nap in your hammock. In the optional lab that follows this video, you see a review of the gradient descent algorithm as well as how to implement it in code. You also see a plot that shows how the cost decreases as you continue training more iterations. And you also see a contour plot, seeing how the cost gets closer to the global minimum as gradient descent finds better and better values for the parameters W and B. So remember that to do the optional lab, you just need to read and run this code. You won't need to write any code yourself. And I hope you take a few moments to do that and also become familiar with the gradient descent code because this will help you to implement this and similar algorithms in the future yourself. Thanks for sticking with me through the end of this last video for the first week, and congratulations for making it all the way here. You're on your way to becoming a machine learning person.
In addition to the optional labs, if you haven't done so yet, I hope you also check out the practice quizzes, which are a nice way that you can double check your own understanding of the concepts. It's also totally fine if you don't get them all right the first time. And you can also take the quizzes multiple times until you get the score that you want. You now know how to implement linear regression with one variable. And that brings us to the close of this week. Next week, we'll learn to make linear regression much more powerful. Instead of one feature like size of a house, you learn how to get it to work with lots of features. You also learn how to get it to fit nonlinear curves. These improvements will make the algorithm much more useful and valuable. Lastly, we'll also go over some practical tips that will really help for getting linear regression to work on practical applications. I'm really happy to have you here with me in this class and I look forward to seeing you next week.
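The demonstration described in this video (initialize w and b, iterate batch gradient descent, then read a prediction off the fitted line) can be sketched end to end. The three houses below are invented numbers, chosen only so the fitted line lands near the video's example of roughly $250,000 for a 1250-square-foot house; sizes are in thousands of square feet and prices in thousands of dollars.

```python
# Sketch (made-up housing data, sizes in 1000s of sq ft, prices in $1000s):
# train f(x) = w*x + b with batch gradient descent, then predict a price.

x = [1.0, 1.5, 2.0]        # house sizes
y = [200.0, 300.0, 400.0]  # sale prices
m = len(x)

w, b, alpha = 0.0, 0.0, 0.1
for _ in range(20000):
    # "Batch" gradient descent: every update sums over ALL m training
    # examples, not just a subset of the training data.
    err = [w * xi + b - yi for xi, yi in zip(x, y)]
    dj_dw = sum(e * xi for e, xi in zip(err, x)) / m
    dj_db = sum(err) / m
    w, b = w - alpha * dj_dw, b - alpha * dj_db   # simultaneous update

price_1250 = w * 1.25 + b   # read the prediction for a 1250 sq ft house
```

The variants the video alludes to (which use smaller subsets of the data per update, often called mini-batch or stochastic gradient descent) would replace the full sums over m with sums over a sampled subset.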
[{"start": 0.0, "end": 6.4, "text": " Let's see what happens when you run gradient descent for linear regression."}, {"start": 6.4, "end": 8.88, "text": " Let's go see the algorithm in action."}, {"start": 8.88, "end": 15.56, "text": " Here's a plot of the model and data on the upper left, and a contour plot of the cost"}, {"start": 15.56, "end": 18.32, "text": " function on the upper right."}, {"start": 18.32, "end": 23.64, "text": " And at the bottom is the surface plot of the same cost function."}, {"start": 23.64, "end": 29.64, "text": " Often W and B will both be initialized to zero, but for this demonstration, let's initialize"}, {"start": 29.64, "end": 36.08, "text": " W to be equal to negative 0.1 and B to be 900."}, {"start": 36.08, "end": 44.760000000000005, "text": " So this corresponds to f of x equals negative 0.1 x plus 900."}, {"start": 44.760000000000005, "end": 50.56, "text": " Now if we take one step using gradient descent, we end up going from this point of the cost"}, {"start": 50.56, "end": 57.8, "text": " function out here to this point just down and to the right."}, {"start": 57.8, "end": 64.39999999999999, "text": " And notice that the straight line fit has also changed a bit."}, {"start": 64.39999999999999, "end": 67.03999999999999, "text": " Let's take another step."}, {"start": 67.03999999999999, "end": 73.6, "text": " The cost function has now moved to this third point, and again the function f of x has also"}, {"start": 73.6, "end": 76.08, "text": " changed a bit."}, {"start": 76.08, "end": 82.66, "text": " As you take more of these steps, the cost is decreasing at each update, so the parameters"}, {"start": 82.66, "end": 88.52, "text": " W and B are following this trajectory."}, {"start": 88.52, "end": 94.2, "text": " And if you look on the left, you get this corresponding straight line fit that, you"}, {"start": 94.2, "end": 100.42, "text": " know, fits the data better and better until we've reached the global minimum."}, {"start": 
100.42, "end": 106.6, "text": " The global minimum corresponds to this straight line fit, which is a relatively good fit to"}, {"start": 106.6, "end": 107.6, "text": " the data."}, {"start": 107.6, "end": 110.44, "text": " I mean, isn't that cool?"}, {"start": 110.44, "end": 117.0, "text": " And so that's gradient descent, and we're going to use this to fit a model to the housing"}, {"start": 117.0, "end": 118.67999999999999, "text": " data."}, {"start": 118.67999999999999, "end": 125.42, "text": " And you can now use this f of x model to predict the price of your client's house or anyone"}, {"start": 125.42, "end": 126.88, "text": " else's house."}, {"start": 126.88, "end": 133.96, "text": " For instance, if your friend's house size is 1250 square feet, you can now read off"}, {"start": 133.96, "end": 141.60000000000002, "text": " the value and predict that maybe they could get, I don't know, $250,000 for the house."}, {"start": 141.60000000000002, "end": 147.76000000000002, "text": " To be more precise, this gradient descent process is called batch gradient descent."}, {"start": 147.76000000000002, "end": 154.0, "text": " The term batch gradient descent refers to the fact that on every step of gradient descent,"}, {"start": 154.0, "end": 161.4, "text": " we're looking at all of the training examples instead of just a subset of the training data."}, {"start": 161.4, "end": 167.72, "text": " So in computing gradient descent, when computing derivatives, we're computing the sum from"}, {"start": 167.72, "end": 175.92000000000002, "text": " i equals one to m, and batch gradient descent is looking at the entire batch of training"}, {"start": 175.92000000000002, "end": 179.04000000000002, "text": " examples at each update."}, {"start": 179.04000000000002, "end": 183.76, "text": " I know that batch gradient descent may not be the most intuitive name, but this is what"}, {"start": 183.76, "end": 187.16, "text": " people in the machine learning community call it."}, 
{"start": 187.16, "end": 193.72, "text": " If you've heard of the newsletter, The Batch, that's published by deeplearning.ai, the newsletter,"}, {"start": 193.72, "end": 198.64, "text": " The Batch, was also named for this concept in machine learning."}, {"start": 198.64, "end": 202.88, "text": " And then it turns out that there are other versions of gradient descent that do not look"}, {"start": 202.88, "end": 208.44, "text": " at the entire training set, but instead looks at smaller subsets of the training data at"}, {"start": 208.44, "end": 210.18, "text": " each update step."}, {"start": 210.18, "end": 214.72, "text": " But we'll use batch gradient descent for linear regression."}, {"start": 214.72, "end": 216.8, "text": " So that's it for linear regression."}, {"start": 216.8, "end": 220.8, "text": " Congratulations on getting through your first machine learning model."}, {"start": 220.8, "end": 226.20000000000002, "text": " I hope you go and celebrate or, I don't know, maybe take a nap in your hammock."}, {"start": 226.20000000000002, "end": 231.64000000000001, "text": " In the optional lab that follows this video, you see a review of the gradient descent algorithm"}, {"start": 231.64000000000001, "end": 234.68, "text": " as well as how to implement it in code."}, {"start": 234.68, "end": 241.68, "text": " You also see a plot that shows how the cost decreases as you continue training more iterations."}, {"start": 241.68, "end": 247.20000000000002, "text": " And you also see a contour plot, seeing how the cost gets closer to the global minimum"}, {"start": 247.20000000000002, "end": 253.6, "text": " as gradient descent finds better and better values for the parameters W and B."}, {"start": 253.6, "end": 259.54, "text": " So remember that to do the optional lab, you just need to read and run this code."}, {"start": 259.54, "end": 262.44, "text": " You won't need to write any code yourself."}, {"start": 262.44, "end": 267.44, "text": " And I hope you take a few 
moments to do that and also become familiar with the gradient"}, {"start": 267.44, "end": 274.36, "text": " descent code because this will help you to implement this and similar algorithms in the"}, {"start": 274.36, "end": 276.8, "text": " future yourself."}, {"start": 276.8, "end": 280.64, "text": " Thanks for sticking with me through the end of this last video for the first week and"}, {"start": 280.64, "end": 283.36, "text": " congratulations for making it all the way here."}, {"start": 283.36, "end": 287.4, "text": " You're on your way to becoming a machine learning person."}, {"start": 287.4, "end": 292.76, "text": " In addition to the optional labs, if you haven't done so yet, I hope you also check out the"}, {"start": 292.76, "end": 297.08, "text": " practice quizzes, which are a nice way that you can double check your own understanding"}, {"start": 297.08, "end": 298.08, "text": " of the concepts."}, {"start": 298.08, "end": 302.64, "text": " It's also totally fine if you don't get them all right the first time."}, {"start": 302.64, "end": 308.09999999999997, "text": " And you can also take the quizzes multiple times until you get the score that you want."}, {"start": 308.09999999999997, "end": 312.64, "text": " You now know how to implement linear regression with one variable."}, {"start": 312.64, "end": 316.15999999999997, "text": " And that brings us to the close of this week."}, {"start": 316.15999999999997, "end": 320.32, "text": " Next week, we'll learn to make linear regression much more powerful."}, {"start": 320.32, "end": 324.97999999999996, "text": " Instead of one feature like size of a house, you learn how to get it to work with lots"}, {"start": 324.97999999999996, "end": 326.24, "text": " of features."}, {"start": 326.24, "end": 330.28000000000003, "text": " You also learn how to get it to fit nonlinear curves."}, {"start": 330.28000000000003, "end": 334.36, "text": " These improvements will make the algorithm much more useful and 
valuable."}, {"start": 334.36, "end": 339.76, "text": " Lastly, we'll also go over some practical tips that will really help for getting linear"}, {"start": 339.76, "end": 343.28000000000003, "text": " regression to work on practical applications."}, {"start": 343.28000000000003, "end": 346.76, "text": " I'm really happy to have you here with me in this class and I look forward to seeing"}, {"start": 346.76, "end": 357.44, "text": " you next week."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=fX86EFWljY0
2.1 Linear Regression with Multiple Variables | Multiple features --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
Welcome back. In this week, we'll learn to make linear regression much faster and much more powerful, and by the end of this week, you'll be two-thirds of the way to finishing this first course. Let's start by looking at a version of linear regression that can look at not just one feature, but a lot of different features. Let's take a look. In the original version of linear regression, you had a single feature x, the size of the house, and you were able to predict y, the price of the house. So the model was f w,b of x equals wx plus b. But now, what if you not only had the size of the house as a feature with which to try to predict the price, but you also knew the number of bedrooms, the number of floors, and the age of the home in years? It seems like this would give you a lot more information with which to predict the price. To introduce a little bit of new notation, we're going to use the variables x subscript 1, x subscript 2, x subscript 3, and x subscript 4 to denote the four features. And for simplicity, let's introduce a little bit more notation. We'll write x subscript j, or sometimes I'll just say for short x sub j, to represent the list of features. So here, j will go from 1 through 4 because we have four features. I'm going to use lowercase n to denote the total number of features. So in this example, n is equal to 4. As before, we'll use x superscript i to denote the i-th training example. So here, x superscript i is actually going to be a list of four numbers, or sometimes we'll call this a vector, that includes all the features of the i-th training example. So as a concrete example, x superscript in parentheses 2 will be a vector of the features for the second training example. So it will be equal to 1416, 3, 2, and 40. And technically, I'm writing these numbers in a row, so sometimes this is called a row vector rather than a column vector. But if you don't know what the difference is, don't worry about it. 
It's not that important for this purpose. And to refer to a specific feature in the i-th training example, I will write x superscript i subscript j. So for example, x superscript 2 subscript 3 will be the value of the third feature, that is the number of floors, in the second training example. And so that's going to be equal to two. Sometimes, in order to emphasize that this x 2 is not a number, but is actually a list of numbers, that is a vector, we'll draw an arrow on top of it, just to visually show that it is a vector. And over here as well. But you don't have to draw this arrow in your notation. You can think of the arrow as an optional signifier that's sometimes used just to emphasize that this is a vector and not a number. Now that we have multiple features, let's take a look at what our model would look like. Previously, this is how we defined the model, where x was a single feature, so a single number. But now, with multiple features, we're going to define it differently. Instead, the model will be f w,b of x equals w1 x1 plus w2 x2 plus w3 x3 plus w4 x4 plus b. Concretely, for housing price prediction, one possible model may be that we estimate the price of the house as 0.1 times x1, the size of the house, plus four times x2, the number of bedrooms, plus 10 times x3, the number of floors, minus two times x4, the age of the house in years, plus 80. Let's think a bit about how you might interpret these parameters. If the model is trying to predict the price of the house in thousands of dollars, you can think of this b equals 80 as saying that the base price of a house starts off at maybe $80,000, assuming it has no size, no bedrooms, no floors, and no age. And you can think of this 0.1 as saying that maybe for every additional square foot, the price will increase by 0.1 thousand dollars, or by $100, because we're saying that for each square foot, the price increases by 0.1 times $1,000, which is $100. 
And maybe for each additional bedroom, the price increases by $4,000. And for each additional floor, the price may increase by $10,000. And for each additional year of the house's age, the price may decrease by $2,000, because the parameter is negative two. And in general, if you have n features, then the model will look like this. Here again is the definition of the model with n features. What we're going to do next is introduce a little bit of notation to rewrite this expression in a simpler but equivalent way. Let's define w as a list of numbers that lists the parameters w1, w2, w3, all the way through wn. In mathematics, this is called a vector. And sometimes, to designate that this is a vector, which just means a list of numbers, I'm going to draw a little arrow on top. You don't always have to draw this arrow, and you can do so or not in your own notation. So you can think of this little arrow as just an optional signifier to remind us that this is a vector. If you've taken a linear algebra class before, you might recognize that this is a row vector as opposed to a column vector. But if you don't know what those terms mean, you don't need to worry about it. Next, same as before, b is a single number and not a vector. And so this vector w, together with this number b, are the parameters of the model. Let me also write x as a list, or a vector, again a row vector, that lists all of the features x1, x2, x3 up through xn. This is again a vector, so I'm going to add a little arrow up on top to signify that it is a vector. So in the notation up on top, we can also add little arrows here and here to signify that that w and that x are actually these lists of numbers, that they're actually these vectors. So with this notation, the model can now be rewritten more succinctly as f of x equals the vector w dot, and this dot refers to the dot product from linear algebra, the vector x, plus the number b. So what is this dot product thing? 
Well, the dot product of two vectors, of two lists of numbers w and x, is computed by taking the corresponding pairs of numbers, w1 and x1, multiplying that, w2 and x2, multiplying that, w3 and x3, multiplying that, all the way up to wn and xn, multiplying that, and then summing up all of these products. Writing that out, this means that the dot product is equal to w1 x1 plus w2 x2 plus w3 x3 plus all the way up to wn xn. And then finally we add back in the b on top. And you notice that this gives us exactly the same expression as we had on top. So the dot product notation lets you write the model in a more compact form with fewer characters. The name for this type of linear regression model with multiple input features is multiple linear regression. This is in contrast to univariate regression, which had just one feature. And by the way, you might think this algorithm is called multivariate regression, but that term actually refers to something else that we won't be using here. So I'm going to refer to this model as multiple linear regression. And so that's it for linear regression with multiple features, which is also called multiple linear regression. In order to implement this, there's a really neat trick called vectorization, which will make it much simpler to implement this and many other learning algorithms. Let's go on to the next video to take a look at what vectorization is.
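To make the notation concrete, here is a small NumPy sketch of the model. The parameter values follow the illustrative housing model described above (price in thousands of dollars), and the feature values are those quoted for the second training example; both are lecture examples, not real data.

```python
import numpy as np

# Parameters of the illustrative housing model from the lecture:
# price ($1000s) = 0.1*size + 4*bedrooms + 10*floors - 2*age + 80
w = np.array([0.1, 4.0, 10.0, -2.0])
b = 80.0

# One house's features: size (sq ft), bedrooms, floors, age (years)
x = np.array([1416.0, 3.0, 2.0, 40.0])

# The model written out term by term...
f_sum = w[0]*x[0] + w[1]*x[1] + w[2]*x[2] + w[3]*x[3] + b

# ...and the same model written compactly as f(x) = w . x + b
f_dot = np.dot(w, x) + b

print(f_sum, f_dot)  # both give the same predicted price, about 173.6
```

Both lines compute the same number; the dot-product form is just the compact way of writing the sum.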
[{"start": 0.0, "end": 7.2, "text": " Welcome back. In this week, we'll learn to make linear regression much faster and much"}, {"start": 7.2, "end": 12.620000000000001, "text": " more powerful, and by the end of this week, you'll be two-thirds of the way to finishing"}, {"start": 12.620000000000001, "end": 17.42, "text": " this first course. Let's start by looking at a version of linear regression that can"}, {"start": 17.42, "end": 23.36, "text": " look at not just one feature, but a lot of different features. Let's take a look."}, {"start": 23.36, "end": 30.64, "text": " In the original version of linear regression, you had a single feature x, the size of the"}, {"start": 30.64, "end": 40.84, "text": " house, and you're able to predict y, the price of the house. So the model was FWB of x equals"}, {"start": 40.84, "end": 47.879999999999995, "text": " Wx plus b. But now, what if you did not only have the size of the house as a feature with"}, {"start": 47.88, "end": 53.6, "text": " which to try to predict the price, but if you also knew the number of bedrooms, the"}, {"start": 53.6, "end": 58.440000000000005, "text": " number of floors, and the age of the home in years, it seems like this would give you"}, {"start": 58.440000000000005, "end": 62.5, "text": " a lot more information with which to predict the price."}, {"start": 62.5, "end": 67.36, "text": " To introduce a little bit of new notation, we're going to use the variables x subscript"}, {"start": 67.36, "end": 76.0, "text": " 1, x subscript 2, x subscript 3, and x subscript 4 to denote the four features. And for simplicity,"}, {"start": 76.0, "end": 81.72, "text": " let's introduce a little bit more notation. We'll write x subscript j, or sometimes I'll"}, {"start": 81.72, "end": 88.72, "text": " just say for short x sub j to represent the list of features. So here, j will go from"}, {"start": 88.72, "end": 95.8, "text": " 1 through 4 because we have four features. 
I'm going to use lowercase n to denote the"}, {"start": 95.8, "end": 102.92, "text": " total number of features. So in this example, n is equal to 4. As before, we'll use x super"}, {"start": 102.92, "end": 112.12, "text": " stripped i to denote the I've training example. So here, x super strip i is actually going"}, {"start": 112.12, "end": 120.2, "text": " to be a list of four numbers, or sometimes we'll call this a vector that includes all"}, {"start": 120.2, "end": 128.72, "text": " the features of the I've training example. So as a concrete example, x super strip in"}, {"start": 128.72, "end": 135.68, "text": " parentheses 2 will be a vector of the features for the second training example. So it will"}, {"start": 135.68, "end": 144.68, "text": " equal to this 1416, 32, and 40. And technically, I'm writing these numbers in a row, so sometimes"}, {"start": 144.68, "end": 149.32, "text": " this is called a row vector rather than a column vector. But if you don't know what"}, {"start": 149.32, "end": 155.16, "text": " the difference is, don't worry about it. It's not that important for this purpose. And to"}, {"start": 155.16, "end": 164.44, "text": " refer to a specific feature in the I've training example, I will write x super strip i subscript"}, {"start": 164.44, "end": 173.64, "text": " j. So for example, x super strip 2 subscript 3 will be the value of the third feature,"}, {"start": 173.64, "end": 178.6, "text": " that is the number of flaws in the second training example. And so that's going to be"}, {"start": 178.6, "end": 186.56, "text": " equal to two. Sometimes in order to emphasize that this x two is not a number, but it's"}, {"start": 186.56, "end": 192.56, "text": " actually a list of numbers that is a vector, we'll draw an arrow on top of that, just to"}, {"start": 192.56, "end": 200.28, "text": " visually show that is a vector. And over here as well. 
But you don't have to draw this arrow"}, {"start": 200.28, "end": 207.35999999999999, "text": " in your notation. You can think of the arrow as an optional signifier that sometimes use"}, {"start": 207.36, "end": 213.24, "text": " just to emphasize that this is a vector and not a number. Now that we have multiple features,"}, {"start": 213.24, "end": 218.68, "text": " let's take a look at what our model would look like. Previously, this is how we defined"}, {"start": 218.68, "end": 225.48000000000002, "text": " the model where x was a single feature. So a single number. But now with multiple features,"}, {"start": 225.48000000000002, "end": 236.64000000000001, "text": " we're going to define it differently. Instead, the model will be f WB of x equals W1 x1"}, {"start": 236.64, "end": 247.27999999999997, "text": " plus W2 x2 plus W3 x3 plus W4 x4 plus B. Concretely, for housing price prediction, one possible"}, {"start": 247.27999999999997, "end": 254.32, "text": " model may be that we estimate the price of the house as 0.1 times x1, the size of the"}, {"start": 254.32, "end": 263.15999999999997, "text": " house, plus four times x2, the number of bedrooms, plus 10 times x3, the number of floors, minus"}, {"start": 263.16, "end": 269.68, "text": " two times x4, the age of the house in years plus 80. Let's think a bit about how you might"}, {"start": 269.68, "end": 274.8, "text": " interpret these parameters. If the model is trying to predict the price of the house in"}, {"start": 274.8, "end": 282.18, "text": " thousands of dollars, you can think of this B equals 80 as saying that the base price"}, {"start": 282.18, "end": 288.36, "text": " of a house starts off at maybe $80,000, assuming it has no size, no bedrooms, no floor and"}, {"start": 288.36, "end": 296.0, "text": " no age. 
And you can think of this 0.1 as saying that maybe for every additional square foot,"}, {"start": 296.0, "end": 304.42, "text": " the price will increase by $0.1,000 or by $100 because we're saying that for each square"}, {"start": 304.42, "end": 313.24, "text": " foot, the price increases by 0.1, you know, times $1,000, which is $100. And maybe for"}, {"start": 313.24, "end": 320.24, "text": " each additional bathroom, the price increases by $4,000. And for each additional floor,"}, {"start": 320.24, "end": 326.16, "text": " the price may increase by $10,000. And for each additional year of the house's age, the"}, {"start": 326.16, "end": 334.52, "text": " price may decrease by $2,000 because the parameter is negative two. And in general, if you have"}, {"start": 334.52, "end": 340.74, "text": " n features, then the model will look like this."}, {"start": 340.74, "end": 346.72, "text": " Here again is the definition of the model with n features. What we're going to do next"}, {"start": 346.72, "end": 351.72, "text": " is introduce a little bit of notation to rewrite this expression in a simpler but equivalent"}, {"start": 351.72, "end": 360.16, "text": " way. Let's define w as a list of numbers that lists the parameters w1, w2, w3, all the way"}, {"start": 360.16, "end": 367.32, "text": " through wn. In mathematics, this is called a vector. And sometimes to designate that"}, {"start": 367.32, "end": 371.88, "text": " this is a vector, which just means a list of numbers, I'm going to draw a little arrow"}, {"start": 371.88, "end": 378.48, "text": " on top. You don't always have to draw this arrow. And you can do so or not in your own"}, {"start": 378.48, "end": 384.28, "text": " notation. So you can think of this little arrow as just an optional signifier to remind"}, {"start": 384.28, "end": 390.4, "text": " us that this is a vector. 
If you've taken a linear algebra class before, you might recognize"}, {"start": 390.4, "end": 395.52, "text": " that this is a row vector as opposed to a column vector. But if you don't know what"}, {"start": 395.52, "end": 401.2, "text": " those terms means, you don't need to worry about it. Next, same as before, b is a single"}, {"start": 401.2, "end": 409.91999999999996, "text": " number and not a vector. And so this vector w together with this number b are the parameters"}, {"start": 409.91999999999996, "end": 419.91999999999996, "text": " of the model. Let me also write x as a list or a vector, again, a row vector that lists"}, {"start": 419.92, "end": 428.16, "text": " all of the features x1, x2, x3 up through xn. This is again a vector. So I'm going to"}, {"start": 428.16, "end": 437.36, "text": " add a little arrow up on top to signify. So in the notation up on top, we can also add"}, {"start": 437.36, "end": 445.48, "text": " little arrows here and here to signify that that w and that x are actually these lists"}, {"start": 445.48, "end": 453.6, "text": " of numbers that they're actually these vectors. So with this notation, the model can now be"}, {"start": 453.6, "end": 463.92, "text": " rewritten more succinctly as f of x equals the vector w dot and this dot refers to a"}, {"start": 463.92, "end": 472.0, "text": " dot product from linear algebra of x, the vector plus the number b. So what is this"}, {"start": 472.0, "end": 478.6, "text": " dot product thing? Well, the dot products of two vectors of two lists of numbers w and"}, {"start": 478.6, "end": 488.48, "text": " x is computed by taking the corresponding pairs of numbers w1 and x1, multiplying that"}, {"start": 488.48, "end": 499.24, "text": " w2 x2, multiplying that w3 x3, multiplying that all the way up to wn xn, multiplying"}, {"start": 499.24, "end": 507.08, "text": " that and then summing up all of these products. 
Writing that out, this means that the dot"}, {"start": 507.08, "end": 522.08, "text": " product is equal to w1 x1 plus w2 x2 plus w3 x3 plus all the way up to wn xn. And then"}, {"start": 522.08, "end": 530.36, "text": " finally we add back in the b on top. And you notice that this gives us exactly the same"}, {"start": 530.36, "end": 537.84, "text": " expression as we had on top. So the dot product notation lets you write the model in a more"}, {"start": 537.84, "end": 544.5600000000001, "text": " compact form with fewer characters. The name for this type of linear regression model with"}, {"start": 544.5600000000001, "end": 550.5600000000001, "text": " multiple input features is multiple linear regression. This is in contrast to univariate"}, {"start": 550.56, "end": 556.3599999999999, "text": " regression, which had just one feature. And by the way, you might think this algorithm"}, {"start": 556.3599999999999, "end": 561.68, "text": " is called multivariate regression, but that term actually refers to something else that"}, {"start": 561.68, "end": 568.0799999999999, "text": " we won't be using here. So I'm going to refer to this model as multiple linear regression."}, {"start": 568.0799999999999, "end": 573.4799999999999, "text": " And so that's it for linear regression with multiple features, which is also called multiple"}, {"start": 573.4799999999999, "end": 580.1199999999999, "text": " linear regression. In order to implement this, there's a really neat trick called vectorization,"}, {"start": 580.12, "end": 585.44, "text": " which will make it much simpler to implement this and many other learning algorithms. Let's"}, {"start": 585.44, "end": 612.44, "text": " go on to the next video to take a look at what is vectorization."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=G8yfD_Xu7Ko
2.2 Linear Regression with Multiple Variables | Vectorization part 1 --[Machine Learning |Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
In this video, you see a very useful idea called vectorization. When you're implementing a learning algorithm, using vectorization will both make your code shorter and also make it run much more efficiently. Learning how to write vectorized code will allow you to also take advantage of modern numerical linear algebra libraries, as well as maybe even GPU hardware; that stands for graphics processing unit. This is hardware originally designed to speed up computer graphics on your computer, but it turns out it can be used, when you write vectorized code, to also help you execute your code much more quickly. Let's look at a concrete example of what vectorization means. Here's an example with parameters w and b, where w is a vector with three numbers, and you also have a vector of features x, also with three numbers. So here, n is equal to three. Notice that in linear algebra, the index or the counting starts from one, and so the first value is subscripted w1 and x1. In Python code, you can define these variables w, b, and x using arrays like this. Here I'm actually using a numerical linear algebra library in Python called NumPy, which is by far the most widely used numerical linear algebra library in Python and in machine learning. Because in Python the indexing of arrays, or counting in arrays, starts from zero, you would access the first value of w using w square bracket zero, the second value using w square bracket one, and the third using w square bracket two. So the indexing here goes from zero, one, to two, rather than one, two, to three. Similarly, to access individual features of x, you would use x0, x1, and x2. Many programming languages, including Python, start counting from zero rather than one. Now, let's look at an implementation without vectorization for computing the model's prediction. In code, it would look like this. You take each parameter w and multiply it by its associated feature. 
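A sketch of that unvectorized computation for n equal to three; the values of w, b, and x below are illustrative placeholders, similar in spirit to the numbers shown on the slide.

```python
import numpy as np

w = np.array([1.0, 2.5, -3.3])    # three parameters, one per feature
b = 4.0
x = np.array([10.0, 20.0, 30.0])  # three feature values

# Without vectorization: each parameter times its feature, one term at a time.
# Note that the math's w1, w2, w3 become w[0], w[1], w[2] in Python,
# because Python arrays are indexed from zero.
f = w[0] * x[0] + w[1] * x[1] + w[2] * x[2] + b
print(f)
```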
Now, you could write your code like this, but what if n isn't three, but instead 100 or 100,000? It's both inefficient for you to code and inefficient for your computer to compute. So here's another way, still without using vectorization, but using a for loop. In math, you can use a summation operator to add all the products of wj and xj for j equals one through n. Then, outside the summation, you add b at the end. So the summation goes from j equals one up to and including n. For n equals three, j therefore goes from one, two, to three. In code, you can initialize f to zero; then, for j in range zero to n, which actually makes j go from zero to n minus one, so from zero, one, to two, you add to f the product of wj times xj. Finally, outside the for loop, you add b. Notice that in Python, range zero to n means that j goes from zero all the way to n minus one, and does not include n itself. More commonly, this is written range n in Python, but in this video I added a zero here just to emphasize that it starts from zero. While this implementation is a bit better than the first one, it still doesn't use vectorization and isn't that efficient. Now let's look at how you can do this using vectorization. This is the math expression for the function f, which is the dot product of w and x, plus b. And now you can implement this with a single line of code, by computing f equals np dot dot. I said dot dot because the first dot is the period and the second dot is the function, or the method, called dot. So it's f equals np dot dot of w comma x, and this implements the mathematical dot product between the vectors w and x. And then finally, you can add b to it at the end. This NumPy dot function is a vectorized implementation of the dot product operation between two vectors. And especially when n is large, this will run much faster than the two previous code examples. I want to emphasize that vectorization actually has two distinct benefits. 
First, it makes the code shorter. It's now just one line of code. Isn't that cool? And second, it also results in your code running much faster than either of the two previous implementations that did not use vectorization. The reason the vectorized implementation is much faster is that, behind the scenes, the NumPy dot function is able to use parallel hardware in your computer. And this is true whether you're running this on a normal computer, that is on a normal computer CPU, or if you are using a GPU, a graphics processing unit, which is often used to accelerate machine learning jobs. The ability of the NumPy dot function to use parallel hardware makes it much more efficient than the for loop or the sequential calculation that we saw previously. Now, the earlier for-loop version is much more practical than the first version when n is large, because you are not typing w0 times x0 plus w1 times x1 plus lots of additional terms like you would have had for the first version. But while that saves a lot on the typing, it's still not that computationally efficient, because it still doesn't use vectorization. So to recap, vectorization makes your code shorter, so hopefully easier to write and easier for you or others to read, and it also makes it run much faster. But what is this magic behind vectorization that makes it run so much faster? Let's take a look at what your computer is actually doing behind the scenes to make vectorized code run so much faster.
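As a rough illustration of the speed difference, here is a sketch comparing the for-loop implementation against np.dot on a large vector. The data is random and the vector length is arbitrary; absolute timings depend entirely on your machine, so only the relative gap is meaningful.

```python
import time
import numpy as np

n = 1_000_000
rng = np.random.default_rng(1)  # seeded random data, just for illustration
w = rng.random(n)
x = rng.random(n)
b = 4.0

# Unvectorized: a Python for loop over every term
start = time.perf_counter()
f_loop = 0.0
for j in range(n):
    f_loop = f_loop + w[j] * x[j]
f_loop = f_loop + b
loop_seconds = time.perf_counter() - start

# Vectorized: NumPy's dot product, which can use parallel hardware
start = time.perf_counter()
f_vec = np.dot(w, x) + b
vec_seconds = time.perf_counter() - start

print(f"for loop:   {loop_seconds:.4f} s")
print(f"vectorized: {vec_seconds:.4f} s")
```

On a typical machine the vectorized line runs orders of magnitude faster, while both versions produce (numerically) the same prediction.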
[{"start": 0.0, "end": 7.28, "text": " In this video, you see a very useful idea called vectorization."}, {"start": 7.28, "end": 11.96, "text": " When you're implementing a learning algorithm, using vectorization will both make your code"}, {"start": 11.96, "end": 16.740000000000002, "text": " shorter and also make it run much more efficiently."}, {"start": 16.740000000000002, "end": 21.16, "text": " Learning how to write vectorized code will allow you to also take advantage of modern"}, {"start": 21.16, "end": 26.82, "text": " numerical linear algebra libraries, as well as maybe even GPU hardware that stands for"}, {"start": 26.82, "end": 28.68, "text": " graphics processor unit."}, {"start": 28.68, "end": 33.4, "text": " This is hardware originally designed to speed up computer graphics on your computer, but"}, {"start": 33.4, "end": 38.04, "text": " turns out can be used when you write vectorized code to also help you execute your code much"}, {"start": 38.04, "end": 39.04, "text": " more quickly."}, {"start": 39.04, "end": 42.8, "text": " Let's look at a concrete example of what vectorization means."}, {"start": 42.8, "end": 51.32, "text": " Here's an example with parameters w and b, where w is a vector with three numbers, and"}, {"start": 51.32, "end": 56.36, "text": " you also have a vector of features x with also three numbers."}, {"start": 56.36, "end": 60.019999999999996, "text": " So here, n is equal to three."}, {"start": 60.019999999999996, "end": 66.12, "text": " So notice that in linear algebra, the index or the counting starts from one, and so the"}, {"start": 66.12, "end": 71.03999999999999, "text": " first value is subscripted w1 and x1."}, {"start": 71.03999999999999, "end": 78.96000000000001, "text": " In Python code, you can define these variables w, b, and x using arrays like this."}, {"start": 78.96000000000001, "end": 85.0, "text": " Here I'm actually using a numerical linear algebra library in Python called NumPy, which"}, {"start": 85.0, 
"end": 92.12, "text": " is by far the most widely used numerical linear algebra library in Python and in machine learning."}, {"start": 92.12, "end": 99.4, "text": " Because in Python, the indexing of arrays or counting in arrays starts from zero, you"}, {"start": 99.4, "end": 106.92, "text": " would access the first value of w using w square bracket zero, the second value using"}, {"start": 106.92, "end": 112.76, "text": " w square bracket one, and the third using w square bracket two."}, {"start": 112.76, "end": 120.48, "text": " So the indexing here goes from zero, one, to two, rather than one, two, to three."}, {"start": 120.48, "end": 129.26, "text": " Similarly, to access individual features of x, you would use x0, x1, and x2."}, {"start": 129.26, "end": 134.88, "text": " Many programming languages, including Python, start counting from zero rather than one."}, {"start": 134.88, "end": 141.92000000000002, "text": " Now, let's look at an implementation without vectorization for computing the model's prediction."}, {"start": 141.92, "end": 144.35999999999999, "text": " In code, it would look like this."}, {"start": 144.35999999999999, "end": 150.35999999999999, "text": " You take each parameter w and multiply it by its associated feature."}, {"start": 150.35999999999999, "end": 156.67999999999998, "text": " Now, you could write your code like this, but what if n is in three, but instead n is"}, {"start": 156.67999999999998, "end": 158.83999999999997, "text": " 100 or 100,000?"}, {"start": 158.83999999999997, "end": 164.11999999999998, "text": " It's both inefficient for you to code and inefficient for your computer to compute."}, {"start": 164.11999999999998, "end": 170.07999999999998, "text": " So here's another way, still without using vectorization, but using a for loop."}, {"start": 170.08, "end": 178.10000000000002, "text": " In math, you can use a summation operator to add all the products of wj and xj for j"}, {"start": 178.10000000000002, "end": 180.36, 
"text": " equals one through n."}, {"start": 180.36, "end": 185.34, "text": " Then outside the summation, you add b at the end."}, {"start": 185.34, "end": 193.20000000000002, "text": " So the summation goes from j equals one up to and including n for n equals three, j therefore"}, {"start": 193.20000000000002, "end": 196.34, "text": " goes from one, two, to three."}, {"start": 196.34, "end": 204.4, "text": " In code, you can initialize f to zero, then for j in range from zero to n, this actually"}, {"start": 204.4, "end": 208.28, "text": " makes j go from zero to n minus one."}, {"start": 208.28, "end": 216.32, "text": " So from zero, one to two, you can then add to f the product of wj times xj."}, {"start": 216.32, "end": 220.32, "text": " Finally, outside the for loop, you add b."}, {"start": 220.32, "end": 225.92000000000002, "text": " Notice that in Python, the range zero to n means that j goes from zero all the way to"}, {"start": 225.92, "end": 229.88, "text": " n minus one and does not include n itself."}, {"start": 229.88, "end": 234.72, "text": " And more commonly, this is written range n in Python."}, {"start": 234.72, "end": 240.61999999999998, "text": " But in this video, I added a zero here just to emphasize that it starts from zero."}, {"start": 240.61999999999998, "end": 246.56, "text": " While this implementation is a bit better than the first one, it still doesn't use vectorization"}, {"start": 246.56, "end": 249.16, "text": " and isn't that efficient."}, {"start": 249.16, "end": 255.39999999999998, "text": " Now let's look at how you can do this using vectorization."}, {"start": 255.4, "end": 261.38, "text": " This is the math expression that the function f, which is the dot product of w and x plus"}, {"start": 261.38, "end": 262.72, "text": " b."}, {"start": 262.72, "end": 266.34000000000003, "text": " And now you can implement this with a single line of code."}, {"start": 266.34000000000003, "end": 273.56, "text": " By computing fp equals np 
dot dot, I said dot dot because the first dot is the period"}, {"start": 273.56, "end": 278.82, "text": " and the second dot is the function or the method called dot."}, {"start": 278.82, "end": 288.2, "text": " But it's fp equals np dot dot w comma x and this implements the mathematical dot products"}, {"start": 288.2, "end": 291.6, "text": " between the vectors w and x."}, {"start": 291.6, "end": 295.76, "text": " And then finally, you can add b to it at the end."}, {"start": 295.76, "end": 302.6, "text": " This NumPy dot function is a vectorized implementation of the dot product operation between two vectors."}, {"start": 302.6, "end": 307.8, "text": " And especially when n is large, this will run much faster than the two previous code"}, {"start": 307.8, "end": 309.32, "text": " examples."}, {"start": 309.32, "end": 313.8, "text": " I want to emphasize that vectorization actually has two distinct benefits."}, {"start": 313.8, "end": 316.2, "text": " First, it makes the code shorter."}, {"start": 316.2, "end": 318.12, "text": " It's now just one line of code."}, {"start": 318.12, "end": 319.36, "text": " Isn't that cool?"}, {"start": 319.36, "end": 324.88, "text": " And second, it also results in your code running much faster than either of the two previous"}, {"start": 324.88, "end": 329.2, "text": " implementations that did not use vectorization."}, {"start": 329.2, "end": 336.2, "text": " And the reason that the vectorized implementation is much faster is behind the scenes, the NumPy"}, {"start": 336.2, "end": 340.92, "text": " dot function is able to use parallel hardware in your computer."}, {"start": 340.92, "end": 345.84, "text": " And this is true whether you're running this on a normal computer that is on a normal computer"}, {"start": 345.84, "end": 352.58, "text": " CPU, or if you are using a GPU, a graphics processor unit that's often used to accelerate"}, {"start": 352.58, "end": 354.44, "text": " machine learning jobs."}, {"start": 354.44, "end": 
359.52, "text": " And the ability of the NumPy dot function to use parallel hardware makes it much more"}, {"start": 359.52, "end": 367.52, "text": " efficient than the for loop or the sequential calculation that we saw previously."}, {"start": 367.52, "end": 374.91999999999996, "text": " Now this version is much more practical when n is large because you are not typing w0 times"}, {"start": 374.91999999999996, "end": 381.52, "text": " x0 plus w1 times x1 plus lots of additional terms like you would have had for the previous"}, {"start": 381.52, "end": 382.52, "text": " version."}, {"start": 382.52, "end": 388.2, "text": " But while this saves a lot on the typing, it's still not that computationally efficient"}, {"start": 388.2, "end": 392.08, "text": " because it still doesn't use vectorization."}, {"start": 392.08, "end": 397.52, "text": " So to recap, vectorization makes your code shorter, so hopefully easier to write and"}, {"start": 397.52, "end": 400.08, "text": " easier for you or others to read."}, {"start": 400.08, "end": 402.4, "text": " And it also makes it run much faster."}, {"start": 402.4, "end": 408.2, "text": " But what is this magic behind vectorization that makes this run so much faster?"}, {"start": 408.2, "end": 412.24, "text": " Let's take a look at what your computer is actually doing behind the scenes to make vectorized"}, {"start": 412.24, "end": 419.24, "text": " code run so much faster."}]
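The three implementations the transcript walks through — writing out each term, summing with a for loop, and calling np.dot — can be sketched as follows; the values of w, x, and b are hypothetical:

```python
import numpy as np

# Hypothetical parameters and features for n = 3.
w = np.array([1.0, 2.5, -3.3])
b = 4.0
x = np.array([10.0, 20.0, 30.0])

# 1) Without vectorization, written out term by term:
#    fine for n = 3, unworkable for n = 100,000.
f_manual = w[0] * x[0] + w[1] * x[1] + w[2] * x[2] + b

# 2) Without vectorization, using a for loop:
#    range(0, n) makes j go from 0 to n - 1.
n = w.shape[0]
f_loop = 0.0
for j in range(0, n):
    f_loop = f_loop + w[j] * x[j]
f_loop = f_loop + b

# 3) Vectorized: np.dot computes the dot product in a single call,
#    using parallel hardware behind the scenes.
f_vec = np.dot(w, x) + b
```

All three compute the same prediction; only the third scales gracefully when n is large.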
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=nRGG50GDNAA
2.3 Linear Regression with Multiple Variables | Vectorization part 2 --[Machine Learning |Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
I remember when I first learned about vectorization, I spent many hours on my computer taking an un-vectorized version of an algorithm, running it, seeing how long it ran, and then running a vectorized version of the code and seeing how much faster that ran. And I just spent hours playing with that. And it frankly blew my mind that the same algorithm, vectorized, would run so much faster. It felt almost like a magic trick to me. In this video, let's figure out how this magic trick really works. Let's take a deeper look at how a vectorized implementation may work on your computer behind the scenes. Let's look at this for loop. A for loop like this runs without vectorization. So if j ranges from 0 to, say, 15, this piece of code performs operations one after another. On the first time step, which I'm going to write as time 0 or t0, it first operates on the values at index 0. At the next time step, it calculates values corresponding to index 1, and so on until the fifteenth time step, where it computes the values at index 15. In other words, it calculates these computations one step at a time, one step after another. In contrast, this dot function in NumPy is implemented in the computer hardware with vectorization. So the computer can get all values of the vectors w and x, and in a single step, it multiplies each pair of w and x with each other all at the same time in parallel. Then after that, the computer takes these 16 numbers and uses specialized hardware to add them all together very efficiently, rather than needing to carry out distinct additions one after another to add up these 16 numbers. This means that code with vectorization can perform calculations in much less time than code without vectorization. And this matters more when you're running learning algorithms on large data sets or trying to train large models, which is often the case with machine learning. 
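The sequential-versus-parallel behavior described above can be observed directly by timing both versions; the array size here is arbitrary, and exact timings will vary by machine:

```python
import time
import numpy as np

# Hypothetical data; the size is arbitrary, chosen only to make the gap visible.
rng = np.random.default_rng(0)
w = rng.random(100_000)
x = rng.random(100_000)

# Un-vectorized: one multiply-add per time step, carried out sequentially.
start = time.perf_counter()
f_loop = 0.0
for j in range(w.shape[0]):
    f_loop += w[j] * x[j]
t_loop = time.perf_counter() - start

# Vectorized: all products are formed in parallel, then summed efficiently.
start = time.perf_counter()
f_vec = np.dot(w, x)
t_vec = time.perf_counter() - start

print(f"for loop: {t_loop:.4f} s, np.dot: {t_vec:.6f} s")
```

On typical hardware np.dot finishes orders of magnitude faster than the Python loop, while both produce the same dot product.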
So that's why being able to write vectorized implementations of learning algorithms has been a key step to getting learning algorithms to run efficiently and therefore scale well to the large data sets that many modern machine learning algorithms now have to operate on. Now let's take a look at a concrete example of how this helps with implementing multiple linear regression, that is, linear regression with multiple input features. Say you have a problem with 16 features and 16 parameters w1 through w16, in addition to the parameter b. You calculate 16 derivative terms for these 16 weights, and in code, maybe you store the values of w and d in two NumPy arrays, with d storing the values of the derivatives. For this example, I'm just going to ignore the parameter b. Now you want to compute an update for each of these 16 parameters. So wj is updated to wj minus the learning rate, say 0.1, times dj, for j from 1 through 16. In code, without vectorization, you would be doing something like this: update w1 to be w1 minus the learning rate 0.1 times d1, next update w2 similarly, and so on through w16, updated as w16 minus 0.1 times d16. In code without vectorization, you could use a for loop like this: for j in range(0, 16), which again makes j go from 0 to 15, set wj equals wj minus 0.1 times dj. In contrast, with vectorization, you can imagine the computer's parallel processing hardware like this. It takes all 16 values in the vector w and subtracts in parallel 0.1 times all 16 values in the vector d, and assigns all 16 results back to w, all at the same time and all in one step. In code, you can implement this as follows: w is assigned to w minus 0.1 times d. Behind the scenes, the computer takes these NumPy arrays w and d and uses parallel processing hardware to carry out all 16 computations efficiently. So using a vectorized implementation, you should get a much more efficient implementation of linear regression. 
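The two parameter-update styles above — sixteen separate updates in a loop versus one vectorized subtraction — can be sketched like this, with hypothetical values for w and d:

```python
import numpy as np

alpha = 0.1                      # learning rate
w = np.arange(16, dtype=float)   # hypothetical values for the 16 parameters
d = np.ones(16)                  # hypothetical derivative terms, one per parameter

# Without vectorization: a for loop performing 16 separate updates.
w_loop = w.copy()
for j in range(0, 16):           # j goes from 0 to 15
    w_loop[j] = w_loop[j] - alpha * d[j]

# With vectorization: all 16 subtractions happen in one step.
w_vec = w - alpha * d
```

Both produce identical parameter vectors; the vectorized form hands all sixteen subtractions to NumPy at once.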
Maybe the speed difference won't be huge if you have 16 features, but if you have thousands of features and perhaps very large training sets, this type of vectorized implementation will make a huge difference in the running time of your learning algorithm. It could be the difference between code finishing in one or two minutes versus taking many, many hours to do the same thing. In the optional lab that follows this video, you see an introduction to one of the most used Python libraries in machine learning, which we've already touched on in this video, called NumPy. You'll see how to create vectors in code, and these vectors, or lists of numbers, are called NumPy arrays. And you also see how to take the dot product of two vectors using a NumPy function called dot. And you also get to see how vectorized code, such as using the dot function, can run much faster than a for loop. In fact, you get to time this code yourself and hopefully see it run much faster. This optional lab introduces a fair amount of new NumPy syntax. So when you read through the optional lab, please don't feel like you have to understand all the code right away. But you can save this notebook and use it as a reference to look at when you're working with data stored in NumPy arrays. So congrats on finishing this video on vectorization. You've learned one of the most important and useful techniques in implementing machine learning algorithms. In the next video, we'll put the math of multiple linear regression together with vectorization so that you can implement gradient descent for multiple linear regression with vectorization. Let's go on to the next video.
[{"start": 0.0, "end": 8.08, "text": " I remember when I first learned about vectorization, I spent many hours on my computer taking an"}, {"start": 8.08, "end": 12.9, "text": " un-vectorized version of an algorithm, running it, see how long it ran, and then running"}, {"start": 12.9, "end": 16.2, "text": " a vectorized version of the code and seeing how much faster that ran."}, {"start": 16.2, "end": 18.64, "text": " And I just spent hours playing with that."}, {"start": 18.64, "end": 24.240000000000002, "text": " And it frankly blew my mind that the same algorithm, vectorized, would run so much faster."}, {"start": 24.240000000000002, "end": 27.16, "text": " It felt almost like a magic trick to me."}, {"start": 27.16, "end": 31.44, "text": " In this video, let's figure out how this magic trick really works."}, {"start": 31.44, "end": 36.08, "text": " Let's take a deeper look at how a vectorized implementation may work on your computer behind"}, {"start": 36.08, "end": 37.32, "text": " the scenes."}, {"start": 37.32, "end": 39.44, "text": " Let's look at this for loop."}, {"start": 39.44, "end": 43.16, "text": " A for loop like this runs without vectorization."}, {"start": 43.16, "end": 52.66, "text": " So if j ranges from 0 to, say, 15, this piece of code performs operations one after another."}, {"start": 52.66, "end": 59.31999999999999, "text": " On the first time step, which I'm going to write as time 0 or t0, it first operates on"}, {"start": 59.31999999999999, "end": 62.199999999999996, "text": " the values at index 0."}, {"start": 62.199999999999996, "end": 68.72, "text": " At the next time step, it calculates values corresponding to index 1 and so on until the"}, {"start": 68.72, "end": 73.56, "text": " fifth theme step, where it computes that."}, {"start": 73.56, "end": 81.08, "text": " In other words, it calculates these computations one step at a time, one step after another."}, {"start": 81.08, "end": 88.67999999999999, "text": " In contrast, this 
function in NumPy is implemented in the computer hardware with vectorization."}, {"start": 88.67999999999999, "end": 95.67999999999999, "text": " So the computer can get all values of the vectors w and x, and in a single step, it"}, {"start": 95.67999999999999, "end": 102.52, "text": " multiplies each pair of w and x with each other all at the same time in parallel."}, {"start": 102.52, "end": 107.92, "text": " Then after that, the computer takes these 16 numbers and uses specialized hardware to"}, {"start": 107.92, "end": 114.08, "text": " add them all together very efficiently, rather than needing to carry out distinct additions"}, {"start": 114.08, "end": 118.4, "text": " one after another to add up these 16 numbers."}, {"start": 118.4, "end": 123.84, "text": " This means that code with vectorization can perform calculations in much less time than"}, {"start": 123.84, "end": 126.56, "text": " code without vectorization."}, {"start": 126.56, "end": 131.1, "text": " And this matters more when you're running learning algorithms on large data sets or"}, {"start": 131.1, "end": 136.4, "text": " trying to train large models, which is often the case with machine learning."}, {"start": 136.4, "end": 140.68, "text": " So that's why being able to write vectorized implementations of learning algorithms has"}, {"start": 140.68, "end": 146.64000000000001, "text": " been a key step to getting learning algorithms to run efficiently and therefore scale well"}, {"start": 146.64000000000001, "end": 152.48000000000002, "text": " to the large data sets that many modern machine learning algorithms now have to operate on."}, {"start": 152.48000000000002, "end": 158.06, "text": " Now let's take a look at a concrete example of how this helps with implementing multiple"}, {"start": 158.06, "end": 163.82, "text": " linear regression, that is linear regression with multiple input features."}, {"start": 163.82, "end": 173.07999999999998, "text": " Say you have a problem with 16 features 
and 16 parameters w1 through w16 in addition to"}, {"start": 173.07999999999998, "end": 175.6, "text": " the parameter b."}, {"start": 175.6, "end": 182.2, "text": " You calculated 16 derivative terms for these 16 weights and in code, maybe you stored the"}, {"start": 182.2, "end": 190.76, "text": " values of w and d in two non-pyrotries with d storing the values of the derivatives."}, {"start": 190.76, "end": 194.92, "text": " For this example, I'm just going to ignore the parameter b."}, {"start": 194.92, "end": 200.48, "text": " Now you want to compute and update for each of these 16 parameters."}, {"start": 200.48, "end": 213.2, "text": " So wj is updated to wj minus the learning rate, say 0.1 times dj for j from 1 through"}, {"start": 213.2, "end": 214.2, "text": " 16."}, {"start": 214.2, "end": 221.6, "text": " In code, without vectorization, you would be doing something like this, update w1 to"}, {"start": 221.6, "end": 230.79999999999998, "text": " be w1 minus the learning rate 0.1 times d1, next update w2 similarly and so on through"}, {"start": 230.79999999999998, "end": 237.83999999999997, "text": " w16, updated as w16 minus 0.1 times d16."}, {"start": 237.84, "end": 246.16, "text": " In code without vectorization, you could use a full loop like this for j in range 0, 16,"}, {"start": 246.16, "end": 254.4, "text": " that again goes from 0 to 15, set wj equals wj minus 0.1 times dj."}, {"start": 254.4, "end": 260.92, "text": " In contrast, with vectorization, you can imagine the computer's parallel processing hardware"}, {"start": 260.92, "end": 262.04, "text": " like this."}, {"start": 262.04, "end": 271.40000000000003, "text": " It takes all 16 values in the vector w and subtracts in parallel 0.1 times all 16 values"}, {"start": 271.40000000000003, "end": 279.16, "text": " in the vector d and assign all 16 calculations back to w all at the same time and all in"}, {"start": 279.16, "end": 280.48, "text": " one step."}, {"start": 280.48, "end": 285.24, 
"text": " In code, you can implement this as follows."}, {"start": 285.24, "end": 290.24, "text": " W is assigned to w minus 0.1 times d."}, {"start": 290.24, "end": 296.32, "text": " Behind the scenes, the computer takes these numpy arrays w and d and uses parallel processing"}, {"start": 296.32, "end": 301.0, "text": " hardware to carry out all 16 computations efficiently."}, {"start": 301.0, "end": 305.6, "text": " So using a vectorized implementation, you should get a much more efficient implementation"}, {"start": 305.6, "end": 307.8, "text": " of linear regression."}, {"start": 307.8, "end": 313.64, "text": " Maybe the speed difference won't be huge if you have 16 features, but if you have thousands"}, {"start": 313.64, "end": 318.24, "text": " of features and perhaps very large training sets, this type of vectorized implementation"}, {"start": 318.24, "end": 322.04, "text": " will make a huge difference in the running time of your learning algorithm."}, {"start": 322.04, "end": 326.88, "text": " It could be the difference between code finishing in one or two minutes versus taking many,"}, {"start": 326.88, "end": 329.68, "text": " many hours to do the same thing."}, {"start": 329.68, "end": 334.76, "text": " In the optional lab that follows this video, you see an introduction to one of the most"}, {"start": 334.76, "end": 339.16, "text": " used Python libraries in machine learning, which we've already touched on in this video"}, {"start": 339.16, "end": 341.40000000000003, "text": " called numpy."}, {"start": 341.40000000000003, "end": 346.62, "text": " You see how they create vectors in code, and these vectors or lists of numbers are called"}, {"start": 346.62, "end": 348.72, "text": " numpy arrays."}, {"start": 348.72, "end": 354.68, "text": " And you also see how to take the dot product of two vectors using a numpy function called"}, {"start": 354.68, "end": 356.76, "text": " dot."}, {"start": 356.76, "end": 362.16, "text": " And you also get to see how 
vectorized code, such as using the dot function, can run much"}, {"start": 362.16, "end": 364.36, "text": " faster than a for loop."}, {"start": 364.36, "end": 369.84000000000003, "text": " In fact, you get to time this code yourself and hopefully see it run much faster."}, {"start": 369.84000000000003, "end": 374.36, "text": " This optional lab introduces a fair amount of new numpy syntax."}, {"start": 374.36, "end": 379.04, "text": " So when you read through the optional lab, please don't feel like you have to understand"}, {"start": 379.04, "end": 380.92, "text": " all the code right away."}, {"start": 380.92, "end": 385.28000000000003, "text": " But you can save this notebook and use it as a reference to look at when you're working"}, {"start": 385.28000000000003, "end": 388.0, "text": " with data stored in numpy arrays."}, {"start": 388.0, "end": 391.56, "text": " So congrats on finishing this video on vectorization."}, {"start": 391.56, "end": 395.48, "text": " You've learned one of the most important and useful techniques in implementing machine"}, {"start": 395.48, "end": 397.12, "text": " learning algorithms."}, {"start": 397.12, "end": 403.7, "text": " In the next video, we'll put the math of multiple linear regression together with vectorization"}, {"start": 403.7, "end": 409.84, "text": " so that you really implement gradient descent for multiple linear regression with vectorization."}, {"start": 409.84, "end": 436.84, "text": " Let's go on to the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=odAhNw-e4o0
2.4 Linear Regression with Multiple Variables|Gradient descent for multiple linear regression ---ML
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
So, you've learned about gradient descent, about multiple linear regression, and also vectorization. Let's put it all together to implement gradient descent for multiple linear regression with vectorization. This would be cool. Let's quickly review what multiple linear regression looks like. Using our previous notation, let's see how you can write it more succinctly using vector notation. We have parameters w1 to wn, as well as b, but instead of thinking of w1 to wn as separate numbers, that is, separate parameters, let's instead collect all of the w's into a vector w, so that now w is a vector of length n. So, we're just going to think of the parameters of this model as a vector w, as well as b, where b is still a number, same as before. Previously, we had defined multiple linear regression like this. Now using vector notation, we can write the model as f sub wb of x equals the vector w dot product with the vector x plus b. And remember that this dot here means dot product. Our cost function can be defined as j of w1 through wn comma b, but instead of just thinking of j as a function of these n different parameters wj as well as b, we're going to write j as a function of parameter vector w and the number b. So this w1 through wn is replaced by this vector w, and j now takes this input, a vector w and a number b, and returns a number. Here's what gradient descent looks like. We're going to repeatedly update each parameter wj to be wj minus alpha times the derivative of the cost j, where j has parameters w1 through wn and b, and once again we just write this as j of vector w and number b. Let's see what this will look like when you implement gradient descent, and in particular, let's take a look at the derivative term. You'll see that gradient descent becomes just a little bit different with multiple features compared to just one feature. Here's what we had when we had gradient descent with one feature. 
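A minimal sketch of the vector-notation model f_wb(x) = w · x + b and the squared-error cost j(w, b), using a tiny hypothetical dataset:

```python
import numpy as np

def predict(x, w, b):
    """Model in vector notation: f_wb(x) = w . x + b."""
    return np.dot(w, x) + b

def cost(X, y, w, b):
    """Cost j(w, b): squared error averaged over the m training examples."""
    m = X.shape[0]
    err = X @ w + b - y          # f_wb(x_i) - y_i for every example at once
    return np.sum(err ** 2) / (2 * m)

# Tiny hypothetical dataset: m = 3 examples, n = 2 features,
# chosen so that w = [1, 2], b = 0 fits the targets exactly.
X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y = np.array([5.0, 11.0, 17.0])
w = np.array([1.0, 2.0])
b = 0.0
```

With these values the predictions match the targets, so the cost is zero; perturbing w or b makes it positive.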
We had an update rule for w and a separate update rule for b. So hopefully these look familiar to you. And this term here is the derivative of the cost function j with respect to the parameter w. And similarly, we have an update rule for parameter b. With univariate regression, we had only one feature. We called that feature xi, without any subscript. Now here's the new notation for when we have n features, where n is two or more. We get this update rule for gradient descent. We update w1 to be w1 minus alpha times this expression here. And this formula is actually the derivative of the cost j with respect to w1. The formula for the derivative of j with respect to w1 on the right looks very similar to the case of one feature on the left. The error term still takes a prediction f of x minus the target y. One difference is that w and x are now vectors. And just as w on the left has now become w1 here on the right, xi here on the left is now instead xi subscript 1 here on the right. And this is just for j equals 1. For multiple linear regression, we have j ranging from 1 through n, and so we'll update the parameters w1, w2, all the way up to wn. And then as before, we'll update b. And if you implement this, you get gradient descent for multiple regression. So that's it for gradient descent for multiple regression. Before moving on from this video, I want to make a quick aside or a quick side note on an alternative way for finding w and b for linear regression. And this method is called the normal equation. Whereas gradient descent is a great method for minimizing the cost function j to find w and b, there is one other algorithm that works only for linear regression, and for pretty much none of the other algorithms you'll see in this specialization, for solving for w and b. And this other method does not need an iterative gradient descent algorithm. 
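A sketch of these update rules on hypothetical synthetic data: the partial derivative for each wj is the average error times the j-th feature, and all n partials can be computed with one matrix product so that every parameter is updated simultaneously.

```python
import numpy as np

def gradient_descent_step(X, y, w, b, alpha):
    """One simultaneous update of all parameters for multiple linear regression.

    dj/dw_j = (1/m) * sum_i (f_wb(x_i) - y_i) * x_i,j
    dj/db   = (1/m) * sum_i (f_wb(x_i) - y_i)
    """
    m = X.shape[0]
    err = X @ w + b - y              # prediction minus target, per example
    dj_dw = (X.T @ err) / m          # all n partial derivatives in one product
    dj_db = np.sum(err) / m
    return w - alpha * dj_dw, b - alpha * dj_db

# Hypothetical data generated from y = 2*x1 + 3*x2 + 1, so the updates
# should drive w toward [2, 3] and b toward 1.
rng = np.random.default_rng(0)
X = rng.random((100, 2))
y = X @ np.array([2.0, 3.0]) + 1.0

w, b = np.zeros(2), 0.0
for _ in range(10_000):
    w, b = gradient_descent_step(X, y, w, b, alpha=0.1)
```

Because the data here is noiseless, repeated updates converge to the generating parameters; with real data they converge to the minimizer of the cost.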
With this method, called the normal equation, it turns out to be possible to use an advanced linear algebra library to just solve for w and b all in one go, without iterations. Some disadvantages of the normal equation method are, first, unlike gradient descent, this does not generalize to other learning algorithms, such as the logistic regression algorithm that you learn about next week, or the neural networks or other algorithms you see later in this specialization. The normal equation method is also quite slow if the number of features n is large. Almost no machine learning practitioner should implement the normal equation method themselves. But if you're using a mature machine learning library and call linear regression, there's a chance that on the backend, it will be using this to solve for w and b. So if you're ever in a job interview and hear the term normal equation, that's what this refers to. Don't worry about the details of how the normal equation works. Just be aware that some machine learning libraries may use this complicated method in the backend to solve for w and b. But for most learning algorithms, including how you implement linear regression yourself, gradient descent is often a better way to get the job done. In the optional lab that follows this video, you see how to define a multiple regression model in code, and also how to calculate the prediction f of x. You also see how to calculate the cost and implement gradient descent for a multiple linear regression model. This will be using Python's NumPy library, so if any of the code looks very new, that's okay. But you should feel free also to take a look at the previous optional lab that introduces NumPy and vectorization for a refresher on NumPy functions and how to implement those in code. So that's it. You now know multiple linear regression. This is probably the single most widely used learning algorithm in the world today. But there's more. 
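The video advises against implementing the normal equation yourself; purely as an illustration of the "solve in one go" idea, a library least-squares routine such as np.linalg.lstsq recovers w and b directly, here on hypothetical noiseless data:

```python
import numpy as np

# Hypothetical data generated from y = 2*x1 + 3*x2 + 1.
rng = np.random.default_rng(0)
X = rng.random((100, 2))
y = X @ np.array([2.0, 3.0]) + 1.0

# Append a column of ones so the intercept b is solved for along with w,
# then let the linear algebra library solve the least-squares problem
# in a single call, with no iterative updates.
Xb = np.c_[X, np.ones(X.shape[0])]
theta, residuals, rank, sv = np.linalg.lstsq(Xb, y, rcond=None)
w, b = theta[:2], theta[2]
```

This is the kind of direct solve a mature library might perform on the backend when you call its linear regression routine.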
With just a few tricks such as picking and scaling features appropriately, and also choosing the learning rate alpha appropriately, you will be able to make this work much better. So just a few more videos to go for this week. Let's go on to the next video to see those little tricks that will help you make multiple linear regression work much better.
[{"start": 0.0, "end": 7.2, "text": " So, you've learned about gradient descent, about multiple linear regression, and also"}, {"start": 7.2, "end": 8.2, "text": " vectorization."}, {"start": 8.2, "end": 13.16, "text": " Let's put it all together to implement gradient descent for multiple linear regression with"}, {"start": 13.16, "end": 14.44, "text": " vectorization."}, {"start": 14.44, "end": 15.44, "text": " This would be cool."}, {"start": 15.44, "end": 19.52, "text": " Let's quickly review what multiple linear regression looks like."}, {"start": 19.52, "end": 24.04, "text": " Using our previous notation, let's see how you can write it more succinctly using vector"}, {"start": 24.04, "end": 25.18, "text": " notation."}, {"start": 25.18, "end": 33.24, "text": " We have parameters w1 to wn, as well as b, but instead of thinking of w1 to wn as separate"}, {"start": 33.24, "end": 39.04, "text": " numbers, that is, separate parameters, let's instead collect all of the w's into a vector"}, {"start": 39.04, "end": 43.760000000000005, "text": " w, so that now w is a vector of length n."}, {"start": 43.760000000000005, "end": 51.24, "text": " So, we're just going to think of the parameters of this model as a vector w, as well as b,"}, {"start": 51.24, "end": 54.56, "text": " where b is still a number, same as before."}, {"start": 54.56, "end": 59.080000000000005, "text": " For us before, we had defined multiple linear regression like this."}, {"start": 59.080000000000005, "end": 67.44, "text": " Now using vector notation, we can write the model as f sub wb of x equals the vector w"}, {"start": 67.44, "end": 70.92, "text": " dot product with the vector x plus b."}, {"start": 70.92, "end": 75.4, "text": " And remember that this dot here means dot product."}, {"start": 75.4, "end": 82.74000000000001, "text": " Our cost function can be defined as j of w1 through wn comma b, but instead of just thinking"}, {"start": 82.74, "end": 90.91999999999999, "text": " of j as a 
function of these n different parameters wj as well as b, we're going to write j as"}, {"start": 90.91999999999999, "end": 96.64, "text": " a function of parameter vector w and the number b."}, {"start": 96.64, "end": 105.91999999999999, "text": " So this w1 through wn is replaced by this vector w, and j now takes this input, a vector"}, {"start": 105.91999999999999, "end": 110.52, "text": " w and a number b, and returns a number."}, {"start": 110.52, "end": 112.72, "text": " Here's what gradient descent looks like."}, {"start": 112.72, "end": 119.67999999999999, "text": " We're going to repeatedly update each parameter wj to be wj minus alpha times the derivative"}, {"start": 119.67999999999999, "end": 128.32, "text": " of the cost j, where j has parameters w1 through wn and b, and once again we just write this"}, {"start": 128.32, "end": 134.0, "text": " as j of vector w and number b."}, {"start": 134.0, "end": 138.44, "text": " Let's see what this will look like when you implement gradient descent, and in particular,"}, {"start": 138.44, "end": 141.72, "text": " let's take a look at the derivative term."}, {"start": 141.72, "end": 146.64, "text": " You'll see that gradient descent becomes just a little bit different with multiple features"}, {"start": 146.64, "end": 149.44, "text": " compared to just one feature."}, {"start": 149.44, "end": 153.72, "text": " Here's what we had when we had gradient descent with one feature."}, {"start": 153.72, "end": 158.72, "text": " We had an update rule for w and a separate update rule for b."}, {"start": 158.72, "end": 162.24, "text": " So hopefully these look familiar to you."}, {"start": 162.24, "end": 167.88, "text": " And this term here is the derivative of the cost function j with respect to the parameter"}, {"start": 167.88, "end": 168.88, "text": " w."}, {"start": 168.88, "end": 175.79999999999998, "text": " And similarly, we have an update rule for parameter b with univariate regression, we"}, {"start": 
175.79999999999998, "end": 178.24, "text": " had only one feature."}, {"start": 178.24, "end": 182.84, "text": " We call that feature xi without any subscript."}, {"start": 182.84, "end": 189.32, "text": " Now here's the new notation for when we have n features where n is two or more."}, {"start": 189.32, "end": 192.88, "text": " We get this update rule for gradient descent."}, {"start": 192.88, "end": 199.48, "text": " We have a w1 to be w1 minus alpha times this expression here."}, {"start": 199.48, "end": 207.35999999999999, "text": " And this formula is actually the derivative of the cost j with respect to w1."}, {"start": 207.35999999999999, "end": 213.12, "text": " The formula for the derivative of j with respect to w1 on the right looks very similar to the"}, {"start": 213.12, "end": 216.35999999999999, "text": " case of one feature on the left."}, {"start": 216.35999999999999, "end": 222.04, "text": " The error term still takes a prediction f of x minus the target y."}, {"start": 222.04, "end": 226.88, "text": " One difference is that w and x are now vectors."}, {"start": 226.88, "end": 234.68, "text": " And just as w on the left has now become w1 here on the right, xi here on the left is"}, {"start": 234.68, "end": 239.56, "text": " now instead xi subscript 1 here on the right."}, {"start": 239.56, "end": 245.07999999999998, "text": " And this is just for j equals 1."}, {"start": 245.08, "end": 252.52, "text": " For multiple linear regression, we have j ranging from 1 through n, and so we'll update"}, {"start": 252.52, "end": 260.40000000000003, "text": " the parameters w1, w2 all the way up to wn."}, {"start": 260.40000000000003, "end": 264.36, "text": " And then as before, we'll update b."}, {"start": 264.36, "end": 269.68, "text": " And if you implement this, you get gradient descent for multiple regression."}, {"start": 269.68, "end": 274.66, "text": " So that's it for gradient descent for multiple regression."}, {"start": 274.66, "end": 
281.28000000000003, "text": " Before moving on from this video, I want to make a quick aside or a quick side note on"}, {"start": 281.28000000000003, "end": 287.36, "text": " an alternative way for finding w and b for linear regression."}, {"start": 287.36, "end": 291.44000000000005, "text": " And this method is called the normal equation."}, {"start": 291.44000000000005, "end": 296.20000000000005, "text": " Whereas it turns out gradient descent is a great method for minimizing the cost function"}, {"start": 296.20000000000005, "end": 299.32000000000005, "text": " j to find w and b."}, {"start": 299.32000000000005, "end": 304.38, "text": " There is one other algorithm that works only for linear regression and pretty much none"}, {"start": 304.38, "end": 310.08, "text": " of the other algorithms you see in this specialization for solving for w and b."}, {"start": 310.08, "end": 316.0, "text": " And this other method does not need an iterative gradient descent algorithm."}, {"start": 316.0, "end": 320.84, "text": " Called the normal equation method, it turns out to be possible to use an advanced linear"}, {"start": 320.84, "end": 327.32, "text": " algebra library to just solve for w and b all in one go without iterations."}, {"start": 327.32, "end": 332.68, "text": " Some disadvantages of the normal equation method are, first, unlike gradient descent,"}, {"start": 332.68, "end": 337.28000000000003, "text": " this does not generalize to other learning algorithms, such as the logistic regression"}, {"start": 337.28000000000003, "end": 341.96, "text": " algorithm that you learn about next week, or the neural networks or other algorithms"}, {"start": 341.96, "end": 344.56, "text": " you see later in this specialization."}, {"start": 344.56, "end": 351.0, "text": " The normal equation method is also quite slow if the number of features n is large."}, {"start": 351.0, "end": 356.84000000000003, "text": " Almost no machine learning practitioners should implement the normal 
equation method themselves."}, {"start": 356.84, "end": 363.44, "text": " But if you're using a mature machine learning library and call linear regression, there's"}, {"start": 363.44, "end": 368.64, "text": " a chance that on the backend, it will be using this to solve for w and b."}, {"start": 368.64, "end": 373.17999999999995, "text": " So if you're ever in a job interview and hear the term normal equation, that's what this"}, {"start": 373.17999999999995, "end": 375.35999999999996, "text": " refers to."}, {"start": 375.35999999999996, "end": 378.67999999999995, "text": " Don't worry about the details of how the normal equation works."}, {"start": 378.67999999999995, "end": 384.91999999999996, "text": " Just be aware that some machine learning libraries may use this complicated method in the backend"}, {"start": 384.92, "end": 386.76, "text": " to solve for w and b."}, {"start": 386.76, "end": 392.40000000000003, "text": " But for most learning algorithms, including how you implement linear regression yourself,"}, {"start": 392.40000000000003, "end": 396.36, "text": " gradient descent is often a better way to get the job done."}, {"start": 396.36, "end": 401.6, "text": " In the optional lab that follows this video, you see how to define a multiple regression"}, {"start": 401.6, "end": 408.20000000000005, "text": " model in code, and also how to calculate the prediction f of x."}, {"start": 408.20000000000005, "end": 414.04, "text": " You also see how to calculate the cost and implement gradient descent for multiple linear"}, {"start": 414.04, "end": 415.68, "text": " regression model."}, {"start": 415.68, "end": 421.40000000000003, "text": " This will be using Python's NumPy library, so if any of the code looks very new, that's"}, {"start": 421.40000000000003, "end": 422.56, "text": " okay."}, {"start": 422.56, "end": 428.36, "text": " But you should feel free also to take a look at the previous optional lab that introduces"}, {"start": 428.36, "end": 434.24, 
"text": " NumPy and vectorization for a refresher of NumPy functions and how to implement those"}, {"start": 434.24, "end": 436.0, "text": " in code."}, {"start": 436.0, "end": 437.16, "text": " So that's it."}, {"start": 437.16, "end": 439.82000000000005, "text": " You now know multiple linear regression."}, {"start": 439.82, "end": 444.71999999999997, "text": " This is probably the single most widely used learning algorithm in the world today."}, {"start": 444.71999999999997, "end": 446.08, "text": " But there's more."}, {"start": 446.08, "end": 450.92, "text": " With just a few tricks such as picking and scaling features appropriately, and also choosing"}, {"start": 450.92, "end": 455.88, "text": " the learning rate alpha appropriately, you will be able to make this work much better."}, {"start": 455.88, "end": 458.44, "text": " So just a few more videos to go for this week."}, {"start": 458.44, "end": 462.92, "text": " Let's go on to the next video to see those little tricks that will help you make multiple"}, {"start": 462.92, "end": 469.92, "text": " linear regression work much better."}]
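The normal-equation aside in the segment data above can be sketched in a few lines: append a column of ones to the feature matrix so that b is solved for together with w, then let a linear algebra routine find all parameters in one shot, with no gradient-descent iterations. This is an illustrative sketch, not the course's lab code; the house data is made up and constructed so that the exact solution is w = (0.1, 50), b = 50.

```python
import numpy as np

# Made-up houses: size in sq ft, bedrooms; prices generated exactly from
# w = (0.1, 50), b = 50, so the solver should recover those parameters.
X = np.array([[2000.0, 5.0], [1416.0, 3.0], [1534.0, 3.0], [852.0, 2.0]])
y = 0.1 * X[:, 0] + 50 * X[:, 1] + 50

# Normal equation via least squares: augment X with a ones column for b,
# then solve for all parameters in one go -- no iterations needed.
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
theta, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
w, b = theta[:-1], theta[-1]
```

As the lecture notes, you would rarely code this yourself: a mature library's linear-regression routine may do something like this on the backend, while gradient descent remains the method that generalizes to the other algorithms in this specialization.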
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=AbtWSXHPfS0
2.5 Practical Tips for Linear Regression | Feature scaling part 1-- [Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
So, welcome back. Let's take a look at some techniques that will make gradient descent work much better. In this video, you see a technique called feature scaling that will enable gradient descent to run much faster. Let's start by taking a look at the relationship between the size of a feature, that is, how big the numbers for that feature are, and the size of its associated parameter. As a concrete example, let's predict the price of a house using two features, x1, the size of the house, and x2, the number of bedrooms. Let's say that x1 typically ranges from 300 to 2000 square feet, and x2, in the dataset, ranges from 0 to 5 bedrooms. So for this example, x1 takes on a relatively large range of values, and x2 takes on a relatively small range of values. Now let's take an example of a house that has a size of 2000 square feet, has 5 bedrooms, and a price of 500k or $500,000. For this one training example, what do you think are reasonable values for the size of parameters w1 and w2? Well, let's look at one possible set of parameters. Say w1 is 50, and w2 is 0.1, and b is 50 for the purposes of discussion. So in this case, the estimated price in thousands of dollars is 100,000k here, plus 0.5k, plus 50k, which is slightly over $100 million. So that's clearly very far from the actual price of $500,000, and so this is not a very good set of parameter choices for w1 and w2. Now let's take a look at another possibility. Say w1 and w2 were the other way around. w1 is 0.1, and w2 is 50, and b is still 50. In this choice of w1 and w2, w1 is relatively small and w2 is relatively large. 50 is much bigger than 0.1. So here the predicted price is 0.1 times 2000 plus 50 times 5 plus 50. The first term becomes 200k, the second term becomes 250k, and then plus 50. So this version of the model predicts a price of $500,000, which is a much more reasonable estimate and happens to be the same as the true price of the house.
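The two parameter choices above can be checked with a few lines of arithmetic. A quick sketch, not course code; the `predict` helper and the house tuple are made up for illustration, with prices in thousands of dollars as in the lecture.

```python
# Price model in thousands of dollars: f(x) = w1*x1 + w2*x2 + b,
# where x1 is size in sq ft and x2 is the number of bedrooms.
def predict(w1, w2, b, x1, x2):
    return w1 * x1 + w2 * x2 + b

house = (2000, 5)  # 2000 sq ft, 5 bedrooms; true price is 500 (i.e. $500k)

bad = predict(50, 0.1, 50, *house)   # 100,000 + 0.5 + 50 = 100,050.5 (~$100M)
good = predict(0.1, 50, 50, *house)  # 200 + 250 + 50 = 500 (= $500k)
```

The swapped choice matches the true price exactly, illustrating why the large-range feature (size) pairs with the small parameter and vice versa.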
So hopefully you might notice that when a possible range of values of a feature is large, like the size in square feet, which goes all the way up to 2000, it's more likely that a good model will learn to choose a relatively small parameter value like 0.1. Likewise, when the possible values of a feature are small, like the number of bedrooms, then a reasonable value for its parameters will be relatively large, like 50. So how does this relate to gradient descent? Well, let's take a look at a scatter plot of the features, where the size in square feet is the horizontal axis, x1, and the number of bedrooms, x2, is on the vertical axis. If you plot the training data, you notice that the horizontal axis is on a much larger scale or much larger range of values compared to the vertical axis. Next, let's look at how the cost function might look in a contour plot. You might see a contour plot where the horizontal axis has a much narrower range, say between 0 and 1, whereas the vertical axis takes on much larger values, say between 10 and 100. So the contours form ovals or ellipses, and they're shorter on one side and longer on the other. And this is because a very small change to w1 can have a very large impact on the estimated price and thus a very large impact on the cost j, because w1 tends to be multiplied by a very large number, the size in square feet. In contrast, it takes a much larger change in w2 in order to change the predictions much, and thus small changes to w2 don't change the cost function nearly as much. So where does this leave us? This is what might end up happening if you were to run gradient descent, if you were to use your training data as is. Because the contours are so tall and skinny, gradient descent may end up bouncing back and forth for a long time before it can finally find its way to the global minimum. In situations like this, a useful thing to do is to scale the features. 
This means performing some transformation of your training data so that x1, say, might now range from 0 to 1, and x2 might also range from 0 to 1. So the data points now look more like this, and you might notice that the scale of the plot on the bottom is now quite different than the one on top. The key point is that the rescaled x1 and x2 both now take on comparable ranges of values to each other. And if you run gradient descent on a cost function defined on this rescaled x1 and x2 using this transformed data, then the contours will look more like this, more like circles and less tall and skinny, and gradient descent can find a much more direct path to the global minimum. So to recap, when you have different features that take on very different ranges of values, it can cause gradient descent to run slowly, but rescaling the different features so they all take on a comparable range of values can speed up gradient descent significantly. How do you actually do this? Let's take a look at that in the next video.
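The speed-up from rescaling can be seen in a small experiment. This is a sketch with made-up house data and a plain batch-gradient-descent loop, not the course's lab code; the raw run uses a learning rate near the largest value that does not diverge for those feature magnitudes, while the scaled run can use a far larger one.

```python
import numpy as np

# Made-up training data: size in sq ft and bedrooms; price in $1000s.
X_raw = np.array([[2000.0, 5.0], [1200.0, 3.0], [800.0, 2.0], [1500.0, 4.0]])
y = np.array([500.0, 320.0, 230.0, 400.0])

def run_gd(X, y, alpha, iters):
    """Batch gradient descent for linear regression; returns the final cost J."""
    m, n = X.shape
    w, b = np.zeros(n), 0.0
    for _ in range(iters):
        err = X @ w + b - y           # f(x) - y for every example
        w -= alpha * (X.T @ err) / m  # simultaneous parameter update
        b -= alpha * err.mean()
    return ((X @ w + b - y) ** 2).mean() / 2

# Raw features: sizes near 2000 force a tiny learning rate to avoid
# divergence, so progress on b and the bedrooms parameter is very slow.
cost_raw = run_gd(X_raw, y, alpha=9e-7, iters=1000)

# Max-scaled features all lie in (0, 1], so a much larger learning rate is
# stable and the same iteration budget gets much closer to the minimum.
X_scaled = X_raw / X_raw.max(axis=0)
cost_scaled = run_gd(X_scaled, y, alpha=0.5, iters=1000)
```

With these (assumed) numbers, the scaled run ends with a much lower cost than the raw run for the same number of iterations, which is exactly the tall-and-skinny-contours effect described above.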
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=g9bkFTnM-7k
2.6 Practical Tips for Linear Regression | Feature scaling part 2-- [Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
Let's look at how you can implement feature scaling, to take features that take on very different ranges of values and scale them to have comparable ranges of values to each other. So how do you actually scale features? Well, if X1 ranges from 300 to 2000, one way to get a scaled version of X1 is to take each original X1 value and divide by 2000, the maximum of the range. So the scaled X1 will range from 0.15 up to 1. Similarly, since X2 ranges from 0 to 5, you can calculate a scaled version of X2 by taking each original X2 and dividing by 5, which is again the maximum. So the scaled X2 will now range from 0 to 1. So if you plot the scaled X1 and X2 on a graph, it might look like this. In addition to dividing by the maximum, you can also do what's called mean normalization. What this looks like is, you start with the original features and then you rescale them so that both of them are centered around 0. So whereas before they only had values greater than 0, now they have both negative and positive values, usually between negative 1 and plus 1. To calculate the mean normalization of X1, first find the average, also called the mean, of X1 on your training set, and let's call this mean mu1, mu being the Greek letter. For example, you may find that the average of feature 1, mu1, is 600 square feet. So take each X1, subtract the mean mu1, and then divide by the difference 2000 minus 300, where 2000 is the maximum and 300 the minimum. If you do this, you get the mean normalized X1 to range from negative 0.18 to 0.82. Similarly, to mean normalize X2, you can calculate the average of feature 2, and for instance mu2 may be 2.3. Then you can take each X2, subtract mu2, and divide by 5 minus 0, again the max minus the min, which is 0. The mean normalized X2 now ranges from negative 0.46 to 0.54, so if you plot the training data using the mean normalized X1 and X2, it might look like this.
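Both rescaling recipes above are one line each in NumPy. A minimal sketch, with made-up feature values that match the ranges in the lecture:

```python
import numpy as np

# Made-up raw features: x1 is size in sq ft (300..2000), x2 is bedrooms (0..5).
x1 = np.array([2000.0, 1200.0, 800.0, 300.0])
x2 = np.array([5.0, 3.0, 2.0, 0.0])

# Scaling by the maximum: every value ends up between 0 and 1.
x1_max_scaled = x1 / x1.max()
x2_max_scaled = x2 / x2.max()

# Mean normalization: (x - mu) / (max - min), so features center around 0
# and take on both negative and positive values.
x1_mean_norm = (x1 - x1.mean()) / (x1.max() - x1.min())
x2_mean_norm = (x2 - x2.mean()) / (x2.max() - x2.min())
```

After the transform, both features occupy comparable ranges, which is all gradient descent needs to take a more direct path to the minimum.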
There's one last common rescaling method called z-score normalization. To implement z-score normalization, you need to calculate something called the standard deviation of each feature. If you don't know what a standard deviation is, don't worry about it, you won't need to know it for this class. If you've heard of the normal distribution or the bell-shaped curve, sometimes also called the Gaussian distribution, this is what the standard deviation for the normal distribution looks like. But if you haven't heard of this, you don't need to worry about that either. If you do know what the standard deviation is, then to implement z-score normalization, you first calculate the mean mu, as well as the standard deviation, often denoted by the lowercase Greek letter sigma, of each feature. So for instance, maybe feature 1 has a standard deviation of 450 and mean 600. Then to z-score normalize x1, take each x1, subtract mu 1, and then divide by the standard deviation, which I'm going to denote as sigma 1. What you might find is that the z-score normalized x1 now ranges from negative 0.67 to 3.1. Similarly, if you calculate the second feature's standard deviation to be 1.4 and its mean to be 2.3, then you can compute x2 minus mu 2 divided by sigma 2. In this case, the z-score normalized x2 might now range from negative 1.6 to 1.9. So if you plot the training data using the normalized x1 and x2 on a graph, it might look like this. As a rule of thumb, when performing feature scaling, you might want to aim for getting the features to range from maybe anywhere around negative 1 to somewhere around plus 1 for each feature x. But these values, negative 1 and plus 1, can be a little bit loose. So if the features range from negative 3 to plus 3, or negative 0.3 to plus 0.3, all of these are completely okay. So if you have a feature x1 that winds up being between 0 and 3, that's not a problem.
You can rescale it if you want, but if you don't rescale it, it should work okay too. Or if you have a different feature, x2, whose values are between negative 2 and plus 0.5, again, that's okay. There's no harm in rescaling it, but it might be okay to leave it alone as well. But if another feature like x3 here ranges from negative 100 to plus 100, then it takes on a very different range of values than something from around negative 1 to plus 1, so you're probably better off rescaling this feature x3 so that it ranges from something closer to negative 1 to plus 1. Similarly, if you have a feature x4 that takes on really small values, say between negative 0.001 and plus 0.001, then these values are so small that you may want to rescale it as well. Finally, what if your feature x5, such as measurements of a hospital patient's body temperature, ranges from 98.6 to 105 degrees Fahrenheit? In this case, these values are around 100, which is actually pretty large compared to other scaled features, and this will actually cause gradient descent to run more slowly. So in this case, feature rescaling will likely help. There's almost never any harm in carrying out feature rescaling, so when in doubt, I encourage you to just carry it out. And that's it for feature scaling. With this little technique, you'll often be able to get gradient descent to run much faster. With or without feature scaling, when you run gradient descent, how can you check if gradient descent is really working, if it is finding you the global minimum or something close to it? In the next video, let's take a look at how to recognize if gradient descent is converging. And then in the video after that, this will lead to a discussion of how to choose a good learning rate for gradient descent.
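The z-score recipe described above, subtract the per-feature mean and divide by the per-feature standard deviation, is a short NumPy sketch. The feature values here are made up; the useful property is that every transformed feature has mean 0 and standard deviation 1 by construction.

```python
import numpy as np

# Made-up features: size in sq ft and number of bedrooms, one row per house.
X = np.array([[2000.0, 5.0], [1200.0, 3.0], [800.0, 2.0], [300.0, 0.0]])

mu = X.mean(axis=0)      # per-feature means (mu_1, mu_2)
sigma = X.std(axis=0)    # per-feature standard deviations (sigma_1, sigma_2)
X_z = (X - mu) / sigma   # z-score normalized features
```

After this, every column of `X_z` sits comfortably inside the rough negative 1 to plus 1 rule of thumb from the lecture, with only values a few standard deviations from the mean landing outside it.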
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=9w3HRwTRVUE
2.7 Practical Tips for Linear Regression | Checking gradient descent for convergence -- ML Andrew Ng
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
When running gradient descent, how can you tell if it is converging? That is, whether it's helping you to find parameters close to the global minimum of the cost function. By learning to recognize what a well-running implementation of gradient descent looks like, we will also, in a later video, be better able to choose a good learning rate alpha. Let's take a look. As a reminder, here's the gradient descent rule, and one of the key choices is the choice of the learning rate alpha. Here's something that I often do to make sure that gradient descent is working well. Recall that the job of gradient descent is to find parameters w and b that hopefully minimize the cost function j. So what I'll often do is plot the cost function j, which is calculated on the training set, and I'll plot the value of j at each iteration of gradient descent. Remember that each iteration means after each simultaneous update of the parameters w and b, so in this plot, the horizontal axis is the number of iterations of gradient descent that you've run so far. And so you may get a curve that looks like this. Notice that the horizontal axis is the number of iterations of gradient descent, and not a parameter like w or b. This differs from previous graphs you've seen, where the vertical axis was cost j, and the horizontal axis was a single parameter like w or b. This curve is also called a learning curve. Note that there are a few different types of learning curves used in machine learning, and you'll see some of the types later in this course as well. Secondly, if you look at this point on the curve, this means that after you've run gradient descent for 100 iterations, meaning 100 simultaneous updates of the parameters, you have some learned values for w and b. And if you compute the cost j(w, b) for those values of w and b, the ones you got after 100 iterations, you get this value for the cost j, that is, this point on the vertical axis. 
And this point here corresponds to the value of j for the parameters that you got after 200 iterations of gradient descent. So looking at this graph helps you to see how your cost j changes after each iteration of gradient descent. If gradient descent is working properly, then the cost j should decrease after every single iteration. If j ever increases after one iteration, that means either alpha is chosen poorly, which usually means alpha is too large, or there could be a bug in the code. Another useful thing that this plot can tell you is that if you look at this curve, by the time you reach maybe 300 iterations or so, the cost j is leveling off and is no longer decreasing much. And by 400 iterations, it looks like the curve has flattened out. So this means that gradient descent has more or less converged because the curve is no longer decreasing. So looking at this learning curve, you can try to spot whether or not gradient descent is converging. By the way, the number of iterations that gradient descent takes to converge can vary a lot between different applications. In one application, it may converge after just 30 iterations. For a different application, it could take a thousand or a hundred thousand iterations. It turns out to be very difficult to tell in advance how many iterations gradient descent needs to converge, which is why you can create a graph like this, a learning curve, to try to find out when you can stop training your particular model. Another way to decide when your model is done training is with an automatic convergence test. So let's let epsilon, the Greek letter epsilon, be a variable representing a small number, such as 0.001 or 10 to the power of negative three. If the cost J decreases by less than this number epsilon on one iteration, then you're likely on this flattened part of the curve that you see on the left and you can declare convergence. 
Remember, convergence hopefully indicates that you found parameters W and B that are close to the minimum possible value of J. I usually find that choosing the right threshold epsilon is pretty difficult. So I actually tend to look at graphs like this one on the left, rather than rely on automatic convergence tests. Looking at a figure like this can also give you some advance warning if gradient descent is not working correctly. So you've now seen what the learning curve should look like when gradient descent is running well. Let's take these insights and in the next video, take a look at how to choose an appropriate learning rate.
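The learning curve and the automatic convergence test described above can be sketched in code. The tiny dataset, the learning rate of 0.1, and the epsilon threshold of 0.001 below are all assumptions for illustration:

```python
import numpy as np

# Tiny illustrative dataset for a one-feature linear model f(x) = w*x + b.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

def cost(w, b):
    """Squared-error cost J(w, b) over the training set."""
    return np.mean((w * x + b - y) ** 2) / 2.0

def gradient(w, b):
    """Partial derivatives of J with respect to w and b."""
    err = w * x + b - y
    return np.mean(err * x), np.mean(err)

alpha, epsilon = 0.1, 0.001   # learning rate and convergence threshold
w, b = 0.0, 0.0
J_history = [cost(w, b)]      # the learning curve: cost J per iteration

for it in range(1, 1001):
    dj_dw, dj_db = gradient(w, b)
    w, b = w - alpha * dj_dw, b - alpha * dj_db   # simultaneous update
    J_history.append(cost(w, b))
    # Automatic convergence test: stop once J decreases by less than epsilon.
    if J_history[-2] - J_history[-1] < epsilon:
        break

print(f"converged after {it} iterations, final cost {J_history[-1]:.4f}")
```

Plotting `J_history` against the iteration number gives exactly the learning curve described above: a curve that falls steeply at first and then flattens out as gradient descent converges.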
[{"start": 0.0, "end": 5.96, "text": " When running gradient descent, how can you tell if it is converging?"}, {"start": 5.96, "end": 10.32, "text": " That is, whether it's helping you to find parameters close to the global minimum of"}, {"start": 10.32, "end": 12.16, "text": " the cost function."}, {"start": 12.16, "end": 16.8, "text": " By learning to recognize what a well-running implementation of gradient descent looks like,"}, {"start": 16.8, "end": 22.72, "text": " we will also, in a later video, be better able to choose a good learning rate alpha."}, {"start": 22.72, "end": 23.98, "text": " Let's take a look."}, {"start": 23.98, "end": 29.68, "text": " As a reminder, here's the gradient descent rule, and one of the key choices is the choice"}, {"start": 29.68, "end": 33.12, "text": " of the learning rate alpha."}, {"start": 33.12, "end": 38.32, "text": " Here's something that I often do to make sure that gradient descent is working well."}, {"start": 38.32, "end": 43.28, "text": " Recall that the job of gradient descent is to find parameters w and b that hopefully"}, {"start": 43.28, "end": 46.16, "text": " minimize the cost function j."}, {"start": 46.16, "end": 53.519999999999996, "text": " So what I'll often do is plot the cost function j, which is calculated on the training set,"}, {"start": 53.519999999999996, "end": 59.480000000000004, "text": " and I'll plot the value of j at each iteration of gradient descent."}, {"start": 59.48, "end": 66.36, "text": " Remember that each iteration means after each simultaneous update of the parameters w and"}, {"start": 66.36, "end": 74.62, "text": " b, so in this plot, the horizontal axis is the number of iterations of gradient descent"}, {"start": 74.62, "end": 77.16, "text": " that you've run so far."}, {"start": 77.16, "end": 80.92, "text": " And so you may get a curve that looks like this."}, {"start": 80.92, "end": 86.44, "text": " Notice that the horizontal axis is the number of iterations of gradient 
descent, and not"}, {"start": 86.44, "end": 90.16, "text": " a parameter like w or b."}, {"start": 90.16, "end": 96.8, "text": " This differs from previous graphs you've seen, where the vertical axis was cost j, and the"}, {"start": 96.8, "end": 103.03999999999999, "text": " horizontal axis was a single parameter like w or b."}, {"start": 103.03999999999999, "end": 106.96, "text": " This curve is also called a learning curve."}, {"start": 106.96, "end": 112.03999999999999, "text": " Note that there are a few different types of learning curves used in machine learning,"}, {"start": 112.03999999999999, "end": 116.36, "text": " and you'll see some of the types later in this course as well."}, {"start": 116.36, "end": 122.68, "text": " Secondly, if you look at this point on the curve, this means that after you've run gradient"}, {"start": 122.68, "end": 130.0, "text": " descent for 100 iterations, meaning 100 simultaneous updates of the parameters, you have some learned"}, {"start": 130.0, "end": 133.07999999999998, "text": " values for w and b."}, {"start": 133.07999999999998, "end": 140.8, "text": " And if you compute the cost, jwb, for those values of w and b, the ones you got after"}, {"start": 140.8, "end": 148.4, "text": " 100 iterations, you get this value for the cost j, that is, this point on the vertical"}, {"start": 148.4, "end": 150.04000000000002, "text": " axis."}, {"start": 150.04000000000002, "end": 156.56, "text": " And this point here corresponds to the value of j for the parameters that you got after"}, {"start": 156.56, "end": 160.10000000000002, "text": " 200 iterations of gradient descent."}, {"start": 160.10000000000002, "end": 166.26000000000002, "text": " So looking at this graph helps you to see how your cost j changes after each iteration"}, {"start": 166.26000000000002, "end": 168.18, "text": " of gradient descent."}, {"start": 168.18, "end": 173.48000000000002, "text": " If gradient descent is working properly, then the cost j should 
decrease after every single"}, {"start": 173.48000000000002, "end": 175.20000000000002, "text": " iteration."}, {"start": 175.20000000000002, "end": 182.84, "text": " If j ever increases after one iteration, that means either alpha is chosen poorly, and it"}, {"start": 182.84, "end": 188.28, "text": " usually means alpha is too large, or there could be a bug in the code."}, {"start": 188.28, "end": 193.44, "text": " Another useful thing that this plot can tell you is that if you look at this curve, by"}, {"start": 193.44, "end": 201.28, "text": " the time you reach maybe 300 iterations or so, the cost j is leveling off and is no longer"}, {"start": 201.28, "end": 203.16, "text": " decreasing much."}, {"start": 203.16, "end": 208.07999999999998, "text": " And by 400 iterations, it looks like the curve has flattened out."}, {"start": 208.07999999999998, "end": 214.44, "text": " So this means that gradient descent has more or less converged because the curve is no"}, {"start": 214.44, "end": 217.12, "text": " longer decreasing."}, {"start": 217.12, "end": 222.48, "text": " So looking at this learning curve, you can try to spot whether or not gradient descent"}, {"start": 222.48, "end": 224.6, "text": " is converging."}, {"start": 224.6, "end": 229.92, "text": " By the way, the number of iterations that gradient descent takes to converge can vary"}, {"start": 229.92, "end": 232.6, "text": " a lot between different applications."}, {"start": 232.6, "end": 237.0, "text": " In one application, it may converge after just 30 iterations."}, {"start": 237.0, "end": 242.88, "text": " For a different application, it could take a thousand or a hundred thousand iterations."}, {"start": 242.88, "end": 249.23999999999998, "text": " It turns out to be very difficult to tell in advance how many iterations gradient descent"}, {"start": 249.24, "end": 255.8, "text": " needs to converge, which is why you can create a graph like this, a learning curve, to try"}, {"start": 255.8, "end": 
261.12, "text": " to find out when you can stop training your particular model."}, {"start": 261.12, "end": 266.68, "text": " Another way to decide when your model is done training is with an automatic convergence"}, {"start": 266.68, "end": 268.24, "text": " test."}, {"start": 268.24, "end": 276.16, "text": " So let's let epsilon, this here is the Greek alphabet epsilon, let's let epsilon be a variable"}, {"start": 276.16, "end": 283.92, "text": " representing a small number, such as 0.001 or 10 to the power of negative three."}, {"start": 283.92, "end": 289.32000000000005, "text": " If the cost J decreases by less than this number epsilon on one iteration, then you're"}, {"start": 289.32000000000005, "end": 294.76000000000005, "text": " likely on this flattened part of the curve that you see on the left and you can declare"}, {"start": 294.76000000000005, "end": 296.16, "text": " convergence."}, {"start": 296.16, "end": 301.8, "text": " Remember, convergence hopefully indicates that you found parameters W and B that are"}, {"start": 301.8, "end": 306.04, "text": " close to the minimum possible value of J."}, {"start": 306.04, "end": 310.44, "text": " I usually find that choosing the right threshold epsilon is pretty difficult."}, {"start": 310.44, "end": 314.68, "text": " So I actually tend to look at graphs like this one on the left, rather than rely on"}, {"start": 314.68, "end": 317.52000000000004, "text": " automatic convergence tests."}, {"start": 317.52000000000004, "end": 323.08000000000004, "text": " Looking at this other figure can tell you, I'll give you some advanced warning if maybe"}, {"start": 323.08000000000004, "end": 327.20000000000005, "text": " gradient descent is not working correctly as well."}, {"start": 327.20000000000005, "end": 331.58000000000004, "text": " So you've now seen what the learning curve should look like when gradient descent is"}, {"start": 331.58000000000004, "end": 332.96000000000004, "text": " running well."}, {"start": 
332.96, "end": 338.03999999999996, "text": " Let's take these insights and in the next video, take a look at how to choose an appropriate"}, {"start": 338.04, "end": 363.92, "text": " learning rate."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=j4uDxRNjYlA
2.8 Practical Tips for Linear Regression | | Choosing the learning rate-[MachineLearning|Andrew Ng]
Your learning algorithm will run much better with an appropriate choice of learning rate. If it's too small, it will run very slowly, and if it's too large, it may not even converge. Let's take a look at how you can choose a good learning rate for your model. Concretely, if you plot the cost for a number of iterations and notice that the cost sometimes goes up and sometimes goes down, you should take that as a clear sign that gradient descent is not working properly. This could mean that there's a bug in the code, or sometimes it could mean that your learning rate is too large. So here's an illustration of what might be happening. Here the vertical axis is the cost function J, and the horizontal axis represents a parameter like maybe W1, and if the learning rate is too big, then if you start off here, your update step may overshoot the minimum and end up here, and in the next update step here, you're again overshooting, so you end up here, and so on. And that's why the cost can sometimes go up instead of decreasing. To fix this, you can use a smaller learning rate. So then your updates may start here and go down a little bit and down a bit, and will hopefully consistently decrease until it reaches the global minimum. Sometimes you may see that the cost consistently increases after each iteration, like this curve here. This is also likely due to a learning rate that is too large, and it could be addressed by choosing a smaller learning rate. But a cost curve like this could also be a sign of a possible bug in the code. For example, if I wrote my code so that W1 gets updated as W1 plus alpha times this derivative term, this could result in the cost consistently increasing at each iteration. And this is because adding the derivative term moves your cost j further from the global minimum instead of closer to it. So remember, you want to use a minus sign: the code should update W1 as W1 minus alpha times the derivative term. 
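The sign mistake described above is easy to reproduce on a small made-up example. Here is a sketch with a one-parameter cost J(w) = (w - 3)^2, not the course's cost function, just an illustration of why the minus sign matters:

```python
# Illustrative one-parameter cost J(w) = (w - 3)^2 with derivative 2*(w - 3).
def cost(w):
    return (w - 3.0) ** 2

def derivative(w):
    return 2.0 * (w - 3.0)

alpha = 0.1

# Correct update: w = w - alpha * dJ/dw, so the cost shrinks each iteration.
w = 0.0
costs_minus = [cost(w)]
for _ in range(10):
    w = w - alpha * derivative(w)
    costs_minus.append(cost(w))

# Buggy update with a plus sign: w = w + alpha * dJ/dw, so the cost grows.
w = 0.0
costs_plus = [cost(w)]
for _ in range(10):
    w = w + alpha * derivative(w)
    costs_plus.append(cost(w))

print(costs_minus[-1] < costs_minus[0])  # True: minus sign converges
print(costs_plus[-1] > costs_plus[0])    # True: plus sign diverges
```

With the minus sign, each step multiplies the distance to the minimum by 0.8; with the plus sign it multiplies it by 1.2, which is exactly the "cost consistently increasing" curve described above.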
One debugging tip for a correct implementation of gradient descent is that with a small enough learning rate, the cost function should decrease on every single iteration. So if gradient descent isn't working, one thing I'll often do, and I hope you find this tip useful too, is just set alpha to be a very, very small number and see if that causes the cost to decrease on every iteration. If even with alpha set to a very small number, j doesn't decrease on every single iteration, but instead sometimes increases, then that usually means there's a bug somewhere in the code. Note that setting alpha to be really, really small is meant here as a debugging step, and a very, very small value of alpha is not going to be the most efficient choice for actually training your learning algorithm. One important trade-off is that if your learning rate is too small, then gradient descent can take a lot of iterations to converge. So when I am running gradient descent, I will usually try a range of values for the learning rate alpha. So I might start by trying a learning rate of 0.001, and I might also try a learning rate that's 10 times as large, say 0.01 and 0.1 and so on. And for each choice of alpha, you might run gradient descent just for a handful of iterations and plot the cost function j as a function of the number of iterations. And after trying a few different values, you might then pick the value of alpha that seems to decrease the cost j rapidly, but also consistently. In fact, what I actually do is try a range of values like this. After trying 0.001, I'll then increase the learning rate threefold to 0.003. And after that, I'll try 0.01, which is again about three times as large as 0.003. So this is roughly trying out gradient descent with each value of alpha being roughly three times bigger than the previous value. So what I'll do is try a range of values until I've found a value that's too small. 
And then also make sure I've found a value that's too large. And I'll slowly try to pick the largest possible learning rate or just something slightly smaller than the largest reasonable value that I found. And when I do that, it usually gives me a good learning rate for my model. So I hope this technique too will be useful for you to choose a good learning rate for your implementation of gradient descent. In the upcoming optional lab, you can also take a look at how feature scaling is done in code and also see how different choices of the learning rate alpha can lead to either better or worse training of your model. I hope you have fun playing with the value of alpha and seeing the outcomes of different choices of alpha. So please take a look and run the code in the optional lab to gain a deeper intuition about feature scaling as well as the learning rate alpha. Choosing learning rates is an important part of training many learning algorithms. And I hope that this video gives you intuition about different choices and how to pick a good value for alpha. Now there are a couple more ideas you can use to make multiple linear regression much more powerful. And that is choosing custom features, which will also allow you to fit curves, not just a straight line to your data. Let's take a look at that in the next video.
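The sweep just described, candidate alphas roughly three times apart, a handful of iterations each, then keeping the largest value whose cost still decreases consistently, might be sketched like this (the candidate list and the tiny one-parameter problem are assumptions for illustration):

```python
import numpy as np

# Candidate learning rates, each roughly 3x the previous one.
alphas = [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1.0]

# Tiny made-up problem: minimize J(w) = mean((w*x - y)^2) / 2, minimum at w = 2.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

def run_gd(alpha, iters=20):
    """Run a handful of gradient descent iterations; return the cost history."""
    w = 0.0
    history = [np.mean((w * x - y) ** 2) / 2.0]
    for _ in range(iters):
        w -= alpha * np.mean((w * x - y) * x)   # w := w - alpha * dJ/dw
        history.append(np.mean((w * x - y) ** 2) / 2.0)
    return history

# Keep only the alphas whose cost decreased on every single iteration.
good = []
for a in alphas:
    h = run_gd(a)
    if all(h[i + 1] <= h[i] for i in range(len(h) - 1)):
        good.append(a)

best_alpha = good[-1]   # largest alpha that still decreased consistently
print(best_alpha)       # here 1.0 overshoots, so the sweep settles on 0.3
```

On this toy problem the divergence threshold sits just above 0.4, so every candidate up to 0.3 decreases the cost monotonically while 1.0 blows up, which is exactly the "largest value that still works, or slightly smaller" heuristic described above.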
[{"start": 0.0, "end": 6.76, "text": " Your learning algorithm will run much better with an appropriate choice of learning rate."}, {"start": 6.76, "end": 12.64, "text": " If it's too small, it will run very slowly, and if it's too large, it may not even converge."}, {"start": 12.64, "end": 16.72, "text": " Let's take a look at how you can choose a good learning rate for your model."}, {"start": 16.72, "end": 23.68, "text": " Concretely, if you plot the cost for a number of iterations and notice that the cost sometimes"}, {"start": 23.68, "end": 29.64, "text": " goes up and sometimes goes down, you should take that as a clear sign that gradient descent"}, {"start": 29.64, "end": 31.76, "text": " is not working properly."}, {"start": 31.76, "end": 36.04, "text": " This could mean that there's a bug in the code, or sometimes it could mean that your"}, {"start": 36.04, "end": 38.44, "text": " learning rate is too large."}, {"start": 38.44, "end": 42.44, "text": " So here's an illustration of what might be happening."}, {"start": 42.44, "end": 50.34, "text": " Here the vertical axis is a cost function J, and the horizontal axis represents a parameter"}, {"start": 50.34, "end": 57.88, "text": " like maybe W1, and if the learning rate is too big, then if you start off here, your"}, {"start": 57.88, "end": 63.92, "text": " update step may overshoot the minimum and end up here, and in the next update step here,"}, {"start": 63.92, "end": 68.4, "text": " you're again overshooting, so you end up here, and so on."}, {"start": 68.4, "end": 73.04, "text": " And that's why the cost can sometimes go up instead of decreasing."}, {"start": 73.04, "end": 76.08, "text": " To fix this, you can use a smaller learning rate."}, {"start": 76.08, "end": 81.6, "text": " So then your updates may start here and go down a little bit and down a bit, and will"}, {"start": 81.6, "end": 86.92, "text": " hopefully consistently decrease until it reaches the global minimum."}, {"start": 86.92, "end": 
92.52, "text": " Sometimes you may see that the cost consistently increases after each iteration, like this"}, {"start": 92.52, "end": 93.96000000000001, "text": " curve here."}, {"start": 93.96000000000001, "end": 98.6, "text": " This is also likely due to a learning rate that is too large, and it could be addressed"}, {"start": 98.6, "end": 102.04, "text": " by choosing a smaller learning rate."}, {"start": 102.04, "end": 107.16, "text": " But learning rates like this could also be a sign of a possible bug in the code."}, {"start": 107.16, "end": 115.84, "text": " For example, if I wrote my code so that W1 gets updated as W1 plus alpha times this derivative"}, {"start": 115.84, "end": 122.08, "text": " term, this could result in the cost consistently increasing at each iteration."}, {"start": 122.08, "end": 127.72, "text": " And this is because adding the derivative term moves your cost j further from the global"}, {"start": 127.72, "end": 130.12, "text": " minimum instead of close up."}, {"start": 130.12, "end": 136.64000000000001, "text": " So remember, you want to use a minus sign, so the code should be updated W1, updated"}, {"start": 136.64000000000001, "end": 142.22, "text": " by W1 minus alpha times the derivative term."}, {"start": 142.22, "end": 147.72, "text": " One debugging tip for a correct implementation of gradient descent is that with a small enough"}, {"start": 147.72, "end": 153.8, "text": " learning rate, the cost function should decrease on every single iteration."}, {"start": 153.8, "end": 158.68, "text": " So if gradient descent isn't working, one thing I will often do, and I hope you find"}, {"start": 158.68, "end": 164.64, "text": " this tip useful too, one thing I'll often do is just set alpha to be a very, very small"}, {"start": 164.64, "end": 171.76, "text": " number and see if that causes the cost to decrease on every iteration."}, {"start": 171.76, "end": 178.32, "text": " If even with alpha set to a very small number, j doesn't 
decrease on every single iteration,"}, {"start": 178.32, "end": 182.67999999999998, "text": " but instead sometimes increases, then that usually means there's a bug somewhere in the"}, {"start": 182.67999999999998, "end": 184.84, "text": " code."}, {"start": 184.84, "end": 190.64, "text": " Note that setting alpha to be really, really small is meant here as a debugging step, and"}, {"start": 190.64, "end": 195.12, "text": " a very, very small value of alpha is not going to be the most efficient choice for actually"}, {"start": 195.12, "end": 197.6, "text": " training your learning algorithm."}, {"start": 197.6, "end": 202.88, "text": " One important trade off is that if your learning rate is too small, then gradient descent can"}, {"start": 202.88, "end": 206.12, "text": " take a lot of iterations to converge."}, {"start": 206.12, "end": 211.76, "text": " So when I am running gradient descent, I will usually try a range of values for the learning"}, {"start": 211.76, "end": 212.76, "text": " rate alpha."}, {"start": 212.76, "end": 219.0, "text": " So I might start by trying a learning rate of 0.001, and I might also try a learning"}, {"start": 219.0, "end": 225.35999999999999, "text": " rate that's 10 times as large, say 0.01 and 0.1 and so on."}, {"start": 225.36, "end": 231.20000000000002, "text": " And for each choice of alpha, you might run gradient descent just for a handful of iterations"}, {"start": 231.20000000000002, "end": 237.16000000000003, "text": " and plot the cost function j as a function of the number of iterations."}, {"start": 237.16000000000003, "end": 242.48000000000002, "text": " And after trying a few different values, you might then pick the value of alpha that seems"}, {"start": 242.48000000000002, "end": 247.60000000000002, "text": " to decrease the learning rate rapidly, but also consistently."}, {"start": 247.60000000000002, "end": 252.68, "text": " In fact, what I actually do is try a range of values like this."}, {"start": 252.68, 
"end": 260.12, "text": " After trying 0.001, I'll then increase the learning rate threefold to 0.003."}, {"start": 260.12, "end": 268.26, "text": " And after that, I'll try 0.01, which is again about three times as large as 0.003."}, {"start": 268.26, "end": 273.3, "text": " So these are roughly trying out gradient descent with each value of alpha being roughly three"}, {"start": 273.3, "end": 277.24, "text": " times bigger than the previous value."}, {"start": 277.24, "end": 281.84000000000003, "text": " So what I'll do is try a range of values until I found a value that's too small."}, {"start": 281.84, "end": 286.08, "text": " And then also make sure I found a value that's too large."}, {"start": 286.08, "end": 292.56, "text": " And I'll slowly try to pick the largest possible learning rate or just something slightly smaller"}, {"start": 292.56, "end": 295.96, "text": " than the largest reasonable value that I found."}, {"start": 295.96, "end": 300.65999999999997, "text": " And when I do that, it usually gives me a good learning rate for my model."}, {"start": 300.65999999999997, "end": 305.76, "text": " So I hope this technique too will be useful for you to choose a good learning rate for"}, {"start": 305.76, "end": 309.91999999999996, "text": " your implementation of gradient descent."}, {"start": 309.92, "end": 315.08000000000004, "text": " In the upcoming optional lab, you can also take a look at how feature scaling is done"}, {"start": 315.08000000000004, "end": 320.76, "text": " in code and also see how different choices of the learning rate alpha can lead to either"}, {"start": 320.76, "end": 324.16, "text": " better or worse training of your model."}, {"start": 324.16, "end": 328.82, "text": " I hope you have fun playing with the value of alpha and seeing the outcomes of different"}, {"start": 328.82, "end": 330.70000000000005, "text": " choices of alpha."}, {"start": 330.70000000000005, "end": 335.52000000000004, "text": " So please take a look and run 
the code in the optional lab to gain a deeper intuition"}, {"start": 335.52000000000004, "end": 339.88, "text": " about feature scaling as well as the learning rate alpha."}, {"start": 339.88, "end": 344.04, "text": " Training learning rates is an important part of training many learning algorithms."}, {"start": 344.04, "end": 348.08, "text": " And I hope that this video gives you intuition about different choices and how to pick a"}, {"start": 348.08, "end": 350.92, "text": " good value for alpha."}, {"start": 350.92, "end": 355.44, "text": " Now there are a couple more ideas they can use to make multiple linear regression much"}, {"start": 355.44, "end": 356.92, "text": " more powerful."}, {"start": 356.92, "end": 362.24, "text": " And that is choosing custom features, which will also allow you to fit curves, not just"}, {"start": 362.24, "end": 364.2, "text": " a straight line to your data."}, {"start": 364.2, "end": 370.64, "text": " Let's take a look at that in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=sO_JcsxQRz8
2.9 Practical Tips for Linear Regression | Feature engineering --[Machine Learning | Andrew Ng]
The choice of features can have a huge impact on your learning algorithm's performance. In fact, for many practical applications, choosing or engineering the right features is a critical step to making the algorithm work well. In this video, let's take a look at how you can choose or engineer the most appropriate features for your learning algorithm. Let's take a look at feature engineering by revisiting the example of predicting the price of a house. Say you have two features for each house. X1 is the width of the lot, the plot of land that the house is built on. This in real estate is also called the frontage of the lot. And the second feature, X2, is the depth of, let's assume, the rectangular plot of land that the house was built on. Given these two features, X1 and X2, you might build a model like this, where f of x is w1 x1 plus w2 x2 plus b, where x1 is the frontage or width and x2 is the depth. And this model might work okay. But here's another option for how you might choose a different way to use these features in the model that could be even more effective. You might notice that the area of the land can be calculated as the frontage or width times the depth. And you may have an intuition that the area of the land is more predictive of the price than the frontage and depth as separate features. So you might define a new feature, X3, as X1 times X2. So this new feature X3 is equal to the area of the plot of land. For this feature, you can then have a model fwb of x equals w1 x1 plus w2 x2 plus w3 x3 plus b, so that the model can now choose parameters w1, w2, and w3, depending on whether the data shows that the frontage or the depth or the area x3 of the lot turns out to be the most important thing for predicting the price of the house. 
What we just did, creating a new feature, is an example of what's called feature engineering, in which you might use your knowledge or intuition about the problem to design new features, usually by transforming or combining the original features of the problem in order to make it easier for the learning algorithm to make accurate predictions. So depending on what insights you may have into the application, rather than just taking the features that you happen to have started off with, sometimes by defining new features, you might be able to get a much better model. So that's feature engineering. And it turns out that there's one flavor of feature engineering that will allow you to fit not just straight lines, but curves, nonlinear functions to your data. Let's take a look in the next video at how you can do that.
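The engineered-feature model above, with x3 equal to x1 times x2, can be sketched in code. The frontage, depth, and price numbers are made up, and a normal-equation least-squares fit stands in for gradient descent, just to see which feature the fitted model leans on:

```python
import numpy as np

# Made-up training data: frontage x1 and depth x2 of the lot, and price.
x1 = np.array([40.0, 30.0, 60.0, 25.0])    # frontage (width) of the lot
x2 = np.array([100.0, 80.0, 120.0, 60.0])  # depth of the lot
price = 0.03 * (x1 * x2) + 50.0            # price driven purely by area here

# Feature engineering: define x3 = x1 * x2, the area of the plot of land.
x3 = x1 * x2

# Fit f(x) = w1*x1 + w2*x2 + w3*x3 + b by least squares (standing in for
# gradient descent) and inspect the learned parameters.
A = np.column_stack([x1, x2, x3, np.ones_like(price)])
w1, w2, w3, b = np.linalg.lstsq(A, price, rcond=None)[0]

print(f"w1={w1:.4f}  w2={w2:.4f}  w3={w3:.4f}  b={b:.4f}")
# The area feature x3 picks up essentially all of the predictive weight.
```

Because the made-up prices depend only on area, the fit drives w1 and w2 to roughly zero and puts all the weight on the engineered feature x3, which is the behavior the lecture describes: the model chooses whichever feature the data shows matters.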
[{"start": 0.0, "end": 6.0, "text": " The choice of features can have a huge impact on your learning algorithm's performance."}, {"start": 6.0, "end": 11.120000000000001, "text": " In fact, for many practical applications, choosing or entering the right features is"}, {"start": 11.120000000000001, "end": 14.6, "text": " a critical step to making the algorithm work well."}, {"start": 14.6, "end": 18.76, "text": " In this video, let's take a look at how you can choose or engineer the most appropriate"}, {"start": 18.76, "end": 21.76, "text": " features for your learning algorithm."}, {"start": 21.76, "end": 26.560000000000002, "text": " Let's take a look at feature engineering by revisiting the example of predicting the price"}, {"start": 26.560000000000002, "end": 28.72, "text": " of a house."}, {"start": 28.72, "end": 31.599999999999998, "text": " Say you have two features for each house."}, {"start": 31.599999999999998, "end": 37.78, "text": " X1 is the width of the lot size of the plot of land that the house is built on."}, {"start": 37.78, "end": 42.8, "text": " This in real estate is also called the frontage of the lot."}, {"start": 42.8, "end": 49.04, "text": " And the second feature, X2, is the depth of the lot size of, let's assume, the rectangular"}, {"start": 49.04, "end": 52.120000000000005, "text": " plot of land that the house was built on."}, {"start": 52.120000000000005, "end": 58.68, "text": " Given these two features, X1 and X2, you might build a model like this, where f of x is w1"}, {"start": 58.68, "end": 68.32, "text": " x1 plus w2 x2 plus b, where x1 is the frontage or width and x2 is the depth."}, {"start": 68.32, "end": 70.6, "text": " And this model might work okay."}, {"start": 70.6, "end": 74.62, "text": " But here's another option for how you might choose a different way to use these features"}, {"start": 74.62, "end": 77.42, "text": " in the model that could be even more effective."}, {"start": 77.42, "end": 82.68, "text": " You might 
notice that the area of the land can be calculated as the frontage or width"}, {"start": 82.68, "end": 84.72, "text": " times the depth."}, {"start": 84.72, "end": 90.44, "text": " And you may have an intuition that the area of the land is more predictive of the price"}, {"start": 90.44, "end": 94.28, "text": " than the frontage and depth as separate features."}, {"start": 94.28, "end": 100.02, "text": " So you might define a new feature, X3, as X1 times X2."}, {"start": 100.02, "end": 105.6, "text": " So this new feature X3 is equal to the area of the plot of land."}, {"start": 105.6, "end": 115.24, "text": " For this feature, you can then have a model fwb of x equals w1 x1 plus w2 x2 plus w3 x3"}, {"start": 115.24, "end": 122.88, "text": " plus b, so that the model can now choose parameters w1, w2, and w3, depending on whether the data"}, {"start": 122.88, "end": 129.16, "text": " shows that the frontage or the depth or the area x3 of the lot turns out to be the most"}, {"start": 129.16, "end": 133.12, "text": " important thing for predicting the price of the house."}, {"start": 133.12, "end": 139.64000000000001, "text": " What we just did, creating a new feature, is an example of what's called feature engineering,"}, {"start": 139.64000000000001, "end": 144.88, "text": " in which you might use your knowledge or intuition about the problem to design new features,"}, {"start": 144.88, "end": 149.92000000000002, "text": " usually by transforming or combining the original features of the problem in order to make it"}, {"start": 149.92000000000002, "end": 153.92000000000002, "text": " easier for the learning algorithm to make accurate predictions."}, {"start": 153.92000000000002, "end": 158.68, "text": " So depending on what insights you may have into the application, rather than just taking"}, {"start": 158.68, "end": 164.08, "text": " the features that you happen to have started off with, sometimes by defining new features,"}, {"start": 164.08, "end": 167.84, 
"text": " you might be able to get a much better model."}, {"start": 167.84, "end": 171.24, "text": " So that's feature engineering."}, {"start": 171.24, "end": 175.20000000000002, "text": " And it turns out that there's one flavor of feature engineering that will allow you to"}, {"start": 175.20000000000002, "end": 180.60000000000002, "text": " fit not just straight lines, but curves, nonlinear functions to your data."}, {"start": 180.6, "end": 197.04, "text": " Let's take a look in the next video at how you can do that."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=UobsU2oQwBA
2.10 Practical Tips for Linear Regression | Polynomial regression --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
So far, we've just been fitting straight lines to our data. Let's take the ideas of multiple linear regression and feature engineering to come up with a new algorithm called polynomial regression, which will let you fit curves, nonlinear functions, to your data. Let's say you have a housing data set that looks like this, where feature x is the size in square feet. It doesn't look like a straight line fits this data set very well, so maybe you want to fit a curve, maybe a quadratic function, to the data like this, which includes the size x and also x squared, which is the size raised to the power of 2. And maybe that will give you a better fit to the data. But then you may decide that your quadratic model doesn't really make sense, because a quadratic function eventually comes back down, and well, we wouldn't really expect housing prices to go down when the size increases, right? Big houses seem like they should usually cost more. So then you may choose a cubic function, where we now have not only x squared, but x cubed. So maybe this model produces this curve here, which is a somewhat better fit to the data, because the price does eventually come back up as the size increases. These are both examples of polynomial regression, because you took your original feature x and raised it to the power of 2, or 3, or any other power. And in the case of the cubic function, the first feature is the size, the second feature is the size squared, and the third feature is the size cubed. I just want to point out one more thing, which is that if you create features that are these powers, like the square of the original features like this, then feature scaling becomes increasingly important. So if the size of the house ranges from say, 1 to 1000 square feet, then the second feature, which is the size squared, would range from 1 to a million, and the third feature, which is size cubed, ranges from 1 to a billion.
So these two features, x squared and x cubed, take on very different ranges of values compared to the original feature x. And if you're using gradient descent, it's important to apply feature scaling to get your features into comparable ranges of values. Finally, here's one last example of how you really have a wide range of choices of features to use. Another reasonable alternative to taking the size squared and size cubed is to, say, use the square root of x. So your model may look like w1 times x plus w2 times the square root of x plus b. The square root function looks like this, and it becomes a bit less steep as x increases, but it doesn't ever completely flatten out, and it certainly never ever comes back down. So this would be another choice of features that might work well for this dataset as well. So you may ask yourself, how do I decide what features to use? Later, in the second course in this specialization, you'll see how you can choose different features and different models that include or don't include these features, and you'll have a process for measuring how well these different models perform to help you decide which features to include or not include. For now, I just want you to be aware that you have a choice in what features you use, and by using feature engineering and polynomial functions, you can potentially get a much better model for your data. In the optional lab that follows this video, you will see some code that implements polynomial regression using features like x, x squared, and x cubed. So please take a look and run the code and see how it works. There's also another optional lab after that one that shows how to use a popular open source toolkit that implements linear regression. Scikit-learn is a very widely used open source machine learning library that is used by many practitioners in many of the top AI, internet, and machine learning companies in the world.
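To make the scaling point concrete, here is a small plain-Python sketch of polynomial feature expansion followed by z-score normalization. The house sizes are made up, and z-score normalization is just one common scaling choice, not the only one mentioned in the course:

```python
# Polynomial feature expansion plus z-score scaling, as described above.
# House sizes are illustrative; z-score normalization is one common choice.
import math

def poly_features(x):
    """Map a size x to the cubic model's three features [x, x^2, x^3]."""
    return [x, x ** 2, x ** 3]

def zscore_scale(column):
    """Rescale one feature column to zero mean and unit standard deviation."""
    mu = sum(column) / len(column)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in column) / len(column))
    return [(v - mu) / sigma for v in column]

sizes = [100.0, 500.0, 1000.0]                 # square feet (illustrative)
X = [poly_features(s) for s in sizes]          # rows: [x, x^2, x^3]
# Before scaling, the columns span wildly different ranges:
# x up to 1e3, x^2 up to 1e6, x^3 up to 1e9.
columns = [list(c) for c in zip(*X)]
scaled_columns = [zscore_scale(c) for c in columns]
X_scaled = [list(row) for row in zip(*scaled_columns)]
```

After scaling, every column has mean 0 and standard deviation 1, so gradient descent sees features in comparable ranges instead of values spread across six orders of magnitude.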
So if either now or in the future you're using machine learning in your job, there's a very good chance you'll be using tools like Scikit-learn to train your models. And so working through that optional lab will give you a chance to not only better understand linear regression, but also see how this can be done in just a few lines of code using a library like Scikit-learn. For you to have a solid understanding of these algorithms and be able to apply them, I do think it's important that you know how to implement linear regression yourself, and not just call some Scikit-learn function that is a black box. But Scikit-learn also has an important role in the way machine learning is done in practice today. So, we're just about at the end of this week. Congratulations on finishing all of this week's videos. Please do take a look at the practice quizzes and also the practice lab, which I hope will let you try out and practice ideas that we've discussed. In this week's practice lab, you implement linear regression. I hope you have a lot of fun getting this learning algorithm to work for yourself. Best of luck with that, and I also look forward to seeing you in next week's videos, where we'll go beyond regression, that is predicting numbers, to talk about our first classification algorithm, which can predict categories. I'll see you next week.
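In the spirit of the point above about implementing linear regression yourself rather than only calling a library, here is a minimal from-scratch batch gradient descent for a one-variable model f(x) = w*x + b. The training data and learning rate are illustrative assumptions, not the course's lab code:

```python
# Minimal batch gradient descent for f_wb(x) = w*x + b.
# Data and hyperparameters below are made up for illustration.

def gradient_descent(xs, ys, alpha=0.01, iters=10000):
    """Fit w, b by repeatedly stepping down the mean-squared-error cost."""
    w, b = 0.0, 0.0
    m = len(xs)
    for _ in range(iters):
        # Partial derivatives of the cost (1/2m) * sum (f_wb(x) - y)^2.
        dw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / m
        db = sum((w * x + b - y) for x, y in zip(xs, ys)) / m
        w -= alpha * dw
        b -= alpha * db
    return w, b

# Data generated from y = 2x + 1, so we expect to recover w ~ 2, b ~ 1.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]
w, b = gradient_descent(xs, ys)
print(round(w, 2), round(b, 2))  # -> 2.0 1.0
```

Once this from-scratch version makes sense, the scikit-learn equivalent is essentially a drop-in replacement, which is exactly why the lecture recommends understanding it before treating the library as a black box.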
[{"start": 0.0, "end": 7.54, "text": " So far, we've just been fitting straight lines to our data."}, {"start": 7.54, "end": 12.56, "text": " Let's take the ideas of multiple linear regression and feature engineering to come up with a"}, {"start": 12.56, "end": 18.04, "text": " new algorithm called polynomial regression, which will let you fit curves, nonlinear functions,"}, {"start": 18.04, "end": 19.04, "text": " to your data."}, {"start": 19.04, "end": 24.6, "text": " Let's say you have a housing data set that looks like this, where feature x is the size"}, {"start": 24.6, "end": 26.400000000000002, "text": " in square feet."}, {"start": 26.4, "end": 31.2, "text": " It doesn't look like a straight line fits this data set very well, so maybe you want"}, {"start": 31.2, "end": 38.6, "text": " to fit a curve, maybe a quadratic function, to the data like this, which includes a size"}, {"start": 38.6, "end": 44.56, "text": " x and also x squared, which is the size raised to the power of 2."}, {"start": 44.56, "end": 48.239999999999995, "text": " And maybe that will give you a better fit to the data."}, {"start": 48.239999999999995, "end": 51.84, "text": " But then you may decide that your quadratic model doesn't really make sense, because a"}, {"start": 51.84, "end": 57.400000000000006, "text": " quadratic function eventually comes back down, and well, we wouldn't really expect housing"}, {"start": 57.400000000000006, "end": 61.0, "text": " prices to go down when the size increases, right?"}, {"start": 61.0, "end": 64.88000000000001, "text": " Big houses seem like they should usually cost more."}, {"start": 64.88000000000001, "end": 72.66, "text": " So then you may choose a cubic function, where we now have not only x squared, but x cubed."}, {"start": 72.66, "end": 78.66, "text": " So maybe this model produces this curve here, which is a somewhat better fit to the data,"}, {"start": 78.66, "end": 83.8, "text": " because the size does eventually come back up as the 
size increases."}, {"start": 83.8, "end": 89.75999999999999, "text": " These are both examples of polynomial regression, because you took your optional feature x and"}, {"start": 89.75999999999999, "end": 94.84, "text": " raised it to the power of 2, or 3, or any other power."}, {"start": 94.84, "end": 99.5, "text": " And in the case of the cubic function, the first feature is the size, the second feature"}, {"start": 99.5, "end": 104.44, "text": " is the size squared, and the third feature is the size cubed."}, {"start": 104.44, "end": 109.48, "text": " I just want to point out one more thing, which is that if you create features that are these"}, {"start": 109.48, "end": 116.32, "text": " powers, like the square of the original features like this, then feature scaling becomes increasingly"}, {"start": 116.32, "end": 117.67999999999999, "text": " important."}, {"start": 117.67999999999999, "end": 124.72, "text": " So if the size of the house ranges from say, 1 to 1000 square feet, then the second feature,"}, {"start": 124.72, "end": 130.72, "text": " which is the size squared, would range from 1 to a million, and the third feature, which"}, {"start": 130.72, "end": 135.24, "text": " is size cubed, ranges from 1 to a billion."}, {"start": 135.24, "end": 141.44, "text": " So these two features, x squared and x cubed, take on very different ranges of values compared"}, {"start": 141.44, "end": 143.28, "text": " to the original feature x."}, {"start": 143.28, "end": 148.07999999999998, "text": " And if you're using gradient descent, it's important to apply feature scaling to get"}, {"start": 148.07999999999998, "end": 152.0, "text": " your features into comparable ranges of values."}, {"start": 152.0, "end": 157.96, "text": " Finally, here's one last example of how you really have a wide range of choices of features"}, {"start": 157.96, "end": 159.64, "text": " to use."}, {"start": 159.64, "end": 164.79999999999998, "text": " Another reasonable alternative to taking 
the size squared and size cubed is to say use"}, {"start": 164.79999999999998, "end": 166.79999999999998, "text": " the square root of x."}, {"start": 166.79999999999998, "end": 175.88, "text": " So your model may look like w1 times x plus w2 times the square root of x plus b."}, {"start": 175.88, "end": 182.07999999999998, "text": " The square root function looks like this, and it becomes a bit less steep as x increases,"}, {"start": 182.07999999999998, "end": 187.85999999999999, "text": " but it doesn't ever completely flatten out, and it certainly never ever comes back down."}, {"start": 187.86, "end": 193.20000000000002, "text": " So this would be another choice of features that might work well for this dataset as well."}, {"start": 193.20000000000002, "end": 197.76000000000002, "text": " So you may ask yourself, how do I decide what features to use?"}, {"start": 197.76000000000002, "end": 203.5, "text": " Later, in the second course in this specialization, you see how you can choose different features"}, {"start": 203.5, "end": 209.0, "text": " and different models that include or don't include these features, and you have a process"}, {"start": 209.0, "end": 213.52, "text": " for measuring how well these different models perform to help you decide which features"}, {"start": 213.52, "end": 215.88000000000002, "text": " to include or not include."}, {"start": 215.88, "end": 221.07999999999998, "text": " For now, I just want you to be aware that you have a choice in what features you use,"}, {"start": 221.07999999999998, "end": 225.79999999999998, "text": " and by using feature engineering and polynomial functions, you can potentially get a much"}, {"start": 225.79999999999998, "end": 229.0, "text": " better model for your data."}, {"start": 229.0, "end": 234.48, "text": " In the optional lab that follows this video, you will see some code that implements polynomial"}, {"start": 234.48, "end": 238.72, "text": " regression using features like x, x squared, and x 
cubed."}, {"start": 238.72, "end": 243.84, "text": " So please take a look and run the code and see how it works."}, {"start": 243.84, "end": 248.72, "text": " There's also another optional lab after that one that shows how to use a popular open source"}, {"start": 248.72, "end": 253.92000000000002, "text": " toolkit that implements linear regression."}, {"start": 253.92000000000002, "end": 259.28000000000003, "text": " Scikit-learn is a very widely used open source machine learning library that is used by many"}, {"start": 259.28000000000003, "end": 266.92, "text": " practitioners in many of the top AI, internet, machine learning companies in the world."}, {"start": 266.92, "end": 271.68, "text": " So if either now or in the future you're using machine learning in your job, there's a very"}, {"start": 271.68, "end": 277.76, "text": " good chance you'll be using tools like Scikit-learn to train your models."}, {"start": 277.76, "end": 282.6, "text": " And so working through that optional lab will give you a chance to not only better understand"}, {"start": 282.6, "end": 288.0, "text": " linear regression, but also see how this can be done in just a few lines of code using"}, {"start": 288.0, "end": 291.32, "text": " a library like Scikit-learn."}, {"start": 291.32, "end": 296.52, "text": " For you to have a solid understanding of these algorithms and be able to apply them, I do"}, {"start": 296.52, "end": 301.12, "text": " think it's important that you know how to implement linear regression yourself, and"}, {"start": 301.12, "end": 305.12, "text": " not just call some Scikit-learn function that is a black box."}, {"start": 305.12, "end": 309.92, "text": " But Scikit-learn also has an important role in the way machine learning is done in practice"}, {"start": 309.92, "end": 310.92, "text": " today."}, {"start": 310.92, "end": 315.04, "text": " So, we're just about at the end of this week."}, {"start": 315.04, "end": 318.4, "text": " Congratulations on finishing all 
of this week's videos."}, {"start": 318.4, "end": 323.24, "text": " Please do take a look at the practice quizzes and also the practice lab, which I hope will"}, {"start": 323.24, "end": 327.68, "text": " let you try out and practice ideas that we've discussed."}, {"start": 327.68, "end": 331.54, "text": " In this week's practice lab, you implement linear regression."}, {"start": 331.54, "end": 336.6, "text": " I hope you have a lot of fun getting this learning algorithm to work for yourself."}, {"start": 336.6, "end": 342.0, "text": " Best of luck with that, and I also look forward to seeing you in next week's videos, where"}, {"start": 342.0, "end": 346.96000000000004, "text": " we'll go beyond regression, that is predicting numbers, to talk about our first classification"}, {"start": 346.96000000000004, "end": 349.72, "text": " algorithm, which can predict categories."}, {"start": 349.72, "end": 358.12, "text": " I'll see you next week."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=wul3SmrpAeY
3.1 Classification | Motivations --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
Welcome to the third week of this course. By the end of this week, you'll have completed the first course of this specialization. So let's jump in. Last week, you learned about linear regression, which predicts a number. This week, you'll learn about classification, where your output variable y can take on only one of a small handful of possible values, instead of any number in an infinite range of numbers. It turns out that linear regression is not a good algorithm for classification problems. Let's take a look at why, and this will lead us into a different algorithm called logistic regression, which is one of the most popular and most widely used learning algorithms today. Here are some examples of classification problems. Recall the example of trying to figure out whether an email is spam. So the answer you want to output is going to be either a no or a yes. Another example would be figuring out if an online financial transaction is fraudulent. Fighting online financial fraud is something I once worked on, and it was strangely exhilarating because I knew there were forces out there trying to steal money, and my team's job was to stop them. So the problem is, given a financial transaction, can your learning algorithm figure out is this transaction fraudulent, such as was this credit card stolen? Another example we've touched on before was trying to classify a tumor as malignant versus not. In each of these problems, the variable that you want to predict can only be one of two possible values, no or yes. This type of classification problem, where there are only two possible outputs, is called binary classification, where the word binary refers to there being only two possible classes, or two possible categories. In these problems, I will use the terms class and category relatively interchangeably. They mean basically the same thing. By convention, we can refer to these two classes or categories in a few common ways.
We often designate classes as no or yes, or sometimes equivalently, false or true, or very commonly using the numbers 0 or 1, following the common convention in computer science with 0 denoting false and 1 denoting true. I'm usually going to use the numbers 0 and 1 to represent the answer y, because that will fit in most easily with the types of learning algorithms we want to implement. But when we talk about it, we'll often say no or yes, or false or true, as well. One piece of terminology commonly used is to call the false or zero class the negative class, and the true or one class the positive class. For example, for spam classification, an email that is not spam may be referred to as a negative example, because the output to the question of is it spam is no, or zero. In contrast, an email that is spam might be referred to as a positive training example, because the answer to is it spam is yes, or true, or one. To be clear, negative and positive do not necessarily mean bad versus good or evil versus good. It's just that negative and positive examples are used to convey the concepts of absence or zero or false versus the presence or true or one of something you might be looking for, such as the absence or presence of the spam property of an email, or the absence or presence of fraudulent activity, or the absence or presence of malignancy in a tumor. Between non-spam and spam emails, which one you call false or zero and which one you call true or one is a little bit arbitrary. Often either choice could work. So a different engineer might actually swap it around and have the positive class be the presence of a good email, or the positive class be the presence of a real financial transaction, or a healthy patient. So how do you build a classification algorithm? Here's the example of a training set for classifying if a tumor is malignant. Malignant would be class 1, the positive or yes class, and benign would be class 0, the negative class.
I plotted both the tumor size on the horizontal axis as well as the label Y on the vertical axis. By the way, in week one, when we first talked about classification, this is how we previously visualized it on the number line, except that now we're calling the classes zero and one and plotting them on the vertical axis. Now one thing you could try on this training set is to apply the algorithm you already know, linear regression, and try to fit a straight line to the data. If you do that, maybe the straight line looks like this, right? And that's your f of X. Linear regression predicts not just the values zero and one, but all numbers between zero and one, or even less than zero or greater than one. But here we want to predict categories. One thing you could try is to pick a threshold of, say, 0.5, so that if the model outputs a value below 0.5, then you predict Y equals zero, or not malignant. And if the model outputs a number equal to or greater than 0.5, then predict Y equals one, or malignant. Notice that this threshold value 0.5 intersects the best fit straight line at this point. So if you draw this vertical line here, everything to the left ends up with a prediction of Y equals zero, and everything on the right ends up with a prediction of Y equals one. Now for this particular data set, it looks like linear regression could do something reasonable. But now let's see what happens if your data set has one more training example, this one way over here on the right. Let's also extend the horizontal axis. Notice that this training example shouldn't really change how you classify the data points. This vertical dividing line that we drew just now still makes sense as the cutoff, where tumors smaller than this should be classified as zero and tumors greater than this should be classified as one. But once you've added this extra training example on the right, the best fit line for linear regression will shift over like this.
And if you continue using the threshold of 0.5, you now notice that everything to the left of this point is predicted as zero, non-malignant, and everything to the right of this point is predicted to be one, or malignant. This isn't what we want, because adding that example way to the right shouldn't change any of our conclusions about how to classify malignant versus benign tumors. But if you try to do this with linear regression, adding this one example, which feels like it shouldn't be changing anything, ends up with us learning a much worse function for this classification problem. Clearly, when a tumor is large, we want the algorithm to classify it as malignant. So what we just saw was that adding one more example to the right causes the best fit line for linear regression to shift over, and thus the dividing line, also called the decision boundary, to shift over to the right. You'll learn more about the decision boundary in the next video. You'll also learn about an algorithm called logistic regression, whose output value will always be between 0 and 1, and which will avoid the problems that we're seeing on this slide. By the way, one thing confusing about the name logistic regression is that even though it has the word regression in it, it's actually used for classification. Don't be confused by the name, which was given for historical reasons. It's actually used to solve binary classification problems where the output label y is either 0 or 1. In the upcoming optional lab, you also get to take a look at what happens when you try to use linear regression for classification. Sometimes you get lucky and it may work, but often it will not work well, which is why I don't use linear regression myself for classification.
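The failure mode just described can be reproduced numerically: fit ordinary least squares to 0/1 labels, threshold at 0.5, then add one large positive example and watch the cutoff move right. The tumor sizes below are made-up numbers for illustration:

```python
# Demonstrates the failure mode described above: least-squares regression
# on 0/1 labels, thresholded at 0.5. Tumor sizes are illustrative.

def fit_line(xs, ys):
    """Closed-form least squares for y = w*x + b on one feature."""
    m = len(xs)
    mx = sum(xs) / m
    my = sum(ys) / m
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return w, my - w * mx

def boundary(w, b):
    """The size at which w*x + b crosses the 0.5 threshold."""
    return (0.5 - b) / w

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [0, 0, 0, 1, 1, 1]               # benign (0) vs malignant (1)
w, b = fit_line(xs, ys)
cut1 = boundary(w, b)

# One very large malignant tumor should not change the boundary, but with
# least squares it drags the line down and shifts the cutoff to the right.
w2, b2 = fit_line(xs + [20.0], ys + [1])
cut2 = boundary(w2, b2)
print(round(cut1, 2), round(cut2, 2))  # -> 3.5 4.31, so the boundary moved right
```

With the shifted cutoff, the tumors at sizes 4 would now fall on the "benign" side, which is exactly the wrong conclusion the lecture warns about and the motivation for logistic regression.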
In the optional lab, you see an interactive plot that attempts to classify between two categories, and you hopefully notice how this often doesn't work very well, which is okay, because that motivates the need for a different model to do classification tasks. So please check out this optional lab, and after that, we'll go on to the next video to look at logistic regression for classification.
[{"start": 0.0, "end": 3.72, "text": " Welcome to the third week of this course."}, {"start": 3.72, "end": 8.4, "text": " By the end of this week, you have completed the first course of this specialization."}, {"start": 8.4, "end": 10.200000000000001, "text": " So let's jump in."}, {"start": 10.200000000000001, "end": 15.14, "text": " Last week, you learned about linear regression, which predicts a number."}, {"start": 15.14, "end": 20.0, "text": " This week, you learned about classification, where your output variable y can take on only"}, {"start": 20.0, "end": 25.3, "text": " one of a small handful of possible values, instead of any number in an infinite range"}, {"start": 25.3, "end": 27.080000000000002, "text": " of numbers."}, {"start": 27.08, "end": 32.96, "text": " It turns out that linear regression is not a good algorithm for classification problems."}, {"start": 32.96, "end": 38.28, "text": " Let's take a look at why, and this will lead us into a different algorithm called logistic"}, {"start": 38.28, "end": 43.68, "text": " regression, which is one of the most popular and most widely used learning algorithms today."}, {"start": 43.68, "end": 47.36, "text": " Here are some examples of classification problems."}, {"start": 47.36, "end": 52.64, "text": " Recall the example of trying to figure out whether an email is spam."}, {"start": 52.64, "end": 58.26, "text": " So the answer you want to output is going to be either a no or a yes."}, {"start": 58.26, "end": 64.8, "text": " Another example would be figuring out if an online financial transaction is fraudulent."}, {"start": 64.8, "end": 70.6, "text": " Fighting online financial fraud is something I once worked on, and it was strangely exhilarating"}, {"start": 70.6, "end": 75.64, "text": " because I knew there were forces out there trying to steal money, and my team's job was"}, {"start": 75.64, "end": 77.58, "text": " to stop them."}, {"start": 77.58, "end": 83.72, "text": " So the problem is, given a 
financial transaction, can your learning algorithm figure out is"}, {"start": 83.72, "end": 89.75999999999999, "text": " this transaction fraudulent, such as was this credit card stolen?"}, {"start": 89.75999999999999, "end": 95.92, "text": " Another example we've touched on before was trying to classify a tumor as malignant versus"}, {"start": 95.92, "end": 98.0, "text": " not."}, {"start": 98.0, "end": 103.64, "text": " In each of these problems, the variable that you want to predict can only be one of two"}, {"start": 103.64, "end": 106.98, "text": " possible values, no or yes."}, {"start": 106.98, "end": 111.60000000000001, "text": " This type of classification problem, where there are only two possible outputs, is called"}, {"start": 111.60000000000001, "end": 118.28, "text": " binary classification, where the word binary refers to there being only two possible classes,"}, {"start": 118.28, "end": 122.04, "text": " or two possible categories."}, {"start": 122.04, "end": 130.2, "text": " In these problems, I will use the terms class and category relatively interchangeably."}, {"start": 130.2, "end": 132.48000000000002, "text": " They mean basically the same thing."}, {"start": 132.48, "end": 138.89999999999998, "text": " By convention, we can refer to these two classes or categories in a few common ways."}, {"start": 138.89999999999998, "end": 147.07999999999998, "text": " We often designate classes as no or yes, or sometimes equivalently, false or true, or"}, {"start": 147.07999999999998, "end": 154.48, "text": " very commonly using the numbers 0 or 1, following the common convention in computer science"}, {"start": 154.48, "end": 159.04, "text": " with 0 denoting false and 1 denoting true."}, {"start": 159.04, "end": 165.2, "text": " I'm usually going to use the numbers 0 and 1 to represent the answer why, because that"}, {"start": 165.2, "end": 170.48, "text": " will fit in most easily with the types of learning algorithms we want to implement."}, {"start": 
170.48, "end": 177.6, "text": " But when we talk about it, we'll often say no or yes, or false or true, as well."}, {"start": 177.6, "end": 183.12, "text": " One of the terminology is commonly used is to call the false or zero class the negative"}, {"start": 183.12, "end": 189.44, "text": " class and the true or the one class the positive class."}, {"start": 189.44, "end": 196.0, "text": " For example, for spam classification, an email that is not spam may be referred to as a negative"}, {"start": 196.0, "end": 200.32, "text": " example because the output to the question of is it spam?"}, {"start": 200.32, "end": 203.72, "text": " The output is no or zero."}, {"start": 203.72, "end": 210.28, "text": " In contrast, an email that is spam might be referred to as a positive training example"}, {"start": 210.28, "end": 216.72, "text": " because the answer to is it spam is yes or true or one."}, {"start": 216.72, "end": 222.08, "text": " To be clear, negative and positive do not necessarily mean bad versus good or evil versus"}, {"start": 222.08, "end": 223.08, "text": " good."}, {"start": 223.08, "end": 227.96, "text": " It's just that negative and positive examples are used to convey the concepts of absence"}, {"start": 227.96, "end": 234.08, "text": " or zero or false versus the presence or true or one of something you might be looking for,"}, {"start": 234.08, "end": 240.8, "text": " such as the absence or presence of the spam units or the spam property of an email or"}, {"start": 240.8, "end": 247.36, "text": " the absence of presence of fraudulent activity or absence of presence of malignancy in a"}, {"start": 247.36, "end": 248.36, "text": " tumor."}, {"start": 248.36, "end": 254.28, "text": " Between non-spam and spam emails, which one you call false or zero and which one you call"}, {"start": 254.28, "end": 258.04, "text": " true or one is a little bit arbitrary."}, {"start": 258.04, "end": 260.52000000000004, "text": " Often either choice could work."}, 
{"start": 260.52, "end": 264.91999999999996, "text": " So a different engineer might actually swap it around and have the positive class be the"}, {"start": 264.91999999999996, "end": 271.64, "text": " presence of a good email or the positive class be the presence of a real financial transaction"}, {"start": 271.64, "end": 274.64, "text": " or a healthy patient."}, {"start": 274.64, "end": 278.84, "text": " So how do you build a classification algorithm?"}, {"start": 278.84, "end": 283.91999999999996, "text": " Here's the example of a training set for classifying if a tumor is malignant."}, {"start": 283.92, "end": 291.64000000000004, "text": " A class one positive class, yes class or benign class zero or negative class."}, {"start": 291.64000000000004, "end": 297.96000000000004, "text": " I plotted both the tumor size on the horizontal axis as well as the label Y on the vertical"}, {"start": 297.96000000000004, "end": 299.6, "text": " axis."}, {"start": 299.6, "end": 304.96000000000004, "text": " By the way, in week one, when we first talked about classification, this is how we previously"}, {"start": 304.96000000000004, "end": 310.8, "text": " visualized it on the number line, except that now we're calling the classes zero and one"}, {"start": 310.8, "end": 314.8, "text": " and plotting them on the vertical axis."}, {"start": 314.8, "end": 320.0, "text": " Now one thing you could try on this training set is to apply the algorithm you already"}, {"start": 320.0, "end": 325.32, "text": " know linear regression and try to fit a straight line to the data."}, {"start": 325.32, "end": 329.16, "text": " If you do that, maybe the straight line looks like this, right?"}, {"start": 329.16, "end": 332.02, "text": " And that's your f of X."}, {"start": 332.02, "end": 337.68, "text": " Linear regression predicts not just the values zero and one, but all numbers between zero"}, {"start": 337.68, "end": 342.04, "text": " and one or even less than zero or greater than one."}, 
{"start": 342.04, "end": 346.44, "text": " But here we want to predict categories."}, {"start": 346.44, "end": 354.12, "text": " One thing you could try is to pick a threshold of say 0.5 so that if the model outputs a"}, {"start": 354.12, "end": 361.24, "text": " value below 0.5, then you predict Y equals zero or not malignant."}, {"start": 361.24, "end": 367.88, "text": " And if the model outputs a number equal to or greater than 0.5, then predict Y equals"}, {"start": 367.88, "end": 370.92, "text": " one or malignant."}, {"start": 370.92, "end": 378.64, "text": " Notice that this threshold value 0.5 intersects the best fit straight line at this point."}, {"start": 378.64, "end": 384.46000000000004, "text": " So if you draw this vertical line here, everything to the left ends up with a prediction of Y"}, {"start": 384.46, "end": 391.88, "text": " equals zero and everything on the right ends up with a prediction of Y equals one."}, {"start": 391.88, "end": 396.52, "text": " Now for this particular data set, it looks like linear regression could do something"}, {"start": 396.52, "end": 397.52, "text": " reasonable."}, {"start": 397.52, "end": 403.24, "text": " But now let's see what happens if your data set has one more training example."}, {"start": 403.24, "end": 406.32, "text": " This one way over here on the right."}, {"start": 406.32, "end": 409.96, "text": " Let's also extend the horizontal axis."}, {"start": 409.96, "end": 415.71999999999997, "text": " Notice that this training example shouldn't really change how you classify the data points."}, {"start": 415.71999999999997, "end": 420.28, "text": " This vertical dividing line that we drew just now still makes sense as the cutoff where"}, {"start": 420.28, "end": 425.47999999999996, "text": " two meters smaller than this should be classified as zero and two meters greater than this should"}, {"start": 425.47999999999996, "end": 427.76, "text": " be classified as one."}, {"start": 427.76, "end": 431.64, 
"text": " But once you've added this extra training example on the right, the best fit line for"}, {"start": 431.64, "end": 436.15999999999997, "text": " linear regression will shift over like this."}, {"start": 436.16, "end": 442.12, "text": " And if you continue using the threshold of 0.5, you now notice that everything to the"}, {"start": 442.12, "end": 449.32000000000005, "text": " left of this point is predicted as zero, non-malignant, and everything to the right of this point"}, {"start": 449.32000000000005, "end": 453.70000000000005, "text": " is predicted to be one or malignant."}, {"start": 453.70000000000005, "end": 459.6, "text": " This isn't what we want, because adding that example way to the right shouldn't change"}, {"start": 459.6, "end": 465.36, "text": " any of our conclusions about how to classify malignant versus benign tumors."}, {"start": 465.36, "end": 470.7, "text": " But if you try to do this with linear regression, adding this one example, which feels like"}, {"start": 470.7, "end": 475.32, "text": " it shouldn't be changing anything, it ends up with us learning a much worse function"}, {"start": 475.32, "end": 477.28000000000003, "text": " for this classification problem."}, {"start": 477.28000000000003, "end": 483.76, "text": " Clearly, when a tumor is large, we want the algorithm to classify it as malignant."}, {"start": 483.76, "end": 490.0, "text": " So what we just saw was linear regression causes the best fit line when we added one"}, {"start": 490.0, "end": 497.16, "text": " more example to the right to shift over, and thus the dividing line also called the decision"}, {"start": 497.16, "end": 501.24, "text": " boundary to shift over to the right."}, {"start": 501.24, "end": 505.74, "text": " You learn more about the decision boundary in the next video."}, {"start": 505.74, "end": 511.36, "text": " You also learn about an algorithm called logistic regression, where the output value of the"}, {"start": 511.36, "end": 517.36, "text": 
" outcome will always be between 0 and 1, and the algorithm will avoid these problems that"}, {"start": 517.36, "end": 519.12, "text": " we're seeing on the slide."}, {"start": 519.12, "end": 524.68, "text": " By the way, one thing confusing about the name logistic regression is that even though"}, {"start": 524.68, "end": 529.68, "text": " it has the word regression in it, it's actually used for classification."}, {"start": 529.68, "end": 533.92, "text": " Don't be confused by the name, which was given for historical reasons."}, {"start": 533.92, "end": 539.04, "text": " It's actually used to solve binary classification problems where the output label y is either"}, {"start": 539.04, "end": 541.84, "text": " 0 or 1."}, {"start": 541.84, "end": 546.28, "text": " In the upcoming optional lab, you also get to take a look at what happens when you try"}, {"start": 546.28, "end": 551.24, "text": " to use linear regression for classification."}, {"start": 551.24, "end": 557.16, "text": " Sometimes you get lucky and it may work, but often it will not work well, which is why"}, {"start": 557.16, "end": 561.8, "text": " I don't use linear regression myself for classification."}, {"start": 561.8, "end": 566.52, "text": " In the optional lab, you see an interactive plot that attempts to classify between two"}, {"start": 566.52, "end": 573.64, "text": " categories, and you hopefully notice how this often doesn't work very well, which is okay,"}, {"start": 573.64, "end": 578.6, "text": " because that motivates the need for a different model to do classification tasks."}, {"start": 578.6, "end": 583.84, "text": " So please check out this optional lab, and after that, we'll go on to the next video"}, {"start": 583.84, "end": 604.24, "text": " to look at logistic regression for classification."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=Pm8mRCZmYiU
3.2 Classification | Logistic regression --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
Let's talk about logistic regression, which is probably the single most widely used classification algorithm in the world. This is something that I use all the time in my work. Let's continue with the example of classifying whether a tumor is malignant, where, as before, we're going to use the label 1, or yes, the positive class, to represent malignant tumors, and 0, or no, the negative class, to represent benign tumors. Here's a graph of the dataset where the horizontal axis is the tumor size and the vertical axis takes on only values of 0 and 1, because it's a classification problem. You saw in the last video that linear regression is not a good algorithm for this problem. In contrast, what logistic regression will end up doing is fit a curve that looks like this, a sort of S-shaped curve, to this dataset. And so for this example, if a patient comes in with a tumor of this size, which I'm showing on the x-axis, then the algorithm will output 0.7, suggesting that it's closer to, or maybe more likely to be, malignant than benign. We'll say more later about what 0.7 actually means in this context, but the output label y is never 0.7; it's only ever 0 or 1. To build up to the logistic regression algorithm, there's an important mathematical function I'd like to describe, which is called the sigmoid function, sometimes also referred to as the logistic function. The sigmoid function looks like this. Notice that the x-axes of the graphs on the left and right are different. In the graph on the left, the x-axis is the tumor size, so it's all positive numbers. Whereas in the graph on the right, you have 0 down here, and the horizontal axis takes on both negative and positive values, and I've labeled the horizontal axis z. And I'm showing here just a range of negative 3 to plus 3. So the sigmoid function outputs values between 0 and 1.
And if I use g of z to denote this function, then the formula for g of z is 1 over 1 plus e to the negative z, where here e is a mathematical constant that takes on a value of about 2.7, and so e to the negative z is that mathematical constant raised to the power of negative z. Notice that if z were really big, say 100, e to the negative z is e to the negative 100, which is a tiny, tiny number. So this ends up being 1 over 1 plus a tiny little number, and so the denominator will be very, very close to 1, which is why when z is large, g of z, that is, the sigmoid function of z, is going to be very close to 1. And conversely, you can also check for yourself that when z is a very large negative number, then g of z becomes 1 over a giant number, which is why g of z is very close to 0. So that's why the sigmoid function has this shape, where it starts very close to 0 and slowly builds up, or grows, to the value of 1. Also, in the sigmoid function, when z is equal to 0, then e to the negative z is e to the negative 0, which is equal to 1, and so g of z is equal to 1 over 1 plus 1, which is 0.5. So that's why it crosses the vertical axis at 0.5. Now let's use this to build up to the logistic regression algorithm. We're going to do this in two steps. In the first step, I hope you remember that a straight-line function, like a linear regression function, can be written as w dot x plus b. So let's store this value in a variable, which I'm going to call z. And this will turn out to be the same z as the one you saw on the previous slide, but we'll get to that in a minute. The next step then is to take this value of z and pass it to the sigmoid function, also called the logistic function, g. So g of z then outputs a value computed by this formula, 1 over 1 plus e to the negative z, that's going to be between 0 and 1.
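The limiting behavior described here, g of z near 1 for large positive z, near 0 for large negative z, and exactly 0.5 at z equals 0, is easy to check numerically. A minimal sketch of the sigmoid function:

```python
import math

def sigmoid(z):
    # g(z) = 1 / (1 + e^(-z)), the logistic (sigmoid) function.
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))     # 0.5: the curve crosses the vertical axis here
print(sigmoid(100))   # very close to 1: large positive z
print(sigmoid(-100))  # very close to 0: large negative z
```

Sweeping z from negative 3 to plus 3, as in the lecture's plot, traces out the S-shaped curve.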
When you take these two equations and put them together, they give you the logistic regression model f of x, which is equal to g of w dot x plus b, or equivalently g of z, which is equal to this formula over here. So this is the logistic regression model. What it does is take as input a feature or set of features x, and output a number between 0 and 1. Next, let's take a look at how to interpret the output of logistic regression. We'll return to the tumor classification example. The way I'd encourage you to think of logistic regression's output is as the probability that the class, or the label y, will be equal to 1, given a certain input x. So, for example, in this application, where x is the tumor size and y is either 0 or 1, if a patient comes in with a tumor of a certain size x, and, based on this input x, the model outputs 0.7, then what that means is that the model is predicting, or thinks, there's a 70% chance that the true label y will be equal to 1 for this patient. In other words, the model is telling us that it thinks the patient has a 70% chance of their tumor turning out to be malignant. Now, let me ask you a question. See if you can get this right. We know that y has to be either 0 or 1. So if y has a 70% chance of being 1, what is the chance that it is 0? Since y has to be either 0 or 1, the probabilities of these two outcomes have to add up to 1, or to a 100% chance. So that's why, if the chance of y being 1 is 0.7, or a 70% chance, then the chance of it being 0 has to be 0.3, or a 30% chance. If someday you read research papers or blog posts about logistic regression, sometimes you'll see the notation that f of x is equal to p of y equals 1, given the input features x and with parameters w and b. What the semicolon here denotes is just that w and b are parameters that affect this computation of the probability of y being equal to 1, given the input feature x.
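The two-step model described here, compute z as w dot x plus b and pass it through the sigmoid, can be sketched in a few lines. The weight, bias, and tumor size below are hypothetical values chosen only to illustrate the probability interpretation:

```python
import numpy as np

def sigmoid(z):
    # g(z) = 1 / (1 + e^(-z)).
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(x, w, b):
    # f(x) = g(w . x + b): the model's estimate of P(y = 1 | x).
    return sigmoid(np.dot(w, x) + b)

# Hypothetical parameters and a single feature (tumor size), for illustration only.
w = np.array([1.2])
b = -4.0
x = np.array([4.0])

p1 = predict_proba(x, w, b)  # estimated chance that y = 1 (malignant)
p0 = 1.0 - p1                # the two class probabilities must sum to 1
print(p1, p0)
```

As in the lecture's 0.7 versus 0.3 example, the two printed probabilities always add up to 1.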
For the purposes of this class, don't worry too much about what this vertical line and semicolon mean. You don't need to remember or follow any of this mathematical notation for this class. I'm mentioning it only because you may see it in other places. In the optional lab that follows this video, you also get to see how the sigmoid function is implemented in code. You can see a plot that uses the sigmoid function to do better on the classification task that you saw in the previous optional lab. Remember that the code will be provided to you, so you just have to run it. I hope you take a look and get familiar with the code. So congrats on getting here. You now know what the logistic regression model is, as well as the mathematical formula that defines logistic regression. For a long time, a lot of internet advertising was actually driven by basically a slight variation of logistic regression. This was very lucrative for some large companies, and this was basically the algorithm that decided what ad was shown to you and many others on some large websites. Now there's even more to learn about this algorithm. In the next video, we'll take a look at the details of logistic regression. We'll look at some visualizations and also examine something called the decision boundary. This will give you a few different ways to map the numbers that this model outputs, such as 0.3 or 0.7 or 0.65, to a prediction of whether y is actually 0 or 1. So let's go on to the next video to learn more about logistic regression.
[{"start": 0.0, "end": 6.0600000000000005, "text": " Let's talk about logistic regression, which is probably the single most widely used classification"}, {"start": 6.0600000000000005, "end": 7.54, "text": " algorithm in the world."}, {"start": 7.54, "end": 10.94, "text": " This is something that I use all the time in my work."}, {"start": 10.94, "end": 17.22, "text": " Let's continue with the example of classifying whether a tumor is malignant, where as before"}, {"start": 17.22, "end": 22.900000000000002, "text": " we're going to use the label 1 or yes the positive class to represent malignant tumors"}, {"start": 22.900000000000002, "end": 27.62, "text": " and 0 or no negative examples to represent benign tumors."}, {"start": 27.62, "end": 34.86, "text": " Here's a graph of the dataset where the horizontal axis is the tumor size and the vertical axis"}, {"start": 34.86, "end": 41.14, "text": " takes on only values of 0 and 1 because it's a classification problem."}, {"start": 41.14, "end": 46.68, "text": " You saw in the last video that linear regression is not a good algorithm for this problem."}, {"start": 46.68, "end": 53.540000000000006, "text": " In contrast, what logistic regression will end up doing is fit a curve that looks like"}, {"start": 53.54, "end": 59.339999999999996, "text": " this, a sort of S-shaped curve to this dataset."}, {"start": 59.339999999999996, "end": 65.62, "text": " And so for this example, if a patient comes in with a tumor of this size, which I'm showing"}, {"start": 65.62, "end": 73.66, "text": " on the x-axis, then the algorithm will output 0.7, suggesting that it's closer or maybe"}, {"start": 73.66, "end": 77.5, "text": " more likely to be malignant and benign."}, {"start": 77.5, "end": 84.66, "text": " We'll say more later what 0.7 actually means in this context, but the output label y is"}, {"start": 84.66, "end": 89.5, "text": " never 0.7, is only ever 0 or 1."}, {"start": 89.5, "end": 94.46000000000001, "text": " To build 
up to the logistic regression algorithm, there's an important mathematical function"}, {"start": 94.46000000000001, "end": 101.06, "text": " I'd like to describe which is called the sigmoid function, sometimes also referred to as the"}, {"start": 101.06, "end": 103.66, "text": " logistic function."}, {"start": 103.66, "end": 107.02, "text": " The sigmoid function looks like this."}, {"start": 107.02, "end": 112.16, "text": " Notice that the x-axis of the graphs on the left and right are different."}, {"start": 112.16, "end": 118.97999999999999, "text": " In the graph to the left, on the x-axis is the tumor size, so it's all positive numbers."}, {"start": 118.97999999999999, "end": 126.1, "text": " Whereas in the graph on the right, you have 0 down here, and the horizontal axis takes"}, {"start": 126.1, "end": 132.38, "text": " on both negative and positive values, and I've labeled the horizontal axis z."}, {"start": 132.38, "end": 138.74, "text": " And I'm showing here just a range of negative 3 to plus 3."}, {"start": 138.74, "end": 142.62, "text": " So the sigmoid function outputs values between 0 and 1."}, {"start": 142.62, "end": 151.26, "text": " And if I use g of z to denote this function, then the formula of g of z is equal to 1 over"}, {"start": 151.26, "end": 158.7, "text": " 1 plus e to the negative z, where here e is a mathematical constant that takes on a value"}, {"start": 158.7, "end": 164.5, "text": " of about 2.7, and so e to the negative z is that mathematical constant to the power of"}, {"start": 164.5, "end": 167.33999999999997, "text": " negative z."}, {"start": 167.33999999999997, "end": 175.5, "text": " Notice if z were really big, say 100, e to the negative z is e to the negative 100, which"}, {"start": 175.5, "end": 178.35999999999999, "text": " is a tiny, tiny number."}, {"start": 178.35999999999999, "end": 185.89999999999998, "text": " So this ends up being 1 over 1 plus a tiny little number, and so the denominator will"}, {"start": 185.9, 
"end": 193.14000000000001, "text": " be basically very, very close to 1, which is why when z is large, g of z, that is the"}, {"start": 193.14000000000001, "end": 198.46, "text": " sigmoid function of z, is going to be very close to 1."}, {"start": 198.46, "end": 206.14000000000001, "text": " And conversely, you can also check for yourself that when z is a very large negative number,"}, {"start": 206.14000000000001, "end": 215.64000000000001, "text": " then g of z becomes 1 over a giant number, which is why g of z is very close to 0."}, {"start": 215.64, "end": 221.88, "text": " So that's why the sigmoid function has this shape, where it starts very close to 0 and"}, {"start": 221.88, "end": 226.57999999999998, "text": " slowly builds up, or grows, to the value of 1."}, {"start": 226.57999999999998, "end": 234.89999999999998, "text": " Also, in the sigmoid function, when z is equal to 0, then e to the negative z is e to the"}, {"start": 234.89999999999998, "end": 243.94, "text": " negative 0, which is equal to 1, and so g of z is equal to 1 over 1 plus 1, which is"}, {"start": 243.94, "end": 245.94, "text": " 0.5."}, {"start": 245.94, "end": 251.5, "text": " So that's why it passes the vertical axis at 0.5."}, {"start": 251.5, "end": 256.18, "text": " Now let's use this to build up to the logistic regression algorithm."}, {"start": 256.18, "end": 258.98, "text": " We're going to do this in two steps."}, {"start": 258.98, "end": 264.7, "text": " In the first step, I hope you remember that a straight line function, like a linear regression"}, {"start": 264.7, "end": 271.92, "text": " function, can be defined as W dot product of x plus b."}, {"start": 271.92, "end": 277.90000000000003, "text": " So let's store this value in a variable, which I'm going to call z."}, {"start": 277.90000000000003, "end": 282.14000000000004, "text": " And this will turn out to be the same z as the one you saw on the previous slide, but"}, {"start": 282.14000000000004, "end": 284.18, 
"text": " we'll get to that in a minute."}, {"start": 284.18, "end": 291.78000000000003, "text": " The next step then is to take this value of z and pass it to the sigmoid function, also"}, {"start": 291.78000000000003, "end": 295.66, "text": " called the logistic function g."}, {"start": 295.66, "end": 303.94, "text": " So now g of z then outputs a value computed by this formula, 1 over 1 plus e to the negative"}, {"start": 303.94, "end": 308.90000000000003, "text": " z, that's going to be between 0 and 1."}, {"start": 308.90000000000003, "end": 314.22, "text": " When you take these two equations and put them together, they then give you the logistic"}, {"start": 314.22, "end": 328.18, "text": " regression model f of x, which is equal to g of w x plus b, or equivalently g of z, which"}, {"start": 328.18, "end": 332.86, "text": " is equal to this formula over here."}, {"start": 332.86, "end": 336.78000000000003, "text": " So this is the logistic regression model."}, {"start": 336.78000000000003, "end": 342.72, "text": " And what it does is it inputs a feature or set of features x, and it outputs a number"}, {"start": 342.72, "end": 344.82000000000005, "text": " between 0 and 1."}, {"start": 344.82000000000005, "end": 351.54, "text": " Next, let's take a look at how to interpret the output of logistic regression."}, {"start": 351.54, "end": 355.46000000000004, "text": " We'll return to the tumor classification example."}, {"start": 355.46000000000004, "end": 361.78000000000003, "text": " The way I'd encourage you to think of logistic regression's output is to think of it as outputting"}, {"start": 361.78000000000003, "end": 369.62, "text": " the probability that the class or the label y will be equal to 1, given a certain input"}, {"start": 369.62, "end": 370.62, "text": " x."}, {"start": 370.62, "end": 379.02, "text": " So, for example, in this application where x is the tumor size and y is either 0 or 1,"}, {"start": 379.02, "end": 384.78000000000003, "text": " if 
you have a patient come in and she has a tumor of a certain size x, and if based"}, {"start": 384.78000000000003, "end": 393.02, "text": " on this input x, the model outputs 0.7, then what that means is that the model is predicting"}, {"start": 393.02, "end": 399.86, "text": " or the model thinks there's a 70% chance that the true label y will be equal to 1 for this"}, {"start": 399.86, "end": 400.86, "text": " patient."}, {"start": 400.86, "end": 407.74, "text": " In other words, the model is telling us that it thinks the patient has a 70% chance of"}, {"start": 407.74, "end": 410.62, "text": " their tumor turning out to be malignant."}, {"start": 410.62, "end": 414.3, "text": " Now, let me ask you a question."}, {"start": 414.3, "end": 416.7, "text": " See if you can get this right."}, {"start": 416.7, "end": 421.34000000000003, "text": " We know that y has to be either 0 or 1."}, {"start": 421.34000000000003, "end": 428.98, "text": " So if y has a 70% chance of being 1, what is the chance that it is 0?"}, {"start": 428.98, "end": 435.66, "text": " So y has got to be either 0 or 1, and thus the probability of it being 0 or 1, these"}, {"start": 435.66, "end": 440.38, "text": " two numbers have to add up to 1 or to 100% chance."}, {"start": 440.38, "end": 447.34000000000003, "text": " So that's why if the chance of y being 1 is 0.7 or 70% chance, then the chance of it being"}, {"start": 447.34000000000003, "end": 452.70000000000005, "text": " 0 has got to be 0.3 or 30% chance."}, {"start": 452.70000000000005, "end": 457.94, "text": " If someday you read research papers or blog posts about logistic regression, sometimes"}, {"start": 457.94, "end": 467.38, "text": " you see this notation that f of x is equal to p of y equals 1, given the input features"}, {"start": 467.38, "end": 471.98, "text": " x and with parameters w and b."}, {"start": 471.98, "end": 478.7, "text": " What the semicolon here is used to denote is just that w and b are parameters that affect"}, 
{"start": 478.7, "end": 484.56, "text": " this computation of what is the probability of y being equal to 1, given the input feature"}, {"start": 484.56, "end": 486.1, "text": " x."}, {"start": 486.1, "end": 489.70000000000005, "text": " For the purpose of this class, don't worry too much about what this vertical line and"}, {"start": 489.70000000000005, "end": 492.26000000000005, "text": " what the semicolon mean."}, {"start": 492.26000000000005, "end": 497.18, "text": " You don't need to remember or follow any of this mathematical notation for this class."}, {"start": 497.18, "end": 501.40000000000003, "text": " I'm mentioning this only because you may see this in other places."}, {"start": 501.40000000000003, "end": 506.22, "text": " In the optional lab that follows this video, you also get to see how the sigmoid function"}, {"start": 506.22, "end": 508.74, "text": " is implemented in code."}, {"start": 508.74, "end": 514.5400000000001, "text": " You can see a plot that uses the sigmoid function so as to do better on the classification task"}, {"start": 514.54, "end": 517.5799999999999, "text": " that you saw in the previous optional lab."}, {"start": 517.5799999999999, "end": 521.78, "text": " Remember that the code will be provided to you, so you just have to run it."}, {"start": 521.78, "end": 525.9399999999999, "text": " I hope you take a look and get familiar with the code."}, {"start": 525.9399999999999, "end": 528.0999999999999, "text": " So congrats on getting here."}, {"start": 528.0999999999999, "end": 533.6999999999999, "text": " You now know what is the logistic regression model as well as the mathematical formula"}, {"start": 533.6999999999999, "end": 536.66, "text": " that defines logistic regression."}, {"start": 536.66, "end": 542.18, "text": " For a long time, a lot of internet advertising was actually driven by basically a slight"}, {"start": 542.18, "end": 544.8599999999999, "text": " variation of logistic regression."}, {"start": 
544.8599999999999, "end": 548.78, "text": " This was very lucrative for some large companies, and this was basically the algorithm that"}, {"start": 548.78, "end": 554.3399999999999, "text": " decided what ad was shown to you and many others on some large websites."}, {"start": 554.3399999999999, "end": 557.66, "text": " Now there's even more to learn about this algorithm."}, {"start": 557.66, "end": 562.66, "text": " In the next video, we'll take a look at the details of logistic regression."}, {"start": 562.66, "end": 568.9, "text": " We'll look at some visualizations and also examine something called the decision boundary."}, {"start": 568.9, "end": 573.9399999999999, "text": " This will give you a few different ways to map the numbers that this model outputs, such"}, {"start": 573.9399999999999, "end": 582.74, "text": " as 0.3 or 0.7 or 0.65, to a prediction of whether y is actually 0 or 1."}, {"start": 582.74, "end": 599.54, "text": " So let's go on to the next video to learn more about logistic regression."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=QJdIpRcL_4U
3.3 Classification | Decision boundary --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
In the last video, you learned about the logistic regression model. Now, let's take a look at the decision boundary to get a better sense of how logistic regression is computing its predictions. To recap, here's how the logistic regression model's outputs are computed in two steps. In the first step, you compute z as w dot x plus b. Then you apply the sigmoid function g to this value z, and here again is the formula for the sigmoid function. Another way to write this is to say f of x is equal to g, the sigmoid function, also called the logistic function, applied to w dot x plus b, where this is of course the value of z. And if you take the definition of the sigmoid function and plug in the definition of z, then you find that f of x is equal to this formula over here, 1 over 1 plus e to the negative z, where z is w dot x plus b. And you may remember we said in the previous video that we interpret this as the probability that y is equal to 1, given x, and with parameters w and b. So this is going to be a number, like maybe 0.7 or 0.3. Now what if you want the learning algorithm to predict whether the value of y is going to be 0 or 1? Well, one thing you might do is set a threshold above which you predict that y is 1, that is, you set y hat, the prediction, equal to 1, and below which you predict that y hat is equal to 0. A common choice would be to pick a threshold of 0.5, so that if f of x is greater than or equal to 0.5, then predict y is 1, and we write that prediction as y hat equals 1, or, if f of x is less than 0.5, then predict y is 0, or in other words, the prediction y hat is equal to 0. So now let's dive deeper into when the model would predict 1. In other words, when is f of x greater than or equal to 0.5? Well, recall that f of x is just equal to g of z, and so f is greater than or equal to 0.5 whenever g of z is greater than or equal to 0.5. But when is g of z greater than or equal to 0.5?
Well, here's the sigmoid function over here, and so g of z is greater than or equal to 0.5 whenever z is greater than or equal to 0, right? That is, whenever z is on the right half of this axis. And finally, when is z greater than or equal to 0? Well, z is equal to w dot x plus b, and so z is greater than or equal to 0 whenever w dot x plus b is greater than or equal to 0. So to recap, what you've seen here is that the model predicts 1 whenever w dot x plus b is greater than or equal to 0. And conversely, when w dot x plus b is less than 0, the algorithm predicts y is 0. So given this, let's now visualize how the model makes predictions. I'm going to take an example of a classification problem where you have two features, x1 and x2, instead of just one feature. Here's a training set where the little red crosses denote the positive examples and the little blue circles denote negative examples. So the red crosses correspond to y equals 1, and the blue circles correspond to y equals 0. The logistic regression model will make predictions using this function, f of x equals g of z, where z is now this expression over here, w1 x1 plus w2 x2 plus b, because we have two features, x1 and x2. And let's just say for this example that the values of the parameters are w1 equals 1, w2 equals 1, and b equals negative 3. Let's now take a look at how logistic regression makes predictions. In particular, let's figure out when w dot x plus b is greater than or equal to 0, and when w dot x plus b is less than 0. To figure that out, there's a very interesting line to look at, which is where w dot x plus b is exactly equal to 0. It turns out that this line is also called the decision boundary, because that's the line where you're just about neutral on whether y is 0 or y is 1. Now, for the values of the parameters w1, w2, and b that we had written down above, this decision boundary is just x1 plus x2 minus 3. And so when is x1 plus x2 minus 3 equal to 0?
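The rule derived here, predict 1 exactly when w dot x plus b is at least 0, can be sketched directly with the lecture's parameters w1 = 1, w2 = 1, b = -3 (the example points are my own, chosen to fall on either side of the boundary):

```python
import numpy as np

def predict(x, w, b):
    # Logistic regression predicts y = 1 exactly when w . x + b >= 0,
    # i.e. when the sigmoid output g(z) is >= 0.5.
    return 1 if np.dot(w, x) + b >= 0 else 0

# Parameters from the lecture's example: w1 = 1, w2 = 1, b = -3,
# so the decision boundary is the line x1 + x2 = 3.
w = np.array([1.0, 1.0])
b = -3.0

print(predict(np.array([2.5, 2.5]), w, b))  # 1: to the right of x1 + x2 = 3
print(predict(np.array([0.5, 1.0]), w, b))  # 0: to the left of the line
```

Points with x1 + x2 above 3 land on the positive side of the boundary, matching the plot described in the transcript.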
Well, that corresponds to the line x1 plus x2 equals 3, and that is this line shown over here. So this line turns out to be the decision boundary, where if the features x are to the right of this line, logistic regression would predict 1, and to the left of this line, logistic regression would predict 0. In other words, what we have just visualized is the decision boundary for logistic regression when the parameters w1, w2, and b are 1, 1, and negative 3. Of course, if you had a different choice of the parameters, the decision boundary would be a different line. Now let's look at a more complex example where the decision boundary is no longer a straight line. As before, crosses denote the class y equals 1, and the little circles denote the class y equals 0. Earlier last week, you saw how to use polynomials in linear regression, and you can do the same in logistic regression. So let's set z to be w1 x1 squared plus w2 x2 squared plus b. With this choice of polynomial features in logistic regression, f of x, which equals g of z, is now g of this expression over here. And let's say that we end up choosing w1 and w2 to be 1, and b to be negative 1. So z is equal to 1 times x1 squared plus 1 times x2 squared minus 1. The decision boundary, as before, will correspond to when z is equal to 0, and this expression will be equal to 0 when x1 squared plus x2 squared is equal to 1. And if you plot, on the diagram on the left, the curve corresponding to x1 squared plus x2 squared equals 1, it turns out to be this circle. When x1 squared plus x2 squared is greater than or equal to 1, that's this area outside the circle, and that's when you predict y to be 1. Conversely, when x1 squared plus x2 squared is less than 1, that's this area inside the circle, and that's when you would predict y to be 0. So can we come up with even more complex decision boundaries than these? Yes, you can. You can do so by including even higher-order polynomial terms.
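The circular boundary from this example, z = x1 squared + x2 squared - 1 with w1 = w2 = 1 and b = -1, can be sketched in a few lines (the test points are my own):

```python
def predict_circle(x1, x2):
    # Polynomial features: z = w1*x1^2 + w2*x2^2 + b with w1 = w2 = 1, b = -1,
    # so the decision boundary z = 0 is the unit circle x1^2 + x2^2 = 1.
    z = x1**2 + x2**2 - 1.0
    return 1 if z >= 0 else 0

print(predict_circle(2.0, 0.0))  # 1: outside the circle
print(predict_circle(0.3, 0.3))  # 0: inside the circle
```

Everything outside the unit circle gets predicted as class 1 and everything inside as class 0, exactly as the transcript describes.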
Say z is w1 x1 plus w2 x2 plus w3 x1 squared plus w4 x1 x2 plus w5 x2 squared. Then it's possible you can get even more complex decision boundaries. The model can define decision boundaries such as this example, an ellipse like this, or, with a different choice of the parameters, you can even get more complex decision boundaries, which can look like some function that maybe looks like that. So this is an example of an even more complex decision boundary than the ones we've seen previously, and this implementation of logistic regression will predict y equals 1 inside this shape, and outside the shape it will predict y equals 0. So with these polynomial features you can get very complex decision boundaries. In other words, logistic regression can learn to fit pretty complex data. Although if you were to not include any of these higher order polynomials, so if the only features you use are x1, x2, x3, and so on, then the decision boundary for logistic regression will always be linear, will always be a straight line. In the upcoming optional lab you also get to see the code implementation of the decision boundary. In the example in the lab there will be two features, so you can see the decision boundary as a line. So with these visualizations, I hope that you now have a sense of the range of possible models you can get with logistic regression. Now that you've seen what f of x can potentially compute, let's take a look at how you can actually train a logistic regression model. We'll start by looking at the cost function for logistic regression, and after that, figure out how to apply gradient descent to it. Let's go on to the next video.
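The two decision boundaries described above can be sketched in a few lines of Python (a minimal illustration, not the lab's actual code; it uses the lecture's parameter values, w1 = w2 = 1 with b = -3 for the straight line, and b = -1 with squared features for the circle):

```python
import numpy as np

def sigmoid(z):
    # Logistic function g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def predict_linear(x1, x2, w1=1.0, w2=1.0, b=-3.0):
    # Predict y_hat = 1 whenever z = w.x + b >= 0, i.e. g(z) >= 0.5.
    # Decision boundary: the line x1 + x2 = 3.
    z = w1 * x1 + w2 * x2 + b
    return 1 if sigmoid(z) >= 0.5 else 0

def predict_circle(x1, x2, w1=1.0, w2=1.0, b=-1.0):
    # Polynomial features: z = w1*x1^2 + w2*x2^2 + b.
    # Decision boundary: the circle x1^2 + x2^2 = 1.
    z = w1 * x1**2 + w2 * x2**2 + b
    return 1 if sigmoid(z) >= 0.5 else 0

print(predict_linear(2, 2))      # right of the line x1 + x2 = 3 -> 1
print(predict_linear(1, 1))      # left of the line -> 0
print(predict_circle(1, 1))      # outside the unit circle -> 1
print(predict_circle(0.2, 0.2))  # inside the circle -> 0
```

Changing w1, w2, or b moves the boundary, exactly as described in the video.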
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=E-ZcnndnY2I
3.4 Cost function | Cost function for logistic regression --[Machine Learning | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification.
Remember that the cost function gives you a way to measure how well a specific set of parameters fits the training data, and it thereby gives you a way to try to choose better parameters. In this video, we'll look at how the squared error cost function is not an ideal cost function for logistic regression, and we'll take a look at a different cost function that can help us choose better parameters for logistic regression. Here's what the training set for a logistic regression model might look like, where each row might correspond to a patient that was paying a visit to the doctor and wound up with some sort of diagnosis. As before, we'll use m to denote the number of training examples. Each training example has one or more features, such as the tumor size, the patient's age, and so on, for a total of n features. And so let's call the features x1 through xn. And since this is a binary classification task, the target label y takes on only two values, either 0 or 1. And finally, the logistic regression model is defined by this equation. Okay, so the question you want to answer is, given this training set, how can you choose parameters w and b? Recall for linear regression, this is the squared error cost function. The only thing I've changed is that I put the one half inside the summation instead of outside the summation. And you might remember that in the case of linear regression, where f of x is the linear function w dot x plus b, the cost function looks like this: a convex function, or a bowl shape, or a hammock shape. And so gradient descent will look like this, where you take one step, one step, one step, and so on to converge at the global minimum. Now, you could try to use the same cost function for logistic regression, but it turns out that if I were to write f of x equals 1 over 1 plus e to the negative w x plus b, and plot the cost function using this value of f of x, then the cost will look like this.
This becomes what's called a non-convex cost function; it's not convex. And what this means is that if you were to try to use gradient descent, there are lots of local minima that you can get stuck in. So it turns out that for logistic regression, this squared error cost function is not a good choice. Instead, there will be a different cost function that can make the cost function convex again, so that gradient descent can be guaranteed to converge to the global minimum. The only thing I've changed is that I put the one half inside the summation instead of outside the summation. This will make the math you see later on the slide a little bit simpler. In order to build a new cost function, one that we'll use for logistic regression, I'm going to change a little bit the definition of the cost function j of w and b. In particular, if you look inside the summation, let's call this term inside the summation the loss on a single training example. And I'm going to denote the loss via this capital L; it is a function of the prediction of the learning algorithm f of x, as well as of the true label y. And so the loss, given the predicted f of x and the true label y, is equal in this case to one half of the squared difference. We'll see shortly that by choosing a different form for this loss function, we'll be able to keep the overall cost function, which is one over m times the sum of these loss functions, to be a convex function. Now the loss function inputs f of x and the true label y, and tells us how well we're doing on that example. I'm going to just write down here the definition of the loss function we'll use for logistic regression. If the label y is equal to one, then the loss is negative log of f of x. And if the label y is equal to zero, then the loss is negative log of one minus f of x. Let's take a look at why this loss function hopefully makes sense.
Let's first consider the case of y equals one, and plot what this function looks like to gain some intuition about what this loss function is doing. And remember, the loss function measures how well you're doing on one training example, and it's by summing up the losses on all of the training examples that you then get the cost function, which measures how well you're doing on the entire training set. So if you plot log of f, it looks like this curve here, where f here is on the horizontal axis. And so a plot of negative of the log of f looks like this, where we just flip the curve along the horizontal axis. Notice that it intersects the horizontal axis at f equals one, and continues downward from there. Now f is the output of logistic regression. Thus, f is always between zero and one, because the output of logistic regression is always between zero and one. The only part of the function that's relevant is therefore this part over here, corresponding to f between zero and one. So let's zoom in and take a closer look at this part of the graph. If the algorithm predicts a probability close to one, and the true label is one, then the loss is very small, it's pretty much zero, because you're very close to the right answer. Now continuing with the example of the true label y being one, so say it really is a malignant tumor, if the algorithm predicts 0.5, then the loss is at this point here, which is a bit higher, but not that high. Whereas in contrast, if the algorithm were to have output 0.1, if it thinks that there's only a 10% chance of the tumor being malignant, but y really is one, it really is malignant, then the loss is this much higher value over here. So when y is equal to one, the loss function incentivizes, or nudges, or helps push the algorithm to make more accurate predictions, because the loss is lowest when it predicts values close to one. Now on this slide, we've been looking at what the loss is when y is equal to one.
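The shape of this y equals one loss curve can be checked numerically (a small sketch; 0.5 and 0.1 are the predictions from the tumor example above, and 0.9 stands in for a prediction close to one):

```python
import math

def loss_y1(f):
    # Loss on a single positive example (y = 1): -log(f(x)).
    return -math.log(f)

# Loss grows as the prediction moves away from the true label 1:
for f in (0.9, 0.5, 0.1):
    print(f, round(loss_y1(f), 3))   # ~0.105, ~0.693, ~2.303
```

As the curve shows, a prediction of exactly one gives zero loss, and the loss blows up as f approaches zero.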
On this slide, let's look at the second part of the loss function corresponding to when y is equal to zero. In this case, the loss is negative log of one minus f of x. When this function is plotted, it actually looks like this. The range of f is limited to zero to one, because logistic regression only outputs values between zero and one, and if we zoom in, this is what it looks like. So in this plot, corresponding to y equals zero, the vertical axis shows the value of the loss for different values of f of x. So when f is zero or very close to zero, the loss is also going to be very small, which means that if the true label is zero and the model's prediction is very close to zero, well, you nearly got it right. So the loss is appropriately very close to zero. And the larger the value of f of x gets, the bigger the loss, because the prediction is further from the true label zero. And in fact, as that prediction approaches one, the loss actually approaches infinity. Going back to the tumor prediction example, this is if a model predicts that the patient's tumor is almost certain to be malignant, say 99.9% chance of malignancy, but it turns out to actually not be malignant. So y equals zero, then we penalize the model with a very high loss. So in this case of y equals zero, similar to the case of y equals one on the previous slide, the further the prediction f of x is away from the true value of y, the higher the loss. And in fact, if f of x approaches one, the loss here actually goes really, really large and in fact approaches infinity. So when the true label is zero, the algorithm is strongly incentivized not to predict something too close to one. So in this video, you saw why the squared error cost function doesn't work well for logistic regression. We also defined the loss for a single training example and came up with a new definition for the loss function for logistic regression.
It turns out that with this choice of loss function, the overall cost function will be convex and thus you can reliably use gradient descent to take you to the global minimum. Proving that this function is convex is beyond the scope of this course. You may remember that the cost function is a function of the entire training set and is therefore the average or one over m times the sum of the loss function on the individual training examples. So the cost on a certain set of parameters w and b is equal to one over m times the sum over all the training examples of the loss on the training examples. And if you can find the value of the parameters w and b that minimizes this, then you'd have a pretty good set of values for the parameters w and b for logistic regression. In the upcoming optional lab, you get to take a look at how the squared error cost function doesn't work very well for classification because you see that the surface plot results in a very wiggly cost surface with many local minima. Then you take a look at the new logistic loss function and as you can see here, this produces a nice and smooth convex surface plot that does not have all those local minima. So please take a look at the code and the plots after this video. Alright so we've seen a lot in this video. In the next video, let's go back and take the loss function for a single training example and use that to define the overall cost function for the entire training set. And we'll also figure out a simpler way to write out the cost function, which will then later allow us to run gradient descent to find good parameters for logistic regression. Let's go on to the next video.
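The overall cost just described, the average of the per-example losses, can be sketched as follows (a minimal illustration assuming numpy arrays X, y and parameters w, b, with made-up data; the loss is written in the combined single-formula form, which reduces to negative log of f when y is one and negative log of one minus f when y is zero):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost(X, y, w, b):
    # J(w, b) = (1/m) * sum over the m examples of
    #   -y * log(f) - (1 - y) * log(1 - f),
    # i.e. -log(f) when y = 1 and -log(1 - f) when y = 0.
    m = X.shape[0]
    f = sigmoid(X @ w + b)   # predictions f(x) for all m examples
    loss = -y * np.log(f) - (1 - y) * np.log(1 - f)
    return loss.sum() / m

# Tiny synthetic training set with two features.
X = np.array([[0.5, 1.5], [1.0, 1.0], [2.0, 2.5], [3.0, 0.5]])
y = np.array([0.0, 0.0, 1.0, 1.0])
print(logistic_cost(X, y, w=np.array([1.0, 1.0]), b=-3.0))
```

Minimizing this J over w and b, for example with gradient descent, is exactly the training step the next videos set up.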
patient's"}, {"start": 532.68, "end": 539.36, "text": " tumor is almost certain to be malignant, say 99.9% chance of malignancy, but it turns out"}, {"start": 539.36, "end": 541.12, "text": " to actually not be malignant."}, {"start": 541.12, "end": 547.24, "text": " So y equals zero, then we penalize the model with a very high loss."}, {"start": 547.24, "end": 552.36, "text": " So in this case of y equals zero, similar to the case of y equals one on the previous"}, {"start": 552.36, "end": 557.6800000000001, "text": " slide, the further the prediction f of x is away from the true value of y, the higher"}, {"start": 557.6800000000001, "end": 558.6800000000001, "text": " the loss."}, {"start": 558.6800000000001, "end": 566.8, "text": " And in fact, if f of x approaches zero, the loss here actually goes really, really large"}, {"start": 566.8, "end": 569.84, "text": " and in fact approaches infinity."}, {"start": 569.84, "end": 576.26, "text": " So when the true label is one, the algorithm is strongly incentivized not to predict something"}, {"start": 576.26, "end": 578.88, "text": " too close to zero."}, {"start": 578.88, "end": 582.88, "text": " So in this video, you saw why the squared error cost function doesn't work well for"}, {"start": 582.88, "end": 585.36, "text": " logistic regression."}, {"start": 585.36, "end": 593.12, "text": " We also defined the loss for a single training example and came up with a new definition"}, {"start": 593.12, "end": 597.96, "text": " for the loss function for logistic regression."}, {"start": 597.96, "end": 603.4, "text": " It turns out that with this choice of loss function, the overall cost function will be"}, {"start": 603.4, "end": 610.0799999999999, "text": " convex and thus you can reliably use gradient descent to take you to the global minimum."}, {"start": 610.0799999999999, "end": 615.3199999999999, "text": " Proving that this function is convex is beyond the scope of this course."}, {"start": 615.3199999999999, "end": 
621.3199999999999, "text": " You may remember that the cost function is a function of the entire training set and"}, {"start": 621.3199999999999, "end": 628.24, "text": " is therefore the average or one over m times the sum of the loss function on the individual"}, {"start": 628.24, "end": 630.36, "text": " training examples."}, {"start": 630.36, "end": 639.2, "text": " So the cost on a certain set of parameters w and b is equal to one over m times the sum"}, {"start": 639.2, "end": 644.48, "text": " over all the training examples of the loss on the training examples."}, {"start": 644.48, "end": 651.24, "text": " And if you can find the value of the parameters w and b that minimizes this, then you'd have"}, {"start": 651.24, "end": 657.08, "text": " a pretty good set of values for the parameters w and b for logistic regression."}, {"start": 657.08, "end": 662.5, "text": " In the upcoming optional lab, you get to take a look at how the squared error cost function"}, {"start": 662.5, "end": 667.7800000000001, "text": " doesn't work very well for classification because you see that the surface plot results"}, {"start": 667.7800000000001, "end": 672.84, "text": " in a very wiggly cost surface with many local minima."}, {"start": 672.84, "end": 680.36, "text": " Then you take a look at the new logistic loss function and as you can see here, this produces"}, {"start": 680.36, "end": 687.0, "text": " a nice and smooth convex surface plot that does not have all those local minima."}, {"start": 687.0, "end": 692.2, "text": " So please take a look at the code and the plots after this video."}, {"start": 692.2, "end": 695.64, "text": " Alright so we've seen a lot in this video."}, {"start": 695.64, "end": 700.68, "text": " In the next video, let's go back and take the loss function for a single training example"}, {"start": 700.68, "end": 706.0, "text": " and use that to define the overall cost function for the entire training set."}, {"start": 706.0, "end": 710.88, "text": " 
And we'll also figure out a simpler way to write out the cost function, which will then"}, {"start": 710.88, "end": 716.64, "text": " later allow us to run gradient descent to find good parameters for logistic regression."}, {"start": 716.64, "end": 718.24, "text": " Let's go on to the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=wB08Jlmhi24
3.5 Cost Function | Simplified Cost Function for Logistic Regression --[Machine Learning|Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
In the last video, you saw the loss function and the cost function for logistic regression. In this video, you'll see a slightly simpler way to write out the loss and cost functions so that the implementation can be a bit simpler when we get to gradient descent for fitting the parameters of a logistic regression model. Let's take a look. As a reminder, here's the loss function that we had defined in the previous video for logistic regression. Now, because we're still working on the binary classification problem, y is either 0 or 1. Because y is either 0 or 1 and cannot take on any value other than 0 or 1, we'll be able to come up with a simpler way to write this loss function. You can write the loss function as follows. Given the prediction f of x and the target label y, the loss equals negative y times log of f minus 1 minus y times log of 1 minus f. And it turns out this equation, which we just wrote in one line, is completely equivalent to this more complex formula up here. Let's see why this is the case. Now remember, y can only take on the values of either 1 or 0. In the first case, let's say y equals 1. This first y over here is 1, and this 1 minus y is 1 minus 1, which is therefore equal to 0. And so the loss becomes negative 1 times log of f of x minus 0 times a bunch of stuff that becomes 0 and goes away. And so when y is equal to 1, the loss is indeed the first term on top, negative log of f of x. Let's look at the second case, when y is equal to 0. In this case, this y here is equal to 0, so this first term goes away. And the second term is 1 minus 0 times that logarithmic term. So the loss becomes this negative 1 times log of 1 minus f of x. And that's just equal to this second term up here. And so in the case of y equals 0, we also get back the original loss function as defined above. 
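The case analysis just walked through can be written compactly. The piecewise loss and the equivalent single expression are:

```latex
L\big(f_{\vec{w},b}(\vec{x}^{(i)}),\, y^{(i)}\big) =
\begin{cases}
  -\log\big(f_{\vec{w},b}(\vec{x}^{(i)})\big)     & \text{if } y^{(i)} = 1\\[4pt]
  -\log\big(1 - f_{\vec{w},b}(\vec{x}^{(i)})\big) & \text{if } y^{(i)} = 0
\end{cases}
\;=\;
-\,y^{(i)}\log\big(f_{\vec{w},b}(\vec{x}^{(i)})\big)
-\big(1-y^{(i)}\big)\log\big(1-f_{\vec{w},b}(\vec{x}^{(i)})\big)
```

Plugging $y^{(i)}=1$ or $y^{(i)}=0$ into the right-hand side zeroes out one term and recovers the corresponding case on the left, exactly as argued above.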
So what you see is that whether y is 1 or 0, the single expression here is equivalent to the more complex expression up here, which is why this gives us a simpler way to write the loss with just one equation without separating out these two cases like we did on top. Using this simplified loss function, let's go back and write out the cost function for logistic regression. So here again is the simplified loss function. And recall that the cost j is just the average loss, average across the entire training set of m examples. So it's 1 over m times the sum of the loss from i equals 1 to m. Now if you plug in a definition for the simplified loss from above, then it looks like this. 1 over m times the sum of this term above. And if you bring the negative signs and move them outside, then you end up with this expression over here. And this is the cost function. The cost function that pretty much everyone uses to train logistic regression. Now you might be wondering, why do we choose this particular function when there could be tons of other cost functions we could have chosen? Although we won't have time to go into great detail on this in this class, I'd just like to mention that this particular cost function is derived from statistics using a statistical principle called maximum likelihood estimation, which is an idea from statistics on how to efficiently find parameters for different models. And this cost function has the nice property that it is convex. But don't worry about learning the details of maximum likelihood. This puts a deeper rationale and justification behind this particular cost function. The upcoming optional lab will show you how the logistic cost function is implemented in code. I recommend taking a look at it because you implement this later in the practice lab at the end of the week. This upcoming optional lab also shows you how two different choices of the parameters will lead to different cost calculations. 
So you can see in the plot that the better fitting blue decision boundary has a lower cost relative to the magenta decision boundary. So with the simplified cost function, we're now ready to jump into applying gradient descent to logistic regression. Let's go see that in the next video.
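The simplified loss and the cost J(w, b) described in this video can be sketched in NumPy as follows. This is a minimal illustration, not the course's lab code; the function and variable names are mine.

```python
import numpy as np

def sigmoid(z):
    # Logistic function: maps any real z into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost(X, y, w, b):
    # J(w, b) = -(1/m) * sum( y*log(f) + (1-y)*log(1-f) )
    # X: (m, n) feature matrix, y: (m,) labels in {0, 1}
    m = X.shape[0]
    f = sigmoid(X @ w + b)  # model predictions, shape (m,)
    loss = -y * np.log(f) - (1.0 - y) * np.log(1.0 - f)
    return loss.sum() / m
```

For example, a confident correct prediction (f close to the true label) contributes a loss near zero, while a confident wrong prediction contributes a very large loss, matching the curves discussed in the previous video.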
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=D59jK8T9dfI
3.6 Gradient Descent | Gradient Descent Implementation --[Machine Learning | Andrew Ng]
To fit the parameters of a logistic regression model, we're going to try to find the values of the parameters w and b that minimise the cost function j of w and b. And we're going to apply gradient descent to do this. Let's take a look at how. In this video, we'll focus on how to find a good choice of the parameters w and b. After you've done so, if you give the model a new input x, say a new patient at the hospital with a certain tumour size and age that needs a diagnosis, the model can then make a prediction, or it can try to estimate the probability that the label y is 1. The algorithm you can use to minimise the cost function is gradient descent. Here again is the cost function, and so if you want to minimise the cost j as a function of w and b, well, here's the usual gradient descent algorithm, where you repeatedly update each parameter as the old value minus alpha, the learning rate, times this derivative term. Let's take a look at the derivative of j with respect to wj, this term up on top here, where as usual j goes from 1 through n, where n is the number of features. If you apply the rules of calculus, you can show that the derivative with respect to wj of the cost function capital J is equal to this expression over here: it's 1 over m times the sum from i equals 1 through m of this error term, that is, f minus the label y, times xj. Here this x superscript i subscript j is the j-th feature of training example i. Now let's also look at the derivative of j with respect to the parameter b. It turns out to be this expression over here, and it's quite similar to the expression above, except that it is not multiplied by this x superscript i subscript j at the end. So just as a reminder, similar to what you saw for linear regression, the way to carry out these updates is to use simultaneous updates, meaning that you would first compute the right-hand side for all of these updates, and then simultaneously overwrite all the values on the left at the same time.
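Written out, the derivative terms described above, and the repeated simultaneous updates that use them, are:

```latex
\frac{\partial J(\vec{w},b)}{\partial w_j}
  = \frac{1}{m}\sum_{i=1}^{m}\big(f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}\big)\,x_j^{(i)},
\qquad
\frac{\partial J(\vec{w},b)}{\partial b}
  = \frac{1}{m}\sum_{i=1}^{m}\big(f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}\big),
\qquad
w_j := w_j - \alpha\,\frac{\partial J(\vec{w},b)}{\partial w_j},
\qquad
b := b - \alpha\,\frac{\partial J(\vec{w},b)}{\partial b}
```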
So let me take these derivative expressions here and plug them into these terms here. This gives you gradient descent for logistic regression. Now one funny thing you might be wondering is, huh, that's weird. These two equations look exactly like the algorithm we had come up with previously for linear regression. So you might be wondering, is linear regression actually secretly the same as logistic regression? Well, even though these equations look the same, the reason that this is not linear regression is because the definition for the function f of x has changed. In linear regression, f of x is w x plus b, but in logistic regression, f of x is defined to be the sigmoid function applied to w x plus b. So although the algorithm written out looks the same for both linear regression and logistic regression, actually they're two very different algorithms, because the definition for f of x is not the same. When we talked about gradient descent for linear regression previously, you saw how you can monitor gradient descent to make sure it converges. You can just apply the same method for logistic regression to make sure it also converges. I've written out these updates as if you're updating the parameters wj one parameter at a time. Similar to the discussion on vectorized implementations of linear regression, you can also use vectorization to make gradient descent run faster for logistic regression. I won't dive into the details of the vectorized implementation in this video, but you can also learn more about it and see the code in the optional labs. So now you know how to implement gradient descent for logistic regression. You might also remember feature scaling when we were using linear regression, where you saw how feature scaling, that is, scaling all the features to take on similar ranges of values, say between negative one and plus one, can help gradient descent to converge faster.
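The simultaneous, vectorized updates described above can be sketched in NumPy like this. This is a minimal illustration with names of my own choosing, not the course's lab code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent_logistic(X, y, alpha=0.1, num_iters=1000):
    # X: (m, n) feature matrix, y: (m,) labels in {0, 1}
    m, n = X.shape
    w = np.zeros(n)
    b = 0.0
    for _ in range(num_iters):
        f = sigmoid(X @ w + b)   # predictions for all m examples at once
        err = f - y              # error term (f - y), shape (m,)
        dj_dw = (X.T @ err) / m  # partial derivatives w.r.t. each w_j
        dj_db = err.sum() / m    # partial derivative w.r.t. b
        # Simultaneous update: both gradients were computed above,
        # before either parameter is overwritten.
        w -= alpha * dj_dw
        b -= alpha * dj_db
    return w, b
```

On a tiny one-feature dataset where small x means label 0 and large x means label 1, the learned parameters place the decision boundary between the two groups, so the trained model classifies the training points correctly.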
Feature scaling, applied the same way to scale the different features to take on similar ranges of values, can also speed up gradient descent for logistic regression. In the upcoming optional lab, you also see how the gradient for logistic regression can be calculated in code. This will be useful to look at because you also implement this in a practice lab at the end of this week. After you run gradient descent in this lab, there'll be a nice set of animated plots that show gradient descent in action. You see the sigmoid function, the contour plot of the cost, the 3D surface plot of the cost, and the learning curve all evolve as gradient descent runs. There will be another optional lab after that, which is short and sweet, but also very useful, because it'll show you how to use the popular scikit-learn library to train a logistic regression model for classification. Many machine learning practitioners in many companies today use scikit-learn regularly as part of their job. And so I hope you check out the scikit-learn functions as well and take a look at how they are used. So that's it. You now know how to implement logistic regression. This is a very powerful, very widely used learning algorithm, and you now know how to get it to work yourself. Congratulations.
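The scikit-learn route mentioned here looks roughly like the sketch below. The tiny tumor-size dataset is made up purely for illustration; scikit-learn runs its own solver internally, so there is no explicit gradient descent loop to write.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up toy data: tumor size in cm as the single feature, label 1 = malignant.
X = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression()
model.fit(X, y)  # fits the parameters w and b using scikit-learn's optimizer

labels = model.predict([[1.2], [3.2]])  # hard 0/1 predictions
probs = model.predict_proba([[3.2]])    # [P(y=0), P(y=1)] for each row
```

Note that `predict` applies the 0.5 threshold for you, while `predict_proba` exposes the estimated probability that the label is 1, matching the interpretation of the sigmoid output discussed in these videos.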
[{"start": 0.0, "end": 7.28, "text": " To fit the parameters of a logistic regression model, we're going to try to find the values"}, {"start": 7.28, "end": 12.92, "text": " of the parameters w and b that minimise the cost function j of w and b."}, {"start": 12.92, "end": 16.4, "text": " And we're going to apply gradient descent to do this."}, {"start": 16.4, "end": 18.76, "text": " Let's take a look at how."}, {"start": 18.76, "end": 24.8, "text": " In this video, we'll focus on how to find a good choice of the parameters w and b."}, {"start": 24.8, "end": 31.92, "text": " After you've done so, if you give the model a new input x, say a new patient at the hospital"}, {"start": 31.92, "end": 39.08, "text": " with a certain tumour size and age that needs a diagnosis, the model can then make a prediction"}, {"start": 39.08, "end": 46.28, "text": " or it can try to estimate the probability that the label y is 1."}, {"start": 46.28, "end": 51.5, "text": " The algorithm you can use to minimise the cost function is gradient descent."}, {"start": 51.5, "end": 58.24, "text": " Here again is the cost function, and so if you want to minimise the cost j as a function"}, {"start": 58.24, "end": 65.64, "text": " of w and b, well, here's the usual gradient descent algorithm where you repeatedly update"}, {"start": 65.64, "end": 75.32, "text": " each parameter as the O value minus alpha, the learning rate, times this derivative term."}, {"start": 75.32, "end": 82.55999999999999, "text": " Let's take a look at the derivative of j with respect to wj, this term up on top here, where"}, {"start": 82.55999999999999, "end": 88.82, "text": " as usual j goes from 1 through n, where n is the number of features."}, {"start": 88.82, "end": 94.52, "text": " If someone were to apply the rules of calculus, you can show that the derivative with respect"}, {"start": 94.52, "end": 101.35999999999999, "text": " to wj of the cost function capital J is equal to this expression over here."}, 
{"start": 101.36, "end": 111.64, "text": " Is 1 over m times the sum from 1 through m of this error term, that is f minus the label"}, {"start": 111.64, "end": 116.96, "text": " y times xj."}, {"start": 116.96, "end": 123.44, "text": " Here this xij is the j feature of training example i."}, {"start": 123.44, "end": 129.07999999999998, "text": " Now let's also look at the derivative of j with respect to the parameter b."}, {"start": 129.08, "end": 135.4, "text": " It turns out to be this expression over here, and it's quite similar to the expression above,"}, {"start": 135.4, "end": 142.48000000000002, "text": " except that it is not multiplied by this x super strip i subscript j at the end."}, {"start": 142.48000000000002, "end": 147.36, "text": " So just as a reminder, similar to what you saw for linear regression, the way to carry"}, {"start": 147.36, "end": 153.08, "text": " out these updates is to use simultaneous updates, meaning that you would first compute the right"}, {"start": 153.08, "end": 159.60000000000002, "text": " hand side for all of these updates, and then simultaneously overwrite all the values on"}, {"start": 159.60000000000002, "end": 162.88000000000002, "text": " the left at the same time."}, {"start": 162.88000000000002, "end": 171.28, "text": " So let me take these derivative expressions here and plug them into these terms here."}, {"start": 171.28, "end": 177.5, "text": " This gives you gradient descent for logistic regression."}, {"start": 177.5, "end": 182.52, "text": " Now one funny thing you might be wondering is, huh, that's weird."}, {"start": 182.52, "end": 186.92000000000002, "text": " These two equations look exactly like the algorithm we had come up with previously for"}, {"start": 186.92000000000002, "end": 188.76000000000002, "text": " linear regression."}, {"start": 188.76000000000002, "end": 194.48000000000002, "text": " So you might be wondering, is linear regression actually secretly the same as logistic regression?"}, 
{"start": 194.48000000000002, "end": 201.52, "text": " Well, even though these equations look the same, the reason that this is not linear regression"}, {"start": 201.52, "end": 206.92000000000002, "text": " is because the definition for the function f of x has changed."}, {"start": 206.92, "end": 214.16, "text": " In linear regression, f of x is this is w x plus b, but in logistic regression, f of"}, {"start": 214.16, "end": 220.2, "text": " x is defined to be the sigmoid function applied to w x plus b."}, {"start": 220.2, "end": 225.56, "text": " So although the algorithm written looked the same for both linear regression and logistic"}, {"start": 225.56, "end": 231.0, "text": " regression, actually they're two very different algorithms because the definition for f of"}, {"start": 231.0, "end": 234.0, "text": " x is not the same."}, {"start": 234.0, "end": 238.92, "text": " When we talked about gradient descent for linear regression previously, you saw how"}, {"start": 238.92, "end": 243.36, "text": " you can monitor gradient descent to make sure it converges."}, {"start": 243.36, "end": 250.52, "text": " You can just apply the same method for logistic regression to make sure it also converges."}, {"start": 250.52, "end": 257.2, "text": " I've written out these updates as if you're updating the parameters w j one parameter"}, {"start": 257.2, "end": 260.96, "text": " at a time."}, {"start": 260.96, "end": 269.15999999999997, "text": " Similar to the discussion on vectorized implementations of linear regression, you can also use vectorization"}, {"start": 269.15999999999997, "end": 273.76, "text": " to make gradient descent run faster for logistic regression."}, {"start": 273.76, "end": 278.76, "text": " I won't dive into the details of the vectorized implementation in this video, but you can"}, {"start": 278.76, "end": 283.15999999999997, "text": " also learn more about it and see the code in the optional labs."}, {"start": 283.15999999999997, "end": 289.12, 
"text": " So now you know how to implement gradient descent for logistic regression."}, {"start": 289.12, "end": 294.72, "text": " You might also remember feature scaling when we were using linear regression, where you"}, {"start": 294.72, "end": 299.36, "text": " saw how feature scaling that is scaling all the features to take on similar ranges of"}, {"start": 299.36, "end": 304.64, "text": " values, say between negative one and plus one, how that can help gradient descent to"}, {"start": 304.64, "end": 306.8, "text": " converge faster."}, {"start": 306.8, "end": 310.96, "text": " Feature scaling applied the same way to scale the different features to take on similar"}, {"start": 310.96, "end": 316.68, "text": " ranges of values can also speed up gradient descent for logistic regression."}, {"start": 316.68, "end": 324.08, "text": " In the upcoming optional lab, you also see how the gradient for the regression can be"}, {"start": 324.08, "end": 326.16, "text": " calculated in code."}, {"start": 326.16, "end": 330.92, "text": " This will be useful to look at because you also implement this in a practice lab at the"}, {"start": 330.92, "end": 333.2, "text": " end of this week."}, {"start": 333.2, "end": 337.92, "text": " After you run gradient descent in this lab, there'll be a nice set of animated plots that"}, {"start": 337.92, "end": 340.52, "text": " show gradient descent in action."}, {"start": 340.52, "end": 345.92, "text": " You see the sigmoid function, the control plot of the cost, the 3d surface plot to the"}, {"start": 345.92, "end": 351.04, "text": " cost and the learning curve all evolve as gradient descent runs."}, {"start": 351.04, "end": 356.2, "text": " There will be another optional lab after that, which is short and sweet, but also very useful"}, {"start": 356.2, "end": 361.72, "text": " because they'll show you how to use the popular scikit-learn library to train the logistic"}, {"start": 361.72, "end": 365.20000000000005, "text": " regression 
model for classification."}, {"start": 365.20000000000005, "end": 370.16, "text": " Many machine learning practitioners in many companies today use scikit-learn regularly"}, {"start": 370.16, "end": 371.8, "text": " as part of their job."}, {"start": 371.8, "end": 376.40000000000003, "text": " And so I hope you check out the scikit-learn function as well and take a look at how that"}, {"start": 376.40000000000003, "end": 378.28000000000003, "text": " is used."}, {"start": 378.28000000000003, "end": 379.28000000000003, "text": " So that's it."}, {"start": 379.28000000000003, "end": 382.68, "text": " You now know how to implement logistic regression."}, {"start": 382.68, "end": 387.68, "text": " This is a very powerful, very widely used learning algorithm and you now know how to"}, {"start": 387.68, "end": 389.08000000000004, "text": " get it to work yourself."}, {"start": 389.08, "end": 404.71999999999997, "text": " Congratulations."}]
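The segments above walk through gradient descent for logistic regression, including the vectorized form mentioned for the optional labs. As a rough, non-authoritative sketch of that idea (the data, learning rate, and iteration count below are made up for illustration, not taken from the course), the simultaneous update of all parameters can be written in NumPy like this:

```python
import numpy as np

def sigmoid(z):
    # logistic function g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent_logistic(X, y, alpha=0.1, num_iters=1000):
    """Vectorized gradient descent for logistic regression.

    X is an (m, n) feature matrix, y an (m,) vector of 0/1 labels.
    Returns the learned parameters w (shape (n,)) and b (scalar).
    """
    m, n = X.shape
    w = np.zeros(n)
    b = 0.0
    for _ in range(num_iters):
        f = sigmoid(X @ w + b)        # f(x) = g(w·x + b) for all m examples at once
        err = f - y                   # prediction errors, shape (m,)
        w -= alpha * (X.T @ err) / m  # update every w_j simultaneously
        b -= alpha * err.sum() / m
    return w, b

# Tiny made-up 1-feature example: labels switch from 0 to 1 around x = 1.5
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = gradient_descent_logistic(X, y, alpha=0.1, num_iters=5000)
preds = (sigmoid(X @ w + b) >= 0.5).astype(int)
```

Monitoring convergence works just as described for linear regression: compute the logistic cost every few hundred iterations and check that it keeps decreasing.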
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=wBx3NZ0ucgc
3.7 Regularization to Reduce Overfitting | The problem of Overfitting -[Machine Learning|Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
Now you've seen a couple of different learning algorithms, linear regression and logistic regression. They work well for many tasks, but sometimes in an application the algorithm can run into a problem called overfitting, which can cause it to perform poorly. What I'd like to do in this video is to show you what overfitting is, as well as a closely related, almost opposite problem called underfitting. And in the next videos after this, I'll share with you some techniques for addressing overfitting. In particular, there's a method called regularization. It's a very useful technique, I use it all the time, and regularization will help you minimize this overfitting problem and get your learning algorithms to work much better. So let's take a look at what overfitting is. To help us understand it, let's go back to our original running example of predicting housing prices with linear regression, where you want to predict the price as a function of the size of a house. Suppose your data set looks like this, with the input feature x being the size of the house and the target value y being the price of the house you're trying to predict. One thing you could do is fit a linear function to this data. If you do that, you get a straight line fit to the data that maybe looks like this. But this isn't a very good model. Looking at the data, it seems pretty clear that as the size of the house increases, the housing prices kind of flatten out. So this algorithm does not fit the training data very well. The technical term for this is that the model is underfitting the training data. Another term is that the algorithm has high bias.
You may have read in the news about some learning algorithms, really unfortunately, demonstrating bias against certain ethnicities or certain genders. In machine learning, the term bias has multiple meanings. Checking learning algorithms for bias based on characteristics such as gender or ethnicity is absolutely critical. But the term bias has a second technical meaning as well, which is the one I'm using here: if the algorithm has underfit the data, meaning that it's just not even able to fit the training set that well, then there's a clear pattern in the training data that the algorithm is simply unable to capture. Another way to think of this form of bias is as if the learning algorithm has a very strong preconception, or we say a very strong bias, that the housing prices are going to be a completely linear function of the size, despite data to the contrary. So this preconception that the data is linear causes it to fit a straight line that fits the data poorly, leading it to underfit the data. Now let's look at a second variation of a model: if you instead fit a quadratic function to the data with two features, x and x squared, then when you fit the parameters w1 and w2, you can get a curve that fits the data somewhat better. Maybe it looks like this. So if you were to get a new house that's not in this set of five training examples, this model would probably do quite well on that new house. The idea that you want your learning algorithm to do well even on examples that are not in the training set, say if you're a real estate agent, is called generalization. Technically, we say that you want your learning algorithm to generalize well, which means to make good predictions even on brand new examples that it has never seen before. So this quadratic model seems to fit the training set not perfectly, but pretty well, and I think it will generalize well to new examples. Now let's look at the other extreme.
What if you were to fit a fourth-order polynomial to the data? So you have x, x squared, x cubed, and x to the fourth all as features. With this fourth-order polynomial, you can actually fit a curve that passes through all five of the training examples exactly. And you might get a curve that looks like this. On one hand, this seems to do an extremely good job fitting the training data, because it passes through all of the training data perfectly. In fact, you'd be able to choose parameters that result in the cost function being exactly equal to zero, because the errors are zero on all five training examples. But this is a very wiggly curve. It's going up and down all over the place. And if you have this house size right here, the model would predict that this house is cheaper than houses that are smaller than it. So we don't think that this is a particularly good model for predicting housing prices. The technical term is that we'll say this model has overfit the data, or this model has an overfitting problem. Because even though it fits the training set very well, it has fit the data almost too well, hence it is overfit. And it does not look like this model will generalize to new examples that it has never seen before. Another term for this is that the algorithm has high variance. In machine learning, many people will use the terms overfit and high variance almost interchangeably, and we use the terms underfit and high bias almost interchangeably. The intuition behind overfitting or high variance is that the algorithm is trying very, very hard to fit every single training example. And it turns out that if your training set were just even a little bit different, say one house was priced just a little bit more or a little bit less, then the function that the algorithm fits could end up being totally different.
So if two different machine learning engineers were to fit this fourth-order polynomial model to just slightly different data sets, they could end up with totally different predictions or highly variable predictions. And that's why we say the algorithm has high variance. Comparing this rightmost model with the one in the middle for the same house, it seems the middle model gives the much more reasonable prediction for price. There isn't really a name for this case in the middle, but I'm just going to call it just right, because it is neither underfit nor overfit. So we can say that the goal of machine learning is to find a model that hopefully is neither underfitting nor overfitting. In other words, hopefully a model that has neither high bias nor high variance. When I think about underfitting and overfitting, high bias and high variance, I'm sometimes reminded of the children's story of Goldilocks and the three bears. In this children's tale, a girl called Goldilocks visits the home of a bear family. There's a bowl of porridge that's too cold to taste, and so that's no good. There's also a bowl of porridge that's too hot to eat, so that's no good either. But there's a bowl of porridge that is neither too cold nor too hot. The temperature is in the middle, which is just right to eat. So to recap, if you have too many features, like the fourth-order polynomial on the right, then the model may fit the training set well, but almost too well, so it overfits and has high variance. On the flip side, if you have too few features, then, like the example on the left, it underfits and has high bias. And in this example, using quadratic features x and x squared seems to be just right. So far, we've looked at underfitting and overfitting for a linear regression model. Similarly, overfitting applies to classification as well. Here's a classification example with two features x1 and x2, where x1 is maybe the tumor size and x2 is the age of the patient.
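To make this recap concrete, here is a small, hypothetical numerical sketch (the five (size, price) points are made up) comparing degree-1, degree-2, and degree-4 polynomial fits. The fourth-order fit drives the training error to essentially zero, which is exactly the "fits the training set too well" symptom described above:

```python
import numpy as np

# Five made-up (size, price) training examples where price flattens out
x = np.array([1.0, 1.5, 2.0, 2.5, 3.0])           # size in 1000s of sq ft
y = np.array([300.0, 430.0, 480.0, 500.0, 510.0])  # price in $1000s

train_mse = {}
for degree in (1, 2, 4):
    coeffs = np.polyfit(x, y, degree)   # least-squares polynomial fit
    preds = np.polyval(coeffs, x)
    train_mse[degree] = np.mean((preds - y) ** 2)

# The degree-4 curve passes through all five points, so its training
# error is ~0; but zero training error says nothing about new houses.
print(train_mse)
```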
And we're trying to classify if a tumor is malignant or benign, as denoted by these crosses and circles. One thing you can do is fit a logistic regression model, just a simple model like this, where as usual, g is the sigmoid function, and this term here inside is z. If you do that, you end up with a straight line as the decision boundary. This is the line where z is equal to zero, which separates the positive and negative examples. This straight line doesn't look terrible. It looks kind of okay, but it doesn't look like a very good fit to the data either. So this is an example of underfitting, or of high bias. Let's look at another example. If you were to add these quadratic terms to your features, then z becomes this new term in the middle, and the decision boundary, that is, where z equals zero, can look more like this, more like an ellipse or part of an ellipse. And this is a pretty good fit to the data, even though it does not perfectly classify every single training example in the training set. Notice how some of these crosses get classified among the circles. But this model looks pretty good. I'm going to call it just right, and it looks like it will generalize pretty well to new patients. And finally, at the other extreme, if you were to fit a very high-order polynomial with many, many features like these, then the model may try really hard and contort or twist itself to find a decision boundary that fits your training data perfectly. Having all these higher-order polynomial features allows the algorithm to choose a really overly complex decision boundary. If the features are tumor size and age, and you're trying to classify tumors as malignant or benign, then this doesn't really look like a very good model for making predictions. So once again, this is an instance of overfitting and high variance, because this model, despite doing very well on the training set, doesn't look like it will generalize well to new examples.
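As a rough sketch of the classification case (the circle-shaped data below is made up and stands in for the tumor-size/age example), adding quadratic features lets a plain gradient-descent logistic regression draw an ellipse-like boundary that fits far better than a straight line:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, alpha=0.1, num_iters=10000):
    # plain (unregularized) gradient descent on the logistic cost
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(num_iters):
        err = sigmoid(X @ w + b) - y
        w -= alpha * (X.T @ err) / len(y)
        b -= alpha * err.mean()
    return w, b

def accuracy(X, y, w, b):
    return np.mean((sigmoid(X @ w + b) >= 0.5) == y)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.5).astype(float)  # positives inside a circle

# Linear features only: the z = 0 boundary is a straight line and underfits
w_lin, b_lin = fit_logistic(X, y)
acc_linear = accuracy(X, y, w_lin, b_lin)

# Add x1^2 and x2^2: the z = 0 boundary can now be an ellipse
Xq = np.column_stack([X, X[:, 0] ** 2, X[:, 1] ** 2])
w_q, b_q = fit_logistic(Xq, y)
acc_quad = accuracy(Xq, y, w_q, b_q)
```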
So now you've seen how an algorithm can underfit or have high bias, or overfit and have high variance. You may want to know how you can get a model that is just right. In the next video, we'll look at some ways you can address the issue of overfitting. We'll also touch on some ideas relevant for addressing underfitting. Let's go on to the next video.
[{"start": 0.0, "end": 7.28, "text": " Now you've seen a couple of different learning algorithms, linear regression and logistic"}, {"start": 7.28, "end": 8.28, "text": " regression."}, {"start": 8.28, "end": 13.88, "text": " They work well for many tasks, but sometimes in an application the algorithm could run"}, {"start": 13.88, "end": 19.6, "text": " into a problem called overfitting, which can cause it to perform poorly."}, {"start": 19.6, "end": 24.84, "text": " What I'd like to do in this video is to show you what is overfitting, as well as a closely"}, {"start": 24.84, "end": 28.88, "text": " related almost opposite problem called underfitting."}, {"start": 28.88, "end": 34.879999999999995, "text": " And in the next videos after this, I'll share with you some techniques for addressing overfitting."}, {"start": 34.879999999999995, "end": 37.84, "text": " In particular, there's a method called regularization."}, {"start": 37.84, "end": 42.64, "text": " Very useful technique, I use it all the time, but regularization will help you minimize"}, {"start": 42.64, "end": 47.4, "text": " this overfitting problem and get your learning algorithms to work much better."}, {"start": 47.4, "end": 51.879999999999995, "text": " So let's take a look at what is overfitting."}, {"start": 51.879999999999995, "end": 58.84, "text": " To help us understand what is overfitting, let's take a look at a few examples."}, {"start": 58.84, "end": 65.24000000000001, "text": " Let's go back to our original example of predicting housing prices with linear regression, where"}, {"start": 65.24000000000001, "end": 70.16, "text": " you want to predict the price as a function of the size of a house."}, {"start": 70.16, "end": 78.28, "text": " To help us understand what is overfitting, let's take a look at a linear regression example."}, {"start": 78.28, "end": 82.80000000000001, "text": " And I'm going to go back to our original running example of predicting housing prices with"}, {"start": 
82.80000000000001, "end": 85.5, "text": " linear regression."}, {"start": 85.5, "end": 91.12, "text": " Suppose your data set looks like this, with the input feature x being the size of the"}, {"start": 91.12, "end": 96.56, "text": " house and the value y they're trying to predict the price of the house."}, {"start": 96.56, "end": 101.48, "text": " One thing you could do is fit a linear function to this data."}, {"start": 101.48, "end": 107.64, "text": " And if you do that, you get a straight line fit to the data that maybe looks like this."}, {"start": 107.64, "end": 110.36, "text": " But this isn't a very good model."}, {"start": 110.36, "end": 115.76, "text": " Looking at the data, it seems pretty clear that as the size of the house increases, the"}, {"start": 115.76, "end": 119.08, "text": " housing prices kind of flatten out."}, {"start": 119.08, "end": 124.2, "text": " So this algorithm does not fit the training data very well."}, {"start": 124.2, "end": 131.4, "text": " The technical term for this is the model is underfitting the training data."}, {"start": 131.4, "end": 136.57999999999998, "text": " Another term is the algorithm has high bias."}, {"start": 136.58, "end": 142.52, "text": " You may have read in the news about some learning algorithms, really unfortunately, demonstrating"}, {"start": 142.52, "end": 146.44, "text": " bias against certain ethnicities or certain genders."}, {"start": 146.44, "end": 152.08, "text": " In machine learning, the term bias has multiple meanings."}, {"start": 152.08, "end": 158.4, "text": " Checking learning algorithms for bias based on characteristics such as gender or ethnicity"}, {"start": 158.4, "end": 161.20000000000002, "text": " is absolutely critical."}, {"start": 161.2, "end": 167.56, "text": " But the term bias has a second technical meaning as well, which is the one I'm using here,"}, {"start": 167.56, "end": 172.28, "text": " which is if the algorithm has underfit the data, meaning that it's just not even 
able"}, {"start": 172.28, "end": 178.12, "text": " to fit the training set that well, that there's a clear pattern in the training data that"}, {"start": 178.12, "end": 181.79999999999998, "text": " the algorithm is just unable to capture."}, {"start": 181.79999999999998, "end": 187.32, "text": " Another way to think of this form of bias is as if the learning algorithm has a very"}, {"start": 187.32, "end": 192.64, "text": " strong preconception or we say a very strong bias that the housing prices are going to"}, {"start": 192.64, "end": 199.94, "text": " be a completely linear function of the size, despite data to the contrary."}, {"start": 199.94, "end": 206.24, "text": " So this preconception that the data is linear causes it to fit a straight line that fits"}, {"start": 206.24, "end": 210.92, "text": " the data poorly, leading it to underfit the data."}, {"start": 210.92, "end": 219.32, "text": " Now let's look at a second variation of a model, which is if you instead fit a quadratic"}, {"start": 219.32, "end": 225.6, "text": " function to the data with two features, x and x squared, then when you fit the parameters"}, {"start": 225.6, "end": 233.0, "text": " w1 and w2, you can get a curve that fits the data somewhat better."}, {"start": 233.0, "end": 235.88, "text": " Maybe it looks like this."}, {"start": 235.88, "end": 242.57999999999998, "text": " So if you were to get a new house that's not in this set of five training examples, this"}, {"start": 242.57999999999998, "end": 247.06, "text": " model would probably do quite well on that new house."}, {"start": 247.06, "end": 251.74, "text": " So if you're a real estate agent, the idea that you want your learning algorithm to do"}, {"start": 251.74, "end": 258.12, "text": " well, even on examples that are not on the training set, that's called generalization."}, {"start": 258.12, "end": 263.6, "text": " Technically, we say that you want your learning algorithm to generalize well, which means"}, {"start": 263.6, 
"end": 269.04, "text": " to make good predictions even on brand new examples that it has never seen before."}, {"start": 269.04, "end": 274.76000000000005, "text": " So this quadratic model seems to fit the training set not perfectly, but pretty well."}, {"start": 274.76000000000005, "end": 278.92, "text": " And I think it will generalize well to new examples."}, {"start": 278.92, "end": 281.96000000000004, "text": " Now let's look at the other extreme."}, {"start": 281.96000000000004, "end": 286.04, "text": " What if you were to fit a fourth order polynomial to the data?"}, {"start": 286.04, "end": 292.44, "text": " So you have x, x squared, x cubed, and x to the fourth all as features."}, {"start": 292.44, "end": 296.16, "text": " With this fourth order polynomial, you can actually fit the curve that passes through"}, {"start": 296.16, "end": 299.64, "text": " all five of the training examples exactly."}, {"start": 299.64, "end": 303.62, "text": " And you might get a curve that looks like this."}, {"start": 303.62, "end": 308.96, "text": " This on one hand seems to do an extremely good job fitting the training data, because"}, {"start": 308.96, "end": 312.84, "text": " it passes through all of the training data perfectly."}, {"start": 312.84, "end": 317.48, "text": " In fact, you'll be able to choose parameters that will result in the cost function being"}, {"start": 317.48, "end": 324.40000000000003, "text": " exactly equal to zero, because the errors are zero on all five training examples."}, {"start": 324.40000000000003, "end": 326.96000000000004, "text": " But this is a very wiggly curve."}, {"start": 326.96000000000004, "end": 329.88, "text": " It's going up and down all over the place."}, {"start": 329.88, "end": 335.20000000000005, "text": " And if you have this house size right here, the model would predict that this house is"}, {"start": 335.20000000000005, "end": 339.44, "text": " cheaper than houses that are smaller than it."}, {"start": 339.44, "end": 
346.24, "text": " So we don't think that this is a particularly good model for predicting housing prices."}, {"start": 346.24, "end": 352.52, "text": " The technical term is that we'll say this model has overfit the data, or this model"}, {"start": 352.52, "end": 355.08, "text": " has an overfitting problem."}, {"start": 355.08, "end": 359.64, "text": " Because even though it fits the training set very well, it has fit the data almost too"}, {"start": 359.64, "end": 362.12, "text": " well, hence is overfit."}, {"start": 362.12, "end": 366.58, "text": " And it does not look like this model will generalize to new examples that has never"}, {"start": 366.58, "end": 368.6, "text": " seen before."}, {"start": 368.6, "end": 374.64, "text": " Another term for this is that the algorithm has high variance."}, {"start": 374.64, "end": 379.64, "text": " In machine learning, many people will use the terms overfit and high variance almost"}, {"start": 379.64, "end": 386.28, "text": " interchangeably, and we use the terms underfit and high bias almost interchangeably."}, {"start": 386.28, "end": 390.88, "text": " The intuition behind overfitting or high variance is that the algorithm is trying very, very"}, {"start": 390.88, "end": 394.64, "text": " hard to fit every single training example."}, {"start": 394.64, "end": 399.64, "text": " And it turns out that if your training set were just even a little bit different, say"}, {"start": 399.64, "end": 404.91999999999996, "text": " one house was priced just a little bit more, a little bit less, then the function that"}, {"start": 404.91999999999996, "end": 409.2, "text": " the algorithm fits could end up being totally different."}, {"start": 409.2, "end": 416.03999999999996, "text": " So if two different machine learning engineers were to fit this fourth order polynomial model"}, {"start": 416.03999999999996, "end": 420.36, "text": " to just slightly different data sets, they could end up with totally different predictions"}, 
{"start": 420.36, "end": 423.15999999999997, "text": " or highly variable predictions."}, {"start": 423.15999999999997, "end": 428.76, "text": " And that's why we say the algorithm has high variance."}, {"start": 428.76, "end": 434.36, "text": " Using this rightmost model with the one in the middle for the same house, it seems the"}, {"start": 434.36, "end": 439.15999999999997, "text": " middle model gives the much more reasonable prediction for price."}, {"start": 439.15999999999997, "end": 443.44, "text": " There isn't really a name for this case in the middle, but I'm just going to call this"}, {"start": 443.44, "end": 448.48, "text": " just right because it is neither underfit nor overfit."}, {"start": 448.48, "end": 453.56, "text": " So we can say that the goal of machine learning is to find a model that hopefully is neither"}, {"start": 453.56, "end": 455.84, "text": " underfitting nor overfitting."}, {"start": 455.84, "end": 462.91999999999996, "text": " In other words, hopefully a model that has neither high bias nor high variance."}, {"start": 462.91999999999996, "end": 468.32, "text": " When I think about underfitting and overfitting, high bias and high variance, I'm sometimes"}, {"start": 468.32, "end": 474.28, "text": " reminded of the children's story of Goldilocks and the three bears."}, {"start": 474.28, "end": 481.0, "text": " In this children's tale, a girl called Goldilocks visits the home of a bear family."}, {"start": 481.0, "end": 486.84, "text": " There's a bowl of porridge that's too cold to taste and so that's no good."}, {"start": 486.84, "end": 490.52, "text": " There's also a bowl of porridge that's too hot to eat."}, {"start": 490.52, "end": 492.28, "text": " So that's no good either."}, {"start": 492.28, "end": 496.0, "text": " But there's a bowl of porridge that is neither too cold nor too hot."}, {"start": 496.0, "end": 500.76, "text": " The temperature is in the middle, which is just right to eat."}, {"start": 500.76, "end": 506.12, 
"text": " So to recap, if you have too many features, like the full water polynomial on the right,"}, {"start": 506.12, "end": 511.32, "text": " then the model may fit the training set well, but almost too well or overfit in the high"}, {"start": 511.32, "end": 512.72, "text": " variance."}, {"start": 512.72, "end": 517.24, "text": " On the flip side, if you have too few features, then in this example, like the one on the"}, {"start": 517.24, "end": 520.68, "text": " left, it underfits and has high bias."}, {"start": 520.68, "end": 527.78, "text": " And in this example, using quadratic features x and x squared, that seems to be just right."}, {"start": 527.78, "end": 532.84, "text": " So far, we've looked at underfitting and overfitting for linear regression model."}, {"start": 532.84, "end": 537.0, "text": " Similarly, overfitting applies to classification as well."}, {"start": 537.0, "end": 543.2800000000001, "text": " Here's a classification example with two features x1 and x2, where x1 is maybe the tumor size"}, {"start": 543.2800000000001, "end": 546.38, "text": " and x2 is the age of patient."}, {"start": 546.38, "end": 552.32, "text": " And we're trying to classify if a tumor is malignant or benign, as denoted by these crosses"}, {"start": 552.32, "end": 554.2800000000001, "text": " and circles."}, {"start": 554.2800000000001, "end": 561.0, "text": " One thing you can do is fit a logistic regression model, just a simple model like this, where"}, {"start": 561.0, "end": 569.76, "text": " as usual, g is the sigmoid function, and this term here inside is z."}, {"start": 569.76, "end": 576.54, "text": " So if you do that, you end up with a straight line as the decision boundary."}, {"start": 576.54, "end": 582.5, "text": " This is the line where z is equal to zero that separates the positive and negative examples."}, {"start": 582.5, "end": 584.28, "text": " This straight line doesn't look terrible."}, {"start": 584.28, "end": 589.22, "text": " It looks kind of 
okay, but it doesn't look like a very good fit to the data either."}, {"start": 589.22, "end": 595.0, "text": " So this is an example of underfitting or of high bias."}, {"start": 595.0, "end": 597.12, "text": " Let's look at another example."}, {"start": 597.12, "end": 603.08, "text": " If you were to add to your features these quadratic terms, then z becomes this new term"}, {"start": 603.08, "end": 609.2, "text": " in the middle and the decision boundary, that is where z equals zero, can look more like"}, {"start": 609.2, "end": 613.52, "text": " this, more like an ellipse or part of an ellipse."}, {"start": 613.52, "end": 618.32, "text": " And this is a pretty good fit to the data, even though it does not perfectly classify"}, {"start": 618.32, "end": 621.6, "text": " every single training sample in the training set."}, {"start": 621.6, "end": 626.0400000000001, "text": " Notice how some of these crosses get classified among the circles."}, {"start": 626.0400000000001, "end": 627.88, "text": " But this model looks pretty good."}, {"start": 627.88, "end": 632.36, "text": " I'm going to call it just right, and it looks like this will generalize pretty well to new"}, {"start": 632.36, "end": 634.5600000000001, "text": " patients."}, {"start": 634.5600000000001, "end": 640.72, "text": " And finally, at the other extreme, if you were to fit a very high order polynomial with"}, {"start": 640.72, "end": 647.8800000000001, "text": " many, many features like these, then the model may try really hard and can't hold or twist"}, {"start": 647.88, "end": 654.04, "text": " itself to find a decision boundary that fits your training data perfectly."}, {"start": 654.04, "end": 660.04, "text": " Having all these higher order polynomial features allows the algorithm to choose this really"}, {"start": 660.04, "end": 664.04, "text": " overly complex decision boundary."}, {"start": 664.04, "end": 669.48, "text": " If the features are tumor size and age, and you're trying to classify 
tumors as malignant"}, {"start": 669.48, "end": 675.64, "text": " or benign, then this doesn't really look like a very good model for making predictions."}, {"start": 675.64, "end": 681.6, "text": " So once again, this is an instance of overfitting and high variance, because this model, despite"}, {"start": 681.6, "end": 687.98, "text": " doing very well in the training set, doesn't look like it will generalize well to new examples."}, {"start": 687.98, "end": 693.52, "text": " So now you've seen how an algorithm can underfit or have high bias or overfit and have high"}, {"start": 693.52, "end": 695.28, "text": " variance."}, {"start": 695.28, "end": 699.4, "text": " You may want to know how you can get a model that is just right."}, {"start": 699.4, "end": 704.84, "text": " In the next video, we'll look at some ways you can address the issue of overfitting."}, {"start": 704.84, "end": 709.32, "text": " We'll also touch on some ideas relevant for accessing underfitting."}, {"start": 709.32, "end": 736.32, "text": " Let's go on to the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=ce4CPW8AFE4
3.8 Regularization to Reduce Overfitting | Addressing Overfitting-[Machine Learning|Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
Later in this specialization, we'll talk about debugging and diagnosing things that can go wrong with learning algorithms. You'll also learn about specific tools to recognize when overfitting and underfitting may be occurring. But for now, when you think overfitting has occurred, let's talk about what you can do to address it. Let's say you fit a model and it has high variance, that is, it's overfit. Here's our overfit house price prediction model. One way to address this problem is to collect more training data. So that's one option. If you're able to get more data, that is, more training examples on sizes and prices of houses, then with the larger training set, the learning algorithm will learn to fit a function that is less wiggly. So you can continue to fit a high-order polynomial or some other function with a lot of features, and if you have enough training examples, it will still do okay. To summarize, the number one tool you can use against overfitting is to get more training data. Now, getting more data isn't always an option. Maybe only so many houses have been sold in this location, so maybe there just isn't more data to be had. But when the data is available, this can work really well. A second option for addressing overfitting is to see if you can use fewer features. In the previous video, our model's features included the size x as well as the size squared, that is, x squared, and x cubed and x to the fourth and so on. These were a lot of polynomial features. In that case, one way to reduce overfitting is to just not use so many of these polynomial features. But now let's look at a different example. Maybe you have a lot of different features of a house with which to try to predict its price, ranging from the size, number of bedrooms, number of floors, the age, and average income of the neighborhood, and so on, all the way to the distance to the nearest coffee shop.
It turns out that if you have a lot of features like these, but don't have enough training data, then your learning algorithm may also overfit to your training set. Now, instead of using all 100 features, if we were to pick just a subset of the most useful ones, maybe size, bedrooms, and the age of the house, if you think those are the most relevant features, then using just that smaller subset of features, you may find that your model no longer overfits as badly. Choosing the most appropriate set of features to use is sometimes also called feature selection. One way you could do so is to use your intuition to choose what you think is the best set of features, that is, what's most relevant for predicting the price. Now, one disadvantage of feature selection is that by using only a subset of the features, the algorithm is throwing away some of the information that you have about the houses. For example, maybe all of these features, all 100 of them, are actually useful for predicting the price of a house, so maybe you don't want to throw away some of the information by throwing away some of the features. Later in course two, you'll also see some algorithms for automatically choosing the most appropriate set of features to use for prediction tasks. Now, this takes us to the third option for reducing overfitting. This technique, which we'll look at in even greater depth in the next video, is called regularization. If you look at an overfit model, here's a model using polynomial features x, x squared, x cubed, and so on. You find that the parameters are often relatively large. Now, if you were to eliminate some of these features, say if you were to eliminate the feature x to the fourth, that corresponds to setting this parameter to zero. So setting a parameter to zero is equivalent to eliminating a feature, which is what we saw on the previous slide.
It turns out that regularization is a way to more gently reduce the impact of some of the features without doing something as harsh as eliminating them outright. What regularization does is encourage the learning algorithm to shrink the values of the parameters without necessarily demanding that a parameter be set to exactly zero. And it turns out that even if you fit a higher order polynomial like this, so long as you can get the algorithm to use smaller parameter values w1, w2, w3, w4, you end up with a curve that fits the training data much better. So what regularization does is it lets you keep all of your features, but it prevents the features from having an overly large effect, which is what sometimes can cause overfitting. By the way, by convention, we normally just reduce the size of the wj parameters, that is, w1 through wn. It doesn't make a huge difference whether you regularize the parameter b as well; you could do so if you want, or not if you don't. I usually don't, and it's just fine to regularize w1, w2 all the way to wn but not really encourage b to become smaller. In practice, it should make very little difference whether you also regularize b or not. So to recap, these are the three ways you saw in this video for addressing overfitting. One, collect more data. If you can get more data, this can really help reduce overfitting. Sometimes that's not possible, in which case some of the options are: two, try selecting and using only a subset of the features. You'll learn more about feature selection in course two. Three would be to reduce the size of the parameters using regularization. This will be the subject of the next video as well. As for myself, I use regularization all the time, so this is a very useful technique for training learning algorithms, including neural networks specifically, which you'll see later in the specialization as well. I hope you'll also check out the optional lab on overfitting.
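As a preview of what the next video formalizes, here's a minimal numpy sketch (invented data) of shrinking the w parameters while keeping all the features. Setting the gradient of the regularized squared-error cost to zero gives the linear system (A^T A + lambda * D) w = A^T y, where A has a leading column of ones for the intercept and D is the identity with a zero in the intercept position, so b is not penalized. Solving it with and without lambda shows the shrinkage:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data: y is roughly linear in x, but we fit features x..x^4.
x = rng.uniform(0, 2, 20)
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, 20)
X = np.column_stack([x, x**2, x**3, x**4])

def fit_regularized(X, y, lam):
    """Closed-form minimizer of the regularized squared-error cost.
    The intercept b (first entry) is deliberately not penalized."""
    m, n = X.shape
    A = np.column_stack([np.ones(m), X])   # prepend intercept column
    D = np.eye(n + 1)
    D[0, 0] = 0.0                          # don't regularize b
    return np.linalg.solve(A.T @ A + lam * D, A.T @ y)

w_unreg = fit_regularized(X, y, lam=0.0)
w_reg = fit_regularized(X, y, lam=10.0)

# All four features are still in the model, but their weights shrink.
print(np.sum(w_reg[1:] ** 2) < np.sum(w_unreg[1:] ** 2))  # True
```

Note this closed-form solve stands in for gradient descent purely for illustration; the course itself trains these models with gradient descent.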
In the lab, you will see different examples of overfitting and adjust those examples by clicking on options in the plots. You'll also be able to add your own data points by clicking on the plot and see how that changes the curve that is fit. You can also try examples for both regression and classification, and you can change the degree of the polynomial to be x, x squared, x cubed, and so on. The lab also lets you play with two different options for addressing overfitting. You can add additional training data to reduce overfitting, and you can also select which features to include or exclude as another way to try to reduce overfitting. So please take a look at the lab, which I hope will help you build your intuition about overfitting as well as some methods for addressing it. In this video, you also saw the idea of regularization at a relatively high level. I realize that all of these details on regularization may not fully make sense to you yet, but in the next video, we'll start to formulate exactly how to apply regularization and exactly what regularization means. And then we'll start to figure out how to make this work with our learning algorithms, so that linear regression and logistic regression, and in the future other algorithms as well, can avoid overfitting. Let's take a look at that in the next video.
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=SCj3h47dKL0
3.9 Regularization to Reduce Overfitting | Cost function with regularization- [ML | Andrew Ng]
In the last video, we saw that regularization tries to make the parameter values w1 through wn small to reduce overfitting. In this video, we'll build on that intuition and develop a modified cost function for your learning algorithm that you can use to actually apply regularization. Let's jump in. Recall this example from the previous video, in which we saw that if you fit a quadratic function to this data, it gives a pretty good fit. But if you fit a very high order polynomial, you end up with a curve that overfits the data. But now consider the following. Suppose that you had a way to make the parameters w3 and w4 really, really small, say close to zero. Here's what I mean. Let's say that instead of minimizing this objective function, which is the cost function for linear regression, you were to modify the cost function and add to it 1000 times w3 squared plus 1000 times w4 squared. And here I'm just choosing 1000 because it's a big number, but any other really large number would be okay. With this modified cost function, you'd in fact be penalizing the model if w3 and w4 are large. Because if you want to minimize this function, the only way to make this new cost function small is if w3 and w4 are both small, right? Because otherwise the 1000 times w3 squared and 1000 times w4 squared terms are going to be really, really big. So when you minimize this function, you're going to end up with w3 close to zero and w4 close to zero. We're effectively nearly canceling out the effect of the features x cubed and x to the power of four and getting rid of these two terms over here. And if we do that, then we end up with a fit to the data that's much closer to the quadratic function, including maybe just tiny contributions from the features x cubed and x to the fourth. And this is good because it's a much better fit to the data compared to if all the parameters could be large and you ended up with that wiggly quartic function.
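The modified cost described above can be written down directly. Here is a small Python sketch with made-up toy data, adding the video's ad-hoc penalty of 1000 times w3 squared plus 1000 times w4 squared to the usual mean squared error cost:

```python
import numpy as np

def modified_cost(w, b, x, y):
    """MSE cost for the 4th-order polynomial model,
    plus the video's ad-hoc penalty 1000*w3^2 + 1000*w4^2."""
    m = len(x)
    f = b + w[0] * x + w[1] * x**2 + w[2] * x**3 + w[3] * x**4
    mse = np.sum((f - y) ** 2) / (2 * m)
    return mse + 1000 * w[2] ** 2 + 1000 * w[3] ** 2

# Toy data where y = 2x exactly.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

w_small = np.array([2.0, 0.0, 0.0, 0.0])  # w3 = w4 = 0: no penalty
w_large = np.array([0.0, 0.0, 1.0, 0.0])  # w3 = 1: pays a 1000 penalty

print(modified_cost(w_small, 0.0, x, y))  # 0.0
print(modified_cost(w_large, 0.0, x, y))  # well above 1000
```

A minimizer of this cost is therefore pushed toward parameter vectors like `w_small`, where w3 and w4 are (close to) zero.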
More generally, here's the idea behind regularization. The idea is that if there are smaller values for the parameters, then that's a bit like having a simpler model, maybe one with fewer features, which is therefore less prone to overfitting. On the last slide, we penalized, or we say we regularized, only w3 and w4. But more generally, the way that regularization tends to be implemented is that if you have a lot of features, say 100 features, you may not know which are the most important features and which ones to penalize. So the way regularization is typically implemented is to penalize all of the features, or more precisely, you penalize all the wj parameters. And it's possible to show that this will usually result in fitting a smoother, simpler, less wiggly function that's less prone to overfitting. So for this example, if you have data with 100 features for each house, it may be hard to pick in advance which features to include and which ones to exclude. So let's build a model that uses all 100 features. You have these 100 parameters, w1 through w100, as well as the 101st parameter b. Because we don't know which of these parameters are going to be the important ones, let's penalize all of them a bit and shrink all of them by adding this new term, lambda times the sum from j equals 1 through n, where n is 100, the number of features, of wj squared. This value lambda here is the Greek letter lambda, and it's also called a regularization parameter. So similar to picking a learning rate alpha, you now also have to choose a number for lambda. A couple of things I would like to point out: by convention, instead of using lambda times the sum of wj squared, we also divide lambda by 2m, so that both the first and second terms here are scaled by 1 over 2m.
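Putting the full regularized cost into code gives the following sketch (the tiny X, y, w inputs are invented just to exercise the function; each row of X is one training example):

```python
import numpy as np

def regularized_cost(X, y, w, b, lam):
    """J(w, b) = (1/2m) * sum_i (f(x_i) - y_i)^2 + (lambda/2m) * sum_j wj^2.
    Note that b is not included in the regularization term."""
    m = X.shape[0]
    err = X @ w + b - y                       # f(x_i) - y_i for every example
    mse_term = np.sum(err ** 2) / (2 * m)
    reg_term = (lam / (2 * m)) * np.sum(w ** 2)
    return mse_term + reg_term

# Tiny invented example with m = 2 examples and n = 2 features.
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
y = np.array([1.0, 2.0])
w = np.array([1.0, 1.0])

print(regularized_cost(X, y, w, b=0.0, lam=0.0))  # 7.25: plain MSE cost
print(regularized_cost(X, y, w, b=0.0, lam=2.0))  # 8.25: adds (2/4)*(1+1)
```

With lam set to 0 the function reduces to the original mean squared error cost, which matches the lambda-equals-zero extreme discussed next.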
It turns out that by scaling both terms the same way, it becomes a little bit easier to choose a good value for lambda. In particular, you'll find that even if your training set size grows, say you find more training examples so that m, the training set size, is now bigger, the same value of lambda that you picked previously is also more likely to continue to work if you have this extra scaling by 2m. Also, by the way, by convention, we're not going to penalize the parameter b for being large. In practice, it makes very little difference whether you do or not, and some machine learning engineers, and actually some learning algorithm implementations, will also include a lambda over 2m times b squared term. But this makes very little difference in practice, and the more common convention, which we'll use in this course, is to regularize only the parameters w rather than the parameter b. So to summarize, in this modified cost function, we want to minimize the original cost, which is the mean squared error cost, plus additionally the second term, which is called the regularization term. This new cost function trades off two goals that you might have. Trying to minimize the first term encourages the algorithm to fit the training data well by minimizing the squared differences between the predictions and the actual values, and by trying to minimize the second term, the algorithm also tries to keep the parameters wj small, which will tend to reduce overfitting. The value of lambda that you choose specifies the relative importance, or the relative tradeoff, or how you balance between these two goals. Let's take a look at what different values of lambda will cause your learning algorithm to do. Let's use the housing price prediction example with linear regression, so f of x is the linear regression model. If lambda was set to be zero, then you're not using the regularization term at all, because the regularization term is multiplied by zero.
And so if lambda was zero, you end up fitting this overly wiggly, overly complex curve, and it overfits. So that was one extreme, where lambda is zero. Let's now look at the other extreme. If you set lambda to be a really, really large number, say lambda equals 10 to the power of 10, then you're placing a very heavy weight on the regularization term on the right. The only way to minimize this is to be sure that all the values of w are pretty much very close to zero. So if lambda is very, very large, the learning algorithm will choose w1, w2, w3, and w4 to be extremely close to zero, and thus f of x is basically equal to b, so the learning algorithm fits a horizontal straight line and underfits. To recap, if lambda is zero, this model will overfit. If lambda is enormous, like 10 to the power of 10, this model will underfit. So what you want is some value of lambda that is in between, one that more appropriately balances the first and second terms, trading off minimizing the mean squared error against keeping the parameters small. And when the value of lambda is not too small and not too large, but just right, then hopefully you end up able to fit a fourth order polynomial, keeping all of these features, but with a function that looks like this. So that's how regularization works. When we talk about model selection later in the specialization, we'll also see a variety of ways to choose good values for lambda. In the next two videos, we'll flesh out how to apply regularization to linear regression and logistic regression, and how to train these models with gradient descent. With that, you'll be able to avoid overfitting with both of these algorithms.
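The two extremes above can be checked numerically. This sketch (invented data, and a closed-form solve standing in for gradient descent) fits the x through x-to-the-fourth model once with lambda equal to zero and once with lambda equal to 10 to the 10; at the huge lambda, the feature weights collapse to essentially zero and the model degenerates to the horizontal line f of x approximately equal to b:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 2, 30)
y = 1.0 + x**2 + rng.normal(0, 0.1, 30)      # invented curved data
X = np.column_stack([x, x**2, x**3, x**4])

def fit(lam):
    """Closed-form minimizer of the regularized cost; b is unpenalized."""
    m, n = X.shape
    A = np.column_stack([np.ones(m), X])     # intercept column first
    D = np.eye(n + 1)
    D[0, 0] = 0.0                            # don't regularize b
    return np.linalg.solve(A.T @ A + lam * D, A.T @ y)

w_zero = fit(0.0)     # lambda = 0: unconstrained fit, prone to overfit
w_huge = fit(1e10)    # lambda = 10^10: w ~ 0, model is basically f(x) = b

print(np.max(np.abs(w_huge[1:])))          # essentially zero
print(abs(w_huge[0] - y.mean()) < 0.01)    # b settles near the mean of y
```

Since b is the only parameter left unpenalized, at huge lambda it alone absorbs the fit, which is exactly the horizontal-line underfit described above.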
[{"start": 0.0, "end": 7.28, "text": " In the last video, we saw that regularization tries to make the parameter values w1 through"}, {"start": 7.28, "end": 10.8, "text": " wn small to reduce overfitting."}, {"start": 10.8, "end": 15.84, "text": " In this video, we'll build on that intuition and develop a modified cost function for your"}, {"start": 15.84, "end": 20.64, "text": " learning algorithm they can use to actually apply regularization."}, {"start": 20.64, "end": 22.12, "text": " Let's jump in."}, {"start": 22.12, "end": 28.16, "text": " Recall this example from the previous video in which we saw that if you fit a quadratic"}, {"start": 28.16, "end": 32.0, "text": " function to this data, it gives a pretty good fit."}, {"start": 32.0, "end": 36.88, "text": " But if you fit a very high order polynomial, you end up with a curve that overfits the"}, {"start": 36.88, "end": 37.88, "text": " data."}, {"start": 37.88, "end": 41.6, "text": " But now consider the following."}, {"start": 41.6, "end": 48.879999999999995, "text": " Suppose that you had a way to make the parameters w3 and w4 really, really small, say close"}, {"start": 48.879999999999995, "end": 49.879999999999995, "text": " to zero."}, {"start": 49.879999999999995, "end": 52.400000000000006, "text": " Here's what I mean."}, {"start": 52.4, "end": 58.56, "text": " Let's say instead of minimizing this objective function, this is a cost function for linear"}, {"start": 58.56, "end": 60.32, "text": " regression."}, {"start": 60.32, "end": 68.2, "text": " Let's say you were to modify the cost function and add to it 1000 times w3 squared plus 1000"}, {"start": 68.2, "end": 70.75999999999999, "text": " times w4 squared."}, {"start": 70.75999999999999, "end": 75.84, "text": " And here I'm just choosing 1000 because it's a big number, but any other really large number"}, {"start": 75.84, "end": 77.88, "text": " would be okay."}, {"start": 77.88, "end": 84.24, "text": " So with this modified cost function, 
you'd in fact be penalizing the model if w3 and"}, {"start": 84.24, "end": 86.56, "text": " w4 are large."}, {"start": 86.56, "end": 91.36, "text": " Because if you want to minimize this function, the only way to make this new cost function"}, {"start": 91.36, "end": 95.72, "text": " small is if w3 and w4 are both small, right?"}, {"start": 95.72, "end": 102.36, "text": " Because otherwise this 1000 times w3 squared and 1000 times w4 squared terms are going"}, {"start": 102.36, "end": 105.25999999999999, "text": " to be really, really big."}, {"start": 105.26, "end": 111.04, "text": " So when you minimize this function, you're going to end up with w3 close to zero and"}, {"start": 111.04, "end": 113.48, "text": " w4 close to zero."}, {"start": 113.48, "end": 120.52000000000001, "text": " So we're effectively nearly canceling out the effect of the features x cubed and x to"}, {"start": 120.52000000000001, "end": 125.9, "text": " the power of four and getting rid of these two terms over here."}, {"start": 125.9, "end": 130.64000000000001, "text": " And if we do that, then we end up with a fit to the data that's much closer to the quadratic"}, {"start": 130.64, "end": 138.0, "text": " function, including maybe just tiny contributions from the features x cubed and x to the fourth."}, {"start": 138.0, "end": 143.51999999999998, "text": " And this is good because it's a much better fit to the data compared to if all the parameters"}, {"start": 143.51999999999998, "end": 147.83999999999997, "text": " could be large and you end up with this wiggly quadratic function."}, {"start": 147.83999999999997, "end": 151.2, "text": " More generally, here's the idea behind regularization."}, {"start": 151.2, "end": 156.64, "text": " The idea is that if there are smaller values for the parameters, then that's a bit like"}, {"start": 156.64, "end": 161.83999999999997, "text": " having a simpler model, maybe one with fewer features, which is therefore less prone to"}, {"start": 
161.83999999999997, "end": 164.35999999999999, "text": " overfitting."}, {"start": 164.35999999999999, "end": 172.48, "text": " On the last slide, we penalize or we say we regularize only w3 and w4."}, {"start": 172.48, "end": 178.2, "text": " But more generally, the way that regularization tends to be implemented is if you have a lot"}, {"start": 178.2, "end": 183.27999999999997, "text": " of features, say 100 features, you may not know which are the most important features"}, {"start": 183.27999999999997, "end": 185.39999999999998, "text": " and which ones to penalize."}, {"start": 185.4, "end": 190.20000000000002, "text": " So the way regularization is typically implemented is to penalize all of the features or more"}, {"start": 190.20000000000002, "end": 195.16, "text": " precisely, you penalize all the wj parameters."}, {"start": 195.16, "end": 200.5, "text": " And it's possible to show that this will usually result in fitting a smoother, simpler, less"}, {"start": 200.5, "end": 204.48000000000002, "text": " wiggly function that's less prone to overfitting."}, {"start": 204.48000000000002, "end": 208.72, "text": " So for this example, if you have data with 100 features for each house, it may be hard"}, {"start": 208.72, "end": 212.76, "text": " to pick in advance which features to include and which ones to exclude."}, {"start": 212.76, "end": 217.64, "text": " So let's build a model that uses all 100 features."}, {"start": 217.64, "end": 227.64, "text": " So you have these 100 parameters, w1 through w100, as well as the 101st parameter b."}, {"start": 227.64, "end": 231.95999999999998, "text": " Because we don't know which of these parameters are going to be the important ones, let's"}, {"start": 231.95999999999998, "end": 239.35999999999999, "text": " penalize all of them a bit and shrink all of them by adding this new term lambda times"}, {"start": 239.36, "end": 248.88000000000002, "text": " the sum from j equals 1 through n, where n is 100, the number of 
features of wj squared."}, {"start": 248.88000000000002, "end": 256.24, "text": " This value lambda here is the Greek alphabet lambda, and it's also called a regularization"}, {"start": 256.24, "end": 258.0, "text": " parameter."}, {"start": 258.0, "end": 265.92, "text": " So similar to picking a learning rate alpha, you now also have to choose a number for lambda."}, {"start": 265.92, "end": 272.40000000000003, "text": " A couple things I would like to point out, by convention, instead of using lambda times"}, {"start": 272.40000000000003, "end": 280.40000000000003, "text": " the sum of wj squared, we also divide lambda by 2m, so that both the first and second terms"}, {"start": 280.40000000000003, "end": 284.36, "text": " here are scaled by 1 over 2m."}, {"start": 284.36, "end": 290.08000000000004, "text": " It turns out that by scaling both terms the same way, it becomes a little bit easier to"}, {"start": 290.08, "end": 297.03999999999996, "text": " choose a good value for lambda, and in particular, you find that even if your training set size"}, {"start": 297.03999999999996, "end": 302.76, "text": " grows, say you find more training examples, so m, the training set size, is now bigger,"}, {"start": 302.76, "end": 309.44, "text": " the same value of lambda that you picked previously is now also more likely to continue to work"}, {"start": 309.44, "end": 313.44, "text": " if you have this extra scaling by 2m."}, {"start": 313.44, "end": 318.2, "text": " Also by the way, by convention, we're not going to penalize the parameter b for being"}, {"start": 318.2, "end": 319.2, "text": " large."}, {"start": 319.2, "end": 323.68, "text": " In practice, it makes very little difference whether you do or not, and some machine learning"}, {"start": 323.68, "end": 330.47999999999996, "text": " engineers, and actually some learning algorithm implementations, will also include lambda"}, {"start": 330.47999999999996, "end": 334.03999999999996, "text": " over 2m times the b 
squared term."}, {"start": 334.03999999999996, "end": 338.86, "text": " But this makes very little difference in practice, and the more common convention which we'll"}, {"start": 338.86, "end": 346.0, "text": " use in this course is to regularize only the parameters w rather than the parameter b."}, {"start": 346.0, "end": 353.52, "text": " So to summarize, in this modified cost function, we want to minimize the original cost, which"}, {"start": 353.52, "end": 359.16, "text": " is the mean squared error cost, plus additionally the second term, which is called the regularization"}, {"start": 359.16, "end": 361.22, "text": " term."}, {"start": 361.22, "end": 366.08, "text": " And so this new cost function trades off two goals that you might have."}, {"start": 366.08, "end": 371.0, "text": " Trying to minimize this first term encourages the algorithm to fit the training data well"}, {"start": 371.0, "end": 376.4, "text": " by minimizing the squared differences of the predictions and the actual values, and trying"}, {"start": 376.4, "end": 383.08, "text": " to minimize the second term, the algorithm also tries to keep the parameters wj small,"}, {"start": 383.08, "end": 386.28, "text": " which will tend to reduce overfitting."}, {"start": 386.28, "end": 392.64, "text": " The value of lambda that you choose specifies the relative importance or the relative tradeoff"}, {"start": 392.64, "end": 396.64, "text": " or how you balance between these two goals."}, {"start": 396.64, "end": 401.4, "text": " Let's take a look at what different values of lambda will cause your learning algorithm"}, {"start": 401.4, "end": 402.91999999999996, "text": " to do."}, {"start": 402.91999999999996, "end": 407.06, "text": " Let's use the housing price prediction example using linear regression."}, {"start": 407.06, "end": 410.4, "text": " So f of x is the linear regression model."}, {"start": 410.4, "end": 416.8, "text": " If lambda was set to be zero, then you're not using the 
regularization term at all,"}, {"start": 416.8, "end": 420.56, "text": " because the regularization term is multiplied by zero."}, {"start": 420.56, "end": 426.5, "text": " And so if lambda was zero, you end up fitting this overly wiggly, overly complex curve,"}, {"start": 426.5, "end": 428.36, "text": " and it overfits."}, {"start": 428.36, "end": 432.2, "text": " So that was one extreme of if lambda was zero."}, {"start": 432.2, "end": 434.38, "text": " Let's now look at the other extreme."}, {"start": 434.38, "end": 439.68, "text": " If you set lambda to be a really, really, really large number, say lambda equals 10"}, {"start": 439.68, "end": 444.82, "text": " to the power of 10, then you're placing a very heavy weight on this regularization term"}, {"start": 444.82, "end": 446.04, "text": " on the right."}, {"start": 446.04, "end": 451.24, "text": " And the only way to minimize this is to be sure that all the values of w are pretty much"}, {"start": 451.24, "end": 454.62, "text": " very close to zero."}, {"start": 454.62, "end": 462.12, "text": " So if lambda is very, very large, the learning algorithm will choose w1, w2, w3, and w4 to"}, {"start": 462.12, "end": 464.4, "text": " be extremely close to zero."}, {"start": 464.4, "end": 472.24, "text": " And thus, f of x is basically equal to b, and so the learning algorithm fits a horizontal"}, {"start": 472.24, "end": 475.38, "text": " straight line and underfits."}, {"start": 475.38, "end": 480.38, "text": " To recap, if lambda is zero, this model will overfit."}, {"start": 480.38, "end": 485.94, "text": " If lambda is enormous, like 10 to the power of 10, this model will underfit."}, {"start": 485.94, "end": 491.26, "text": " And so what you want is some value of lambda that is in between that more appropriately"}, {"start": 491.26, "end": 498.92, "text": " balances these first and second terms of trading off, minimizing the mean squared error and"}, {"start": 498.92, "end": 502.64, "text": " keeping the 
parameters small."}, {"start": 502.64, "end": 508.15999999999997, "text": " And when the value of lambda is not too small and not too large, but just right, then hopefully"}, {"start": 508.16, "end": 513.4200000000001, "text": " you end up able to fit a fourth order polynomial keeping all of these features, but with a"}, {"start": 513.4200000000001, "end": 516.7, "text": " function that looks like this."}, {"start": 516.7, "end": 519.48, "text": " So that's how regularization works."}, {"start": 519.48, "end": 524.6, "text": " When we talk about model selection later in the specialization, we'll also see a variety"}, {"start": 524.6, "end": 528.44, "text": " of ways to choose good values for lambda."}, {"start": 528.44, "end": 533.38, "text": " In the next two videos, we'll flesh out how to apply regularization to linear regression"}, {"start": 533.38, "end": 537.94, "text": " and logistic regression, and how to train these models with gradient descent."}, {"start": 537.94, "end": 541.62, "text": " With that, you'll be able to avoid overfitting with both of these algorithms."}]
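The regularized cost just described, the mean squared error term plus lambda over 2m times the sum of wj squared, with b left unpenalized, can be sketched in a few lines of NumPy. This is a minimal illustration under the video's notation, not the course's lab code; the function name and signature are my own.

```python
import numpy as np

def regularized_cost(X, y, w, b, lambda_=1.0):
    """Mean squared error cost plus an L2 regularization term on w (b is not penalized)."""
    m = X.shape[0]
    predictions = X @ w + b                            # f(x) = w . x + b for each example
    mse_term = np.sum((predictions - y) ** 2) / (2 * m)
    reg_term = (lambda_ / (2 * m)) * np.sum(w ** 2)    # (lambda / 2m) * sum_j wj^2
    return mse_term + reg_term
```

Setting `lambda_` to 0 recovers the plain squared error cost, and a very large `lambda_` makes the regularization term dominate, matching the overfitting/underfitting extremes discussed above.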
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=yRSKygmsvSI
3.10 Regularization to Reduce Overfitting | Regularized linear regression-- [ML | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
In this video, we'll figure out how to get gradient descent to work with regularized linear regression. Let's jump in. Here's the cost function we've come up with in the last video for regularized linear regression. The first part is the usual squared error cost function. And now you have this additional regularization term, where lambda is the regularization parameter. And you'd like to find parameters w and b that minimize the regularized cost function. Previously, we were using gradient descent for the original cost function, just the first term, before we added that second regularization term. And previously, we had the following gradient descent algorithm, which is that we repeatedly update the parameters wj and b for j equals 1 through n according to this formula, and b is also updated similarly. Again, alpha is a very small positive number called the learning rate. In fact, the updates for regularized linear regression look exactly the same, except that now the cost j is defined a bit differently. Previously, the derivative of j with respect to wj was given by this expression over here, and the derivative with respect to b was given by this expression over here. Now that we've added this additional regularization term, the only thing that changes is that the expression for the derivative with respect to wj ends up with one additional term. This plus lambda over m times wj. And in particular, for the new definition of the cost function j, these two expressions over here, these are the new derivatives of j with respect to wj, and the derivative of j with respect to b. Recall that we don't regularize b, so we're not trying to shrink b. That's why the update to b remains the same as before, whereas the update to w changes because the regularization term causes us to try to shrink wj. So let's take these definitions for the derivatives and put them back into the expression on the left to write out the gradient descent algorithm for regularized linear regression. 
So to implement gradient descent for regularized linear regression, this is what you would have your code do. Here's the update for wj for j equals one through n, and here's the update for b. And as usual, please remember to carry out simultaneous updates for all of these parameters. Now, in order for you to get this algorithm to work, this is all you need to know. But what I'd like to do in the remainder of this video is to go over some optional material to convey a slightly deeper intuition about what this formula is actually doing, as well as chat briefly about how these derivatives are derived. The rest of this video is completely optional. It's completely okay if you skip the rest of this video. But if you have a strong interest in math, then stick with me. It's always nice to hang out with you here. And through these equations, perhaps you can build a deeper intuition about what the math and what the derivatives are doing as well. So let's take a look. Let's look at the update rule for wj and rewrite it in another way. We're updating wj as one times wj, minus alpha times lambda over m times wj. So I've moved the regularization term from the end to the front here. And then minus alpha times one over m and then the rest of that term over there. So we just rearrange the terms a little bit. And if we simplify, then we see that wj is updated as wj times one minus alpha times lambda over m, minus alpha times this other term over here. And you might recognize this second term as the usual gradient descent update for unregularized linear regression. This is the update for linear regression before we had regularization. And this is the term we saw in week two of this course. And so the only change when you have regularization is that instead of wj being set equal to wj minus alpha times this term, wj is now set to wj times this number, minus the usual update. So this is what we had in week one of this course. So what is this first term over here?
Well, alpha is a very small positive number, say 0.01. Lambda is usually a small number, say one or maybe 10. Let's say lambda is one for this example, and m is the training set size, say 50. And so when you multiply alpha lambda over m, say 0.01 times one divided by 50, this term ends up being a small positive number, say 0.0002. And thus, one minus alpha lambda over m is going to be a number just slightly less than one. So in this case, it's 0.9998. And so the effect of this term is that on every single iteration of gradient descent, you're taking wj and multiplying it by 0.9998, that is, by a number slightly less than one, before carrying out the usual update. So what regularization is doing on every single iteration is multiplying w by a number slightly less than one, and that has the effect of shrinking the value of wj just a little bit. So this gives us another view on why regularization has the effect of shrinking the parameters wj a little bit on every iteration. And so that's how regularization works. If you're curious about how these derivative terms were computed, I have just one last optional slide that goes through a little bit of the calculation of the derivative term. Again, this slide and the rest of this video are completely optional, meaning you won't need any of this to do the practice labs and the quizzes. So let's step quickly through the derivative calculation. The derivative of j with respect to wj looks like this. Recall that f of x for linear regression is defined as w dot x plus b, the dot product of w and x, plus b. And it turns out that by the rules of calculus, the derivative looks like this: one over two m times the sum from i equals one through m of w dot x plus b minus y, times two xj, plus the derivative of the regularization term, which is lambda over two m times two wj. Notice that the second term no longer has the summation from j equals one through n. The twos cancel out here and here, and also here and here.
And so it simplifies to this expression over here. And finally, remember that wx plus b is f of x. And so you can rewrite it as this expression down here. So this is why this expression is used to compute the gradient in regularized linear regression. You now know how to implement regularized linear regression. Using this, you will reduce overfitting when you have a lot of features and a relatively small training set. And this should let you get linear regression to work much better on many problems. In the next video, we'll take this regularization idea and apply it to logistic regression to avoid overfitting for logistic regression as well. Let's take a look at that in the next video.
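One gradient descent step as described above, the usual update plus the extra lambda over m times wj term in the wj derivative, with b left unregularized, could be sketched like this in NumPy. This is an illustrative sketch with names of my own choosing, not the course's lab code.

```python
import numpy as np

def gradient_descent_step(X, y, w, b, alpha=0.01, lambda_=1.0):
    """One simultaneous update of (w, b) for regularized linear regression.

    dJ/dwj = (1/m) * sum_i (f(x_i) - y_i) * x_ij  +  (lambda/m) * wj
    dJ/db  = (1/m) * sum_i (f(x_i) - y_i)            (b is not regularized)
    """
    m = X.shape[0]
    errors = X @ w + b - y                     # f(x_i) - y_i for every example
    dj_dw = (X.T @ errors) / m + (lambda_ / m) * w
    dj_db = np.sum(errors) / m
    # Equivalent "shrinkage" view: w <- w * (1 - alpha*lambda/m) - alpha * (unregularized gradient)
    return w - alpha * dj_dw, b - alpha * dj_db
```

With alpha = 0.01, lambda = 1, and m = 50 as in the example, the implicit shrink factor 1 minus alpha lambda over m works out to 0.9998, so each step multiplies wj by a number slightly less than one before the usual update.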
[{"start": 0.0, "end": 6.140000000000001, "text": " In this video, we'll figure out how to get gradient descent to work with regularized"}, {"start": 6.140000000000001, "end": 7.140000000000001, "text": " linear regression."}, {"start": 7.140000000000001, "end": 8.14, "text": " Let's jump in."}, {"start": 8.14, "end": 14.66, "text": " Here's the cost function we've come up with in the last video for regularized linear regression."}, {"start": 14.66, "end": 18.84, "text": " The first part is the usual squared error cost function."}, {"start": 18.84, "end": 25.68, "text": " And now you have this additional regularization term, where lambda is the regularization parameter."}, {"start": 25.68, "end": 32.48, "text": " And you'd like to find parameters w and b that minimize the regularized cost function."}, {"start": 32.48, "end": 38.72, "text": " Previously, we were using gradient descent for the original cost function, just the first"}, {"start": 38.72, "end": 43.44, "text": " term, before we added that second regularization term."}, {"start": 43.44, "end": 49.28, "text": " And previously, we had the following gradient descent algorithm, which is that we repeatedly"}, {"start": 49.28, "end": 56.4, "text": " update the parameters wj and b for j equals 1 through n according to this formula, and"}, {"start": 56.4, "end": 58.84, "text": " b is also updated similarly."}, {"start": 58.84, "end": 64.28, "text": " Again, alpha is a very small positive number called the learning rate."}, {"start": 64.28, "end": 69.48, "text": " In fact, the updates for regularized linear regression look exactly the same, except that"}, {"start": 69.48, "end": 73.8, "text": " now the cost j is defined a bit differently."}, {"start": 73.8, "end": 81.39999999999999, "text": " Previously, the derivative of j with respect to wj was given by this expression over here,"}, {"start": 81.39999999999999, "end": 87.36, "text": " and the derivative with respect to b was given by this expression over 
here."}, {"start": 87.36, "end": 92.96, "text": " Now that we've added this additional regularization term, the only thing that changes is that"}, {"start": 92.96, "end": 99.24, "text": " the expression for the derivative with respect to wj ends up with one additional term."}, {"start": 99.24, "end": 104.36, "text": " This plus lambda over m times wj."}, {"start": 104.36, "end": 109.39999999999999, "text": " And in particular, for the new definition of the cost function j, these two expressions"}, {"start": 109.39999999999999, "end": 116.75999999999999, "text": " over here, these are the new derivatives of j with respect to wj, and the derivative of"}, {"start": 116.75999999999999, "end": 119.16, "text": " j with respect to b."}, {"start": 119.16, "end": 124.56, "text": " Recall that we don't regularize b, so we're not trying to shrink b."}, {"start": 124.56, "end": 130.24, "text": " That's why the update to b remains the same as before, whereas the update to w changes"}, {"start": 130.24, "end": 136.84, "text": " because the regularization term causes us to try to shrink wj."}, {"start": 136.84, "end": 142.12, "text": " So let's take these definitions for the derivatives and put them back into the expression on the"}, {"start": 142.12, "end": 149.16, "text": " left to write out the gradient descent algorithm for regularized linear regression."}, {"start": 149.16, "end": 155.2, "text": " So to implement gradient descent for regularized linear regression, this is what you would"}, {"start": 155.2, "end": 157.68, "text": " have your code do."}, {"start": 157.68, "end": 162.92, "text": " Here's the update for wj for j equals one through n, and here's the update for b."}, {"start": 162.92, "end": 169.84, "text": " And as usual, please remember to carry out simultaneous updates for all of these parameters."}, {"start": 169.84, "end": 175.92, "text": " Now, in order for you to get this algorithm to work, this is all you need to know."}, {"start": 175.92, "end": 
180.67999999999998, "text": " But what I'd like to do in the remainder of this video is to go over some optional material"}, {"start": 180.67999999999998, "end": 186.07999999999998, "text": " to convey a slightly deeper intuition about what this formula is actually doing, as well"}, {"start": 186.07999999999998, "end": 190.23999999999998, "text": " as chat briefly about how these derivatives are derived."}, {"start": 190.23999999999998, "end": 192.39999999999998, "text": " The rest of this video is completely optional."}, {"start": 192.39999999999998, "end": 196.92, "text": " It's completely okay if you skip the rest of this video."}, {"start": 196.92, "end": 199.88, "text": " But if you have a strong interest in math, then stick with me."}, {"start": 199.88, "end": 202.32, "text": " It's always nice to hang out with you here."}, {"start": 202.32, "end": 207.4, "text": " And through these equations, perhaps you can build a deeper intuition about what the math"}, {"start": 207.4, "end": 209.84, "text": " and what the derivatives are doing as well."}, {"start": 209.84, "end": 212.51999999999998, "text": " So let's take a look."}, {"start": 212.51999999999998, "end": 217.76, "text": " Let's look at the update rule for wj and rewrite it in another way."}, {"start": 217.76, "end": 229.51999999999998, "text": " We're updating wj as one times wj minus alpha times lambda over m times wj."}, {"start": 229.52, "end": 233.44, "text": " So I've moved the term from the end to the front here."}, {"start": 233.44, "end": 241.52, "text": " And then minus alpha times one over m and then the rest of that term over there."}, {"start": 241.52, "end": 244.78, "text": " So we just rearrange the terms a little bit."}, {"start": 244.78, "end": 253.28, "text": " And if we simplify, then we're seeing that wj is updated as wj times one minus alpha"}, {"start": 253.28, "end": 260.16, "text": " times lambda over m minus alpha times this other term over here."}, {"start": 260.16, "end": 266.52, 
"text": " And you might recognize this second term as the usual gradient descent update for unregularized"}, {"start": 266.52, "end": 268.28, "text": " linear regression."}, {"start": 268.28, "end": 272.84, "text": " This is the update for linear regression before we had regularization."}, {"start": 272.84, "end": 277.56, "text": " And this is the term we saw in week two of this course."}, {"start": 277.56, "end": 283.24, "text": " And so the only change when you have regularization is that instead of wj being set to be equal"}, {"start": 283.24, "end": 293.52, "text": " to wj minus alpha times this term is now w times this number minus the usual update."}, {"start": 293.52, "end": 296.62, "text": " So this is what we had in week one of this course."}, {"start": 296.62, "end": 300.04, "text": " So what is this first term over here?"}, {"start": 300.04, "end": 304.24, "text": " Well alpha is a very small positive number, say 0.01."}, {"start": 304.24, "end": 311.44, "text": " Lambda is usually a small number, say one or maybe 10."}, {"start": 311.44, "end": 318.32, "text": " Let's say lambda is one for this example, and m is the training set size, say 50."}, {"start": 318.32, "end": 328.36, "text": " And so when you multiply alpha lambda over m, say 0.01 times one divided by 50, this"}, {"start": 328.36, "end": 335.04, "text": " term ends up being a small positive number, say 0.0002."}, {"start": 335.04, "end": 340.36, "text": " And thus, one minus alpha lambda over m is going to be a number just slightly less than"}, {"start": 340.36, "end": 341.36, "text": " one."}, {"start": 341.36, "end": 344.12, "text": " So this is wj's 0.9998."}, {"start": 344.12, "end": 349.6, "text": " And so the effect of this term is that on every single iteration of gradient descent,"}, {"start": 349.6, "end": 354.88, "text": " you're taking wj and multiplying it by 0.9998."}, {"start": 354.88, "end": 360.3, "text": " That is by some numbers slightly less than one before carrying out the 
usual update."}, {"start": 360.3, "end": 366.24, "text": " So what regularization is doing on every single iteration is you're multiplying w by a number"}, {"start": 366.24, "end": 372.24, "text": " slightly less than one, and that has the effect of shrinking the value of wj just a little"}, {"start": 372.24, "end": 373.48, "text": " bit."}, {"start": 373.48, "end": 379.08, "text": " So this gives us another view on why regularization has the effect of shrinking the parameters"}, {"start": 379.08, "end": 382.32, "text": " wj a little bit on every iteration."}, {"start": 382.32, "end": 385.84000000000003, "text": " And so that's how regularization works."}, {"start": 385.84000000000003, "end": 390.40000000000003, "text": " If you're curious about how these derivative terms were computed, I have just one last"}, {"start": 390.40000000000003, "end": 396.12, "text": " optional slide that goes through just a little bit of the calculation of the derivative term."}, {"start": 396.12, "end": 401.0, "text": " Again, this slide and the rest of this video are completely optional, meaning you won't"}, {"start": 401.0, "end": 405.68, "text": " need any of this to do the practice labs and the quizzes."}, {"start": 405.68, "end": 409.32, "text": " So let's step through quickly the derivative calculation."}, {"start": 409.32, "end": 426.84, "text": " The derivative of j with respect to wj looks like this."}, {"start": 426.84, "end": 434.76, "text": " Recall that f of x for linear regression is defined as w dot x plus b or w dot product"}, {"start": 434.76, "end": 436.64, "text": " x plus b."}, {"start": 436.64, "end": 443.36, "text": " And it turns out that by the rules of calculus, the derivatives look like this is one over"}, {"start": 443.36, "end": 456.84, "text": " two m times the sum i equals one through m of w dot x plus b minus y times two xj plus"}, {"start": 456.84, "end": 467.2, "text": " the derivative of the regularization term, which is lambda over two m times two 
wj."}, {"start": 467.2, "end": 472.35999999999996, "text": " Notice that the second term does not have the summation term from j equals one through"}, {"start": 472.35999999999996, "end": 473.96, "text": " n anymore."}, {"start": 473.96, "end": 480.28, "text": " The twos cancel out here and here and also here and here."}, {"start": 480.28, "end": 488.03999999999996, "text": " And so it simplifies to this expression over here."}, {"start": 488.03999999999996, "end": 493.0, "text": " And finally, remember that wx plus b is f of x."}, {"start": 493.0, "end": 498.0, "text": " And so you can rewrite it as this expression down here."}, {"start": 498.0, "end": 503.35999999999996, "text": " So this is why this expression is used to compute the gradient in regularized linear"}, {"start": 503.35999999999996, "end": 505.2, "text": " regression."}, {"start": 505.2, "end": 509.64, "text": " So you now know how to implement regularized linear regression."}, {"start": 509.64, "end": 513.64, "text": " Using this, you will reduce overfitting when you have a lot of features and relatively"}, {"start": 513.64, "end": 515.6, "text": " small training set."}, {"start": 515.6, "end": 520.96, "text": " And this should let you get linear regression to work much better on many problems."}, {"start": 520.96, "end": 526.52, "text": " In the next video, we'll take this regularization idea and apply it to logistic regression to"}, {"start": 526.52, "end": 529.4399999999999, "text": " avoid overfitting for logistic regression as well."}, {"start": 529.44, "end": 540.12, "text": " Let's take a look at that in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=MFp4uQMQ1rk
3.11 Regularization to Reduce Overfitting | Regularized logistic regression-- [ML | Andrew Ng]
First Course: Supervised Machine Learning : Regression and Classification. If you liked the content please subscribe and put a little blue thumb. Take heart!
In this video, you'll see how to implement regularized logistic regression. Just as the gradient update for logistic regression seemed surprisingly similar to the gradient update for linear regression, you'll find that the gradient descent update for regularized logistic regression will also look similar to the update for regularized linear regression. Let's take a look. Here's the idea. We saw earlier that logistic regression can be prone to overfitting if you fit it with very high-order polynomial features like this. Here z is a high-order polynomial that gets passed into the sigmoid function, like so, to compute f. And in particular, you can end up with a decision boundary that is overly complex and overfits this training set. More generally, when you train logistic regression with a lot of features, whether polynomial features or some other features, there can be a higher risk of overfitting. This was the cost function for logistic regression. If you want to modify it to use regularization, all you need to do is add to it the following term: lambda, the regularization parameter, over 2m, times the sum from j equals 1 through n, where n is the number of features as usual, of wj squared. So when you minimize this cost function as a function of w and b, it has the effect of penalizing the parameters w1, w2, through wn and preventing them from being too large. And if you do this, then even though you're fitting a high-order polynomial with a lot of parameters, you still get a decision boundary that looks like this, something that looks more reasonable for separating positive and negative examples while also generalizing, hopefully, to new examples not in the training set. So when using regularization, even when you have a lot of features, how can you actually implement this? How can you actually minimize this cost function j of w, b that includes the regularization term? Well, let's use gradient descent as before.
So here's the cost function that you want to minimize. And to implement gradient descent, as before, we'll carry out the following simultaneous updates of wj and b. These are the usual update rules for gradient descent. And just like regularized linear regression, when you compute these derivative terms, the only thing that changes now is that the derivative with respect to wj gets this additional term, lambda over m times wj, added here at the end. And again, it looks a lot like the update for regularized linear regression. In fact, it's the exact same equation, except that the definition of f is no longer the linear function; it is the logistic function applied to z. And similar to linear regression, we will regularize only the parameters wj, but not the parameter b, which is why there's no change to the update you would make for b. In the final optional lab of this week, you revisit overfitting. And in the interactive plot in the optional lab, you can now choose to regularize your models, both regression and classification, by enabling regularization during gradient descent and selecting a value for lambda. Please take a look at the code for implementing regularized logistic regression in particular, because you'll implement this yourself in a practice lab at the end of this week. So, now you know how to implement regularized logistic regression. When I walk around Silicon Valley, there are many engineers using machine learning to create a ton of value, sometimes making a lot of money for the companies. And I know you've only been studying this stuff for a few weeks. But if you understand and can apply linear regression and logistic regression, that's actually all you need to create some very valuable applications. While the specific learning algorithms you use are important, knowing things like when and how to reduce overfitting turns out to be one of the very valuable skills in the real world as well.
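As noted above, the regularized logistic gradients have exactly the same form as the linear case, with f swapped for the sigmoid of w dot x plus b. Here is a minimal NumPy sketch with illustrative names of my own (not the optional lab's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_gradients(X, y, w, b, lambda_=1.0):
    """Gradients for regularized logistic regression.

    Identical in form to the linear-regression case; only f changes:
    f(x) = sigmoid(w . x + b) instead of w . x + b.
    """
    m = X.shape[0]
    errors = sigmoid(X @ w + b) - y                  # f(x_i) - y_i
    dj_dw = (X.T @ errors) / m + (lambda_ / m) * w   # w is regularized
    dj_db = np.sum(errors) / m                       # b is not
    return dj_dw, dj_db
```

Note that only `dj_dw` carries the lambda over m times wj term; the gradient for b is unchanged, just as in the video.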
So, I want to say congratulations on how far you've come, and great job for getting all the way to the end of this video. I hope you also work through the practice labs and quizzes. Having said that, there are still many more exciting things to learn. In the second course of this specialization, you'll learn about neural networks, also called deep learning algorithms. Neural networks are responsible for many of the latest breakthroughs in AI today, from practical speech recognition, to computers accurately recognizing objects in images, to self-driving cars. The way a neural network gets built actually uses a lot of what you've already learned, like cost functions, gradient descent, and sigmoid functions. So again, congratulations on reaching the end of this third and final week of course one. I hope you have fun in the labs, and I will see you in next week's material on neural networks.
[{"start": 0.0, "end": 6.4, "text": " In this video, you see how to implement regularized logistic regression."}, {"start": 6.4, "end": 11.8, "text": " Just as the gradient update for logistic regression has seemed surprisingly similar to the gradient"}, {"start": 11.8, "end": 16.44, "text": " update for linear regression, you find that the gradient descent update for regularized"}, {"start": 16.44, "end": 21.28, "text": " logistic regression will also look similar to the update for regularized linear regression."}, {"start": 21.28, "end": 22.8, "text": " Let's take a look."}, {"start": 22.8, "end": 24.32, "text": " Here's the idea."}, {"start": 24.32, "end": 28.92, "text": " We saw earlier that logistic regression can be prone to overfitting if you fit it with"}, {"start": 28.92, "end": 32.96, "text": " very high-order polynomial features like this."}, {"start": 32.96, "end": 40.0, "text": " Here z is a high-order polynomial that gets passed into the sigmoid function, like so,"}, {"start": 40.0, "end": 41.0, "text": " to compute f."}, {"start": 41.0, "end": 47.6, "text": " And in particular, you can end up with a decision boundary that is overly complex and overfits"}, {"start": 47.6, "end": 50.480000000000004, "text": " this training set."}, {"start": 50.480000000000004, "end": 55.84, "text": " More generally, when you train logistic regression with a lot of features, whether polynomial"}, {"start": 55.84, "end": 62.28, "text": " features or some other features, there can be a higher risk of overfitting."}, {"start": 62.28, "end": 65.88000000000001, "text": " This was the cost function for logistic regression."}, {"start": 65.88000000000001, "end": 72.80000000000001, "text": " If you want to modify it to use regularization, all you need to do is add to it the following"}, {"start": 72.80000000000001, "end": 73.80000000000001, "text": " term."}, {"start": 73.80000000000001, "end": 81.2, "text": " Let's add lambda, the regularization parameter, over 2m, times the 
sum from j equals 1 through"}, {"start": 81.2, "end": 86.72, "text": " n, where n is the number of features as usual, of wj squared."}, {"start": 86.72, "end": 92.12, "text": " So when you minimize this cost function as a function of w and b, it has the effect of"}, {"start": 92.12, "end": 99.72, "text": " penalizing parameters w1, w2, through wn and preventing them from being too large."}, {"start": 99.72, "end": 104.60000000000001, "text": " And if you do this, then even though you're fitting a high-order polynomial with a lot"}, {"start": 104.60000000000001, "end": 110.24000000000001, "text": " of parameters, you still get a decision boundary that looks like this, something that looks"}, {"start": 110.24, "end": 116.08, "text": " more reasonable for separating positive and negative examples while also generalizing,"}, {"start": 116.08, "end": 120.44, "text": " hopefully, to new examples not in the training set."}, {"start": 120.44, "end": 126.24, "text": " So when using regularization, even when you have a lot of features, how can you actually"}, {"start": 126.24, "end": 127.36, "text": " implement this?"}, {"start": 127.36, "end": 133.44, "text": " How can you actually minimize this cost function j of wb that includes the regularization term?"}, {"start": 133.44, "end": 137.78, "text": " Well, let's use gradient descent as before."}, {"start": 137.78, "end": 141.68, "text": " So here's the cost function that you want to minimize."}, {"start": 141.68, "end": 146.4, "text": " And to implement gradient descent, as before, we'll carry out the following simultaneous"}, {"start": 146.4, "end": 151.68, "text": " updates over wj and b."}, {"start": 151.68, "end": 155.12, "text": " These are the usual update rules for gradient descent."}, {"start": 155.12, "end": 160.88, "text": " And just like regularized linear regression, when you compute what are these derivative"}, {"start": 160.88, "end": 166.96, "text": " terms, the only thing that changes now is that the 
derivative with respect to wj gets"}, {"start": 166.96, "end": 174.96, "text": " this additional term, lambda over m times wj added here at the end."}, {"start": 174.96, "end": 179.92000000000002, "text": " And again, it looks a lot like the update for regularized linear regression."}, {"start": 179.92000000000002, "end": 185.48000000000002, "text": " In fact, it's the exact same equation, except for the fact that the definition of f is now"}, {"start": 185.48000000000002, "end": 192.32, "text": " no longer the linear function, it is the logistic function applied to z."}, {"start": 192.32, "end": 197.64, "text": " And similar to linear regression, we will regularize only the parameters wj, but not"}, {"start": 197.64, "end": 204.51999999999998, "text": " the parameter b, which is why there's no change to the update you would make for b."}, {"start": 204.51999999999998, "end": 210.32, "text": " In the final optional lab of this week, you revisit overfitting."}, {"start": 210.32, "end": 215.88, "text": " And in the interactive plot in the optional lab, you can now choose to regularize your"}, {"start": 215.88, "end": 222.6, "text": " models, both regression and classification, by enabling regularization during gradient descent,"}, {"start": 222.6, "end": 225.48, "text": " by selecting a value for lambda."}, {"start": 225.48, "end": 230.6, "text": " Please take a look at the code for implementing regularized logistic regression in particular,"}, {"start": 230.6, "end": 235.51999999999998, "text": " because you implement this in a practice lab yourself at the end of this week."}, {"start": 235.51999999999998, "end": 241.56, "text": " So, now you know how to implement regularized logistic regression."}, {"start": 241.56, "end": 245.85999999999999, "text": " When I walk around Silicon Valley, there are many engineers using machine learning to create"}, {"start": 245.86, "end": 249.8, "text": " a ton of value, sometimes making a lot of money for the companies."}, 
{"start": 249.8, "end": 253.96, "text": " And I know you've only been studying this stuff for a few weeks."}, {"start": 253.96, "end": 259.16, "text": " But if you understand and can apply linear regression and logistic regression, that's"}, {"start": 259.16, "end": 264.0, "text": " actually all you need to create some very valuable applications."}, {"start": 264.0, "end": 268.56, "text": " While the specific learning algorithms you use are important, knowing things like when"}, {"start": 268.56, "end": 273.40000000000003, "text": " and how to reduce overfitting turns out to be one of the very valuable skills in the"}, {"start": 273.40000000000003, "end": 274.64, "text": " real world as well."}, {"start": 274.64, "end": 279.28, "text": " So, I want to say, congratulations on how far you've come."}, {"start": 279.28, "end": 284.56, "text": " And I want to say great job for getting through all the way to the end of this video."}, {"start": 284.56, "end": 289.24, "text": " I hope you also work through the practice labs and quizzes."}, {"start": 289.24, "end": 293.68, "text": " Having said that, there's still many more exciting things to learn."}, {"start": 293.68, "end": 298.84, "text": " In the second course of this specialization, you learn about neural networks, also called"}, {"start": 298.84, "end": 301.03999999999996, "text": " deep learning algorithms."}, {"start": 301.04, "end": 305.20000000000005, "text": " Neural networks are responsible for many of the latest breakthroughs in AI today, from"}, {"start": 305.20000000000005, "end": 310.36, "text": " practical speech recognition, to computers accurately recognizing objects and images,"}, {"start": 310.36, "end": 312.12, "text": " to self-driving cars."}, {"start": 312.12, "end": 317.16, "text": " The way a neural network gets built actually uses a lot of what you've already learned,"}, {"start": 317.16, "end": 321.20000000000005, "text": " like cost functions and gradient descent and sigmoid functions."}, 
{"start": 321.20000000000005, "end": 325.88, "text": " So again, congratulations on reaching the end of this third and final week of course"}, {"start": 325.88, "end": 326.88, "text": " one."}, {"start": 326.88, "end": 331.44, "text": " I hope you have fun in the labs and I will see you in next week's material on neural"}, {"start": 331.44, "end": 358.08, "text": " networks."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=cuU8pCflXCo
4.1 Advanced Learning Algorithms | Welcome! --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
Welcome to course two of this machine learning specialization. In this course, you learn about neural networks, also called deep learning algorithms, as well as decision trees. These are some of the most powerful and widely used machine learning algorithms, and you get to implement them and get them to work for yourself. One of the things you see also in this course is practical advice on how to build machine learning systems. This part of the material is quite unique to this course. When you're building a practical machine learning system, there are a lot of decisions you have to make, such as should you spend more time collecting data, or should you buy a much bigger GPU to build a much bigger neural network. Even today, when I visit a leading tech company and talk to the team working there on a machine learning application, unfortunately, sometimes I look at what they've been doing for the last six months and go, gee, someone could have told you maybe even six months ago that that approach wasn't going to work that well. With some of the tips that you learn in this course, I hope that you'll be one of the ones to not waste those six months, but instead be able to make more systematic and better decisions about how to build practical working machine learning applications. So with that, let's dive in. In detail, this is what you see in the four weeks of this course. In week one, we'll go over neural networks and how to carry out inference or prediction. So if you were to go to the internet and download the parameters of a neural network that someone else had trained and whose parameters they had posted on the internet, then to use that neural network to make predictions would be called inference. And you learn how neural networks work and how to do inference in this first week. Next week, you learn how to train your own neural network.
In particular, if you have a training set of labeled examples, X and Y, how do you train the parameters of a neural network for yourself? In the third week, we'll then go into practical advice for building machine learning systems. And I'll share with you some tips that I think even highly paid engineers building machine learning systems very successfully today don't really always manage to consistently apply. And I think that will help you build systems yourself efficiently and quickly. And then in the final week of this course, you learn about decision trees. While decision trees don't get as much buzz in the media, there's a little bit less hype about decision trees compared to neural networks. They are also one of the widely used and very powerful learning algorithms that I think there's a good chance you end up using yourself if you end up building an application. So with that, let's jump into neural networks, and we're going to start by taking a quick look at how the human brain, that is how the biological brain, works. Let's go on to the next video.
[{"start": 0.0, "end": 4.9, "text": " Welcome to course two of this machine learning specialization."}, {"start": 4.9, "end": 9.98, "text": " In this course, you learn about neural networks, also called deep learning algorithms, as well"}, {"start": 9.98, "end": 11.94, "text": " as decision trees."}, {"start": 11.94, "end": 16.22, "text": " These are some of the most powerful and widely used machine learning algorithms, and you"}, {"start": 16.22, "end": 20.38, "text": " get to implement them and get them to work for yourself."}, {"start": 20.38, "end": 25.78, "text": " One of the things you see also in this course is practical advice on how to build machine"}, {"start": 25.78, "end": 27.54, "text": " learning systems."}, {"start": 27.54, "end": 30.64, "text": " This part of the material is quite unique to this course."}, {"start": 30.64, "end": 35.32, "text": " When you're building a practical machine learning system, there are a lot of decisions you have"}, {"start": 35.32, "end": 39.879999999999995, "text": " to make, such as should you spend more time collecting data, or should you buy a much"}, {"start": 39.879999999999995, "end": 44.599999999999994, "text": " bigger GPU to build a much bigger neural network."}, {"start": 44.599999999999994, "end": 49.56, "text": " Even today, when I visit a leading tech company and talk to the team working there on a machine"}, {"start": 49.56, "end": 54.0, "text": " learning application, unfortunately, sometimes I look at what they've been doing for the"}, {"start": 54.0, "end": 59.6, "text": " last six months and go, gee, someone could have told you maybe even six months ago that"}, {"start": 59.6, "end": 63.24, "text": " that approach wasn't going to work that well."}, {"start": 63.24, "end": 67.56, "text": " With some of the tips that you learn in this course, I hope that you'll be one of the ones"}, {"start": 67.56, "end": 72.8, "text": " to not waste those six months, but instead be able to make more systematic and 
better"}, {"start": 72.8, "end": 77.92, "text": " decisions about how to build practical working machine learning applications."}, {"start": 77.92, "end": 79.84, "text": " So with that, let's dive in."}, {"start": 79.84, "end": 84.92, "text": " In detail, this is what you see in the four weeks of this course."}, {"start": 84.92, "end": 91.52000000000001, "text": " In week one, we'll go over neural networks and how to carry out inference or prediction."}, {"start": 91.52000000000001, "end": 96.84, "text": " So if you were to go to the internet and download the parameters of a neural network that someone"}, {"start": 96.84, "end": 101.64, "text": " else had trained and whose parameters had posted on the internet, then to use that neural"}, {"start": 101.64, "end": 104.76, "text": " network to make predictions would be called inference."}, {"start": 104.76, "end": 109.80000000000001, "text": " And you learn how neural networks work and how to do inference in week one in this week."}, {"start": 109.8, "end": 113.34, "text": " Next week, you learn how to train your own neural network."}, {"start": 113.34, "end": 118.32, "text": " In particular, if you have a trading set of labeled examples, X and Y, how do you train"}, {"start": 118.32, "end": 121.67999999999999, "text": " the parameters of a neural network for yourself?"}, {"start": 121.67999999999999, "end": 126.96, "text": " In the third week, we'll then go into practical advice for building machine learning systems."}, {"start": 126.96, "end": 131.48, "text": " And I'll share with you some tips that I think even highly paid engineers building machine"}, {"start": 131.48, "end": 137.36, "text": " learning systems very successfully today don't really always manage to consistently apply."}, {"start": 137.36, "end": 142.20000000000002, "text": " And I think that will help you build systems yourself efficiently and quickly."}, {"start": 142.20000000000002, "end": 146.20000000000002, "text": " And then in the final week 
of this course, you learn about decision trees."}, {"start": 146.20000000000002, "end": 150.8, "text": " While decision trees don't get as much buzz in the media, there's a little bit less hype"}, {"start": 150.8, "end": 153.36, "text": " about decision trees compared to neural networks."}, {"start": 153.36, "end": 158.56, "text": " They are also one of the widely used and very powerful learning algorithms that I think"}, {"start": 158.56, "end": 163.08, "text": " there's a good chance you end up using yourself if you end up building an application."}, {"start": 163.08, "end": 167.52, "text": " So with that, let's jump into neural networks, and we're going to start by taking a quick"}, {"start": 167.52, "end": 172.16000000000003, "text": " look at how the human brain, that is how the biological brain works."}, {"start": 172.16, "end": 193.88, "text": " Let's go on to the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=QpJ35mMLIOA
4.2 Neural Networks Intuition |Neurons and the brain --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
When neural networks were first invented many decades ago, the original motivation was to write software that could mimic how the human brain or how the biological brain learns and thinks. And even though today, neural networks, sometimes also called artificial neural networks, have become very different from how any of us might think about how the brain actually works and learns, some of the biological motivation still remains in the way we think about artificial neural networks or computer neural networks today. So let's start by taking a look at how the brain works and how that relates to neural networks. The human brain, or maybe more generally the biological brain, demonstrates a higher or more capable level of intelligence than anything else we've been able to build so far. And so neural networks started with the motivation of trying to build software to mimic the brain. Work in neural networks had started back in the 1950s, and then it fell out of favor for a while. Then in the 1980s and early 1990s, they gained in popularity again and showed tremendous traction in some applications like handwritten digit recognition, which were used even back then to read postal codes for routing mail and for reading dollar figures in handwritten checks. But then it fell out of favor again in the late 1990s, and it was from about 2005 that it enjoyed a resurgence and also became maybe rebranded a little bit with deep learning. One of the things that surprised me back then was that deep learning and neural networks meant very similar things, but I maybe underappreciated at the time that the term deep learning just sounds much better, because it's deep and it's learning. And so that turned out to be the brand that took off in the last decade or decade and a half. And since then, neural networks have revolutionized application area after application area.
I think the first application area that modern neural networks or deep learning had a huge impact on was probably speech recognition, where we started to see much better speech recognition systems due to modern deep learning, and researchers such as Li Deng and Geoff Hinton were instrumental to this. And then it started to make inroads into computer vision. And sometimes people still speak of the ImageNet moment in 2012, and that was maybe a bigger splash where it then caught broader imagination and had a big impact on computer vision. Then in the next few years, it made its inroads into text or into natural language processing, and so on and so forth. And now neural networks are used in everything from climate change to medical imaging to online advertising to product recommendations, and really lots of application areas of machine learning now use neural networks. Even though today's neural networks have almost nothing to do with how the brain learns, there was the early motivation of trying to build software to mimic the brain. So how does the brain work? Here's a diagram illustrating what neurons in a brain look like. All of human thought is from neurons like these in your brain and mine sending electrical impulses and sometimes forming new connections with other neurons. Given a neuron like this one, it has a number of inputs where it receives electrical impulses from other neurons. And then this neuron that I've circled carries out some computations and will then send its output to other neurons via these electrical impulses. And this upper neuron's output in turn becomes the input to this neuron down below, which again aggregates inputs from multiple other neurons to then maybe send its own output to yet other neurons. And this is the stuff of which human thought is made. Here's a simplified diagram of a biological neuron. A neuron comprises a cell body, shown here on the left.
And if you have taken a class in biology, you may recognize this to be the nucleus of the neuron. And as we saw on the previous slide, the neuron has different inputs. And in a biological neuron, the input wires are called the dendrites. And it then occasionally sends electrical impulses to other neurons via the output wire, which is called the axon. Don't worry about these biological terms. If you saw them in a biology class, you may remember them, but you don't really need to memorize any of these terms for the purpose of building artificial neural networks. But this biological neuron may then send electrical impulses that become the input to another neuron. So the artificial neural network uses a very simplified mathematical model of what a biological neuron does. And I'm going to draw a little circle here to denote a single neuron. And what a neuron does is it takes some inputs, one or more inputs, which are just numbers, and it does some computation and it outputs some other number, which then could be an input to a second neuron shown here on the right. When you're building an artificial neural network or a deep learning algorithm, rather than building one neuron at a time, you often want to simulate many such neurons at the same time. And so in this diagram, I'm drawing three neurons, and what these neurons do collectively is input a few numbers, carry out some computation, and output some other numbers. Now at this point, I'd like to give one big caveat, which is that even though I made a loose analogy between biological neurons and artificial neurons, I think that today we have almost no idea how the human brain works. In fact, every few years, neuroscientists make some fundamental breakthrough about how the brain works, and I think we'll continue to do so for the foreseeable future.
And that to me is a sign that there are many breakthroughs yet to be discovered about how the brain actually works, and thus attempts to blindly mimic what we know of the human brain today, which is frankly very little, probably won't get us that far toward building raw intelligence, certainly not with our current level of knowledge in neuroscience. Having said that, even with these extremely simplified models of a neuron, which we'll talk about, we'll be able to build really powerful deep learning algorithms. And so as you go deeper into neural networks and into deep learning, even though the origins were biologically motivated, don't take the biological motivation too seriously. In fact, those of us that do research in deep learning have shifted away from looking to biological motivation that much, but instead are just using engineering principles to figure out how to build algorithms that are more effective. But I think it might still be fun to speculate and think about how biological neurons work every now and then. The ideas of neural networks have been around for many decades. So a few people have asked me, hey, Andrew, why now? Why is it that only in the last handful of years neural networks have really taken off? This is a picture I draw for them when I'm asked that question, and one that maybe you could draw for others as well if they ask you that question. Let me plot on the horizontal axis the amount of data you have for a problem, and on the vertical axis the performance or the accuracy of a learning algorithm applied to that problem. Over the last couple of decades, with the rise of the internet, the rise of mobile phones, and the digitalization of our society, the amount of data we have for a lot of applications has steadily marched to the right. A lot of records that used to be on paper, such as if you order something, rather than it being on a piece of paper, that's much more likely to be a digital record.
Or health records: if you see a doctor, your record is much more likely to be digital now compared to on pieces of paper. And so in many application areas, the amount of digital data has exploded. And what we saw was that with traditional machine learning algorithms, such as logistic regression and linear regression, even as you fed those algorithms more data, it was very difficult to get the performance to keep on going up. So it was as if the traditional learning algorithms like linear regression and logistic regression just weren't able to scale with the amount of data we could now feed them. And they weren't able to take effective advantage of all this data we had for different applications. And what AI researchers started to observe was that if you were to train a small neural network on this data set, then the performance maybe looks like this. And if you were to train a medium sized neural network, meaning one with more neurons in it, its performance may look like that. And if you were to train a very large neural network, meaning one with a lot of these artificial neurons, then for some applications the performance would just keep on going up. And so this meant two things. It meant that for a certain class of applications where you do have a lot of data, sometimes you hear the term big data tossed around, if you're able to train a very large neural network to take advantage of that huge amount of data you have, then you could attain performance on anything ranging from speech recognition to image recognition to natural language processing applications and many more that just were not possible with earlier generations of learning algorithms. And this caused deep learning algorithms to take off. And this too is why faster computer processors, including the rise of GPUs or graphics processing units, this is hardware originally designed to generate nice looking computer graphics, but turned out to be really powerful for deep learning as well.
That was also a major force in allowing deep learning algorithms to become what they are today. That's how neural networks got started, as well as why they took off so quickly in the last several years. Let's now dive more deeply into the details of how a neural network actually works. Please go on to the next video.
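The simplified neuron described in the transcript — it takes a few input numbers, does some computation, and outputs another number that can feed into another neuron — can be sketched in a few lines of NumPy. The weights, bias values, and function name below are arbitrary illustrative choices of mine, not values from the course; the computation shown (weighted sum plus bias, passed through a sigmoid) is one common model of such a neuron.

```python
import numpy as np

def neuron(x, w, b):
    # A simplified artificial neuron: weighted sum of inputs plus a bias,
    # squashed through the logistic (sigmoid) function to a single number.
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

# Chain two neurons: the first neuron's output becomes the second's input,
# just as one biological neuron's impulses feed the next.
x = np.array([2.0, 3.0])                       # input numbers
a1 = neuron(x, np.array([0.5, -0.2]), 0.1)     # first neuron's activation
a2 = neuron(np.array([a1]), np.array([1.5]), -0.4)  # second neuron's activation
```

Because the sigmoid maps any real number into (0, 1), each activation is itself just a number between 0 and 1, which is what lets neurons compose into layers.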
[{"start": 0.0, "end": 5.64, "text": " When neural networks were first invented many decades ago, the original motivation was the"}, {"start": 5.64, "end": 11.64, "text": " right software that could mimic how the human brain or how the biological brain learns and"}, {"start": 11.64, "end": 12.64, "text": " thinks."}, {"start": 12.64, "end": 17.68, "text": " And even though today, neural networks, sometimes also called artificial neural networks, have"}, {"start": 17.68, "end": 22.44, "text": " become very different than how any of us might think about how the brain actually works and"}, {"start": 22.44, "end": 28.32, "text": " learns, some of the biological motivation still remains in the way we think about artificial"}, {"start": 28.32, "end": 31.28, "text": " neural networks or computer neural networks today."}, {"start": 31.28, "end": 36.28, "text": " So let's start by taking a look at how the brain works and how that relates to neural"}, {"start": 36.28, "end": 37.28, "text": " networks."}, {"start": 37.28, "end": 43.24, "text": " The human brain, or maybe more generally the biological brain, demonstrates a higher level"}, {"start": 43.24, "end": 47.6, "text": " or more capable level of intelligence than anything else we've been able to build so"}, {"start": 47.6, "end": 48.6, "text": " far."}, {"start": 48.6, "end": 53.92, "text": " And so neural networks have started with the motivation of trying to build software to"}, {"start": 53.92, "end": 56.08, "text": " mimic the brain."}, {"start": 56.08, "end": 61.56, "text": " Work in neural networks had started back in the 1950s and then it fell out of favor for"}, {"start": 61.56, "end": 62.879999999999995, "text": " a while."}, {"start": 62.879999999999995, "end": 69.32, "text": " Then in the 1980s and early 1990s, they gained in popularity again and showed tremendous"}, {"start": 69.32, "end": 75.0, "text": " traction in some applications like handwritten digit recognition, which were used even back"}, {"start": 
75.0, "end": 81.48, "text": " then to read postal codes for routing mail and for reading dollar figures in handwritten"}, {"start": 81.48, "end": 82.48, "text": " checks."}, {"start": 82.48, "end": 90.04, "text": " But then it fell out of favor again in the late 1990s and it was from about 2005 that"}, {"start": 90.04, "end": 97.64, "text": " it enjoyed a resurgence and also became maybe rebranded a little bit with deep learning."}, {"start": 97.64, "end": 102.44, "text": " One of the things that surprised me back then was deep learning and neural networks meant"}, {"start": 102.44, "end": 108.4, "text": " very similar things, but I maybe underappreciated at the time that the term deep learning just"}, {"start": 108.4, "end": 112.28, "text": " sounds much better because it's deep and it's learning."}, {"start": 112.28, "end": 116.52, "text": " And so that turned out to be the brand that took off in the last decade or decade and"}, {"start": 116.52, "end": 117.52, "text": " a half."}, {"start": 117.52, "end": 123.28, "text": " And since then, neural networks have revolutionized application area after application area."}, {"start": 123.28, "end": 127.96000000000001, "text": " I think the first application area that modern neural networks or deep learning had a huge"}, {"start": 127.96000000000001, "end": 133.0, "text": " impact on was probably speech recognition, where we started to see much better speech"}, {"start": 133.0, "end": 138.44, "text": " recognition systems due to modern deep learning and authors such as Lee Dang and Jeff Hinton"}, {"start": 138.44, "end": 141.08, "text": " were instrumental to this."}, {"start": 141.08, "end": 145.32000000000002, "text": " And then it started to make inroads into computer vision."}, {"start": 145.32000000000002, "end": 151.9, "text": " And sometimes people still speak of the ImageNet moment in 2012 and that was maybe a bigger"}, {"start": 151.9, "end": 158.52, "text": " splash where it then caught broader imagination and 
had a big impact on computer vision."}, {"start": 158.52, "end": 163.84, "text": " Then the next few years, it made its inroads into text or into natural language processing"}, {"start": 163.84, "end": 165.04000000000002, "text": " and so on and so forth."}, {"start": 165.04000000000002, "end": 170.0, "text": " And now neural networks are used in everything from climate change to medical imaging to"}, {"start": 170.0, "end": 175.36, "text": " online advertising to product recommendations and really lots of application areas of machine"}, {"start": 175.36, "end": 178.08, "text": " learning now use neural networks."}, {"start": 178.08, "end": 184.08, "text": " Even though today's neural networks have almost nothing to do with how the brain learns, there"}, {"start": 184.08, "end": 189.52, "text": " was the early motivation of trying to build software to mimic the brain."}, {"start": 189.52, "end": 192.08, "text": " So how does the brain work?"}, {"start": 192.08, "end": 197.0, "text": " Here's a diagram illustrating what neurons in a brain looks like."}, {"start": 197.0, "end": 202.84, "text": " All of human thought is from neurons like these in your brain and mind sending electrical"}, {"start": 202.84, "end": 207.6, "text": " impulses and sometimes forming new connections with other neurons."}, {"start": 207.6, "end": 214.24, "text": " Given a neuron like this one, it has a number of inputs where it receives electrical impulses"}, {"start": 214.24, "end": 216.2, "text": " from other neurons."}, {"start": 216.2, "end": 222.08, "text": " And then this neuron that I've circled carries out some computations and will then send this"}, {"start": 222.08, "end": 227.56, "text": " output to other neurons via these electrical impulses."}, {"start": 227.56, "end": 233.60000000000002, "text": " And this upper neurons output in turn becomes the input to this neuron down below, which"}, {"start": 233.60000000000002, "end": 238.64000000000001, "text": " again, advocates inputs 
from multiple other neurons to then maybe send his own output"}, {"start": 238.64000000000001, "end": 240.4, "text": " to yet other neurons."}, {"start": 240.4, "end": 245.24, "text": " And this is the stuff of which human thought is made."}, {"start": 245.24, "end": 249.96, "text": " Here's a simplified diagram of a biological neuron."}, {"start": 249.96, "end": 254.72, "text": " A neuron comprises a cell body shown here on the left."}, {"start": 254.72, "end": 260.72, "text": " And if you have taken a class in biology, you may recognize this to be the nucleus of"}, {"start": 260.72, "end": 262.40000000000003, "text": " the neuron."}, {"start": 262.40000000000003, "end": 267.24, "text": " And as we saw on the previous slide, the neuron has different inputs."}, {"start": 267.24, "end": 273.44, "text": " And in a biological neuron, the input wires are called the dendrites."}, {"start": 273.44, "end": 278.92, "text": " And it then occasionally sends electrical impulses to other neurons via the output wire,"}, {"start": 278.92, "end": 281.04, "text": " which is called the axon."}, {"start": 281.04, "end": 283.0, "text": " Don't worry about these biological terms."}, {"start": 283.0, "end": 287.36, "text": " If you saw them in a biology class, you may remember them, but you don't really need to"}, {"start": 287.36, "end": 292.92, "text": " memorize any of these terms for the purpose of building artificial neural networks."}, {"start": 292.92, "end": 298.78000000000003, "text": " But this biological neuron may then send electrical impulses that becomes the input to another"}, {"start": 298.78000000000003, "end": 300.72, "text": " neuron."}, {"start": 300.72, "end": 308.32, "text": " So the artificial neural network uses a very simplified mathematical model of what a biological"}, {"start": 308.32, "end": 310.2, "text": " neuron does."}, {"start": 310.2, "end": 316.96, "text": " And I'm going to draw a little circle here to denote a single neuron."}, {"start": 316.96, 
"end": 324.44, "text": " And what a neuron does is it takes some inputs, one or more inputs, which are just numbers,"}, {"start": 324.44, "end": 329.8, "text": " and it does some computation and it outputs some other number, which then could be an"}, {"start": 329.8, "end": 334.84, "text": " input to a second neuron shown here on the right."}, {"start": 334.84, "end": 339.0, "text": " When you're building an artificial neural network or a deep learning algorithm, rather"}, {"start": 339.0, "end": 345.44, "text": " than building one neuron at a time, you often want to simulate many such neurons at the"}, {"start": 345.44, "end": 346.79999999999995, "text": " same time."}, {"start": 346.79999999999995, "end": 355.28, "text": " And so when in this diagram, I'm drawing three neurons, and what these neurons do collectively"}, {"start": 355.28, "end": 362.47999999999996, "text": " is input a few numbers, carry out some computation, and output some other numbers."}, {"start": 362.48, "end": 367.68, "text": " Now at this point, I'd like to give one big caveat, which is that even though I made a"}, {"start": 367.68, "end": 373.40000000000003, "text": " loose analogy between biological neurons and artificial neurons, I think that today we"}, {"start": 373.40000000000003, "end": 376.96000000000004, "text": " have almost no idea how the human brain works."}, {"start": 376.96000000000004, "end": 381.72, "text": " In fact, every few years, neuroscientists make some fundamental breakthrough about how the"}, {"start": 381.72, "end": 386.32, "text": " brain works, and I think we'll continue to do so for the foreseeable future."}, {"start": 386.32, "end": 390.90000000000003, "text": " And that to me is a sign that there are many breakthroughs that are yet to be discovered"}, {"start": 390.9, "end": 395.96, "text": " about how the brain actually works, and thus attempts to blindly mimic what we know of"}, {"start": 395.96, "end": 401.44, "text": " the human brain today, which is 
frankly very little, probably won't get us that far to"}, {"start": 401.44, "end": 405.64, "text": " our building role intelligence, certainly not with our current level of knowledge in"}, {"start": 405.64, "end": 407.76, "text": " neuroscience."}, {"start": 407.76, "end": 412.64, "text": " Having said that, even with these extremely simplified models of a neuron, which we'll"}, {"start": 412.64, "end": 417.91999999999996, "text": " talk about, we'll be able to build really powerful deep learning algorithms."}, {"start": 417.92, "end": 423.16, "text": " And so as you go deeper into neural networks and into deep learning, even though the origins"}, {"start": 423.16, "end": 428.52000000000004, "text": " were biologically motivated, don't take the biological motivation too seriously."}, {"start": 428.52000000000004, "end": 433.08000000000004, "text": " In fact, those of us that do research in deep learning have shifted away from looking to"}, {"start": 433.08000000000004, "end": 439.32, "text": " biological motivation that much, but instead are just using engineering principles to figure"}, {"start": 439.32, "end": 441.88, "text": " out how to build algorithms that are more effective."}, {"start": 441.88, "end": 446.82, "text": " But I think it might still be fun to speculate and think about how biological neurons work"}, {"start": 446.82, "end": 448.52, "text": " every now and then."}, {"start": 448.52, "end": 451.84, "text": " The ideas of neural networks have been around for many decades."}, {"start": 451.84, "end": 455.04, "text": " So a few people have asked me, hey, Andrew, why now?"}, {"start": 455.04, "end": 460.0, "text": " Why is it that only in the last handful of years that neural networks have really taken"}, {"start": 460.0, "end": 461.36, "text": " off?"}, {"start": 461.36, "end": 466.24, "text": " This is a picture I draw for them when I'm asked that question, and that maybe you could"}, {"start": 466.24, "end": 469.76, "text": " draw for others as 
well if they ask you that question."}, {"start": 469.76, "end": 475.32, "text": " Let me plot on the horizontal axis, the amount of data you have for a problem, and on the"}, {"start": 475.32, "end": 482.36, "text": " vertical axis, the performance or the accuracy of a learning algorithm applied to that problem."}, {"start": 482.36, "end": 487.36, "text": " Over the last couple of decades, with the rise of the internet, the rise of mobile phones,"}, {"start": 487.36, "end": 492.76, "text": " the digitalization of our society, the amount of data we have for a lot of applications"}, {"start": 492.76, "end": 494.88, "text": " has steadily marched to the right."}, {"start": 494.88, "end": 500.56, "text": " A lot of records that used to be on paper, such as if you order something, rather than"}, {"start": 500.56, "end": 505.0, "text": " it being on a piece of paper, that's much more likely to be a digital record."}, {"start": 505.0, "end": 509.44, "text": " Or health record, if you see a doctor, it's much more likely to be digital now compared"}, {"start": 509.44, "end": 512.64, "text": " to on pieces of paper."}, {"start": 512.64, "end": 518.06, "text": " And so in many application areas, the amount of digital data has exploded."}, {"start": 518.06, "end": 523.48, "text": " And what we saw was with traditional machine learning algorithms, such as logistic regression"}, {"start": 523.48, "end": 529.94, "text": " and linear regression, even as you fed those algorithms more data, it was very difficult"}, {"start": 529.94, "end": 533.32, "text": " to get the performance to keep on going up."}, {"start": 533.32, "end": 538.5600000000001, "text": " So it was as if the traditional learning algorithms like linear regression and logistic regression,"}, {"start": 538.5600000000001, "end": 542.72, "text": " they just weren't able to scale with the amount of data we could now feed it."}, {"start": 542.72, "end": 548.46, "text": " And they weren't able to take effective advantage 
of all this data we had for different applications."}, {"start": 548.46, "end": 554.88, "text": " And what AI researchers started to observe was that if you were to train a small neural"}, {"start": 554.88, "end": 560.4200000000001, "text": " network on this data set, then the performance maybe looks like this."}, {"start": 560.42, "end": 566.0799999999999, "text": " And if you were to train a medium sized neural network, meaning one with more neurons in"}, {"start": 566.0799999999999, "end": 568.92, "text": " it, this performance may look like that."}, {"start": 568.92, "end": 573.7199999999999, "text": " And if you were to train a very large neural network, meaning one with a lot of these artificial"}, {"start": 573.7199999999999, "end": 579.5999999999999, "text": " neurons, then for some applications, the performance would just keep on going up."}, {"start": 579.5999999999999, "end": 581.36, "text": " And so this meant two things."}, {"start": 581.36, "end": 585.8399999999999, "text": " It meant that for a certain class of applications where you do have a lot of data, sometimes"}, {"start": 585.84, "end": 592.36, "text": " you hear the term big data tossed around, if you're able to train a very large neural network"}, {"start": 592.36, "end": 597.9200000000001, "text": " to take advantage of that huge amount of data you have, then you could attain performance"}, {"start": 597.9200000000001, "end": 603.5600000000001, "text": " on anything ranging from speech recognition to image recognition to natural language processing"}, {"start": 603.5600000000001, "end": 608.12, "text": " applications and many more that just were not possible with earlier generations of learning"}, {"start": 608.12, "end": 609.48, "text": " algorithms."}, {"start": 609.48, "end": 613.0400000000001, "text": " And this caused deep learning algorithms to take off."}, {"start": 613.04, "end": 621.16, "text": " And this too is why faster computer processors, including the rise of GPUs or graphics
processor"}, {"start": 621.16, "end": 628.04, "text": " units, this is hardware originally designed to generate nice looking computer graphics,"}, {"start": 628.04, "end": 632.3199999999999, "text": " but turned out to be really powerful for deep learning as well."}, {"start": 632.3199999999999, "end": 639.4399999999999, "text": " That was also a major force in allowing deep learning algorithms to become what it is today."}, {"start": 639.44, "end": 643.96, "text": " That's how neural networks got started as well as why they took off so quickly in the"}, {"start": 643.96, "end": 645.5600000000001, "text": " last several years."}, {"start": 645.5600000000001, "end": 650.1600000000001, "text": " Let's now dive more deeply into the details of how a neural network actually works."}, {"start": 650.16, "end": 670.1999999999999, "text": " Please go on to the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=cHFM92fhpew
4.3 Neural Networks Intuition | Demand Prediction --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content, please subscribe and give it a thumbs-up. Keep at it!
To illustrate how neural networks work, let's start with an example. We'll use an example from demand prediction in which you look at a product and try to predict, will this product be a top seller or not? Let's take a look. In this example, you're selling t-shirts and you would like to know if a particular t-shirt will be a top seller, you know, yes or no. And you have collected data of different t-shirts that were sold at different prices, as well as which ones became a top seller. This type of application is used by retailers today in order to plan better inventory levels as well as marketing campaigns. If you know what's likely to be a top seller, you would plan, for example, to just purchase more of that stock in advance. So in this example, the input feature X is the price of the t-shirt. And so that's the input to the learning algorithm. And if you apply logistic regression to fit a sigmoid function to the data, then the output of your prediction might look like this, 1 over 1 plus e to the negative wx plus b. Previously we had written this as f of x as the output of the learning algorithm. In order to set this up to build a neural network, I'm going to switch the terminology a little bit and use the letter a to denote the output of this logistic regression algorithm. The term a stands for activation, and it's actually a term from neuroscience, and it refers to how much a neuron is sending a high output to other neurons downstream from it. It turns out that this logistic regression unit, or this little logistic regression algorithm, can be thought of as a very simplified model of a single neuron in the brain, where what the neuron does is it takes as input the price x, and then it computes this formula on top, and it outputs the number a, which is computed via this formula, and it outputs the probability of this t-shirt being a top seller.
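The single logistic unit just described can be sketched in a few lines of code. The price and the parameter values w and b below are made-up numbers for illustration, not values learned from data:

```python
import math

def neuron(x, w, b):
    """A single logistic unit: a = 1 / (1 + e^-(w*x + b))."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Hypothetical parameters: a higher price lowers the predicted
# probability of the t-shirt being a top seller.
a = neuron(x=25.0, w=-0.1, b=2.0)
print(a)  # the activation, a number between 0 and 1 (about 0.38 here)
```

Whatever the inputs, the sigmoid squashes the result into the range 0 to 1, which is what lets us read the activation as a probability.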
Another way to think of a neuron is as a tiny little computer whose only job is to input one number, or a few numbers, such as a price, and then to output one number or maybe a few other numbers, which in this case is the probability of the t-shirt being a top seller. As I alluded to in the previous video, a logistic regression algorithm is much simpler than what any biological neuron in your brain or mine does, which is why the artificial neural network is such a vastly oversimplified model of the human brain, even though in practice, as you know, deep learning algorithms do work very well. Given this description of a single neuron, building a neural network now just requires taking a bunch of these neurons and wiring them together or putting them together. Let's now look at a more complex example of demand prediction. In this example, we're going to have four features to predict whether or not a t-shirt is a top seller. The features are the price of the t-shirt, the shipping costs, the amount of marketing of that particular t-shirt, as well as the material quality. Is this a high quality thick cotton, or is this maybe a lower quality material? Now you might suspect that whether or not a t-shirt becomes a top seller actually depends on a few factors. First, what is the affordability of this t-shirt? Second is what's the degree of awareness of this t-shirt that potential buyers have? And third is perceived quality. Do buyers or potential buyers think this is a high quality t-shirt? And so what I'm going to do is create one artificial neuron to try to estimate the probability that this t-shirt is perceived as highly affordable. And affordability is mainly a function of price and shipping costs because the total amount you have to pay is the sum of the price plus the shipping costs. And so we're going to use a little neuron here, a logistic regression unit, to input price and shipping costs and predict, do people think this is affordable?
Second, I'm going to create another artificial neuron here to estimate, is there high awareness of this? And awareness in this case is mainly a function of the marketing of the t-shirt. And finally, going to create another neuron to estimate, do people perceive this to be of high quality? And that may mainly be a function of the price of the t-shirt and of the material quality. Price is a factor here because fortunately or unfortunately, if there's a very high price t-shirt, people will sometimes perceive that to be of high quality because if it's very expensive then maybe people think it's got to be of high quality. Given these estimates of affordability, awareness, and perceived quality, we then wire the outputs of these three neurons to another neuron here on the right that then is another logistic regression unit that finally inputs those three numbers and outputs the probability of this t-shirt being a top seller. So in the terminology of neural networks, we're going to group these three neurons together into what's called a layer, and a layer is a grouping of neurons which take as input the same or similar features and that in turn outputs a few numbers together. So these three neurons on the left form one layer, which is why I drew them on top of each other. And the single neuron on the right is also one layer. The layer on the left has three neurons, so a layer can have multiple neurons, or it can also have a single neuron, as in the case of this layer on the right. This layer on the right is also called the output layer because the output of this final neuron is the output probability predicted by the neural network. In the terminology of neural networks, we're also going to call affordability, awareness, and perceived quality to be activations. 
The term activations comes from biological neurons, and it refers to the degree that a biological neuron is sending a high output value or sending many electrical impulses to other neurons downstream from it. And so these numbers on affordability, awareness, and perceived quality are the activations of these three neurons in this layer. And also, this output probability is the activation of this neuron shown here on the right. So, this particular neural network, therefore, carries out computations as follows. It inputs four numbers, then this layer of the neural network uses those four numbers to compute three new numbers, also called activation values, and then the final layer, the output layer of the neural network, uses those three numbers to compute one number. And in a neural network, this list of four numbers is also called the input layer, and that's just a list of four numbers. Now, there's one simplification I'd like to make to this neural network, which is that the way I've described it so far, we had to go through the neurons one at a time and decide what inputs it would take from the previous layer. So for example, we said affordability is a function of just price and shipping costs and awareness is a function of just marketing and so on. But if you're building a large neural network, it'd be a lot of work to go through and manually decide which neurons should take which features as inputs. The way a neural network is implemented in practice, each neuron in a certain layer, say this layer in the middle, will have access to every feature, to every value from the previous layer, from the input layer, which is why I'm now drawing arrows from every input feature to every one of these neurons shown here in the middle.
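This fully connected computation, where every neuron in a layer sees every value from the previous layer, can be sketched as a small forward pass: four numbers in, three activations out of the middle layer, one number out of the output layer. All the weights and inputs below are invented placeholders purely to show the shapes; in a real network they would be learned from data:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dense(x, W, b):
    """One layer: neuron j outputs sigmoid(dot(W[j], x) + b[j])."""
    return [sigmoid(sum(w * xi for w, xi in zip(wj, x)) + bj)
            for wj, bj in zip(W, b)]

# Input layer: price, shipping cost, marketing, material quality (made-up values)
x = [20.0, 5.0, 3.0, 0.8]

# Middle layer: three neurons, loosely the "affordability", "awareness",
# and "perceived quality" units, each connected to all four inputs.
W1 = [[-0.1, -0.2, 0.0, 0.0],
      [ 0.0,  0.0, 0.5, 0.0],
      [ 0.05, 0.0, 0.0, 1.0]]
b1 = [3.0, -1.0, -1.5]
a1 = dense(x, W1, b1)                        # three activation values

# Output layer: one neuron combining the three activations
a2 = dense(a1, [[1.2, 1.0, 0.9]], [-1.5])
print(a2[0])                                 # a number between 0 and 1
```

Note that even though some weights are zero here, every neuron is wired to every input; the network can effectively ignore a feature by learning a weight near zero for it.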
And you can imagine that if you're trying to predict affordability, and it knows the price, shipping costs, marketing, and material quality, it may learn to ignore marketing and material and, by setting the parameters appropriately, focus only on the subset of features that are most relevant to affordability. To further simplify the notation and the description of this neural network, I'm going to take these four input features and write them as a vector x and we're going to view the neural network as having four features that comprise this feature vector x and this feature vector is fed to this layer in the middle, which then computes three activation values, that is these three numbers, and these three activation values in turn become another vector, which is fed to this final output layer that finally outputs the probability of this t-shirt being a top seller. So that's all a neural network is. It has a few layers where each layer inputs a vector and outputs another vector of numbers, where for example, this layer in the middle inputs four numbers x and outputs three numbers corresponding to affordability, awareness and perceived quality. To add a little bit more terminology, you've seen that this layer is called the output layer and this layer is called the input layer. To give the layer in the middle a name as well, this layer in the middle is called a hidden layer. I know that this is maybe not the best or the most intuitive name, but that terminology comes from the fact that when you have a training set, in the training set, you get to observe both x and y, your data set tells you what is x and what is y, and so you get data that tells you what are the correct inputs and the correct outputs, but your data set doesn't tell you what are the correct values for affordability, awareness and perceived quality.
And so the correct values for those are hidden, you don't see them in the training set, which is why this layer in the middle is called a hidden layer. I'd like to share with you another way of thinking about neural networks that I found useful for building my intuition about it. Just let me cover up the left half of this diagram and see what we're left with. What you see here is that there is a logistic regression algorithm or logistic regression unit that is taking as input affordability, awareness and perceived quality of a t-shirt and using these three features to estimate the probability of the t-shirt being a top seller. So this is just logistic regression. But the cool thing about this is rather than using the original features, price, shipping, cost, marketing and so on, is using a new, maybe better set of features, affordability, awareness and perceived quality that are hopefully more predictive of whether or not this t-shirt will be a top seller. So one way to think of this neural network is just logistic regression, but it is a version of logistic regression that can learn its own features that makes it easier to make accurate predictions. In fact, you might remember from the previous week, this housing example where we said that if you want to predict the price of the house, you might take the frontage or the width of lots and multiply that by the depth of a lot to construct a more complex feature, X1 times X2, which was the size of the lot. So there we were doing manual feature engineering where we had to look at the features X1 and X2 and decide by hand how to combine them together to come up with better features. What the neural network does is instead of you needing to manually engineer the features, it can learn, as you see later, its own features to make the learning problem easier for itself. So this is what makes neural networks one of the most powerful learning algorithms in the world today. So to summarize, a neural network does this. 
The input layer has a vector of features, four numbers in this example. It is input to the hidden layer, which outputs three numbers. And I'm going to use a vector to denote this vector of activations that this hidden layer outputs. And then the output layer takes this input, those three numbers and outputs one number, which would be the final activation or the final prediction of the neural network. One note, even though I previously described this neural network as computing affordability, awareness, and perceived quality, one of the really nice properties of a neural network is when you train it from data, you don't need to go and explicitly decide what are the features such as affordability and so on that a neural network should compute. Then it will figure out all by itself what are the features it wants to use in this hidden layer. And that's what makes it such a powerful learning algorithm. So you've seen here one example of a neural network, and this neural network has a single layer that is a hidden layer. Let's take a look at some other examples of neural networks, specifically examples with more than one hidden layer. Here's an example. This neural network has an input feature vector X that is fed to one hidden layer, and I'm going to call this the first hidden layer. And so if this hidden layer has three neurons, it will then output a vector of three activation values. These three numbers can then be input to a second hidden layer. And if the second hidden layer has two neurons, two logistic units, then this second hidden layer will output another vector of now two activation values that maybe goes to the output layer that then outputs the neural network's final prediction. Or here's another example. Here's a neural network that has this input go to the first hidden layer, that the output to the first hidden layer goes to the second hidden layer, goes to the third hidden layer, and then finally to the output layer. 
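Networks like these, an input vector passing through one or more hidden layers before the output layer, amount to chaining the same per-layer computation. The sketch below wires up a hypothetical 4 → 3 → 2 → 1 architecture with random placeholder weights, just to show the vector of activations shrinking layer by layer; nothing here is trained:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dense(x, W, b):
    """One layer: each neuron computes sigmoid of a weighted sum plus bias."""
    return [sigmoid(sum(w * xi for w, xi in zip(wj, x)) + bj)
            for wj, bj in zip(W, b)]

def forward(x, layers):
    """Feed the input through each (W, b) layer in turn."""
    a = x
    for W, b in layers:
        a = dense(a, W, b)
    return a

def random_layer(n_in, n_out, rng):
    """Placeholder weights for a layer of n_out neurons over n_in inputs."""
    W = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    return W, [0.0] * n_out

rng = random.Random(0)
# Hypothetical architecture: 4 inputs -> 3 units -> 2 units -> 1 output
layers = [random_layer(4, 3, rng), random_layer(3, 2, rng), random_layer(2, 1, rng)]
out = forward([20.0, 5.0, 3.0, 0.8], layers)
print(out)  # a list with one number between 0 and 1
```

Changing the list of layer sizes is exactly the architecture decision discussed next: how many hidden layers, and how many neurons in each.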
When you're building your own neural network, one of the decisions you need to make is how many hidden layers do you want and how many neurons do you want each hidden layer to have. And this question of how many hidden layers and how many neurons per hidden layer is a question of the architecture of the neural network. You learn later in this course some tips for choosing an appropriate architecture for a neural network. But choosing the right number of hidden layers and number of hidden units per layer can have an impact on the performance of your learning algorithm as well. So later in this course, you learn how to choose a good architecture for your neural network as well. Oh, and by the way, in some of the literature, you see this type of neural network with multiple layers like this called a multi-layer perceptron. So if you see that, that just refers to a neural network that looks like what you're seeing here on the slide. So that's a neural network. I know we went through a lot of this video. So thank you for sticking with me, but you now know how a neural network works. In the next video, let's take a look at how these ideas can be applied to other applications as well. In particular, we'll take a look at the computer vision application of face recognition. Let's go on to the next video.
[{"start": 0.0, "end": 5.44, "text": " To illustrate how neural networks work, let's start with an example."}, {"start": 5.44, "end": 10.0, "text": " We'll use an example from demand prediction in which you look at a product and try to"}, {"start": 10.0, "end": 13.44, "text": " predict, will this product be a top seller or not?"}, {"start": 13.44, "end": 14.44, "text": " Let's take a look."}, {"start": 14.44, "end": 20.28, "text": " In this example, you're selling t-shirts and you would like to know if a particular"}, {"start": 20.28, "end": 23.64, "text": " t-shirt will be a top seller, you know, yes or no."}, {"start": 23.64, "end": 28.64, "text": " And you have collected data of different t-shirts that were sold at different prices, as well"}, {"start": 28.64, "end": 31.48, "text": " as which ones became a top seller."}, {"start": 31.48, "end": 38.480000000000004, "text": " This type of application is used by retailers today in order to plan better inventory levels"}, {"start": 38.480000000000004, "end": 40.28, "text": " as well as marketing campaigns."}, {"start": 40.28, "end": 44.900000000000006, "text": " If you know what's likely to be a top seller, you would plan, for example, to just purchase"}, {"start": 44.900000000000006, "end": 47.32, "text": " more of that stock in advance."}, {"start": 47.32, "end": 53.68, "text": " So in this example, the input feature X is the price of the t-shirt."}, {"start": 53.68, "end": 56.82, "text": " And so that's the input to the learning algorithm."}, {"start": 56.82, "end": 62.82, "text": " And if you apply logistic regression to fit a sigmoid function to the data that might"}, {"start": 62.82, "end": 68.96000000000001, "text": " look like that, then the output of your prediction might look like this, 1 over 1 plus e to the"}, {"start": 68.96000000000001, "end": 71.84, "text": " negative wx plus b."}, {"start": 71.84, "end": 77.84, "text": " Previously we had written this as f of x as the output of the learning 
algorithm."}, {"start": 77.84, "end": 81.96000000000001, "text": " In order to set this up to build a neural network, I'm going to switch the terminology"}, {"start": 81.96, "end": 89.16, "text": " a little bit and use the alphabet a to denounce the output of this logistic regression algorithm."}, {"start": 89.16, "end": 94.44, "text": " The term a stands for activation, and it's actually a term from neuroscience, and it"}, {"start": 94.44, "end": 102.83999999999999, "text": " refers to how much a neuron is sending a high output to other neurons downstream from it."}, {"start": 102.83999999999999, "end": 109.0, "text": " It turns out that this logistic regression unit, or this little logistic regression algorithm,"}, {"start": 109.0, "end": 115.48, "text": " can be thought of as a very simplified model of a single neuron in the brain, where what"}, {"start": 115.48, "end": 124.08, "text": " the neuron does is it takes as input the price x, and then it computes this formula on top,"}, {"start": 124.08, "end": 130.48, "text": " and it outputs the number a, which is computed via this formula, and it outputs the probability"}, {"start": 130.48, "end": 133.44, "text": " of this t-shirt being a top seller."}, {"start": 133.44, "end": 141.0, "text": " Another way to think of a neuron is as a tiny little computer whose only job is to input"}, {"start": 141.0, "end": 147.48, "text": " one number, or a few numbers, such as a price, and then to output one number or maybe a few"}, {"start": 147.48, "end": 154.4, "text": " other numbers, which in this case is the probability of the t-shirt being a top seller."}, {"start": 154.4, "end": 160.84, "text": " As I alluded in the previous video, a logistic regression algorithm is much simpler than"}, {"start": 160.84, "end": 166.16, "text": " what any biological neuron in your brain or mind does, which is why the artificial neural"}, {"start": 166.16, "end": 172.24, "text": " network is such a vastly oversimplified model of the human 
brain, even though in practice,"}, {"start": 172.24, "end": 176.12, "text": " as you know, deep learning algorithms do work very well."}, {"start": 176.12, "end": 181.64000000000001, "text": " Given this description of a single neuron, building a neural network now just requires"}, {"start": 181.64000000000001, "end": 186.96, "text": " taking a bunch of these neurons and wiring them together or putting them together."}, {"start": 186.96, "end": 190.64000000000001, "text": " Let's now look at a more complex example of demand prediction."}, {"start": 190.64, "end": 196.11999999999998, "text": " In this example, we're going to have four features to predict whether or not a t-shirt"}, {"start": 196.11999999999998, "end": 197.2, "text": " is a top seller."}, {"start": 197.2, "end": 202.67999999999998, "text": " The features are the price of the t-shirt, the shipping costs, the amount of marketing"}, {"start": 202.67999999999998, "end": 206.67999999999998, "text": " of that particular t-shirt, as well as the material quality."}, {"start": 206.67999999999998, "end": 212.83999999999997, "text": " Is this a high quality thick cotton, or is this maybe a lower quality material?"}, {"start": 212.83999999999997, "end": 218.51999999999998, "text": " Now you might suspect that whether or not a t-shirt becomes a top seller actually depends"}, {"start": 218.51999999999998, "end": 219.92, "text": " on a few factors."}, {"start": 219.92, "end": 223.95999999999998, "text": " First, what is the affordability of this t-shirt?"}, {"start": 223.95999999999998, "end": 229.6, "text": " Second is what's the degree of awareness of this t-shirt that potential buyers have?"}, {"start": 229.6, "end": 231.88, "text": " And third is perceived quality."}, {"start": 231.88, "end": 236.64, "text": " Do buyers or potential buyers think this is a high quality t-shirt?"}, {"start": 236.64, "end": 244.23999999999998, "text": " And so what I'm going to do is create one artificial neuron to try to estimate 
the probability"}, {"start": 244.23999999999998, "end": 248.27999999999997, "text": " that this t-shirt is perceived as highly affordable."}, {"start": 248.28, "end": 253.04, "text": " And affordability is mainly a function of price and shipping costs because the total"}, {"start": 253.04, "end": 256.56, "text": " amount you have to pay is the sum of the price plus the shipping costs."}, {"start": 256.56, "end": 260.68, "text": " And so we're going to use a little neuron here, a logistic regression unit, to input"}, {"start": 260.68, "end": 265.32, "text": " price and shipping costs and predict, do people think this is affordable?"}, {"start": 265.32, "end": 271.56, "text": " Second, I'm going to create another artificial neuron here to estimate, is there high awareness"}, {"start": 271.56, "end": 272.62, "text": " of this?"}, {"start": 272.62, "end": 278.16, "text": " And awareness in this case is mainly a function of the marketing of the t-shirt."}, {"start": 278.16, "end": 284.04, "text": " And finally, going to create another neuron to estimate, do people perceive this to be"}, {"start": 284.04, "end": 286.12, "text": " of high quality?"}, {"start": 286.12, "end": 292.68, "text": " And that may mainly be a function of the price of the t-shirt and of the material quality."}, {"start": 292.68, "end": 297.8, "text": " Price is a factor here because fortunately or unfortunately, if there's a very high price"}, {"start": 297.8, "end": 302.92, "text": " t-shirt, people will sometimes perceive that to be of high quality because if it's very"}, {"start": 302.92, "end": 307.32000000000005, "text": " expensive then maybe people think it's got to be of high quality."}, {"start": 307.32, "end": 313.44, "text": " Given these estimates of affordability, awareness, and perceived quality, we then wire the outputs"}, {"start": 313.44, "end": 318.84, "text": " of these three neurons to another neuron here on the right that then is another logistic"}, {"start": 318.84, "end": 
324.62, "text": " regression unit that finally inputs those three numbers and outputs the probability"}, {"start": 324.62, "end": 327.46, "text": " of this t-shirt being a top seller."}, {"start": 327.46, "end": 333.1, "text": " So in the terminology of neural networks, we're going to group these three neurons"}, {"start": 333.1, "end": 340.52000000000004, "text": " together into what's called a layer, and a layer is a grouping of neurons which take"}, {"start": 340.52000000000004, "end": 347.64000000000004, "text": " as input the same or similar features and that in turn outputs a few numbers together."}, {"start": 347.64000000000004, "end": 352.12, "text": " So these three neurons on the left form one layer, which is why I drew them on top of"}, {"start": 352.12, "end": 353.40000000000003, "text": " each other."}, {"start": 353.40000000000003, "end": 358.04, "text": " And the single neuron on the right is also one layer."}, {"start": 358.04, "end": 362.8, "text": " The layer on the left has three neurons, so a layer can have multiple neurons, or it can"}, {"start": 362.8, "end": 367.68, "text": " also have a single neuron, as in the case of this layer on the right."}, {"start": 367.68, "end": 374.0, "text": " This layer on the right is also called the output layer because the output of this final"}, {"start": 374.0, "end": 379.32, "text": " neuron is the output probability predicted by the neural network."}, {"start": 379.32, "end": 384.8, "text": " In the terminology of neural networks, we're also going to call affordability, awareness,"}, {"start": 384.8, "end": 388.32, "text": " and perceived quality to be activations."}, {"start": 388.32, "end": 393.24, "text": " The term activations comes from biological neurons, and it refers to the degree that"}, {"start": 393.24, "end": 398.4, "text": " a biological neuron is sending a high output value or sending many electrical impulses"}, {"start": 398.4, "end": 401.36, "text": " to other neurons, to the downstream 
from it."}, {"start": 401.36, "end": 406.71999999999997, "text": " And so these numbers on affordability, awareness, and perceived quality are the activations"}, {"start": 406.71999999999997, "end": 410.12, "text": " of these three neurons in this layer."}, {"start": 410.12, "end": 418.03999999999996, "text": " And also, this output probability is the activation of this neuron shown here on the right."}, {"start": 418.04, "end": 423.88, "text": " So, this particular neural network, therefore, carries out computations as follows."}, {"start": 423.88, "end": 429.24, "text": " It inputs four numbers, then this layer of the neural network uses those four numbers"}, {"start": 429.24, "end": 435.64000000000004, "text": " to compute three new numbers, also called activation values, and then the final layer,"}, {"start": 435.64000000000004, "end": 442.24, "text": " the output layer of the neural network, uses those three numbers to compute one number."}, {"start": 442.24, "end": 450.84000000000003, "text": " And in a neural network, this list of four numbers is also called the input layer, and"}, {"start": 450.84000000000003, "end": 453.04, "text": " that's just a list of four numbers."}, {"start": 453.04, "end": 459.92, "text": " Now, there's one simplification I'd like to make to this neural network, which is the"}, {"start": 459.92, "end": 465.08, "text": " way I've described it so far, we had to go through the neurons one at a time and decide"}, {"start": 465.08, "end": 468.40000000000003, "text": " what inputs it would take from the previous layer."}, {"start": 468.4, "end": 473.08, "text": " So for example, we said affordability is a function of just price and shipping costs"}, {"start": 473.08, "end": 477.0, "text": " and awareness is a function of just marketing and so on."}, {"start": 477.0, "end": 481.08, "text": " But if you're building a large neural network, it'd be a lot of work to go through and manually"}, {"start": 481.08, "end": 485.91999999999996, 
"text": " decide which neurons should take which features as inputs."}, {"start": 485.91999999999996, "end": 491.32, "text": " The way a neural network is implemented in practice, each neuron in a certain layer,"}, {"start": 491.32, "end": 498.15999999999997, "text": " say this layer in the middle, will have access to every feature, to every value from the"}, {"start": 498.16, "end": 505.16, "text": " previous layer, from the input layer, which is why I'm now drawing arrows from every input"}, {"start": 505.16, "end": 510.18, "text": " feature to every one of these neurons shown here in the middle."}, {"start": 510.18, "end": 515.28, "text": " And you can imagine that if you're trying to predict affordability and it knows what"}, {"start": 515.28, "end": 521.0, "text": " the price shipping costs, marketing and material may be, or learn to ignore marketing and material"}, {"start": 521.0, "end": 526.84, "text": " and just figure out through setting the parameters appropriately to only focus on the subset"}, {"start": 526.84, "end": 530.9200000000001, "text": " of features that are most relevant to affordability."}, {"start": 530.9200000000001, "end": 537.08, "text": " To further simplify the notation and the description of this neural network, I'm going to take"}, {"start": 537.08, "end": 544.08, "text": " these four input features and write them as a vector x and we're going to view the neural"}, {"start": 544.08, "end": 552.6800000000001, "text": " network as having four features that comprise this feature vector x and this feature vector"}, {"start": 552.68, "end": 559.52, "text": " is fed to this layer in the middle, which then computes three activation values, that"}, {"start": 559.52, "end": 567.9599999999999, "text": " is these three numbers, and these three activation values in turn becomes another vector, which"}, {"start": 567.9599999999999, "end": 576.1999999999999, "text": " is fed to this final output layer that finally outputs the probability of this 
t-shirt being"}, {"start": 576.1999999999999, "end": 578.04, "text": " a top seller."}, {"start": 578.04, "end": 580.3399999999999, "text": " So that's all a neural network is."}, {"start": 580.34, "end": 588.5600000000001, "text": " It has a few layers where each layer inputs a vector and outputs another vector of numbers,"}, {"start": 588.5600000000001, "end": 595.14, "text": " where for example, this layer in the middle inputs four numbers x and outputs three numbers"}, {"start": 595.14, "end": 598.88, "text": " corresponding to affordability, awareness and perceived quality."}, {"start": 598.88, "end": 605.6800000000001, "text": " To add a little bit more terminology, you've seen that this layer is called the output"}, {"start": 605.68, "end": 610.0799999999999, "text": " layer and this layer is called the input layer."}, {"start": 610.0799999999999, "end": 614.64, "text": " To give the layer in the middle a name as well, this layer in the middle is called a"}, {"start": 614.64, "end": 616.76, "text": " hidden layer."}, {"start": 616.76, "end": 622.2399999999999, "text": " I know that this is maybe not the best or the most intuitive name, but that terminology"}, {"start": 622.2399999999999, "end": 628.4799999999999, "text": " comes from that when you have a training set, in the training set, you get to observe both"}, {"start": 628.4799999999999, "end": 634.92, "text": " x and y, your data set tells you what is x and what is y, and so you get data that tells"}, {"start": 634.92, "end": 640.56, "text": " you what are the correct inputs and the correct outputs, but your data set doesn't tell you"}, {"start": 640.56, "end": 645.3199999999999, "text": " what are the correct values for affordability, awareness and perceived quality."}, {"start": 645.3199999999999, "end": 649.24, "text": " And so the correct values for those are hidden, you don't see them in the training set, which"}, {"start": 649.24, "end": 653.12, "text": " is why this layer in the middle is 
called a hidden layer."}, {"start": 653.12, "end": 658.28, "text": " I'd like to share with you another way of thinking about neural networks that I found"}, {"start": 658.28, "end": 661.3199999999999, "text": " useful for building my intuition about it."}, {"start": 661.32, "end": 666.36, "text": " Just let me cover up the left half of this diagram and see what we're left with."}, {"start": 666.36, "end": 671.7600000000001, "text": " What you see here is that there is a logistic regression algorithm or logistic regression"}, {"start": 671.7600000000001, "end": 678.1600000000001, "text": " unit that is taking as input affordability, awareness and perceived quality of a t-shirt"}, {"start": 678.1600000000001, "end": 683.6, "text": " and using these three features to estimate the probability of the t-shirt being a top"}, {"start": 683.6, "end": 684.6800000000001, "text": " seller."}, {"start": 684.6800000000001, "end": 688.48, "text": " So this is just logistic regression."}, {"start": 688.48, "end": 694.12, "text": " But the cool thing about this is rather than using the original features, price, shipping,"}, {"start": 694.12, "end": 699.08, "text": " cost, marketing and so on, is using a new, maybe better set of features, affordability,"}, {"start": 699.08, "end": 704.9200000000001, "text": " awareness and perceived quality that are hopefully more predictive of whether or not this t-shirt"}, {"start": 704.9200000000001, "end": 706.52, "text": " will be a top seller."}, {"start": 706.52, "end": 713.8000000000001, "text": " So one way to think of this neural network is just logistic regression, but it is a version"}, {"start": 713.8, "end": 719.16, "text": " of logistic regression that can learn its own features that makes it easier to make"}, {"start": 719.16, "end": 721.04, "text": " accurate predictions."}, {"start": 721.04, "end": 726.76, "text": " In fact, you might remember from the previous week, this housing example where we said that"}, {"start": 726.76, 
"end": 731.28, "text": " if you want to predict the price of the house, you might take the frontage or the width of"}, {"start": 731.28, "end": 737.4, "text": " lots and multiply that by the depth of a lot to construct a more complex feature, X1 times"}, {"start": 737.4, "end": 740.28, "text": " X2, which was the size of the lot."}, {"start": 740.28, "end": 745.6, "text": " So there we were doing manual feature engineering where we had to look at the features X1 and"}, {"start": 745.6, "end": 751.04, "text": " X2 and decide by hand how to combine them together to come up with better features."}, {"start": 751.04, "end": 756.56, "text": " What the neural network does is instead of you needing to manually engineer the features,"}, {"start": 756.56, "end": 764.24, "text": " it can learn, as you see later, its own features to make the learning problem easier for itself."}, {"start": 764.24, "end": 768.52, "text": " So this is what makes neural networks one of the most powerful learning algorithms in"}, {"start": 768.52, "end": 769.8399999999999, "text": " the world today."}, {"start": 769.84, "end": 773.72, "text": " So to summarize, a neural network does this."}, {"start": 773.72, "end": 778.36, "text": " The input layer has a vector of features, four numbers in this example."}, {"start": 778.36, "end": 783.32, "text": " It is input to the hidden layer, which outputs three numbers."}, {"start": 783.32, "end": 790.52, "text": " And I'm going to use a vector to denote this vector of activations that this hidden layer"}, {"start": 790.52, "end": 791.8000000000001, "text": " outputs."}, {"start": 791.8000000000001, "end": 798.08, "text": " And then the output layer takes this input, those three numbers and outputs one number,"}, {"start": 798.08, "end": 804.44, "text": " which would be the final activation or the final prediction of the neural network."}, {"start": 804.44, "end": 809.12, "text": " One note, even though I previously described this neural network as 
computing affordability,"}, {"start": 809.12, "end": 813.5600000000001, "text": " awareness, and perceived quality, one of the really nice properties of a neural network"}, {"start": 813.5600000000001, "end": 819.24, "text": " is when you train it from data, you don't need to go and explicitly decide what are"}, {"start": 819.24, "end": 824.32, "text": " the features such as affordability and so on that a neural network should compute."}, {"start": 824.32, "end": 829.72, "text": " Then it will figure out all by itself what are the features it wants to use in this hidden"}, {"start": 829.72, "end": 830.72, "text": " layer."}, {"start": 830.72, "end": 834.08, "text": " And that's what makes it such a powerful learning algorithm."}, {"start": 834.08, "end": 839.48, "text": " So you've seen here one example of a neural network, and this neural network has a single"}, {"start": 839.48, "end": 841.88, "text": " layer that is a hidden layer."}, {"start": 841.88, "end": 846.7600000000001, "text": " Let's take a look at some other examples of neural networks, specifically examples with"}, {"start": 846.7600000000001, "end": 849.6, "text": " more than one hidden layer."}, {"start": 849.6, "end": 851.2800000000001, "text": " Here's an example."}, {"start": 851.28, "end": 858.92, "text": " This neural network has an input feature vector X that is fed to one hidden layer, and I'm"}, {"start": 858.92, "end": 861.42, "text": " going to call this the first hidden layer."}, {"start": 861.42, "end": 868.36, "text": " And so if this hidden layer has three neurons, it will then output a vector of three activation"}, {"start": 868.36, "end": 869.88, "text": " values."}, {"start": 869.88, "end": 875.12, "text": " These three numbers can then be input to a second hidden layer."}, {"start": 875.12, "end": 881.04, "text": " And if the second hidden layer has two neurons, two logistic units, then this second hidden"}, {"start": 881.04, "end": 887.36, "text": " layer will output another 
vector of now two activation values that maybe goes to the output"}, {"start": 887.36, "end": 891.68, "text": " layer that then outputs the neural network's final prediction."}, {"start": 891.68, "end": 893.48, "text": " Or here's another example."}, {"start": 893.48, "end": 899.0799999999999, "text": " Here's a neural network that has this input go to the first hidden layer, that the output"}, {"start": 899.0799999999999, "end": 902.92, "text": " to the first hidden layer goes to the second hidden layer, goes to the third hidden layer,"}, {"start": 902.92, "end": 906.0999999999999, "text": " and then finally to the output layer."}, {"start": 906.0999999999999, "end": 910.4399999999999, "text": " When you're building your own neural network, one of the decisions you need to make is how"}, {"start": 910.44, "end": 916.5600000000001, "text": " many hidden layers do you want and how many neurons do you want each hidden layer to have."}, {"start": 916.5600000000001, "end": 922.4000000000001, "text": " And this question of how many hidden layers and how many neurons per hidden layer is a"}, {"start": 922.4000000000001, "end": 925.8800000000001, "text": " question of the architecture of the neural network."}, {"start": 925.8800000000001, "end": 931.0, "text": " You learn later in this course some tips for choosing an appropriate architecture for a"}, {"start": 931.0, "end": 932.0, "text": " neural network."}, {"start": 932.0, "end": 937.5200000000001, "text": " But choosing the right number of hidden layers and number of hidden units per layer can have"}, {"start": 937.52, "end": 940.8, "text": " an impact on the performance of your learning algorithm as well."}, {"start": 940.8, "end": 945.68, "text": " So later in this course, you learn how to choose a good architecture for your neural"}, {"start": 945.68, "end": 946.68, "text": " network as well."}, {"start": 946.68, "end": 951.4, "text": " Oh, and by the way, in some of the literature, you see this type of neural 
network with multiple"}, {"start": 951.4, "end": 954.48, "text": " layers like this called a multi-layer perceptron."}, {"start": 954.48, "end": 958.36, "text": " So if you see that, that just refers to a neural network that looks like what you're"}, {"start": 958.36, "end": 960.86, "text": " seeing here on the slide."}, {"start": 960.86, "end": 963.36, "text": " So that's a neural network."}, {"start": 963.36, "end": 965.52, "text": " I know we went through a lot of this video."}, {"start": 965.52, "end": 970.52, "text": " So thank you for sticking with me, but you now know how a neural network works."}, {"start": 970.52, "end": 975.0, "text": " In the next video, let's take a look at how these ideas can be applied to other applications"}, {"start": 975.0, "end": 976.0, "text": " as well."}, {"start": 976.0, "end": 981.04, "text": " In particular, we'll take a look at the computer vision application of face recognition."}, {"start": 981.04, "end": 996.68, "text": " Let's go on to the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=3RIUt73mj3Q
4.4 Neural Networks Intuition | Example Recognizing Images--[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
In the last video, you saw how a neural network works in a demand prediction example. Let's take a look at how you can apply a similar type of idea to a computer vision application. Let's dive in. If you're building a face recognition application, you might want to train, say, a neural network that takes as input a picture like this and outputs the identity of the person in the picture. This image is a thousand by a thousand pixels, and so its representation in the computer is actually a thousand by a thousand grid, also called a thousand by a thousand matrix, of pixel intensity values. In this example, the pixel intensity values, or pixel brightness values, go from 0 to 255, and so 197 here would be the brightness of the pixel in the very upper left of the image, 185 is the brightness of the pixel one pixel over, and so on, down to 214 in the lower right corner of this image. If you were to take these pixel intensity values and unroll them into a vector, you end up with a list or a vector of a million pixel intensity values, one million because a thousand times a thousand gives you a million numbers. The face recognition problem is, can you train a neural network that takes as input a feature vector with a million pixel brightness values and outputs the identity of the person in the picture? This is how you might build a neural network to carry out this task. The input image X is fed to this layer of neurons, this is the first hidden layer, which then extracts some features; the output of this first hidden layer is fed to a second hidden layer, that output is fed to a third hidden layer, and then finally to the output layer, which then estimates, say, the probability of this being a particular person. One interesting thing would be if you look at a neural network that's been trained on a lot of images of faces and try to visualize what these hidden layers are trying to compute.
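The unrolling step described above can be sketched in a few lines of NumPy. This is an illustrative sketch only (the lecture doesn't prescribe any library), and the random image stands in for the face picture on the slide:

```python
import numpy as np

# A stand-in 1000 x 1000 grayscale image with pixel intensities 0..255,
# playing the role of the face picture described in the lecture.
rng = np.random.default_rng(seed=0)
image = rng.integers(0, 256, size=(1000, 1000), dtype=np.uint8)

# Unroll the 1000 x 1000 matrix into a feature vector x of a million values,
# since a thousand times a thousand gives one million numbers.
x = image.reshape(-1)

print(x.shape)  # the feature vector fed to the network: (1000000,)
```

This million-element vector `x` is what would be fed to the input layer of the face recognition network.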
It turns out that when you train a system like this on a lot of pictures of faces and you peer at the different neurons in the hidden layers to figure out what they may be computing, this is what you might find. In the first hidden layer, you might find one neuron that is looking for a little vertical line or a vertical edge like that, a second neuron looking for an oriented line or oriented edge like that, a third neuron looking for a line at that orientation, and so on. In the earliest layers of a neural network, you might find that the neurons are looking for very short lines or very short edges in the image. If you look at the next hidden layer, you find that these neurons might learn to group together lots of little short lines, little short edge segments, in order to look for parts of faces. For example, each of these little square boxes is a visualization of what that neuron is trying to detect. This first neuron looks like it's trying to detect the presence or absence of an eye in a certain position of the image, the second neuron looks like it's trying to detect the corner of a nose, and maybe this neuron over here is trying to detect the bottom of a nose. And then as you look at the next hidden layer, in this example, the neural network is aggregating different parts of faces to try to detect the presence or absence of larger, coarser face shapes. And then finally, detecting how much the face corresponds to different face shapes creates a rich set of features that then helps the output layer try to determine the identity of the person pictured. And a remarkable thing about the neural network is that it can learn these feature detectors at the different hidden layers all by itself. In this example, no one ever told it to look for short little edges in the first layer, eyes and noses and face parts in the second layer, and then more complete face shapes at the third layer. The neural network is able to figure out these things all by itself from data.
Just one note: in this visualization, the neurons in the first hidden layer are shown looking at relatively small windows to look for these edges, the second hidden layer is looking at a bigger window, and the third hidden layer is looking at an even bigger window. So these little neuron visualizations actually correspond to differently sized regions in the image. Just for fun, let's see what happens if you were to train this neural network on a different data set, say on lots of pictures of cars seen from the side. The same learning algorithm, if it's asked to detect cars, will then learn edges in the first layer, so pretty similar, but then it'll learn to detect parts of cars in the second hidden layer and then more complete car shapes in the third hidden layer. So just by feeding it different data, the neural network automatically learns to detect very different features, so as to try to make the predictions of car detection or person recognition or whatever the particular task is that it's trained on. So that's how a neural network works for a computer vision application. And in fact, later this week, you'll see how you can build a neural network yourself and apply it to a handwritten digit recognition application. So far, we've been going over descriptions and intuitions of neural networks to give you a feel for how they work. In the next video, let's look more deeply into the concrete mathematics and the concrete implementational details of how you actually build one or more layers of a neural network, and therefore how you can implement one of these things yourself. Let's go on to the next video.
[{"start": 0.0, "end": 6.92, "text": " In the last video, you saw how a neural network works in a demand prediction example."}, {"start": 6.92, "end": 12.56, "text": " Let's take a look at how you can apply a similar type of idea to a computer vision application."}, {"start": 12.56, "end": 13.56, "text": " Let's dive in."}, {"start": 13.56, "end": 18.56, "text": " If you're building a face recognition application, you might want to train, say, a neural network"}, {"start": 18.56, "end": 23.88, "text": " that takes this input to picture like this and outputs the identity of the person in"}, {"start": 23.88, "end": 25.6, "text": " the picture."}, {"start": 25.6, "end": 32.760000000000005, "text": " This image is a thousand by a thousand pixels and so its representation in the computer"}, {"start": 32.760000000000005, "end": 39.72, "text": " is actually as a thousand by a thousand grid or also called a thousand by a thousand matrix"}, {"start": 39.72, "end": 42.88, "text": " of pixel intensity values."}, {"start": 42.88, "end": 48.36, "text": " In this example, my pixel intensity values or pixel brightness values goes from 0 to"}, {"start": 48.36, "end": 55.96, "text": " 255 and so 197 here would be the brightness of the pixel in the very upper left of the"}, {"start": 55.96, "end": 63.480000000000004, "text": " image, 185 is the brightness of the pixel, one pixel over and so on, down to 214 would"}, {"start": 63.480000000000004, "end": 67.28, "text": " be the lower right corner of this image."}, {"start": 67.28, "end": 73.72, "text": " If you were to take these pixel intensity values and unroll them into a vector, you"}, {"start": 73.72, "end": 81.24, "text": " end up with a list or a vector of a million pixel intensity values, one million because"}, {"start": 81.24, "end": 85.6, "text": " a thousand by a thousand squared gives you a million numbers."}, {"start": 85.6, "end": 91.84, "text": " The face recognition problem is, can you train a neural network that takes 
this input, a"}, {"start": 91.84, "end": 98.36, "text": " feature vector with a million pixel brightness values and outputs the identity of the person"}, {"start": 98.36, "end": 100.36, "text": " in the picture?"}, {"start": 100.36, "end": 105.8, "text": " This is how you might build a neural network to carry out this task."}, {"start": 105.8, "end": 112.76, "text": " The input image X is fed to this layer of neurons, this is the first hidden layer, which"}, {"start": 112.76, "end": 119.08, "text": " then extracts some features and the output of this first hidden layer is fed to a second"}, {"start": 119.08, "end": 125.03999999999999, "text": " hidden layer and that output is fed to a third hidden layer and then finally to the output"}, {"start": 125.04, "end": 131.56, "text": " layer which then estimates, say, the probability of this being a particular person."}, {"start": 131.56, "end": 136.68, "text": " One interesting thing would be if you look at a neural network that's been trained on"}, {"start": 136.68, "end": 143.64000000000001, "text": " a lot of images of faces and to try to visualize what are these hidden layers trying to compute."}, {"start": 143.64000000000001, "end": 149.0, "text": " It turns out that when you train a system like this on a lot of pictures of faces and"}, {"start": 149.0, "end": 155.56, "text": " you peer at the different neurons in the hidden layers to figure what they may be computing,"}, {"start": 155.56, "end": 157.64, "text": " this is what you might find."}, {"start": 157.64, "end": 164.12, "text": " In the first hidden layer, you might find one neuron that is looking for a little vertical"}, {"start": 164.12, "end": 171.4, "text": " line or a vertical edge like that and a second neuron looking for a oriented line or oriented"}, {"start": 171.4, "end": 178.16, "text": " edge like that and a third neuron looking for a line at that orientation and so on."}, {"start": 178.16, "end": 182.44, "text": " In the earliest layers of a 
neural network, you might find that the neurons are looking"}, {"start": 182.44, "end": 188.35999999999999, "text": " for very short lines or very short edges in the image."}, {"start": 188.35999999999999, "end": 195.92, "text": " If you look at the next hidden layer, you find that these neurons might learn to group"}, {"start": 195.92, "end": 201.96, "text": " together lots of little short lines, little short edge segments in order to look for parts"}, {"start": 201.96, "end": 202.96, "text": " of faces."}, {"start": 202.96, "end": 209.12, "text": " For example, each of these little square boxes is a visualization of what that neuron is"}, {"start": 209.12, "end": 211.12, "text": " trying to detect."}, {"start": 211.12, "end": 215.84, "text": " This first neuron looks like it's trying to detect the presence or absence of an eye in"}, {"start": 215.84, "end": 222.8, "text": " a certain position of the image and the second neuron looks like it's trying to detect the"}, {"start": 222.8, "end": 231.44, "text": " horn of a nose and maybe this neuron over here is trying to detect the bottom of a nose."}, {"start": 231.44, "end": 237.0, "text": " And then as you look at the next hidden layer, in this example, the neural network is aggregating"}, {"start": 237.0, "end": 244.07999999999998, "text": " different parts of faces to then try to detect presence or absence of larger, coarser face"}, {"start": 244.07999999999998, "end": 245.07999999999998, "text": " shapes."}, {"start": 245.07999999999998, "end": 252.04, "text": " And then finally, detecting how much the face corresponds to different face shapes creates"}, {"start": 252.04, "end": 257.65999999999997, "text": " a rich set of features that then helps the upper layer try to determine the identity"}, {"start": 257.65999999999997, "end": 260.04, "text": " of the person pictured."}, {"start": 260.04, "end": 264.40000000000003, "text": " And a remarkable thing about the neural network is it can learn these feature 
detectors at"}, {"start": 264.40000000000003, "end": 267.84000000000003, "text": " the different hidden layers all by itself."}, {"start": 267.84000000000003, "end": 272.76000000000005, "text": " In this example, no one ever told it to look for short little edges in the first layer"}, {"start": 272.76000000000005, "end": 277.40000000000003, "text": " and eyes and noses and face parts in the second layer and then more complete face shapes at"}, {"start": 277.40000000000003, "end": 278.40000000000003, "text": " the third layer."}, {"start": 278.40000000000003, "end": 283.32000000000005, "text": " The neural network is able to figure out these things all by itself from data."}, {"start": 283.32000000000005, "end": 288.8, "text": " Just one note, in this visualization, the neurons in the first hidden layer are shown"}, {"start": 288.8, "end": 293.8, "text": " looking at relatively small windows to look for these edges."}, {"start": 293.8, "end": 297.84000000000003, "text": " And then the second hidden layer is looking at a bigger window and the third hidden layer"}, {"start": 297.84000000000003, "end": 300.56, "text": " is looking at an even bigger window."}, {"start": 300.56, "end": 306.08000000000004, "text": " So these little neurons visualizations actually correspond to differently sized regions in"}, {"start": 306.08000000000004, "end": 307.68, "text": " the image."}, {"start": 307.68, "end": 312.96000000000004, "text": " Just for fun, let's see what happens if you were to train this neural network on a different"}, {"start": 312.96, "end": 319.12, "text": " data set, say on lots of pictures of cars pictured on the side."}, {"start": 319.12, "end": 326.96, "text": " The same learning algorithm, if it's asked to detect cars, will then learn edges in the"}, {"start": 326.96, "end": 332.64, "text": " first layer, so pretty similar, but then they'll learn to detect parts of cars in the second"}, {"start": 332.64, "end": 337.88, "text": " hidden layer and then more 
complete car shapes in the third hidden layer."}, {"start": 337.88, "end": 344.6, "text": " So just by feeding it different data, the neural network automatically learns to detect"}, {"start": 344.6, "end": 352.86, "text": " very different features so as to try to make the predictions of car detection or person"}, {"start": 352.86, "end": 357.88, "text": " recognition or whatever is a particular given task that it's trained on."}, {"start": 357.88, "end": 361.0, "text": " So that's how a neural network works for a computer vision application."}, {"start": 361.0, "end": 366.0, "text": " And in fact, later this week, you see how you can build a neural network yourself and"}, {"start": 366.0, "end": 370.9, "text": " apply it to a handwritten digit recognition application."}, {"start": 370.9, "end": 375.8, "text": " So far, we've been going over the description of intuitions of neural networks to give you"}, {"start": 375.8, "end": 378.2, "text": " a feel for how they work."}, {"start": 378.2, "end": 383.56, "text": " In the next video, let's look more deeply into the concrete mathematics and the concrete"}, {"start": 383.56, "end": 389.8, "text": " implementational details of how you actually build one or more layers of a neural network"}, {"start": 389.8, "end": 393.24, "text": " and therefore how you can implement one of these things yourself."}, {"start": 393.24, "end": 396.32, "text": " Let's go on to the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=YmDM4Bq-dyA
4.5 Neural Networks Model | Neural Network Layer --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
The fundamental building block of most modern neural networks is a layer of neurons. In this video, you learn how to construct a layer of neurons, and once you have that down, you'll be able to take those building blocks and put them together to form a large neural network. Let's take a look at how a layer of neurons works. Here's the example we had from the demand prediction example, where we had four input features that were fed to this layer of three neurons in this hidden layer, which then sends its output to this output layer with just one neuron. Let's zoom in to the hidden layer to look at its computations. This hidden layer inputs four numbers, and these four numbers are inputs to each of three neurons, and each of these three neurons is just implementing a little logistic regression unit, or a little logistic regression function. Take this first neuron. It has two parameters, W and B, and in fact, to denote that this is the first hidden unit, I'm going to subscript these as W1, B1. What it does is output some activation value A, which is G of W1 dot product with X plus B1, where this is the familiar Z value that you had learned about in logistic regression in the previous course, and G of Z is the familiar logistic function, one over one plus E to the negative Z. So maybe this ends up being a number 0.3, and that's the activation value A of the first neuron. To denote that this is the first neuron, I'm also going to add a subscript, A1, over here, and so A1 may be a number like 0.3. That's a 0.3 chance of this being highly affordable based on the input features. Now let's look at the second neuron. The second neuron has parameters W2 and B2, and these, W2 and B2, are the parameters of the second logistic unit. So it computes A2 equals the logistic function G applied to W2 dot product X plus B2, and this may be some other number, say 0.7, because in this example there's a 0.7 chance that we think the potential buyers will be aware of this t-shirt.
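The computation of one such neuron, A1 equals G of W1 dot product with X plus B1, can be written out directly. Here is a minimal sketch in NumPy; the feature values and the parameters W1, B1 are made-up illustrations, not numbers from the course:

```python
import numpy as np

def sigmoid(z):
    """The logistic function g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

# Four input features: price, shipping cost, marketing, material quality.
# These values are invented for illustration.
x = np.array([20.0, 5.0, 1.0, 0.8])

# Parameters of the first neuron (the 'affordability' unit); also invented.
w1 = np.array([-0.1, -0.2, 0.0, 0.0])
b1 = 3.0

# a1 = g(w1 . x + b1), the neuron's activation value, a number in (0, 1).
a1 = sigmoid(np.dot(w1, x) + b1)
```

With a trained network, parameters like `w1` and `b1` would come from learning, and `a1` might be a number like the 0.3 in the lecture.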
And similarly, the third neuron has a third set of parameters, W3, B3, and similarly computes an activation value A3 equals G of W3 dot product X plus B3, and that may be, say, 0.2. So in this example, these three neurons output 0.3, 0.7, and 0.2, and this vector of three numbers becomes the vector of activation values A that is then passed to the final output layer of this neural network. Now, when you build neural networks with multiple layers, it will be useful to give the layers different numbers. So by convention, this layer is called layer one of the neural network, and this layer is called layer two of the neural network; the input layer is also sometimes called layer zero, and today there are neural networks that can have dozens or even hundreds of layers. In order to introduce notation to help us distinguish between the different layers, I'm going to use superscript square bracket one to index into different layers. So in particular, I'm going to use A superscript square bracket one as notation to denote the output of layer one, this hidden layer, of the neural network. Similarly, W1, B1 here are the parameters of the first unit in layer one of the neural network, so I'm also going to add a superscript square bracket one here, and W2, B2 are the parameters of the second hidden unit, or the second hidden neuron, in layer one, so those parameters are also denoted here with a superscript square bracket one, like so. And similarly, I can add superscripts in square brackets like so to denote that these are the activation values of the hidden units of layer one of this neural network.
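The three neurons of layer one can be computed together as one layer function. In this sketch, each row of a matrix W1 holds one neuron's weight vector; all the parameter and feature values are invented placeholders (in the lecture, the activations 0.3, 0.7, 0.2 would come from trained parameters):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dense(a_in, W, b):
    """One layer of neurons: each row of W and entry of b belongs
    to one neuron. Returns the vector of activations g(W @ a_in + b)."""
    return sigmoid(W @ a_in + b)

# Layer 1: three neurons, each with access to all four input features.
W1 = np.array([[-0.1, -0.2, 0.0, 0.0],   # 'affordability' unit (made up)
               [ 0.0,  0.0, 0.5, 0.0],   # 'awareness' unit (made up)
               [ 0.2,  0.0, 0.0, 0.3]])  # 'perceived quality' unit (made up)
b1 = np.array([3.0, -1.0, -5.0])

x = np.array([20.0, 5.0, 1.0, 0.8])  # invented input features
a1 = dense(x, W1, b1)  # vector of three activation values
```

The output `a1` is the activation vector A superscript square bracket one that gets passed on to layer two.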
I know maybe this notation is getting a little bit cluttered, but the thing to remember is that whenever you see this superscript square bracket one, it just refers to a quantity associated with layer one of the neural network, and if you see superscript square bracket two, that refers to a quantity associated with layer two of the neural network, and similarly for other layers as well, including layer three, layer four, and so on for neural networks with more layers. So that's the computation of layer one of this neural network. Its output is this activation vector, A superscript square bracket one, and I'm going to copy it over here because this output, A1, becomes the input to layer two. So now let's zoom into the computation of layer two of this neural network, which is also the output layer. The input to layer two is the output of layer one, so A1 is this vector 0.3, 0.7, 0.2 that we just computed on the previous part of the slide. Because the output layer has just a single neuron, all it does is compute A subscript one, the output of its first and only neuron, as G, the sigmoid function, applied to W subscript one dot product with A superscript square bracket one, which is the input into this layer, and then plus B1. Here, this is the quantity Z that you're familiar with, and G, as before, is the sigmoid function that you apply to this. If this results in a number, say 0.84, then that becomes the output of this output layer of the neural network, and in this example, because the output layer has just a single neuron, this output is just a scalar, a single number rather than a vector of numbers.
Sticking with our notational convention from before, we're going to use A superscript square bracket two to denote the quantities associated with layer two of this neural network, so A superscript square bracket two is the output of this layer, and I'm going to also copy it here as the final output of the neural network. To make the notation consistent, you can also add these superscript square bracket twos to denote that these are the parameters and activation values associated with layer two of the neural network. Once the neural network has computed A2, there's one final optional step that you can choose to implement or not, which is if you want a binary prediction, one or zero: is this a top seller, yes or no? You can take the number A superscript square bracket two, subscript one, which is the number 0.84 that we computed, and threshold it at 0.5. If it's greater than 0.5, you predict y hat equals one, and if it's less than 0.5, you predict y hat equals zero. We saw this thresholding as well when we learned about logistic regression in the first course of the specialization. So if you wish, this then gives you the final prediction y hat as either one or zero, if you don't want just a probability of it being a top seller. So that's how a neural network works. Every layer inputs a vector of numbers, applies a bunch of logistic regression units to it, and then computes another vector of numbers that gets passed from layer to layer until you get to the final output layer's computation, which is the prediction of the neural network, which you can either threshold at 0.5 or not to come up with the final prediction. And with that, let's go on to use this foundation we've built to look at some even more complex, even larger neural network models, and I hope that by seeing more examples, this concept of layers and how to put them together to build a neural network will become even clearer. So let's go on to the next video.
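Putting the two layers together, the full forward pass described above, including the optional 0.5 threshold, can be sketched as follows. The parameter values here are shape-correct placeholders only; in the lecture, numbers like 0.84 come from trained parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dense(a_in, W, b):
    # One layer: each row of W / entry of b is one neuron's parameters.
    return sigmoid(W @ a_in + b)

def predict(x, W1, b1, W2, b2):
    a1 = dense(x, W1, b1)            # hidden layer: 4 inputs -> 3 activations
    a2 = dense(a1, W2, b2)           # output layer: 3 inputs -> 1 activation
    return 1 if a2[0] >= 0.5 else 0  # optional threshold giving y-hat

# Placeholder parameters with the right shapes (not trained values).
W1 = np.zeros((3, 4)); b1 = np.zeros(3)
W2 = np.zeros((1, 3)); b2 = np.array([2.0])

y_hat = predict(np.array([20.0, 5.0, 1.0, 0.8]), W1, b1, W2, b2)
```

With trained parameters, `a2[0]` would be a probability like the 0.84 in the lecture, and `y_hat` the final one-or-zero prediction.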
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=4-2FOgsMOpk
4.6 Neural Networks Model | More complex neural networks --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
In the last video, you learned about the neural network layer and how that takes as input a vector of numbers and in turn outputs another vector of numbers. In this video, let's use that layer to build a more complex neural network, and through this, I hope that the notation we're using for neural networks will become clearer and more concrete as well. Let's take a look. This is the running example that I'm going to use throughout this video as an example of a more complex neural network. This network has four layers, not counting the input layer, which is also called layer zero: layers one, two, and three are hidden layers, layer four is the output layer, and layer zero, as usual, is the input layer. By convention, when we say that a neural network has four layers, that includes all the hidden layers and the output layer, but we don't count the input layer. So this is a neural network with four layers in the conventional way of counting layers in a network. Let's zoom in to layer three, the third and final hidden layer, to look at the computations of that layer. Layer three inputs a vector, a superscript square bracket two, that was computed by the previous layer, and it outputs a superscript square bracket three, which is another vector. So what is the computation that layer three does in order to go from a two to a three? If it has three neurons, or what we call three hidden units, then it has parameters w1, b1, w2, b2, and w3, b3, and it computes a1 equals sigmoid of w1 dot product with the input to the layer, plus b1; it computes a2 equals sigmoid of w2 dot product with, again, the input to the layer, plus b2; and so on to get a3. The output of this layer is then a vector comprising a1, a2, and a3.
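The per-neuron equations above (unit j computes g of w_j dot product with the layer's input, plus b_j) can be written out one unit at a time. A minimal sketch, with three units and entirely made-up parameter values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dense_unit_by_unit(a_in, W, b):
    # Mirror the written-out equations: unit j computes g(w_j . a_in + b_j),
    # where w_j is column j of W; the layer's output stacks the a_j values.
    n_units = W.shape[1]
    a_out = np.zeros(n_units)
    for j in range(n_units):
        z_j = np.dot(W[:, j], a_in) + b[j]
        a_out[j] = sigmoid(z_j)
    return a_out

# Layer 3 in the example: its input a^[2] is a vector, and it has 3 units.
a2 = np.array([0.3, 0.7, 0.2])          # illustrative a^[2]
W3 = np.array([[ 1.0, -2.0,  0.5],
               [-1.5,  0.7,  1.2],
               [ 0.3,  0.1, -0.8]])     # illustrative w_1, w_2, w_3 as columns
b3 = np.array([0.1, -0.2, 0.4])
a3 = dense_unit_by_unit(a2, W3, b3)     # a^[3], a vector of 3 activations
```

In practice the loop is replaced by one vectorized expression, `sigmoid(a_in @ W + b)`, which computes the same thing.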
And again, by convention, if we want to more explicitly denote that all of these are quantities associated with layer three, then we add in all of these superscript square bracket threes here, to denote that these parameters w and b are the parameters associated with neurons in layer three, and that these activations are activations associated with layer three. Notice that this term here is w1 superscript square bracket three, meaning the parameters associated with layer three, dot product with a superscript square bracket two, which was the output of layer two, which became the input to layer three. So that's why there's a three here, because it's a parameter associated with layer three, dot product with, and there's a two there, because it's the output of layer two. Now let's just do a quick double check of our understanding of this. I'm going to hide the superscripts and subscripts associated with the second neuron, and without rewinding this video (go ahead and rewind if you want, but I'd prefer you not), are you able to think through what the missing superscripts and subscripts in this equation are and fill them in yourself? Why don't you take a look at the in-video quiz and see if you can figure out the appropriate superscripts and subscripts for this equation over here. So to recap: a subscript two is the activation associated with layer three, for the second neuron, hence the superscript square bracket three and the subscript two. The parameter w is likewise associated with the third layer, for the second neuron. This is dot product with a superscript square bracket two, same as above, and then plus b superscript square bracket three, subscript two. So hopefully that makes sense. Here's the more general form of this equation for an arbitrary layer l and for an arbitrary unit j, which is that a, the activation output of layer l, unit j (like a superscript square bracket three, subscript two above), is going to be the sigmoid function applied to this term, which is the weight vector of layer l (such as layer three) for the j-th unit, so there's the two again, as in the example above.
And so that's dot product with a, the activation value of layer l minus one (notice this is not l, this is l minus one, like the two above here), because you're taking the dot product with the output from the previous layer, and then plus b, the parameter for this layer, for that unit j. So this gives you the activation of layer l, unit j, where the superscript in square brackets l denotes layer l and the subscript j denotes unit j. When building neural networks, unit j refers to the j-th neuron, so I use those terms a little bit interchangeably, where each unit is a single neuron in a layer. g here is the sigmoid function. In the context of a neural network, g has another name: it is also called the activation function, because g outputs this activation value. So when I say activation function, I mean this function g here. So far, the only activation function you've seen is the sigmoid function, but next week we'll look at how other functions than the sigmoid function can be plugged in in place of g as well. The activation function is just the function that outputs these activation values. And just one last piece of notation, in order to make all this notation consistent: I'm also going to give the input vector x another name, which is a superscript square bracket zero. This way, the same equation also works for the first layer, where, when l is equal to one, the activations of the first layer, that is a superscript square bracket one, will be sigmoid applied to the weights dot product with a superscript square bracket zero, which is just the input feature vector x. With this notation, you now know how to compute the activation values of any layer in a neural network as a function of the parameters as well as the activations of the previous layer. So you now know how to compute the activations of any layer given the activations of the previous layer. Let's put this into an inference algorithm for a neural network; in other words, how to get a neural network to make predictions. Let's go see that in the next video.
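Written out compactly, the general rule described above is (with g the sigmoid activation function and the convention that the input vector x is also called a superscript square bracket zero):

```latex
a_j^{[l]} \;=\; g\!\left(\vec{w}_j^{[l]} \cdot \vec{a}^{[l-1]} + b_j^{[l]}\right),
\qquad
g(z) \;=\; \frac{1}{1 + e^{-z}},
\qquad
\vec{a}^{[0]} \;=\; \vec{x}
```

For instance, for layer three, unit two, this reads a superscript [3] subscript 2 equals g of w superscript [3] subscript 2, dot product with a superscript [2], plus b superscript [3] subscript 2, exactly as in the quiz example.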
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=L1fsfAbq-q8
4.7 Neural Networks Model |Inference : making predictions (forward propagation)- [ML- Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
Let's take what we've learned and put it together into an algorithm to let your neural network make inferences or make predictions. This will be an algorithm called forward propagation. Let's take a look. I'm going to use as a motivating example handwritten digit recognition, and for simplicity, we're just going to distinguish between the handwritten digits 0 and 1. So it's just a binary classification problem, where we're going to input an image and classify: is this the digit 0 or the digit 1? You get to play with this yourself later this week in the practice lab as well. For the example on this slide, I'm going to use an 8 by 8 image, and so this image of a 1 is a grid or matrix of 8 by 8, or 64, pixel intensity values, where 255 denotes a bright white pixel, 0 denotes a black pixel, and different numbers are different shades of gray in between black and white. Given these 64 input features, we're going to use a neural network with two hidden layers, where the first hidden layer has 25 neurons or 25 units, the second hidden layer has 15 neurons or 15 units, and then finally the output layer outputs what's the chance of this being a 1 versus a 0. So let's step through the sequence of computations that the neural network would need to make to go from the input x, this 8 by 8 image or 64 numbers, to the predicted probability a3. The first computation is to go from x to a1, and that's what the first layer, or the first hidden layer, does. It carries out the computation a superscript square bracket 1 equals this formula on the right. Notice that a1 has 25 numbers, because this hidden layer has 25 units, which is why the parameters go from w1 through w25, as well as b1 through b25. And I've written x here, but I could also have written a0 here, because by convention the activation of layer 0, that is a0, is equal to the input feature vector x. So that lets us compute a1. The next step is to compute a2.
Looking at the second hidden layer, it then carries out this computation where a2 is a function of a1 and is computed as the sigmoid activation function applied to w dot product a1 plus the corresponding value of b. Notice that layer 2 has 15 neurons, so 15 units, which is why the parameters here run from w1 through w15 and b1 through b15. Now we've computed a2. The final step is then to compute a3. And we do so using a very similar computation. Only now this third layer, the output layer, has just one unit, which is why there's just one output here. So a3 is just a scalar. And finally, you can optionally take a3 subscript 1 and threshold it at 0.5 to come up with a binary classification label. Is this the digit 1? Yes or no? So the sequence of computations first takes x and then computes a1 and then computes a2 and then computes a3, which is also the output of the neural network. You can also write that as f of x. So remember when we learned about linear regression and logistic regression, we used f of x to denote the outputs of linear regression or logistic regression. So we can also use f of x to denote the function computed by the neural network as a function of x. Because this computation goes from left to right, you start from x, then compute a1, then a2, then a3, this algorithm is also called forward propagation because you're propagating the activations of the neurons. So you're making these computations in the forward direction from left to right. And this is in contrast to a different algorithm called backward propagation or back propagation, which is used for learning. And that's something you learn about next week. And by the way, this type of neural network architecture where you have more hidden units initially, and then the number of hidden units decreases as you get closer to the output layer, that's also a pretty typical choice when choosing neural network architectures. And you see more examples of this in the practice lab as well. 
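The left-to-right sequence described above (x, then a1, then a2, then a3 = f(x)) can be sketched in a few lines of NumPy. The parameters here are random and untrained; they exist only to show the shapes of the 64 -> 25 -> 15 -> 1 network and the order of the computations.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dense(a_in, W, b):
    # Column j of W is unit j's weight vector w_j.
    return sigmoid(a_in @ W + b)

# Random, untrained parameters, only to show the shapes:
# 64 inputs -> 25 units -> 15 units -> 1 output unit.
W1, b1 = 0.1 * rng.standard_normal((64, 25)), np.zeros(25)
W2, b2 = 0.1 * rng.standard_normal((25, 15)), np.zeros(15)
W3, b3 = 0.1 * rng.standard_normal((15, 1)),  np.zeros(1)

x = rng.random(64)                 # stand-in for the flattened 8x8 image
a1 = dense(x,  W1, b1)             # a^[1]: 25 activations
a2 = dense(a1, W2, b2)             # a^[2]: 15 activations
a3 = dense(a2, W3, b3)             # a^[3] = f(x): the predicted probability
y_hat = 1 if a3[0] >= 0.5 else 0   # optional thresholding: is this a 1?
```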
So that's neural network inference using the forward propagation algorithm. And with this, you'd be able to download the parameters of a neural network that someone else had trained and posted on the internet. And you'd be able to carry out inference on your new data using their neural network. Now that you've seen the math and the algorithm, let's take a look at how you can actually implement this in TensorFlow. Specifically, let's take a look at this in the next video.
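As a sketch of the "download someone else's trained parameters and run inference" idea, here is one way to round-trip parameters through a file with NumPy's `savez`/`load`. The file name, the tiny 4 -> 3 -> 1 network, and its random parameter values are all made up for illustration.

```python
import os
import tempfile
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    # Forward propagation: pass the input through each (W, b) layer in turn.
    a = x
    for W, b in layers:
        a = sigmoid(a @ W + b)
    return a

# Pretend these are parameters someone else trained and posted online
# (here: a made-up 4 -> 3 -> 1 network with arbitrary values).
rng = np.random.default_rng(1)
saved = {"W1": rng.standard_normal((4, 3)), "b1": np.zeros(3),
         "W2": rng.standard_normal((3, 1)), "b2": np.zeros(1)}

path = os.path.join(tempfile.mkdtemp(), "downloaded_params.npz")
np.savez(path, **saved)

# "Download" the parameters, then carry out inference on new data.
loaded = np.load(path)
layers = [(loaded["W1"], loaded["b1"]), (loaded["W2"], loaded["b2"])]
prob = forward(rng.random(4), layers)
```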
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=VctGI7Xaogw
4.8 TensorFlow implementation | Inference in Code --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
TensorFlow is one of the leading frameworks for implementing deep learning algorithms. When I'm building projects, TensorFlow is actually the tool that I use the most often, and the other popular tool is PyTorch. But we're going to focus in this specialization on TensorFlow. In this video, let's take a look at how you can implement inference in code using TensorFlow. Let's dive in. One of the remarkable things about neural networks is that the same algorithm can be applied to so many different applications. So in order, both for this video and in some of the labs, for you to see what a neural network is doing, I'm going to use another example to illustrate inference. Sometimes I do like to roast coffee beans myself at home. My favorite is actually Colombian coffee beans. So can a learning algorithm help optimize the quality of the beans you get from a roasting process like this? When you're roasting coffee, two parameters you get to control are the temperature at which you're heating up the raw coffee beans to turn them into nicely roasted coffee beans, as well as the duration, or how long you're going to roast the beans. And in this slightly simplified example, we've created a data set of different temperatures and different durations, as well as labels showing whether the coffee you roasted is good-tasting coffee, where a cross here, the positive class, y equals 1, corresponds to good coffee, and an O, the negative class, corresponds to bad coffee. A reasonable way to think of this data set is: if you cook it at too low a temperature, it doesn't get roasted and ends up undercooked. If you don't cook it for long enough, so the duration is too short, you also don't end up with a nicely roasted set of beans. And finally, if you were to cook it either for too long or at too high a temperature, then you end up with overcooked beans, a little bit burnt. And so that's not good coffee either.
Only points within this little triangle here correspond to good coffee. This example is simplified a bit from actual coffee roasting, but even though it's a simplified one for the purpose of illustration, there have actually been serious projects using machine learning to optimize coffee roasting as well. So the task is: given a feature vector x with both temperature and duration, say 200 degrees Celsius for 17 minutes, how can we do inference in a neural network to get it to tell us whether or not this temperature and duration setting will result in good coffee? It looks like this. We're going to set x to be an array of two numbers, the input features 200 degrees Celsius and 17 minutes. And this here, layer_1 = Dense(units=3, activation='sigmoid'), creates a hidden layer of neurons with three hidden units, using as the activation function the sigmoid function, and Dense here is just the name of this layer. And then finally, to compute the activation values a1, you would write a1 = layer_1(x), that is, layer 1 applied to the input features x. So you create layer 1 as the first hidden layer of the neural network as Dense, open parenthesis, units=3, which means three units or three hidden units in this layer, using the sigmoid function as the activation function. Dense is another name for the type of layer of a neural network that we've learned about so far. As you learn more about neural networks, you'll learn about other types of layers as well, but for now we'll just use the dense layer, which is the layer type you've learned about in the last few videos, for all of our examples. So next, you compute a1 by taking layer_1, which is actually a function, and applying this function to the values of x. That's how you get a1, which is going to be a list of three numbers, because layer 1 has three units. And so a1 here might, just for the sake of illustration, be [0.2, 0.7, 0.3].
Next, for the second hidden layer, layer 2 would be Dense of, now, this time, just one unit, and again the sigmoid activation function. You can then compute a2 by applying this layer_2 function to the activation values from layer 1, that is, to a1. And that will give you the value of a2, which for the sake of illustration is maybe 0.8. Finally, if you wish to threshold it at 0.5, then you can just test if a2 is greater than or equal to 0.5 and set y-hat equal to 1 or 0, the positive or negative class, accordingly. So that's how you do inference in the neural network using TensorFlow. There are some additional details that I didn't go over here, such as how to load the TensorFlow library and how to also load the parameters w and b of the neural network, but we'll go over that in the lab, so please be sure to take a look at the lab. These are the key steps for forward propagation: how you compute a1 and a2, and optionally threshold a2. Let's look at one more example, and we're going to go back to the handwritten digit classification problem. In this example, x is a list of the pixel intensity values, so x is equal to a NumPy array of this list of pixel intensity values. Then, to initialize and carry out one step of forward propagation, layer 1 is a dense layer with 25 units and a sigmoid activation function, and you then compute a1 equals the layer_1 function applied to x. To build and carry out inference through the second layer, similarly, you set up layer 2 as follows and then compute a2 as layer_2 applied to a1. And then finally, layer 3 is the third and final dense layer. Finally, you can optionally threshold a3 to come up with a binary prediction for y-hat. So that's the syntax for carrying out inference in TensorFlow. One thing I briefly alluded to is the structure of the NumPy arrays. TensorFlow treats data in a certain way that is important to get right.
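To make concrete the point that layer_1 is actually a function you apply to x, here is a tiny stand-in class that mimics the call style described above using only NumPy. Note that this `Dense` class and its parameter values are my own hypothetical sketch for illustration, not TensorFlow's actual `Dense` layer, whose real parameters would come from training.

```python
import numpy as np

class Dense:
    """Minimal stand-in that mimics layer = Dense(...); a = layer(x)."""
    def __init__(self, units, activation, W, b):
        # Only the sigmoid activation is sketched here.
        assert activation == "sigmoid" and W.shape[1] == units
        self.W, self.b = W, b

    def __call__(self, a_in):
        # Applying the layer as a function computes its activations.
        return 1.0 / (1.0 + np.exp(-(a_in @ self.W + self.b)))

# Input features: 200 degrees Celsius for 17 minutes, as a 1 by 2 matrix.
x = np.array([[200.0, 17.0]])

# Made-up illustrative parameters (real ones would be learned).
layer_1 = Dense(units=3, activation="sigmoid",
                W=np.array([[0.02, -0.05, 0.01],
                            [0.30,  0.60, -0.20]]),
                b=np.array([-5.0, 1.0, 0.5]))
layer_2 = Dense(units=1, activation="sigmoid",
                W=np.array([[1.5], [-2.0], [0.8]]),
                b=np.array([0.1]))

a1 = layer_1(x)                      # shape (1, 3)
a2 = layer_2(a1)                     # shape (1, 1)
yhat = 1 if a2[0, 0] >= 0.5 else 0   # optional 0.5 threshold
```

The same calling pattern extends to the three-layer digit network: each layer object, once created, is simply called on the previous layer's activations.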
So in the next video, let's take a look at how TensorFlow handles data.
https://www.youtube.com/watch?v=Q3cLU1trK_E
4.9 TensorFlow implementation | Data in TensorFlow --[Machine Learning | Andrew Ng]
In this video, I want to step through with you how data is represented in NumPy and in TensorFlow, so that as you're implementing new neural networks, you can have a consistent framework to think about how to represent your data. One of the unfortunate things about the way things are done in code today is that, many, many years ago, NumPy was first created and became a standard library for linear algebra in Python. And then, much later, the Google Brain team, the team that I had started and once led, created TensorFlow. So unfortunately, there are some inconsistencies between how data is represented in NumPy and in TensorFlow. It's good to be aware of these conventions so that you can implement correct code and hopefully get things running in your neural networks. Let's start by taking a look at how TensorFlow represents data. Let's say you have a data set like this from the coffee example. I mentioned that you would write x as follows. So why do you have this double square bracket here? Let's take a look at how NumPy stores vectors and matrices. In case you think matrices and vectors are complicated mathematical concepts, don't worry about it. We'll go through a few concrete examples, and you'll be able to do everything you need to do with matrices and vectors in order to implement neural networks. Let's start with an example of a matrix. Here is a matrix with two rows and three columns. Notice that there are one, two rows and one, two, three columns. So we call this a 2 by 3 matrix. The convention is that the dimension of the matrix is written as the number of rows by the number of columns. So in code, to store this matrix, this 2 by 3 matrix, you just write x equals np.array of these numbers like this, where you notice that the square brackets tell you that 1, 2, 3 is the first row of this matrix and 4, 5, 6 is the second row of this matrix, and then this outer square bracket groups the first and the second row together.
So this sets x to be this 2D array of numbers. A matrix is just a 2D array of numbers. Let's look at one more example. Here I've written out another matrix. How many rows and how many columns does this have? Well, we count this as one, two, three, four rows, and it has one, two columns. So this is a number-of-rows by number-of-columns matrix; it's a 4 by 2 matrix. To store this in code, you would write x equals np.array and then this syntax over here to store these four rows of the matrix in the variable x. So this creates a 2D array of these eight numbers. Matrices can have different dimensions. We saw an example of a 2 by 3 matrix and a 4 by 2 matrix. A matrix can also be other dimensions, like 1 by 2 or 2 by 1, and we'll see examples of these on the next slide. So what we did previously, when setting x to be the input feature vector, was set x to be equal to np.array with two square brackets, 200 comma 17. And what that does is create a 1 by 2 matrix, that is, just one row and two columns. Let's look at a different example. If you were to define x to be np.array, but now written like this, this creates a 2 by 1 matrix that has two rows and one column, because the first row is just the number 200 and the second row is just the number 17. So this has the same numbers, but in a 2 by 1 instead of a 1 by 2 matrix. The example on top is also called a row vector: it's a vector that is just a single row. And this example is also called a column vector, because it's a vector that just has a single column. The difference between using double square brackets like this versus a single square bracket like this is that, whereas the two examples on top are 2D arrays where one of the dimensions happens to be 1, this example results in a 1D vector. So this is just a 1D array that has no rows or columns, although by convention we may write x as a column like this.
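The examples just described can be checked directly in NumPy; `.shape` reports (rows, columns) for 2D arrays and a single length for 1D arrays:

```python
import numpy as np

# A 2 by 3 matrix: 2 rows, 3 columns.
x_2x3 = np.array([[1, 2, 3],
                  [4, 5, 6]])

# A 4 by 2 matrix: 4 rows, 2 columns.
x_4x2 = np.array([[1, 2],
                  [3, 4],
                  [5, 6],
                  [7, 8]])

# Double square brackets: a 1 by 2 matrix, also called a row vector.
x_row = np.array([[200, 17]])

# A 2 by 1 matrix, also called a column vector.
x_col = np.array([[200],
                  [17]])

# A single square bracket: a 1D array with no rows or columns.
x_1d = np.array([200, 17])

print(x_2x3.shape)  # (2, 3)
print(x_4x2.shape)  # (4, 2)
print(x_row.shape)  # (1, 2)
print(x_col.shape)  # (2, 1)
print(x_1d.shape)   # (2,)
```

Note how the 1D array's shape, (2,), has only one entry, while the row and column vectors are full 2D matrices.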
Now let's contrast this with what we had previously done in the first course, which was to write x like this with a single square bracket. That resulted in what's called, in Python, a 1D vector instead of a 2D matrix. This technically is not 1 by 2 or 2 by 1; it's just a linear array with no rows or columns, just a list of numbers. So whereas in course one, when we were working with linear regression and logistic regression, we used these 1D vectors to represent the input features x, with TensorFlow the convention is to use matrices to represent the data. And why is there this switch in conventions? Well, it turns out that TensorFlow was designed to handle very large datasets, and by representing the data in matrices instead of 1D arrays, it lets TensorFlow be a bit more computationally efficient internally. So going back to our original example, for the first training example in this dataset, with features 200 degrees Celsius and 17 minutes, we would represent it like this. So this is actually a 1 by 2 matrix that happens to have one row and two columns to store the numbers 200 and 17. And in case this seems like a lot of details and really complicated conventions, don't worry about it. All this will become clearer, and you'll get to see concrete implementations of the code yourself in the optional labs and in the practice labs. Going back to the code for carrying out forward propagation or inference in the neural network, when you compute a1 equals layer_1 applied to x, what is a1? Well, because there are three numbers, a1 is actually going to be a 1 by 3 matrix. And if you print out a1, you will get something like this: tf.Tensor([[0.2, 0.7, 0.3]], shape=(1, 3), ...), where (1, 3) refers to this being a 1 by 3 matrix.
And float32 is TensorFlow's way of saying that this is a floating-point number, meaning a number that can have a decimal point, represented using 32 bits of memory in your computer. That's what a float32 is. And what is a tensor? A tensor here is a data type that the TensorFlow team created in order to store and carry out computations on matrices efficiently. So whenever you see tensor, just think of it as a matrix, at least on these few slides. Technically, a tensor is a little bit more general than a matrix, but for the purposes of this course, think of a tensor as just a way of representing matrices. So remember, I said at the start of this video that there's the TensorFlow way of representing a matrix and the NumPy way of representing a matrix. This is an artifact of the history of how NumPy and TensorFlow were created, and unfortunately, two ways of representing a matrix have been baked into these systems. In fact, if you have a1, which is a tensor, and want to convert it back to a NumPy array, you can do so with the function a1.numpy(). It will take the same data and return it in the form of a NumPy array, rather than in the form of a TensorFlow tensor. Now let's take a look at what the activations output by the second layer would look like. Here's the code that we had from before. Layer 2 is a dense layer with one unit and a sigmoid activation, and a2 is computed by taking layer_2 and applying it to a1. So what is a2? a2 may be a number like 0.8, and technically this is a 1 by 1 matrix: a 2D array with one row and one column, equal to this number 0.8. If you print out a2, you see that it is a TensorFlow tensor with just one element, the number 0.8, and it is a 1 by 1 matrix. Again, it is a float32, a decimal-point number taking up 32 bits in computer memory.
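The 1 by 1, float32 structure described here can be illustrated without TensorFlow installed, since NumPy uses the same shape and dtype conventions (the value 0.8 is just the illustration from the lecture):

```python
import numpy as np

# a2 as a 1 by 1 matrix: a 2D array with one row and one column,
# stored as 32-bit floating-point numbers.
a2 = np.array([[0.8]], dtype=np.float32)

print(a2.shape)  # (1, 1)
print(a2.dtype)  # float32

# Pulling the single number out of the 1 by 1 matrix:
value = float(a2[0, 0])
```

So even a single scalar output from a dense layer still arrives wrapped as a 2D, one-row, one-column array.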
Once again you can convert from a TensorFlow tensor to a NumPy array using a2.numpy(), and that will turn this back into a NumPy array that looks like this. So that hopefully gives you a sense of how data is represented in TensorFlow and in NumPy. You may be used to loading and manipulating data in NumPy, but when you pass a NumPy array into TensorFlow, TensorFlow likes to convert it to its own internal format, the tensor, and then operate efficiently using tensors. And when you read the data back out, you can keep it as a tensor or convert it back to a NumPy array. I think it's a bit unfortunate that the history of how these libraries evolved has led us to do this extra conversion work, when actually the two libraries can work quite well together. But whether you're using a NumPy array or a tensor, converting back and forth is just something to be aware of when you're writing code. Next, let's take what we've learned and put it together to actually build a neural network. Let's go see that in the next video.
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=8GAQXo6SRws
4.10 TensorFlow implementation | Building a neural network --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
So you've seen a bunch of TensorFlow code by now: you've learned how to build a layer in TensorFlow, how to do forward prop through a single layer in TensorFlow, and also about data in TensorFlow. Let's put it all together and talk about how to build a neural network in TensorFlow. This is also the last video on TensorFlow for this week, and in this video you'll also learn about a different way of building a neural network that will be even a little bit simpler than what you've seen so far. So let's dive in. What you saw previously was that if you want to do forward prop, you initialize the data x, create layer 1 like so, then compute a1, then create layer 2, and compute a2. So this was an explicit way of carrying out forward prop one layer of computation at a time. It turns out that TensorFlow has a different way of implementing forward prop as well as learning. Let me show you a different way of building a neural network in TensorFlow: same as before, you're going to create layer 1 and create layer 2. But now, instead of you manually taking the data and passing it to layer 1, and then taking the activations from layer 1 and passing them to layer 2, we can instead tell TensorFlow that we would like it to take layer 1 and layer 2 and string them together to form a neural network. That's what the Sequential function in TensorFlow does: it says, dear TensorFlow, please create a neural network for me by sequentially stringing together these two layers that I just created. It turns out that with the Sequential framework, TensorFlow can do a lot of work for you. Let's say you have a training set like this on the left; this is for the coffee example. You can then take the training data's inputs X and put them into a NumPy array. This here is a four by two matrix, and the target labels Y can then be written as follows: this is just a 1D array of four numbers.
Y, the set of targets, can then be stored as a 1D array like this, 1 0 0 1, corresponding to the four training examples. It turns out that given the data X and Y stored in this matrix X and this array Y, if you want to train this neural network, all you need to do is call two functions. You need to call model.compile with some parameters; we'll talk more about this next week, so don't worry about it for now. And then you need to call model.fit(X, Y), which tells TensorFlow to take this neural network that it created by sequentially stringing together layers 1 and 2, and to train it on the data X and Y. We'll learn the details of how to do this next week. And then finally, how do you do inference on this neural network? How do you do forward prop? If you have a new example, say X_new, which is an np.array with these two features, then to carry out forward prop, instead of having to do it one layer at a time yourself, you just have to call model.predict on X_new, and this will output the corresponding value of a2 for you, given this input value of x. So model.predict carries out forward propagation, or inference, for you using this neural network that you compiled using the Sequential function. Now, I want to take these three lines of code on top and simplify them a little bit further: when coding in TensorFlow, by convention, we don't explicitly assign the two layers to two variables layer 1 and layer 2 as follows; instead, I would usually just write the code like this. We say the model is a sequential model of a few layers strung together sequentially, where the first layer, layer 1, is a dense layer with three units and a sigmoid activation, and the second layer is a dense layer with one unit and again a sigmoid activation function. So if you look at other people's TensorFlow code, you'll often see it look more like this, rather than having an explicit assignment to these layer 1 and layer 2 variables. And so that's it.
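Putting those pieces together, a minimal sketch of the coffee-classification workflow might look like the following. The feature values, epoch count, and compile settings here are illustrative assumptions (the course covers compile's parameters next week):

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# Training data: four examples of (temperature, duration), 1D targets
X = np.array([[200.0, 17.0],
              [120.0,  5.0],
              [425.0, 20.0],
              [212.0, 18.0]])
Y = np.array([1, 0, 0, 1])

# Sequentially string two dense layers together into a network
model = Sequential([
    Dense(3, activation='sigmoid'),   # layer 1
    Dense(1, activation='sigmoid'),   # layer 2
])

# Compile and fit (loss and epochs are placeholder choices here)
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(X, Y, epochs=10, verbose=0)

# Inference: forward prop on a new example via model.predict
X_new = np.array([[200.0, 17.0]])
a2 = model.predict(X_new, verbose=0)
print(a2.shape)  # (1, 1) -- a single sigmoid output per example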
This is pretty much the code you need in order to train, as well as do inference on, a neural network in TensorFlow. Again, we'll talk more about the training bits of this, the compile and the fit functions, next week. Let's redo this for the digit classification example as well. So previously we had x as the input, layer 1 as a layer, a1 equals layer 1 applied to x, and so on through layer 2 and layer 3 in order to try to classify a digit. With this new coding convention, using TensorFlow's Sequential function, you can instead specify what layer 1, layer 2, and layer 3 are, and tell TensorFlow to string the layers together for you into a neural network. And same as before, you can then store the data in a matrix and run the compile function and fit the model as follows. Again, more on this next week. And finally, to do inference or to make predictions, you can use model.predict on X_new. And similar to what you saw before with the coffee classification network, by convention, instead of assigning layer 1, layer 2, and layer 3 explicitly like this, we would more commonly just take these layers and put them directly into the Sequential function, to end up with this more compact code where you just tell TensorFlow: create a model for me that sequentially strings together these three layers. Then the rest of the code works the same as before. So that's how you would build a neural network in TensorFlow. Now I know that when you're learning about these techniques, sometimes someone may ask you to, hey, implement these five lines of code, and then you type five lines of code, and then someone says, congratulations, with just five lines of code you've built this crazy complicated state-of-the-art neural network. And sometimes that makes you wonder, what exactly did I do with just these five lines of code?
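The compact convention for the three-layer digit network can be sketched like this, with the layers created inline rather than assigned to separate variables first (the unit counts 25, 15, and 1 are assumptions for illustration):

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# Layers go directly into Sequential instead of layer_1, layer_2, layer_3
model = Sequential([
    Dense(units=25, activation='sigmoid'),
    Dense(units=15, activation='sigmoid'),
    Dense(units=1, activation='sigmoid'),
])

print(len(model.layers))  # 3
```

The compile, fit, and predict calls then work exactly as in the coffee example.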
One thing I want you to take away from the machine learning specialization is the ability to use cutting-edge libraries like TensorFlow to do your work efficiently. But I don't want you to just call five lines of code without also knowing what the code is actually doing underneath the hood. So in the next video, I'd like to go back and share with you how you can implement forward propagation from scratch by yourself in Python, so that you can understand the whole thing for yourself. In practice, most machine learning engineers don't actually implement forward prop in Python that often; we just use libraries like TensorFlow and PyTorch. But because I want you to understand how these algorithms work yourself, so that if something goes wrong you can think through for yourself what you might need to change, what's likely to work, and what's less likely to work, let's also go through what it would take for you to implement forward propagation from scratch. That way, even when you're calling a library and having it run efficiently and do great things in your application, I want you, in the back of your mind, to also have that deeper understanding of what your code is actually doing. So with that, let's go on to the next video.
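As a small preview of that from-scratch implementation, forward prop for this kind of network can be sketched in plain NumPy as follows. The function names, loop structure, and weight values here are illustrative assumptions, not the course's exact code:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def dense(a_in, W, b):
    """Compute one dense layer's activations, one unit at a time."""
    units = W.shape[1]
    a_out = np.zeros(units)
    for j in range(units):
        z = np.dot(a_in, W[:, j]) + b[j]
        a_out[j] = sigmoid(z)
    return a_out

def sequential_forward(x, params):
    """String dense layers together: each layer's output feeds the next."""
    a = x
    for W, b in params:
        a = dense(a, W, b)
    return a

# Made-up weights for a 2-3-1 network (a real network would learn these)
W1 = np.array([[0.1, -0.2, 0.3], [0.4, 0.2, -0.1]])
b1 = np.array([-0.5, 0.5, 0.0])
W2 = np.array([[0.7], [-0.3], [0.2]])
b2 = np.array([0.1])

a2 = sequential_forward(np.array([200.0, 17.0]), [(W1, b1), (W2, b2)])
print(a2)  # a single sigmoid output between 0 and 1
```

Each call to dense is the per-layer computation that a library hides from you; sequential_forward plays the role of stringing the layers together.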
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=ydKVT95-Ufo
4.11 Neural network implementation in Python | Forward prop in a single layer-[ML| Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
If you had to implement forward propagation yourself from scratch in Python, how would you go about doing so? In addition to gaining intuition about what's really going on in libraries like TensorFlow and PyTorch, if ever someday you decide you want to build something even better than TensorFlow and PyTorch, maybe now you'd have a better idea how. I don't really recommend doing this for most people, but maybe someday someone will come up with an even better framework than TensorFlow and PyTorch, and whoever does that may end up having to implement these things from scratch themselves. So let's take a look. On this slide, I'm going to go through quite a bit of code, and you'll see all this code again later in the optional lab as well as in the practice lab. So don't worry about having to take notes on every line of code or memorize every line of code. You'll see this code written down in the Jupyter Notebook in the lab, and the goal of this video is just to show you the code to make sure you can understand what it's doing, so that when you go to the optional lab and the practice lab and see the code there, you know what to do. So don't worry about taking detailed notes on every line. If you can read through the code on this slide and understand what it's doing, that's all you need. So let's take a look at how you implement forward prop in a single layer. We're going to continue using the coffee roasting model shown here. And let's look at how you would take an input feature vector x and implement forward prop to get the output a2. In this Python implementation, I'm going to use 1D arrays to represent all of these vectors and parameters, which is why there's only a single square bracket here. This is a 1D array in Python, rather than a 2D matrix, which is what we had when we had double square brackets. So the first value you need to compute is a superscript square bracket one, subscript one, which is the first activation value of a one.
And that's g of this expression over here. So I'm going to use the convention on this slide that a term like w superscript two, subscript one, I'm going to represent as the variable w2_1, where the underscore one denotes subscript one. So w2_1 means w superscript two in square brackets and then subscript one. So to compute a1_1, we have parameters w1_1 and b1_1, which are, say, [1, 2] and negative one. You would then compute z1_1 as the dot product between that parameter w1_1 and the input x, and add it to b1_1, and then finally a1_1 is equal to g, the sigmoid function, applied to z1_1. Next, let's go on to compute a1_2, which again, by the convention I described here, is going to be written as a1_2. So similar to what we did on the left, w1_2 is your two parameters [-3, 4], and b1_2 is the term b one two over there. So you compute z as this term in the middle, and then apply the sigmoid function, and then you end up with a1_2. And finally, you do the same thing to compute a1_3. Now you've computed these three values, a1_1, a1_2, and a1_3, and we'd like to take these three numbers and group them together into an array to give you a1 up here, which is the output of the first layer. And so you do that by grouping them together using a NumPy array as follows. So now you've computed a1. Let's implement the second layer as well to compute the output a2. So a2 is computed using this expression, and so we would have parameters w2_1 and b2_1 corresponding to these parameters. You would then compute z as the dot product between w2_1 and a1, add b2_1, and then apply the sigmoid function to get a2_1, and that's it. That's how you implement forward prop using just Python and NumPy. Now, there are a lot of expressions in this page of code that you just saw.
Let's in the next video look at how you can simplify this to implement forward prop for a more general neural network, rather than hard coding it for every single neuron like we just did. So let's go see that in the next video.
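The hard-coded single-layer forward prop the lecture walks through can be sketched roughly as below. The values for w1_1, b1_1, and w1_2, and the bias vector [-1, 1, 2] are the ones quoted in the lectures; the remaining parameters and the input x are illustrative stand-ins, since the slide itself isn't reproduced in the transcript.

```python
import numpy as np

def sigmoid(z):
    # g(z) = 1 / (1 + e^(-z)), the logistic activation
    return 1 / (1 + np.exp(-z))

# input feature vector for the coffee roasting example (illustrative values)
x = np.array([0.2, 0.17])

# layer 1, unit 1: w1_1 and b1_1 match the values quoted in the lecture
w1_1 = np.array([1, 2])
b1_1 = -1
z1_1 = np.dot(w1_1, x) + b1_1
a1_1 = sigmoid(z1_1)

# layer 1, unit 2: w1_2 matches the lecture; b1_2 is from the lecture's bias array
w1_2 = np.array([-3, 4])
b1_2 = 1
z1_2 = np.dot(w1_2, x) + b1_2
a1_2 = sigmoid(z1_2)

# layer 1, unit 3: these weights are illustrative stand-ins
w1_3 = np.array([5, -6])
b1_3 = 2
z1_3 = np.dot(w1_3, x) + b1_3
a1_3 = sigmoid(z1_3)

# group the three activations into the layer-1 output (a 1D array)
a1 = np.array([a1_1, a1_2, a1_3])

# layer 2, unit 1: takes a1 as input (illustrative parameters)
w2_1 = np.array([-7, 8, 9])
b2_1 = 3
z2_1 = np.dot(w2_1, a1) + b2_1
a2 = sigmoid(z2_1)
```

As the lecture notes, every vector here is a 1D array (single square brackets), not a 2D matrix, so each z is a plain scalar dot product plus a scalar bias.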
[{"start": 0.0, "end": 7.6000000000000005, "text": " If you had to implement forward propagation yourself from scratch in Python, how would"}, {"start": 7.6000000000000005, "end": 9.72, "text": " you go about doing so?"}, {"start": 9.72, "end": 14.8, "text": " In addition to gaining intuition about what's really going on in libraries like TensorFlow"}, {"start": 14.8, "end": 20.16, "text": " and PyTorch, if ever someday you decide you want to build something even better than TensorFlow"}, {"start": 20.16, "end": 24.400000000000002, "text": " and PyTorch, maybe now you'd have a better idea how."}, {"start": 24.400000000000002, "end": 28.98, "text": " I don't really recommend doing this for most people, but maybe someday someone will come"}, {"start": 28.98, "end": 33.94, "text": " up with an even better framework than TensorFlow and PyTorch, and whoever does that may end"}, {"start": 33.94, "end": 37.38, "text": " up having to implement these things from scratch themselves."}, {"start": 37.38, "end": 39.28, "text": " So let's take a look."}, {"start": 39.28, "end": 44.28, "text": " On this slide, I'm going to go through quite a bit of code, and you'll see all this code"}, {"start": 44.28, "end": 48.66, "text": " again later in the optional lab as well as in the practice lab."}, {"start": 48.66, "end": 52.58, "text": " So don't worry about having to take notes on every line of code or memorize every line"}, {"start": 52.58, "end": 53.620000000000005, "text": " of code."}, {"start": 53.62, "end": 59.0, "text": " You see this code written down in the Jupyter Notebook in the lab, and the goal of this"}, {"start": 59.0, "end": 64.84, "text": " video is to just show you the code to make sure you can understand what it's doing so"}, {"start": 64.84, "end": 68.92, "text": " that when you go to the optional lab in the practice lab and see the code there, you know"}, {"start": 68.92, "end": 69.92, "text": " what to do."}, {"start": 69.92, "end": 72.56, "text": " So don't 
worry about taking detailed notes on every line."}, {"start": 72.56, "end": 77.08, "text": " If you can read through the code on this slide and understand what it's doing, that's all"}, {"start": 77.08, "end": 78.44, "text": " you need."}, {"start": 78.44, "end": 83.64, "text": " So let's take a look at how you implement forward prop in a single layer."}, {"start": 83.64, "end": 89.56, "text": " We're going to continue using the coffee roasting model shown here."}, {"start": 89.56, "end": 97.2, "text": " And let's look at how you would take an input feature vector x and implement forward prop"}, {"start": 97.2, "end": 100.08, "text": " to get this output a2."}, {"start": 100.08, "end": 106.6, "text": " In this Python implementation, I'm going to use one D arrays to represent all of these"}, {"start": 106.6, "end": 111.11999999999999, "text": " vectors and parameters, which is why there's only a single square bracket here."}, {"start": 111.11999999999999, "end": 116.91999999999999, "text": " This is a one D array in Python rather than a 2D matrix, which is what we had when we"}, {"start": 116.91999999999999, "end": 119.61999999999999, "text": " had double square brackets."}, {"start": 119.61999999999999, "end": 125.47999999999999, "text": " So the first value you need to compute is a superscript square bracket one, subscript"}, {"start": 125.47999999999999, "end": 130.28, "text": " one, which is the first activation value of a one."}, {"start": 130.28, "end": 133.51999999999998, "text": " And that's G of this expression over here."}, {"start": 133.52, "end": 141.20000000000002, "text": " So I'm going to use the convention on this slide that a term like w two one, I'm going"}, {"start": 141.20000000000002, "end": 148.88, "text": " to represent as a variable w two and then subscript one, this underscore one denotes"}, {"start": 148.88, "end": 149.88, "text": " subscript one."}, {"start": 149.88, "end": 156.56, "text": " So w two means w superscript two in square 
brackets and then subscript one."}, {"start": 156.56, "end": 167.48, "text": " So to compute a one one, we have parameters w one one and b one one, which are say one"}, {"start": 167.48, "end": 170.04, "text": " two and negative one."}, {"start": 170.04, "end": 178.56, "text": " You would then compute z one one as the dot product between that parameter w one one and"}, {"start": 178.56, "end": 188.2, "text": " the input X and add it to be one one and then finally a one one is equal to G the safe point"}, {"start": 188.2, "end": 192.16, "text": " function applied to z one one."}, {"start": 192.16, "end": 198.6, "text": " Next let's go on to compute a one two, which again by the convention I described here is"}, {"start": 198.6, "end": 203.1, "text": " going to be a one two written like that."}, {"start": 203.1, "end": 210.0, "text": " So similar as what we did on the left, w one two is your two parameters minus three four,"}, {"start": 210.0, "end": 213.72, "text": " b one two is the term b one two over there."}, {"start": 213.72, "end": 219.6, "text": " So you compute z as this term in the middle and then apply the safe point function and"}, {"start": 219.6, "end": 228.56, "text": " then you end up with a one two and finally you do the same thing to compute a one three."}, {"start": 228.56, "end": 236.68, "text": " Now you've computed these three values a one one, a one two and a one three and we like"}, {"start": 236.68, "end": 243.32, "text": " to take these three numbers and group them together into an array to give you a one up"}, {"start": 243.32, "end": 245.88, "text": " here, which is the output of the first layer."}, {"start": 245.88, "end": 251.84, "text": " And so you do that by grouping them together using a NumPy array as follows."}, {"start": 251.84, "end": 255.0, "text": " So now you've computed a one."}, {"start": 255.0, "end": 260.88, "text": " Let's implement the second layer as well to compute the output a two."}, {"start": 260.88, "end": 268.32, 
"text": " So a two is computed using this expression and so we would have parameters w two one"}, {"start": 268.32, "end": 275.4, "text": " and b two one corresponding to these parameters and then you would compute z as the dot product"}, {"start": 275.4, "end": 281.04, "text": " between w two one and a one and add b two one and then apply the safe point function"}, {"start": 281.04, "end": 284.96, "text": " to get a two one and that's it."}, {"start": 284.96, "end": 289.59999999999997, "text": " That's how you implement Ford prop using just Python and NumPy."}, {"start": 289.59999999999997, "end": 294.08, "text": " Now there are a lot of expressions in this page of code that you just saw."}, {"start": 294.08, "end": 298.79999999999995, "text": " Let's in the next video look at how you can simplify this to implement Ford prop for a"}, {"start": 298.79999999999995, "end": 303.4, "text": " more general neural network rather than hard coding it for every single neuron like we"}, {"start": 303.4, "end": 304.58, "text": " just did."}, {"start": 304.58, "end": 315.68, "text": " So let's go see that in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=snDJExVtMMQ
4.12 Neural network implementation in Python | General implementation of forward propagation-ML Ng
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
In the last video, you saw how to implement forward prop in Python, but by hard coding lines of code for every single neuron. Let's now take a look at the more general implementation of forward prop in Python. Similar to the previous video, my goal in this video is to show you the code so that when you see it again in the practice lab and the optional labs, you know how to interpret it. So as we walk through this example, don't worry about taking notes on every single line of code. If you can read through the code and understand it, that's definitely enough. So what you can do is write a function to implement a dense layer, that is, a single layer of a neural network. So I'm going to define the dense function, which takes as input the activation from the previous layer, as well as the parameters w and b for the neurons in a given layer. Using the example from the previous video, if layer one has three neurons, and if w1_1, w1_2, and w1_3 are these, then what we'll do is stack all of these weight vectors into a matrix. This is going to be a two by three matrix, where the first column is the parameter w1_1, the second column is the parameter w1_2, and the third column is the parameter w1_3. And then in a similar way, if you have parameters b, b1_1 equals negative 1, b1_2 equals 1, and so on, then we're going to stack these three numbers into a 1D array b as follows: negative 1, 1, 2. So what the dense function will do is take as input the activation from the previous layer, and a here could be a0, which is equal to x, or the activation from a later layer, as well as the w parameters stacked in columns, like shown on the right, as well as the b parameters also stacked into a 1D array, like shown to the left over there. And what this function will do is input a, the activation from the previous layer, and output the activations for the current layer. So let's step through the code for doing this. Here's the code. First, units = W.shape[1].
So W here is a two by three matrix, and so the number of columns is three. That's equal to the number of units in this layer. So here, units would be equal to three. Looking at the shape of W is just a way of pulling out the number of hidden units, or the number of units, in this layer. Next, we set a to be an array of zeros with as many elements as there are units. So in this example, we need to output three activation values; this just initializes a to be zero, zero, zero, an array of three zeros. Next, we go through a for loop to compute the first, second, and third elements of a. So for j in range(units), j goes from zero to units minus one: it goes zero, one, two, indexing from zero in Python as usual. This command, w = W[:, j], is how you pull out the j-th column of a matrix in Python. So the first time through this loop, this will pull out the first column of W, and so will pull out w1_1. The second time through this loop, when you're computing the activation of the second unit, it'll pull out the second column, corresponding to w1_2, and so on for the third time through this loop. And then you compute z, using the usual formula, as the dot product between that parameter w and the activation that you had received, plus b[j]. And then you compute the activation a[j] equals g, the sigmoid function, applied to z. So three times through this loop, and you've computed all three values of this vector of activations a, and then finally you return a. So what the dense function does is it inputs the activations from the previous layer, and given the parameters for the current layer, it returns the activations for the next layer. So given the dense function, here's how you can string together a few dense layers sequentially in order to implement forward prop in the neural network.
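The dense function described above can be sketched like this. The loop structure follows the lecture's description; the example matrix W uses the lecture's w1_1 and w1_2 as its first two columns (the third column and the input x are illustrative), and the biases are the lecture's [-1, 1, 2].

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def dense(a_in, W, b):
    # a_in: 1D array of activations from the previous layer
    # W:    matrix whose j-th column holds the weights of unit j
    # b:    1D array with one bias per unit
    units = W.shape[1]            # number of units = number of columns of W
    a_out = np.zeros(units)       # initialize to an array of zeros
    for j in range(units):
        w = W[:, j]               # pull out the j-th column of W
        z = np.dot(w, a_in) + b[j]
        a_out[j] = sigmoid(z)
    return a_out

# 2x3 matrix: first two columns match the lecture's w1_1 and w1_2;
# the third column and the input x are illustrative
x = np.array([0.5, 0.2])
W = np.array([[1., -3., 5.],
              [2.,  4., -6.]])
b = np.array([-1., 1., 2.])
a1 = dense(x, W, b)               # activations of the current layer
```

Note that dense takes the whole parameter matrix for a layer, so the per-neuron hard coding of the previous video disappears into the loop over columns.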
Given the input features x, you can then compute the activations a1 to be a1 = dense(x, W1, b1), where here W1, b1 are the parameters, sometimes also called the weights, of the first hidden layer. Then you can compute a2 as dense of a1, which you just computed above, and W2, b2, which are the parameters or weights of the second hidden layer, and then compute a3 and a4. And if this is a neural network with four layers, then the final output f of x is just equal to a4, and so you return f of x. Notice that here I'm using a capital W, because one of the notational conventions from linear algebra is to use uppercase, or capital, letters when referring to a matrix, and lowercase to refer to vectors and scalars. So because it's a matrix, this is capital W. So that's it. You now know how to implement forward prop yourself from scratch, and you get to see all this code and run it and practice it yourself in the practice lab coming after this as well. I think that even when you're using powerful libraries like TensorFlow, it's helpful to know how it works under the hood. Because in case something goes wrong, in case something runs really slowly, or you have a strange result, or it looks like there's a bug, your ability to understand what's actually going on will make you much more effective when debugging your code. When I run machine learning algorithms, a lot of the time, frankly, it doesn't work, at least not the first time. And so I find that my ability to debug my code, be it TensorFlow code or something else, is really important to being an effective machine learning engineer. So even when you're using TensorFlow or some other framework, I hope that you find this deeper understanding useful for your own applications and for debugging your own machine learning algorithms as well. So that's it. That's the last required video of this week with code in it.
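Stringing the dense layers together as described above might look like the following sketch. The layer shapes and parameter values here are random stand-ins (in practice W1, b1 through W4, b4 would come from training); only the structure of sequential mirrors the lecture.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def dense(a_in, W, b):
    # forward prop through one layer; j-th column of W holds unit j's weights
    units = W.shape[1]
    a_out = np.zeros(units)
    for j in range(units):
        a_out[j] = sigmoid(np.dot(W[:, j], a_in) + b[j])
    return a_out

# hypothetical parameters for a four-layer network; random stand-ins
# for what training would normally produce
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((2, 3)), np.zeros(3)
W2, b2 = rng.standard_normal((3, 4)), np.zeros(4)
W3, b3 = rng.standard_normal((4, 2)), np.zeros(2)
W4, b4 = rng.standard_normal((2, 1)), np.zeros(1)

def sequential(x):
    # string the four dense layers together, layer by layer
    a1 = dense(x,  W1, b1)
    a2 = dense(a1, W2, b2)
    a3 = dense(a2, W3, b3)
    a4 = dense(a3, W4, b4)
    f_x = a4
    return f_x

f = sequential(np.array([0.2, 0.17]))   # the network's output f(x)
```

Each call to dense feeds the previous layer's activations forward, so adding or removing a layer is just adding or removing one line in sequential.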
In the next video, I'd like to dive into what I think is a fun and fascinating topic, which is what is the relationship between neural networks and AI or AGI, artificial general intelligence. This is a controversial topic, but because it's been so widely discussed, I want to share with you some thoughts on this. So when you are asked, are neural networks at all on the path to human level intelligence, you have a framework for thinking about that question. Let's go take a look at that fun topic, I think, in the next video.
[{"start": 0.0, "end": 7.96, "text": " In the last video, you saw how to implement forward prop in Python, but by hard coding"}, {"start": 7.96, "end": 10.88, "text": " lines of code for every single neuron."}, {"start": 10.88, "end": 15.88, "text": " Let's now take a look at the more general implementation of forward prop in Python."}, {"start": 15.88, "end": 20.32, "text": " Similar to the previous video, my goal in this video is to show you the code so that"}, {"start": 20.32, "end": 25.76, "text": " when you see it again in the practice lab and the optional labs, you know how to interpret"}, {"start": 25.76, "end": 26.76, "text": " it."}, {"start": 26.76, "end": 30.92, "text": " So as we walk through this example, don't worry about taking notes on every single line"}, {"start": 30.92, "end": 32.120000000000005, "text": " of code."}, {"start": 32.120000000000005, "end": 36.58, "text": " If you can read through the code and understand it, that's definitely enough."}, {"start": 36.58, "end": 42.82, "text": " So what you can do is write a function to implement a dense layer that is a single layer"}, {"start": 42.82, "end": 45.120000000000005, "text": " of a neural network."}, {"start": 45.120000000000005, "end": 50.52, "text": " So I'm going to define the dense function, which takes as input the activation from the"}, {"start": 50.52, "end": 59.24, "text": " previous layer, as well as the parameters w and b for the neurons in a given layer."}, {"start": 59.24, "end": 70.0, "text": " Using the example from the previous video, if layer one has three neurons, and if w1"}, {"start": 70.0, "end": 78.88, "text": " and w2 and w3 are these, then what we'll do is stack all of these weight vectors into"}, {"start": 78.88, "end": 79.88, "text": " a matrix."}, {"start": 79.88, "end": 89.84, "text": " This is going to be a two by three matrix, where the first column is the parameter w11,"}, {"start": 89.84, "end": 96.91999999999999, "text": " the second column is the parameter 
w12, and the third column is the parameter w13."}, {"start": 96.91999999999999, "end": 105.88, "text": " And then in a similar way, if you have parameters b, b11 equals negative 1, b12 equals 1, and"}, {"start": 105.88, "end": 113.28, "text": " so on, then we're going to stack these three numbers into one D array b as follows, negative"}, {"start": 113.28, "end": 114.96, "text": " 1, 1, 2."}, {"start": 114.96, "end": 119.8, "text": " So what the dense function will do is take as input the activation from the previous"}, {"start": 119.8, "end": 127.82, "text": " layer, and a here could be a0, which is equal to x, or the activation from a later layer,"}, {"start": 127.82, "end": 135.48, "text": " as well as the w parameters stacked in columns, like shown on the right, as well as the b"}, {"start": 135.48, "end": 143.32, "text": " parameters also stacked into one D array, like shown to the left over there."}, {"start": 143.32, "end": 150.16, "text": " And what this function will do is input a, the activation from the previous layer, and"}, {"start": 150.16, "end": 154.39999999999998, "text": " will output the activations from the columns layer."}, {"start": 154.39999999999998, "end": 157.51999999999998, "text": " So let's step through the code for doing this."}, {"start": 157.51999999999998, "end": 159.32, "text": " Here's the code."}, {"start": 159.32, "end": 163.39999999999998, "text": " First, units equals w dot shape one."}, {"start": 163.4, "end": 170.72, "text": " So w here is a two by three matrix, and so the number of columns is three."}, {"start": 170.72, "end": 173.70000000000002, "text": " That's equal to the number of units in this layer."}, {"start": 173.70000000000002, "end": 177.08, "text": " So here, units would be equal to three."}, {"start": 177.08, "end": 182.92000000000002, "text": " And looking at the shape of w, it's just a way of pulling out the number of hidden units,"}, {"start": 182.92000000000002, "end": 185.64000000000001, "text": " or the 
number of units in this layer."}, {"start": 185.64000000000001, "end": 192.20000000000002, "text": " Next, we set a to be an array of zeros with as many elements as there are units."}, {"start": 192.2, "end": 196.6, "text": " So in this example, we need to output three activation values."}, {"start": 196.6, "end": 202.92, "text": " So this just initializes a to be zero, zero, zero, an array of three zeros."}, {"start": 202.92, "end": 209.0, "text": " Next, we go through a for loop to compute the first, second, and third elements of a."}, {"start": 209.0, "end": 214.79999999999998, "text": " So for j and range units, so j goes from zero to units minus one, so it goes from zero,"}, {"start": 214.79999999999998, "end": 218.28, "text": " one, two, indexing from zero and Python as usual."}, {"start": 218.28, "end": 227.56, "text": " This command, w equals capital W colon comma j, this is how you pull out the j column of"}, {"start": 227.56, "end": 230.2, "text": " a matrix in Python."}, {"start": 230.2, "end": 236.0, "text": " So the first time through this loop, this will pull out the first column of w, and so"}, {"start": 236.0, "end": 239.12, "text": " will pull out w one one."}, {"start": 239.12, "end": 243.56, "text": " The second time through this loop, when you're computing the activation of the second unit,"}, {"start": 243.56, "end": 249.52, "text": " the pull out the second column corresponding to w one two, and so on for the third time"}, {"start": 249.52, "end": 251.44, "text": " through this loop."}, {"start": 251.44, "end": 257.0, "text": " And then you compute z using the usual formula as a thought product between that parameter"}, {"start": 257.0, "end": 263.24, "text": " w and the activation that you had received plus b j."}, {"start": 263.24, "end": 269.32, "text": " And then you compute the activation a j equals g sigmoid function applied to z."}, {"start": 269.32, "end": 274.44, "text": " So three times through this loop and you computed the values 
for all three values of this vector"}, {"start": 274.44, "end": 279.44, "text": " of activations a, and then finally you return a."}, {"start": 279.44, "end": 284.76, "text": " So what the dense function does is it inputs the activations from the previous layer and"}, {"start": 284.76, "end": 290.78, "text": " given the parameters for the current layer, it returns the activations for the next layer."}, {"start": 290.78, "end": 296.84, "text": " So given the dense function, here's how you can string together a few dense layers sequentially"}, {"start": 296.84, "end": 300.96, "text": " in order to implement forward prop in the neural network."}, {"start": 300.96, "end": 310.5, "text": " Given the input features x, you can then compute the activations a one to be a one equals dens"}, {"start": 310.5, "end": 317.71999999999997, "text": " of x w one b one, where here w one b one are the parameters, sometimes also called the"}, {"start": 317.71999999999997, "end": 321.5, "text": " weights of the first hidden layer."}, {"start": 321.5, "end": 330.0, "text": " Then you can compute a two as dens of now a one, which you just computed above and w"}, {"start": 330.0, "end": 336.08, "text": " two b two, which are the parameters or weights of this second hidden layer and then compute"}, {"start": 336.08, "end": 338.76, "text": " a three and a four."}, {"start": 338.76, "end": 345.4, "text": " And if this is a neural network with four layers, then the final output f of x is just"}, {"start": 345.4, "end": 346.8, "text": " equal to a four."}, {"start": 346.8, "end": 350.44, "text": " And so you return f of x."}, {"start": 350.44, "end": 356.28, "text": " Notice that here I'm using a capital W because one of the notational conventions from linear"}, {"start": 356.28, "end": 362.56, "text": " algebra is to use uppercase or a capital alphabets when it's referring to a matrix and lower"}, {"start": 362.56, "end": 365.9, "text": " case to refer to vectors and scalars."}, {"start": 
365.9, "end": 369.46, "text": " So because it's a matrix, this is capital W."}, {"start": 369.46, "end": 370.46, "text": " So that's it."}, {"start": 370.46, "end": 375.04, "text": " You now know how to implement forward prop yourself from scratch and you get to see all"}, {"start": 375.04, "end": 381.20000000000005, "text": " this code and run it and practice it yourself in the practice lab coming after this as well."}, {"start": 381.20000000000005, "end": 385.88, "text": " I think that even when you're using powerful libraries like TensorFlow, it's helpful to"}, {"start": 385.88, "end": 388.32000000000005, "text": " know how it works under the hood."}, {"start": 388.32000000000005, "end": 392.76, "text": " Because in case something goes wrong, in case something runs really slowly or you have a"}, {"start": 392.76, "end": 398.08000000000004, "text": " strange result, or it looks like there's a bug, your ability to understand what's actually"}, {"start": 398.08000000000004, "end": 402.24, "text": " going on will make you much more effective when debugging your code."}, {"start": 402.24, "end": 406.84000000000003, "text": " When I run machine learning algorithms a lot of the time, frankly, it doesn't work."}, {"start": 406.84000000000003, "end": 408.64, "text": " So if you're not the first time."}, {"start": 408.64, "end": 414.16, "text": " And so I find that my ability to debug my code, be it TensorFlow code or something else,"}, {"start": 414.16, "end": 418.36, "text": " is really important to being an effective machine learning engineer."}, {"start": 418.36, "end": 423.76, "text": " So even when you're using TensorFlow or some other framework, I hope that you find this"}, {"start": 423.76, "end": 429.92, "text": " deeper understanding useful for your own applications and for debugging your own machine learning"}, {"start": 429.92, "end": 432.6, "text": " algorithms as well."}, {"start": 432.6, "end": 434.3, "text": " So that's it."}, {"start": 434.3, "end": 
438.08000000000004, "text": " That's the last required video of this week with code in it."}, {"start": 438.08000000000004, "end": 442.68, "text": " In the next video, I'd like to dive into what I think is a fun and fascinating topic, which"}, {"start": 442.68, "end": 449.0, "text": " is what is the relationship between neural networks and AI or AGI, artificial general"}, {"start": 449.0, "end": 450.64, "text": " intelligence."}, {"start": 450.64, "end": 455.16, "text": " This is a controversial topic, but because it's been so widely discussed, I want to share"}, {"start": 455.16, "end": 457.6, "text": " with you some thoughts on this."}, {"start": 457.6, "end": 464.54, "text": " So when you are asked, are neural networks at all on the path to human level intelligence,"}, {"start": 464.54, "end": 468.08000000000004, "text": " you have a framework for thinking about that question."}, {"start": 468.08, "end": 488.47999999999996, "text": " Let's go take a look at that fun topic, I think, in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=nxf7N2ZMdlI
4.13 Speculations on artificial general intellignece (AGI) | Is there a path to AGI ? - ML Andrew Ng
None
Ever since I was a teenager starting to play around with neural networks, I always felt that the dream of maybe someday building an AI system that's as intelligent as myself, or as intelligent as a typical human, was one of the most inspiring dreams of AI. I still hold that dream alive today, but I think the path to get there is not clear and could be very difficult. I don't know whether it'll take us mere decades, and whether we'll see breakthroughs within our lifetime, or whether it may take centuries or even longer to get there. But let's take a look at what this AGI, artificial general intelligence, dream is like, and speculate a bit on what might be possible paths, unclear and difficult paths, to get there someday. I think there's been a lot of unnecessary hype about AGI, or artificial general intelligence. And maybe one reason for that is that AI actually includes two very different things. One is ANI, which stands for artificial narrow intelligence. This is an AI system that does one thing, a narrow task, sometimes really, really well, and can be incredibly valuable, such as a smart speaker, or a self-driving car, or web search, or AI applied to specific applications such as farming or factories. Over the last several years, ANI has made tremendous progress, and is creating, as you know, tremendous value in the world today. Because ANI is a subset of AI, the rapid progress in ANI makes it logically true that AI has also made tremendous progress in the last decade. There's a different idea in AI, which is AGI, artificial general intelligence: the hope of building AI systems that could do anything a typical human can do. And despite all the progress in ANI, and therefore tremendous progress in AI, I'm not sure how much progress, if any, we're really making toward AGI.
And I think all the progress in ANI has made people conclude, correctly, that there's tremendous progress in AI, but that has caused some people to conclude, I think incorrectly, that a lot of progress in AI necessarily means that there's a lot of progress toward AGI. So if you are ever asked about AI and AGI, sometimes you might find drawing this picture useful for explaining some of the things going on in AI, as well as some of the sources of unnecessary hype about AGI. With the rise of modern deep learning, we started to simulate neurons, and with faster and faster computers and even GPUs, we could simulate even more neurons. So I think there was this vague hope many years ago that, boy, if only we could simulate a lot of neurons, then we could simulate the human brain, or something like a human brain, and have really intelligent systems, right? Sadly, it's turned out not to be quite as simple as that. I think there are two reasons for this. First, if you look at the artificial neural networks we're building, they are so simple that a logistic regression unit is really nothing like what any biological neuron is doing. It's so much simpler than what any neuron in your brain or mine is doing. And second, even to this day, I think we have almost no idea how the brain works. There are still fundamental questions about how exactly a neuron maps from inputs to outputs that we just don't know today. So trying to simulate that in a computer, much less with a single logistic function, is just so far from an accurate model of what the human brain actually does. Given our very limited understanding, both now and probably for the near future, of how the human brain works, I think just trying to simulate the human brain as a path to AGI will be an incredibly difficult path. Having said that, is there any hope of seeing breakthroughs in AGI within our lifetimes? Let me share with you some evidence that helps me keep that hope alive, at least for myself. 
There have been some fascinating experiments done on animals that show, or strongly suggest, that the same piece of biological brain tissue can do a surprisingly wide range of tasks. And this has led to the one learning algorithm hypothesis: that maybe a lot of intelligence could be due to one, or a small handful of, learning algorithms. And if only we could figure out what that one or small handful of algorithms are, we may be able to implement that in a computer someday. Let me share with you some details of those experiments. This is a result due to Roe et al. from many decades ago. The part of your brain shown here is your auditory cortex, and your brain is wired to feed signals from your ears, in the form of electrical impulses depending on what sound your ear is detecting, to that auditory cortex. It turns out that if you were to rewire an animal brain to cut the wire between the ear and the auditory cortex, and instead feed in images to the auditory cortex, then the auditory cortex learns to see. Auditory refers to sound, and so this piece of the brain, which in most people learns to hear, when it is fed different data instead learns to see. Here's another example. This part of your brain is your somatosensory cortex. Somatosensory refers to touch processing. If you were to similarly rewire the brain to cut the connection from the touch sensors to that part of the brain, and instead rewire the brain to feed in images, then the somatosensory cortex learns to see. So there's been a sequence of experiments like this showing that many different parts of the brain, just depending on what data they are given, can learn to see, or learn to feel, or learn to hear, as if there was one, maybe one, algorithm that, just depending on what data it is given, learns to process that input accordingly. 
There have been systems built which take a camera, maybe mounted on someone's forehead, and map it to a pattern of voltages in a grid on someone's tongue. By mapping a grayscale image to a pattern of voltages on the tongue, this can help people that are not sighted, blind individuals, learn to see with their tongue. Or there have been fascinating experiments with human echolocation, or human sonar. Animals like dolphins and bats use sonar to see, and researchers have found that if you train humans to make clicking sounds and listen to how that bounces off surroundings, humans can sometimes learn some degree of human echolocation. Or this is a haptic belt, and my research lab at Stanford once built something like this as well: if you mount a ring of buzzers around your waist and program it using a magnetic compass so that, say, the buzzer facing the northmost direction is always vibrating slightly, then you somehow gain a direction sense, which some animals have but humans don't. It just feels like you're walking around and you just know where north is. It doesn't feel like, oh, that part of my waist is buzzing. It feels like, oh, I know where north is. Or surgeons implant a third eye onto a frog, and the brain just learns to deal with this input. There have been a variety of experiments like these showing that the human brain is amazingly adaptable. Neuroscientists say it's amazingly plastic; that just means adaptable enough to deal with a bewildering range of sensor inputs. And so the question is: if the same piece of brain tissue can learn to see, or touch, or feel, or even other things, what is the algorithm it uses, and can we replicate this algorithm and implement it in a computer? I do feel bad for the frog and the other animals on which these experiments were done, although I think the conclusions are also quite fascinating. 
So even to this day, I think working on AGI is one of the most fascinating science and engineering problems of our time, and maybe you will choose someday to do research on it. However, I think it's important to avoid overhyping. I don't know if the brain really uses one or a small handful of algorithms, and even if it does, I have no idea, and I don't think anyone knows, what the algorithm is. But I still hold this hope alive: maybe it does, and maybe we could, through a lot of hard work, someday discover an approximation to it. I still find this one of the most fascinating topics, and I still often idly think about it in my spare time, and maybe someday you will be the one to make a contribution to this problem. In the short term, though, I think even without pursuing AGI, machine learning and neural networks are a very powerful tool, and even without trying to go all the way to build human-level intelligence, I think you'll find neural networks to be an incredibly powerful and useful set of tools for applications that you might build. And so that's it for the required videos of this week. Congratulations on getting to this point of the lessons. After this, we'll also have a few optional videos to dive a little bit more deeply into efficient implementations of neural networks. In particular, in the optional videos to come, I'd like to share with you some details of how to implement vectorized implementations of neural networks. I hope you also take a look at those videos.
[{"start": 0.0, "end": 6.24, "text": " Ever since I was a teenager, starting to play around with neural networks, I always felt"}, {"start": 6.24, "end": 11.84, "text": " that the dream of maybe someday building an AI system that's as intelligent as myself"}, {"start": 11.84, "end": 16.68, "text": " or as intelligent as a typical human, that that was one of the most inspiring dreams"}, {"start": 16.68, "end": 17.84, "text": " of AI."}, {"start": 17.84, "end": 22.6, "text": " I still hold that dream alive today, but I think that the path to get there is not clear"}, {"start": 22.6, "end": 24.560000000000002, "text": " and could be very difficult."}, {"start": 24.560000000000002, "end": 28.52, "text": " And I don't know whether it'll take us mere decades and whether we'll see breakthroughs"}, {"start": 28.52, "end": 34.36, "text": " in a lifetime, or if it may take centuries or even longer to get there."}, {"start": 34.36, "end": 41.16, "text": " But let's take a look at what this AGI, artificial general intelligence dream is like and speculate"}, {"start": 41.16, "end": 47.120000000000005, "text": " a bit on what might be possible paths, unclear paths, difficult paths to get there someday."}, {"start": 47.120000000000005, "end": 54.120000000000005, "text": " I think there's been a lot of unnecessary hype about AGI or artificial general intelligence."}, {"start": 54.12, "end": 61.12, "text": " And maybe one reason for that is AI actually includes two very different things."}, {"start": 61.12, "end": 66.22, "text": " One is ANI, which stands for artificial narrow intelligence."}, {"start": 66.22, "end": 71.32, "text": " This is an AI system that does one thing, a narrow task, sometimes really, really well"}, {"start": 71.32, "end": 76.75999999999999, "text": " and can be incredibly valuable, such as a smart speaker or self-driving car or web search,"}, {"start": 76.75999999999999, "end": 82.52, "text": " or AI applied to specific applications such as farming or 
factories."}, {"start": 82.52, "end": 88.56, "text": " Over the last several years, ANI has made tremendous progress and is creating, as you"}, {"start": 88.56, "end": 91.6, "text": " know, tremendous value in the world today."}, {"start": 91.6, "end": 99.72, "text": " Because ANI is a subset of AI, the rapid progress in ANI makes it logically true that AI has"}, {"start": 99.72, "end": 104.03999999999999, "text": " also made tremendous progress in the last decade."}, {"start": 104.03999999999999, "end": 109.84, "text": " There's a different idea in AI, which is AGI, artificial general intelligence, this hope"}, {"start": 109.84, "end": 115.0, "text": " of building AI systems that could do anything a typical human can do."}, {"start": 115.0, "end": 122.0, "text": " And despite all the progress in ANI and therefore tremendous progress in AI, I'm not sure how"}, {"start": 122.0, "end": 127.16, "text": " much progress, if any, we're really making to what AGI."}, {"start": 127.16, "end": 132.76, "text": " And I think all the progress in ANI has made people conclude correctly that there's tremendous"}, {"start": 132.76, "end": 139.52, "text": " progress in AI, but that has caused some people to conclude, I think, incorrectly, that a"}, {"start": 139.52, "end": 145.68, "text": " lot of progress in AI necessarily means that there's a lot of progress toward AGI."}, {"start": 145.68, "end": 152.92000000000002, "text": " So if you ever are asked about AI and AGI, sometimes you might find drawing this picture"}, {"start": 152.92000000000002, "end": 157.92000000000002, "text": " useful for explaining some of the things going on in AI as well, and some of the sources"}, {"start": 157.92000000000002, "end": 162.44, "text": " of unnecessary hype about AGI."}, {"start": 162.44, "end": 168.08, "text": " And with the rise of modern deep learning, we started to simulate neurons."}, {"start": 168.08, "end": 174.64000000000001, "text": " And with faster and faster computers and even 
GPUs, we could simulate even more neurons."}, {"start": 174.64000000000001, "end": 179.64000000000001, "text": " So I think there was this vague hope many years ago that, boy, if only we could simulate"}, {"start": 179.64000000000001, "end": 184.36, "text": " a lot of neurons, then we could simulate the human brain or something like a human brain"}, {"start": 184.36, "end": 186.76000000000002, "text": " and with really intelligent systems, right?"}, {"start": 186.76000000000002, "end": 191.20000000000002, "text": " Sadly, it's turned out not to be quite as simple as that."}, {"start": 191.2, "end": 199.0, "text": " And I think two reasons for this is, first, if you look at the artificial neural networks"}, {"start": 199.0, "end": 204.76, "text": " we're building, they are so simple that a logistic regression unit is really nothing"}, {"start": 204.76, "end": 207.23999999999998, "text": " like what any biological neuron is doing."}, {"start": 207.23999999999998, "end": 212.07999999999998, "text": " It's so much simpler than what any neuron in your brain or mine is doing."}, {"start": 212.07999999999998, "end": 217.95999999999998, "text": " And second, even to this day, I think we have almost no idea how the brain works."}, {"start": 217.96, "end": 222.84, "text": " There's still fundamental questions about how exactly does a neuron map from inputs"}, {"start": 222.84, "end": 225.84, "text": " to outputs that we just don't know today."}, {"start": 225.84, "end": 231.68, "text": " So trying to simulate that in a computer, much less a single logistic function, is just"}, {"start": 231.68, "end": 236.96, "text": " so far from an accurate model of what the human brain actually does."}, {"start": 236.96, "end": 243.28, "text": " Given our very limited understanding, both now and probably for the near future, of how"}, {"start": 243.28, "end": 249.68, "text": " the human brain works, I think just trying to simulate the human brain as a path to AGI"}, {"start": 249.68, 
"end": 253.16, "text": " will be an incredibly difficult path."}, {"start": 253.16, "end": 260.76, "text": " Having said that, is there any hope of within our lifetimes seeing breakthroughs in AGI?"}, {"start": 260.76, "end": 268.36, "text": " Let me share with you some evidence that helps me keep that hope alive, at least for myself."}, {"start": 268.36, "end": 275.28000000000003, "text": " There have been some fascinating experiments done on animals that shows or strongly suggests"}, {"start": 275.28000000000003, "end": 283.2, "text": " that the same piece of biological brain tissue can do a surprisingly wide range of tasks."}, {"start": 283.2, "end": 289.24, "text": " And this has led to the one learning algorithm hypothesis that maybe a lot of intelligence"}, {"start": 289.24, "end": 292.88, "text": " could be due to one or a small handful of learning algorithms."}, {"start": 292.88, "end": 297.48, "text": " And if only we could figure out what that one or small handful of algorithms are, we"}, {"start": 297.48, "end": 301.44, "text": " may be able to implement that in a computer someday."}, {"start": 301.44, "end": 305.04, "text": " Let me share with you some details of those experiments."}, {"start": 305.04, "end": 310.20000000000005, "text": " This is a result due to Ro et al from many decades ago."}, {"start": 310.20000000000005, "end": 316.32, "text": " The part of your brain shown here is your auditory cortex, and your brain is wired to"}, {"start": 316.32, "end": 322.6, "text": " feed signals from your ears in the form of electrical impulses depending on what sound"}, {"start": 322.6, "end": 326.64000000000004, "text": " your ear is detecting to that auditory cortex."}, {"start": 326.64, "end": 333.59999999999997, "text": " It turns out that if you were to rewire an animal brain to cut the wire between the ear"}, {"start": 333.59999999999997, "end": 339.4, "text": " and the auditory cortex, and instead feed in images to the auditory cortex, then the"}, 
{"start": 339.4, "end": 342.4, "text": " auditory cortex learns to see."}, {"start": 342.4, "end": 346.8, "text": " Auditory refers to sound, and so this piece of the brain that in most people learns to"}, {"start": 346.8, "end": 352.59999999999997, "text": " hear when it is fed different data, it instead learns to see."}, {"start": 352.59999999999997, "end": 354.64, "text": " Here's another example."}, {"start": 354.64, "end": 358.0, "text": " This part of your brain is your somatosensory cortex."}, {"start": 358.0, "end": 361.24, "text": " Somatosensory refers to touch processing."}, {"start": 361.24, "end": 366.8, "text": " If you were to similarly rewire the brain to cut the connection from the touch sensors"}, {"start": 366.8, "end": 372.56, "text": " to that part of the brain, and instead rewire the brain to feed in images, then the somatosensory"}, {"start": 372.56, "end": 374.88, "text": " cortex learns to see."}, {"start": 374.88, "end": 379.08, "text": " So there's been a sequence of experiments like this showing that many different parts"}, {"start": 379.08, "end": 384.76, "text": " of the brain, just depending on what data it is given, can learn to see or learn to"}, {"start": 384.76, "end": 391.08, "text": " feel or learn to hear as if there was one, maybe one algorithm that just depending on"}, {"start": 391.08, "end": 396.88, "text": " what data it is given, learns to process that input accordingly."}, {"start": 396.88, "end": 402.96, "text": " There have been systems built which take a camera, maybe mounted to someone's forehead,"}, {"start": 402.96, "end": 409.47999999999996, "text": " and maps it to a pattern of voltages in a grid on someone's tongue, and by mapping a"}, {"start": 409.47999999999996, "end": 415.64, "text": " grayscale image to a pattern of voltages on your tongue, this can help people that are"}, {"start": 415.64, "end": 421.52, "text": " not sighted, blind individuals, learn to see with your tongue."}, {"start": 421.52, "end": 
427.02, "text": " Or there have been fascinating experiments with human echolocation or human sonar."}, {"start": 427.02, "end": 434.28, "text": " So animals like dolphins and bats use sonar to see, and researchers have found that if"}, {"start": 434.28, "end": 442.88, "text": " you train humans to make clicking sounds and listen to how that bounces off surroundings,"}, {"start": 442.88, "end": 448.71999999999997, "text": " humans can sometimes learn some degree of human echolocation."}, {"start": 448.71999999999997, "end": 454.32, "text": " Or this is a haptic belt, and my research lab at Stanford once built something like"}, {"start": 454.32, "end": 461.76, "text": " this before as well, but if you mount a ring of buzzes around your waist and program it"}, {"start": 461.76, "end": 468.6, "text": " using a magnetic compass so that say the buzzes to the northmost direction are always vibrating"}, {"start": 468.6, "end": 474.36, "text": " slightly, then you somehow gain a direction sense which some animals have but humans don't,"}, {"start": 474.36, "end": 478.56, "text": " and it just feels like you're walking around and you just know where north is."}, {"start": 478.56, "end": 481.56, "text": " It doesn't feel like, oh that part of my waist is buzzing."}, {"start": 481.56, "end": 484.88, "text": " It feels like, oh I know where that north is."}, {"start": 484.88, "end": 490.84, "text": " Or surgeries implant a third eye onto a frog and the brain just learns to deal with this"}, {"start": 490.84, "end": 491.84, "text": " input."}, {"start": 491.84, "end": 497.24, "text": " There have been a variety of experiments like these showing that the human brain is amazingly"}, {"start": 497.24, "end": 499.04, "text": " adaptable."}, {"start": 499.04, "end": 500.8, "text": " Neuroscientists say it's amazingly plastic."}, {"start": 500.8, "end": 506.92, "text": " That just means adaptable to deal with a bewildering range of sensor inputs."}, {"start": 506.92, "end": 
512.6800000000001, "text": " And so the question is, if the same piece of brain tissue can learn to see or touch"}, {"start": 512.6800000000001, "end": 517.9200000000001, "text": " or feel or even other things, what is the algorithm it uses and can we replicate this"}, {"start": 517.9200000000001, "end": 520.98, "text": " algorithm and implement it in a computer?"}, {"start": 520.98, "end": 526.04, "text": " I do feel bad for the frog and other animals on which these experiments were done, although"}, {"start": 526.04, "end": 530.24, "text": " I think the conclusions are also quite fascinating."}, {"start": 530.24, "end": 536.88, "text": " So even to this day, I think working on AGI is one of the most fascinating signs in engineering"}, {"start": 536.88, "end": 542.8, "text": " problems of our time and maybe you will choose someday to do research on it."}, {"start": 542.8, "end": 547.12, "text": " However, I think it's important to avoid overhyping."}, {"start": 547.12, "end": 552.64, "text": " I don't know if the brain is really one or a small handful of algorithms and even if"}, {"start": 552.64, "end": 557.96, "text": " it were, I have no idea and I don't think anyone knows what the algorithm is."}, {"start": 557.96, "end": 563.76, "text": " But I still hold this hope alive and maybe it is and maybe we could, through a lot of"}, {"start": 563.76, "end": 568.08, "text": " hard work, someday discover an approximation to it."}, {"start": 568.08, "end": 573.76, "text": " I still find this one of the most fascinating topics and I still often idly think about"}, {"start": 573.76, "end": 579.3199999999999, "text": " it in my spare time and maybe someday you will be the one to make a contribution to"}, {"start": 579.3199999999999, "end": 581.5, "text": " this problem."}, {"start": 581.5, "end": 587.96, "text": " So in the short term, I think even without pursuing AGI, machine learning and neural"}, {"start": 587.96, "end": 593.9200000000001, "text": " networks are a 
very powerful tool and even without trying to go all the way to build"}, {"start": 593.9200000000001, "end": 599.72, "text": " human level intelligence, I think you find neural networks to be an incredibly powerful"}, {"start": 599.72, "end": 604.96, "text": " and useful set of tools for applications that you might build."}, {"start": 604.96, "end": 608.48, "text": " And so that's it for the required videos of this week."}, {"start": 608.48, "end": 612.0400000000001, "text": " Congratulations on getting to this point of the lessons."}, {"start": 612.0400000000001, "end": 617.9200000000001, "text": " After this, we'll also have a few optional videos to dive a little bit more deeply into"}, {"start": 617.92, "end": 621.16, "text": " efficient implementations of neural networks."}, {"start": 621.16, "end": 625.68, "text": " And in particular, in the optional videos to come, I'd like to share with you some"}, {"start": 625.68, "end": 630.68, "text": " details of how to implement vectorized implementations of neural networks."}, {"start": 630.68, "end": 648.28, "text": " I hope you also take a look at those videos."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=3iDMb-EUQPA
4.14 Vectorization (optional) | How neural networks are implemented efficiently-- [ML- Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
One of the reasons that deep learning researchers have been able to scale up neural networks, and build really large neural networks over the last decade, is because neural networks can be vectorized. They can be implemented very efficiently using matrix multiplications. And it turns out that parallel computing hardware, including GPUs, but also some CPU functions, are very good at doing very large matrix multiplications. In this video, we'll take a look at how these vectorized implementations of neural networks work. Without these ideas, I don't think deep learning would be anywhere near as successful as it is today. Here on the left is the code that you had seen previously of how you would implement forward prop, or forward propagation, in a single layer. This here is the input x; w, the weights of the first, second, and third neurons; and the parameters b. And then this is the same code as what you saw before, and this will output three numbers, say, like that. If you actually implement this computation, you get [1, 0, 1]. It turns out you can develop a vectorized implementation of this function as follows. Set X to be equal to this; notice the double square brackets. So this is now a 2D array, like in TensorFlow. W is the same as before, and b, I'm now using capital B, is also a 1 by 3 2D array. And then it turns out that all of these steps, this for loop inside, can be replaced with just a couple lines of code: Z equals np.matmul. Matmul is how NumPy carries out matrix multiplication, where now X and W are both matrices, and so you just multiply them together. So this for loop, all of these lines of code, can be replaced with just a couple lines of code, which gives a vectorized implementation of this function. You compute Z, which is now a matrix again, as np.matmul between a in and W, where here a in and W are both matrices. Matmul is how NumPy carries out a matrix multiplication: it multiplies two matrices together, and then you add the matrix B to the result. 
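The loop-based and vectorized implementations described here can be sketched in NumPy. This is a minimal sketch, not the course's exact notebook code: the input, weight, and bias values are the lecture's illustrative numbers as I understand them, and the helper names `dense_loop` and `dense_vectorized` are hypothetical.

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow in exp for large-magnitude inputs
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

# Illustrative values (assumed from the lecture's example):
# input x, the weights of three neurons as the columns of W, and biases b.
W = np.array([[1, -3, 5],
              [-2, 4, -6]])
b = np.array([-1, 1, 2])
x = np.array([200, 17])

def dense_loop(a_in, W, b):
    """Forward prop through one dense layer, one neuron at a time."""
    units = W.shape[1]
    a_out = np.zeros(units)
    for j in range(units):
        z = np.dot(W[:, j], a_in) + b[j]   # scalar z for neuron j
        a_out[j] = sigmoid(z)
    return a_out

# Vectorized version: note the double square brackets -- X and B are 2D arrays.
X = np.array([[200, 17]])    # 1 x 2 matrix
B = np.array([[-1, 1, 2]])   # 1 x 3 matrix

def dense_vectorized(A_in, W, B):
    """The whole layer as one matrix multiplication."""
    Z = np.matmul(A_in, W) + B   # (1,2) @ (2,3) -> (1,3)
    return sigmoid(Z)

print(dense_loop(x, W, b))        # approximately [1, 0, 1]
print(dense_vectorized(X, W, B))  # approximately [[1, 0, 1]]
```

Both versions compute the same activations; the vectorized one just does it in a single `matmul` call, which is what lets the hardware parallelize the work.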
And then a, or a out here, is equal to the activation function g, that is, the sigmoid function, applied element-wise to this matrix Z. And then you finally return a out. So this is what the code looks like. Notice that in the vectorized implementation, all of these quantities, x, which is fed in as the value of a in, as well as w, b, z, and a out, are now 2D arrays. All of these are matrices. This turns out to be a very efficient implementation of one step of forward propagation through a dense layer in the neural network. So this is code for a vectorized implementation of forward prop in a neural network. But what is this code doing, and how does it actually work? What is this matmul actually doing? In the next two videos, both also optional, we'll go over matrix multiplication and how that works. If you're familiar with linear algebra, with vectors, matrices, transposes, and matrix-matrix multiplications, you can safely just quickly skim over these two videos and jump to the last video of this week. And then in the last video of this week, also optional, we'll dive into more detail to explain how matmul gives you this vectorized implementation. And so with that, let's go on to the next video, where we'll take a look at what matrix multiplication is.
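The point that every quantity in the vectorized version is a 2D array can be checked directly. A small sketch of that shape bookkeeping, using the lecture's illustrative numbers as assumed values:

```python
import numpy as np

# Double square brackets create a 2D array (a 1 x 2 matrix);
# single brackets create a 1D vector. The shapes differ.
x_1d = np.array([200, 17])     # shape (2,)
X_2d = np.array([[200, 17]])   # shape (1, 2)

W = np.array([[1, -3, 5],
              [-2, 4, -6]])    # 2 x 3
B = np.array([[-1, 1, 2]])     # 1 x 3

Z = np.matmul(X_2d, W) + B     # (1,2) @ (2,3) -> (1,3)

# Sigmoid applied element-wise to the matrix Z (clipped to avoid exp overflow)
A = 1.0 / (1.0 + np.exp(-np.clip(Z, -500, 500)))

print(X_2d.shape, Z.shape, A.shape)  # (1, 2) (1, 3) (1, 3)
```

Every intermediate stays a matrix, so the output of one layer can be fed straight in as the `a in` of the next.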
[{"start": 0.0, "end": 7.16, "text": " One of the reasons that deep learning researchers have been able to scale up neural networks"}, {"start": 7.16, "end": 12.24, "text": " and build really large neural networks over the last decade is because neural networks"}, {"start": 12.24, "end": 14.16, "text": " can be vectorized."}, {"start": 14.16, "end": 19.32, "text": " They can be implemented very efficiently using matrix multiplications."}, {"start": 19.32, "end": 26.0, "text": " And it turns out that parallel computing hardware, including GPUs, but also some CPU functions"}, {"start": 26.0, "end": 30.48, "text": " are very good at doing very large matrix multiplications."}, {"start": 30.48, "end": 36.0, "text": " In this video, we'll take a look at how these vectorized implementations of neural networks"}, {"start": 36.0, "end": 37.0, "text": " work."}, {"start": 37.0, "end": 42.620000000000005, "text": " Without these ideas, I don't think deep learning would be anywhere near success in scale today."}, {"start": 42.620000000000005, "end": 49.28, "text": " Here on the left is the code that you had seen previously of how you would implement"}, {"start": 49.28, "end": 55.400000000000006, "text": " for a prop or for a propagation in a single layer."}, {"start": 55.4, "end": 64.6, "text": " This here is the input w, the weights of the first, second, and third neurons, say, parameters"}, {"start": 64.6, "end": 65.6, "text": " b."}, {"start": 65.6, "end": 68.92, "text": " And then this is the same code as what you saw before."}, {"start": 68.92, "end": 72.46, "text": " And this will output three numbers, say, like that."}, {"start": 72.46, "end": 77.72, "text": " And if you actually implement this computation, you get 101."}, {"start": 77.72, "end": 86.56, "text": " It turns out you can develop a vectorized implementation of this function as follows."}, {"start": 86.56, "end": 89.8, "text": " Set x to be equal to this."}, {"start": 89.8, "end": 91.62, "text": " Notice the 
double square brackets."}, {"start": 91.62, "end": 96.75999999999999, "text": " So this is now a 2D array, like in TensorFlow."}, {"start": 96.75999999999999, "end": 98.75999999999999, "text": " W is the same as before."}, {"start": 98.76, "end": 108.96000000000001, "text": " And b, I'm now using capital B, is also a 1 by 3 2D array."}, {"start": 108.96000000000001, "end": 116.12, "text": " And then it turns out that all of these steps, this for loop inside, can be replaced with"}, {"start": 116.12, "end": 118.04, "text": " just a couple lines of code."}, {"start": 118.04, "end": 120.24000000000001, "text": " Z equals np.matmul."}, {"start": 120.24, "end": 129.84, "text": " Matmul is how NumPy carries out matrix multiplication, where now x and w are both matrices."}, {"start": 129.84, "end": 133.16, "text": " And so you just multiply them together."}, {"start": 133.16, "end": 137.76, "text": " And it turns out that this for loop, all of these lines of code, can be replaced with"}, {"start": 137.76, "end": 145.24, "text": " just a couple lines of code, which gives a vectorized implementation of this function."}, {"start": 145.24, "end": 154.60000000000002, "text": " So you compute z, which is now a matrix again, as NumPy.matmul between a in and w, where"}, {"start": 154.60000000000002, "end": 157.76000000000002, "text": " here a in and w are both matrices."}, {"start": 157.76000000000002, "end": 162.8, "text": " And matmul is how NumPy carries out a matrix multiplication."}, {"start": 162.8, "end": 167.64000000000001, "text": " It multiplies two matrices together and then adds the matrix b to it."}, {"start": 167.64000000000001, "end": 175.04000000000002, "text": " And then a here, or a out, is equal to the activation function g, that is a sigmoid function,"}, {"start": 175.04, "end": 178.72, "text": " multiplied element-wise to this matrix z."}, {"start": 178.72, "end": 182.4, "text": " And then you finally return a out."}, {"start": 182.4, "end": 185.92, "text": " 
So this is what the code looks like."}, {"start": 185.92, "end": 191.23999999999998, "text": " Notice that in the vectorized implementation, all of these quantities, x, which is fed into"}, {"start": 191.23999999999998, "end": 198.84, "text": " the value of a in, as well as w, b, as well as z, and a out, all of these are now 2D arrays."}, {"start": 198.84, "end": 201.28, "text": " All of these are matrices."}, {"start": 201.28, "end": 208.16, "text": " And this turns out to be a very efficient implementation of one step of forward propagation"}, {"start": 208.16, "end": 210.8, "text": " through a dense layer in the neural network."}, {"start": 210.8, "end": 217.28, "text": " So this is code for a vectorized implementation of forward prop in a neural network."}, {"start": 217.28, "end": 220.88, "text": " But what is this code doing and how does it actually work?"}, {"start": 220.88, "end": 224.48, "text": " And what is this matmul actually doing?"}, {"start": 224.48, "end": 230.68, "text": " In the next two videos, both also optional, we'll go over matrix multiplication and how"}, {"start": 230.68, "end": 232.36, "text": " that works."}, {"start": 232.36, "end": 238.6, "text": " If you're familiar with linear algebra, if you're familiar with vectors, matrices, transposes,"}, {"start": 238.6, "end": 244.84, "text": " and matrix matrix multiplications, you can safely just quickly skim over these two videos"}, {"start": 244.84, "end": 247.64000000000001, "text": " and jump to the last video of this week."}, {"start": 247.64000000000001, "end": 252.84, "text": " And then in the last video of this week, also optional, we'll dive into more detail to explain"}, {"start": 252.84, "end": 257.0, "text": " how matmul gives you this vectorized implementation."}, {"start": 257.0, "end": 261.32, "text": " And so with that, let's go on to the next video where we'll take a look at what matrix"}, {"start": 261.32, "end": 288.36, "text": " multiplication is."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=CYnwhHKnuwY
4.15 Vectorization (optional) | Matrix multiplication-- [Machine Learning - Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
So you know that a matrix is just a block or 2D array of numbers. What does it mean to multiply two matrices? Let's take a look. In order to build up to multiplying matrices, let's start by looking at how we take dot products between vectors. Let's use the example of taking the dot product between this vector 1, 2, and this vector 3, 4. If z is the dot product between these two vectors, then you compute z by multiplying the first element by this first element here, 1 times 3, plus the second element times the second element, 2 times 4, and so that's just 3 plus 8, which is equal to 11. In the more general case, if z is the dot product between a vector a and a vector w, then you compute z by multiplying the first elements together, then the second elements together, then the third, and so on, and then adding up all of these products. So that's the vector-vector dot product. It turns out there's another equivalent way of writing a dot product, which is this: given a vector a, that is, 1, 2 written as a column, you can turn this into a row. That is, you can turn it from what's called a column vector to a row vector by taking the transpose of a. So the transpose of a vector a means you take this vector and lay its elements on the side like this. And it turns out that if you multiply a transpose, which is a row vector, or you can think of it as a 1 by 2 matrix, with w, which you can now think of as a 2 by 1 matrix, then z equals a transpose times w, and this is the same as taking the dot product between a and w. So to recap, z equals the dot product between a and w is the same as z equals a transpose, that is, a laid on the side, multiplied by w. And this will be useful for understanding matrix multiplication: these are just two ways of writing the exact same computation to arrive at z. Now let's look at vector-matrix multiplication, which is when you take a vector and you multiply that vector by a matrix.
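The two equivalent ways of writing the dot product described above can be checked numerically. Here is a minimal NumPy sketch (the lecture doesn't show code at this point, so the variable names are my own):

```python
import numpy as np

a = np.array([1, 2])
w = np.array([3, 4])

# Way 1: the dot product directly, 1*3 + 2*4 = 11.
z_dot = np.dot(a, w)

# Way 2: a laid on its side (a transpose, a 1x2 row vector)
# matrix-multiplied with w treated as a 2x1 column vector.
a_col = a.reshape(2, 1)
w_col = w.reshape(2, 1)
z_mat = np.matmul(a_col.T, w_col)   # (1x2) @ (2x1) -> a 1x1 matrix

print(z_dot)        # 11
print(z_mat[0, 0])  # 11: the same computation, written two ways
```

Both forms arrive at the same number; the matrix form just packages it as a 1 by 1 matrix.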
Here again is the vector a, 1, 2, and a transpose is a laid on the side, so rather than a 2 by 1 matrix, it becomes a 1 by 2 matrix. And let me now create a 2 by 2 matrix w with these four elements, 3, 4, 5, 6. If you want to compute capital Z as a transpose times w, let's see how you would go about doing so. It turns out that z is going to be a 1 by 2 matrix, and to compute the first value of z, we're going to take a transpose, 1, 2 here, and multiply that by the first column of w, so that's 3, 4. And so to compute the first element of z, you end up with 1 times 3 plus 2 times 4, which we saw earlier is equal to 11, and so the first element of z is 11. Let's figure out what's the second element of z. It turns out you just repeat this process, but now multiply a transpose by the second column of w. And so to do that computation, you have 1 times 5 plus 2 times 6, which is equal to 5 plus 12, which is 17. So z is equal to this 1 by 2 matrix, 11 and 17. Now just one last thing, and then that'll take us to the end of this video, which is how to take vector-matrix multiplication and generalize it to matrix-matrix multiplication. I have a matrix A with these four elements, where the first column is 1, 2, and the second column is negative 1, negative 2. And I want to know how to compute A transpose times w. Unlike the previous slide, A now is a matrix rather than just a vector, but a matrix is just a set of different vectors stacked together in columns. So first, let's figure out what is A transpose. In order to compute A transpose, we're going to take the columns of A and, similar to what happened when you transposed a vector, lay them on the side one column at a time. So the first column, 1, 2, becomes the first row, 1, 2, because it's just laid on the side, and the second column, negative 1, negative 2, becomes laid on the side, negative 1, negative 2, like this.
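The vector-matrix product worked out above, a transpose (1 by 2) times w (2 by 2) giving 11 and 17, can be reproduced in the same NumPy style; again a sketch with names of my choosing:

```python
import numpy as np

a = np.array([[1],
              [2]])          # a as a 2x1 column vector
W = np.array([[3, 5],
              [4, 6]])       # columns are w1 = [3, 4] and w2 = [5, 6]

Z = np.matmul(a.T, W)        # (1x2) @ (2x2) -> 1x2

print(Z)   # [[11 17]]
```

Each entry of Z is a transpose dotted with one column of W, which is exactly the two hand computations above.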
So the way you transpose a matrix is you take the columns and you just lay the columns on the side, one column at a time. So you end up with this being A transpose. Next, we have this matrix w, which we're going to write as 3, 4, 5, 6. So there's a column 3, 4, and a column 5, 6. One way I encourage you to think of matrices, at least one that's useful for neural network implementations, is: if you see a matrix, think of the columns of the matrix, and if you see the transpose of a matrix, think of the rows of that matrix as being grouped together, as illustrated here with A and A transpose, as well as w. And now, let me show you how to multiply A transpose and w. In order to carry out this computation, let me call the columns of A, A1 and A2, and that means that A1 transpose is the first row of A transpose, and A2 transpose is the second row of A transpose. And then, same as before, let me call the columns of w, w1 and w2. So it turns out that to compute A transpose w, the first thing we need to do is ignore the second row of A transpose and just pay attention to its first row: let's take this row 1, 2, that is A1 transpose, and multiply that with w. You already know how to do that from the previous slide. The first element is 1, 2, inner product or dot product with 3, 4, so that ends up with 1 times 3 plus 2 times 4, which is 11. And then the second element is 1, 2, that is A1 transpose, inner product with 5, 6, so that's 1 times 5 plus 2 times 6, which is 5 plus 12, which is 17. So that gives you the first row of z equals A transpose w. All we've done is take A1 transpose and multiply that by w, which is exactly what we did on the previous slide. Next, let's forget A1 for now, and let's just look at A2 and take A2 transpose and multiply that by w.
So now we have A2 transpose times w, and to compute that, first we take negative 1 and negative 2 and dot product that with 3, 4, so that's negative 1 times 3 plus negative 2 times 4, and that turns out to be negative 11. And then we have to compute A2 transpose times the second column, and that's negative 1 times 5 plus negative 2 times 6, and that turns out to be negative 17. So you end up with A transpose times w equal to this 2 by 2 matrix over here. Now let's talk about the general form of matrix-matrix multiplication. This was an example of how you multiply a vector with a matrix, or a matrix with a matrix: there are a lot of dot products between vectors, but ordered in a certain way to construct the elements of the output z, one element at a time. I know this was a lot, but in the next video, we'll look at the general form of how matrix-matrix multiplication is defined, and I hope that will make all of this clear as well. Let's go on to the next video.
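The full 2 by 2 matrix-matrix example above can be checked the same way; assuming NumPy as in the earlier sketches:

```python
import numpy as np

A = np.array([[1, -1],
              [2, -2]])   # columns a1 = [1, 2] and a2 = [-1, -2]
W = np.array([[3, 5],
              [4, 6]])    # columns w1 = [3, 4] and w2 = [5, 6]

Z = np.matmul(A.T, W)     # (2x2) @ (2x2) -> 2x2

print(Z)
# [[ 11  17]
#  [-11 -17]]
```

Row 1 of Z is A1 transpose times W, and row 2 is A2 transpose times W, matching the two hand computations in the lecture.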
[{"start": 0.0, "end": 8.0, "text": " So you know that a matrix is just a block or 2D array of numbers."}, {"start": 8.0, "end": 11.200000000000001, "text": " What does it mean to multiply two matrices?"}, {"start": 11.200000000000001, "end": 12.72, "text": " Let's take a look."}, {"start": 12.72, "end": 19.76, "text": " In order to build up to multiplying matrices, let's start by looking at how we take dot"}, {"start": 19.76, "end": 22.28, "text": " products between vectors."}, {"start": 22.28, "end": 27.8, "text": " Let's use the example of taking the dot product between this vector 1, 2, and this vector"}, {"start": 27.8, "end": 30.2, "text": " 3, 4."}, {"start": 30.2, "end": 35.96, "text": " If z is the dot product between these two vectors, then you compute z by multiplying"}, {"start": 35.96, "end": 42.32, "text": " the first element by this first element here, so it's 1 times 3, plus the second element"}, {"start": 42.32, "end": 47.96, "text": " times the second element, plus 2 times 4, and so that's just 3 plus 8, which is equal"}, {"start": 47.96, "end": 50.64, "text": " to 11."}, {"start": 50.64, "end": 59.0, "text": " In the more general case, if z is the dot product between a vector a and a vector w,"}, {"start": 59.0, "end": 63.52, "text": " then you compute z by multiplying the first element together and then the second element"}, {"start": 63.52, "end": 67.03999999999999, "text": " together and the third and so on, and then adding up all of these products."}, {"start": 67.03999999999999, "end": 70.72, "text": " So that's the vector vector dot product."}, {"start": 70.72, "end": 77.0, "text": " It turns out there's another equivalent way of writing a dot product, which is given a"}, {"start": 77.0, "end": 85.44, "text": " vector a, that is 1, 2 written as a column, you can turn this into a row, that is, you"}, {"start": 85.44, "end": 92.6, "text": " can turn it from what's called a column vector to a row vector by taking the transpose of"}, {"start": 
92.6, "end": 93.72, "text": " a."}, {"start": 93.72, "end": 101.24000000000001, "text": " So the transpose of a vector a means you take this vector and lay its elements on the side"}, {"start": 101.24000000000001, "end": 103.6, "text": " like this."}, {"start": 103.6, "end": 110.72, "text": " And it turns out that if you multiply a transpose, this is a row vector, or you can think of"}, {"start": 110.72, "end": 120.39999999999999, "text": " this as a 1 by 2 matrix, with w, which you can now think of as a 2 by 1 matrix, then"}, {"start": 120.39999999999999, "end": 128.0, "text": " z equals a transpose times w, and this is the same as taking the dot product between"}, {"start": 128.0, "end": 131.04, "text": " a and w."}, {"start": 131.04, "end": 140.0, "text": " So to recap, z equals the dot product between a and w is the same as z equals a transpose,"}, {"start": 140.0, "end": 145.76, "text": " that is a laid on the side, multiplied by w."}, {"start": 145.76, "end": 150.32, "text": " And this will be useful for understanding matrix multiplication, that these are just"}, {"start": 150.32, "end": 156.35999999999999, "text": " two ways of writing the exact same computation to arrive at z."}, {"start": 156.36, "end": 162.32000000000002, "text": " Now let's look at vector matrix multiplication, which is when you take a vector and you multiply"}, {"start": 162.32000000000002, "end": 165.44000000000003, "text": " a vector by a matrix."}, {"start": 165.44000000000003, "end": 173.96, "text": " Here again is the vector a, 1, 2, and a transpose is a laid on the side, so rather than just"}, {"start": 173.96, "end": 180.4, "text": " kind of think of this as a 2 by 1 matrix, it becomes a 1 by 2 matrix."}, {"start": 180.4, "end": 187.92000000000002, "text": " And let me now create a 2 by 2 matrix w with these four elements, v4, v5, v6."}, {"start": 187.92000000000002, "end": 198.28, "text": " If you want to compute capital Z as a transpose times w, so let's see how you would go 
about"}, {"start": 198.28, "end": 199.88, "text": " doing so."}, {"start": 199.88, "end": 210.44, "text": " It turns out that z is going to be a 2 by 1 matrix, and to compute the first value of"}, {"start": 210.44, "end": 218.72, "text": " z, we're going to take a transpose, 1, 2 here, and multiply that by the first column of w,"}, {"start": 218.72, "end": 221.12, "text": " so that's 3, 4."}, {"start": 221.12, "end": 229.56, "text": " And so to compute the first element of z, you end up with 1 times 3 plus 2 times 4,"}, {"start": 229.56, "end": 236.72, "text": " which we saw earlier is equal to 11, and so the first element of z is 11."}, {"start": 236.72, "end": 239.84, "text": " Let's figure out what's the second element of z."}, {"start": 239.84, "end": 246.52, "text": " Turns out you just repeat this process, but now multiply a transpose by the second column"}, {"start": 246.52, "end": 247.76, "text": " of w."}, {"start": 247.76, "end": 256.44, "text": " And so to do that computation, you have 1 times 5 plus 2 times 6, which is equal to"}, {"start": 256.44, "end": 263.4, "text": " 5 plus 12, which is 17, so that's equal to 17."}, {"start": 263.4, "end": 269.84, "text": " So z is equal to this 1 by 2 matrix, 11 and 17."}, {"start": 269.84, "end": 275.64, "text": " Now just one last thing, and then that'll take us to the end of this video, which is"}, {"start": 275.64, "end": 282.88, "text": " how to take vector matrix multiplication and generalize it to matrix matrix multiplication."}, {"start": 282.88, "end": 288.71999999999997, "text": " I have a matrix A with these four elements, the first column is 1, 2, and the second column"}, {"start": 288.71999999999997, "end": 292.08, "text": " is negative 1, negative 2."}, {"start": 292.08, "end": 298.52, "text": " And I want to know how to compute a transpose times w."}, {"start": 298.52, "end": 304.44, "text": " Unlike the previous slide, A now is a matrix rather than just a vector, but the matrix"}, {"start": 304.44, 
"end": 309.8, "text": " is just a set of different vectors stacked together in columns."}, {"start": 309.8, "end": 313.52000000000004, "text": " So first, let's figure out what is A transpose."}, {"start": 313.52000000000004, "end": 318.84000000000003, "text": " In order to compute A transpose, we're going to take the columns of A, and similar to what"}, {"start": 318.84000000000003, "end": 323.44, "text": " happened when you transpose a vector, we're going to take the columns and lay them on"}, {"start": 323.44, "end": 326.2, "text": " the side one column at a time."}, {"start": 326.2, "end": 331.2, "text": " So the first column, 1, 2, becomes the first row, 1, 2, because it's just laid on the"}, {"start": 331.2, "end": 338.0, "text": " side, and this second column, negative 1, negative 2, becomes laid on the side, negative"}, {"start": 338.0, "end": 339.92, "text": " 1, negative 2, like this."}, {"start": 339.92, "end": 344.28, "text": " So the way you transpose a matrix is you take the columns and you just lay the columns on"}, {"start": 344.28, "end": 346.08, "text": " the side one column at a time."}, {"start": 346.08, "end": 349.72, "text": " So you end up with this being A transpose."}, {"start": 349.72, "end": 356.32, "text": " Next, we have this matrix, w, which we're going to write as 3, 4, 5, 6."}, {"start": 356.32, "end": 361.28, "text": " So there's a column 3, 4, and a column 5, 6."}, {"start": 361.28, "end": 366.56, "text": " One way I encourage you to think of matrices, at least that's useful for neural network"}, {"start": 366.56, "end": 373.84, "text": " implementations is if you see a matrix, think of the columns of the matrix, and if you see"}, {"start": 373.84, "end": 378.48, "text": " the transpose of a matrix, think of the rows of that matrix as being grouped together,"}, {"start": 378.48, "end": 383.6, "text": " as illustrated here with A and A transpose, as well as w."}, {"start": 383.6, "end": 390.68, "text": " And now, let me show you how 
to multiply A transpose and w."}, {"start": 390.68, "end": 399.04, "text": " In order to carry out this computation, let me call the columns of A, A1 and A2, and that"}, {"start": 399.04, "end": 408.16, "text": " means that A1 transpose is the first row of A transpose, and A2 transpose is the second"}, {"start": 408.16, "end": 410.6, "text": " row of A transpose."}, {"start": 410.6, "end": 418.2, "text": " And then, same as before, let me call the columns of w to be w1 and w2."}, {"start": 418.2, "end": 425.03999999999996, "text": " So it turns out that to compute A transpose w, the first thing we need to do is let's"}, {"start": 425.03999999999996, "end": 432.68, "text": " just ignore the second row of A, and let's just pay attention to the first row of A,"}, {"start": 432.68, "end": 439.03999999999996, "text": " and let's take this row 1, 2, that is A1 transpose, and multiply that with w."}, {"start": 439.03999999999996, "end": 443.44, "text": " So you already know how to do that from the previous slide."}, {"start": 443.44, "end": 449.4, "text": " The first element is 1, 2, inner product or dot product with 3, 4, so that ends up with"}, {"start": 449.4, "end": 453.56, "text": " 3 times 1 plus 2 times 4, which is 11."}, {"start": 453.56, "end": 461.24, "text": " And then the second element is 1, 2, A transpose, inner product with 5, 6, so that's 5 times"}, {"start": 461.24, "end": 467.15999999999997, "text": " 1 plus 6 times 2, which is 5 plus 12, which is 17."}, {"start": 467.15999999999997, "end": 472.24, "text": " So that gives you the first row of z equals A transpose w."}, {"start": 472.24, "end": 477.72, "text": " So all we've done is take A1 transpose and multiply that by w."}, {"start": 477.72, "end": 481.12, "text": " That's exactly what we did on the previous slide."}, {"start": 481.12, "end": 489.2, "text": " Next, let's forget A1 for now, and let's just look at A2 and take A2 transpose and multiply"}, {"start": 489.2, "end": 491.28000000000003, 
"text": " that by w."}, {"start": 491.28000000000003, "end": 498.76, "text": " So now we have A2 transpose times w, and to compute that, first we take negative 1 and"}, {"start": 498.76, "end": 505.44, "text": " negative 2 and dot product that with 3, 4, so that's negative 1 times 3 plus negative"}, {"start": 505.44, "end": 510.76, "text": " 2 times 4, and that turns out to be negative 11."}, {"start": 510.76, "end": 517.64, "text": " And then we have to compute A2 transpose times the second column, and that's negative 1 times"}, {"start": 517.64, "end": 524.08, "text": " 5 plus negative 2 times 6, and that turns out to be negative 17."}, {"start": 524.08, "end": 531.96, "text": " So you end up with A transpose times w is equal to this 2 by 2 matrix over here."}, {"start": 531.96, "end": 537.84, "text": " Let's talk about the general form of matrix-matrix multiplication."}, {"start": 537.84, "end": 543.5600000000001, "text": " So this was an example of how you multiply a vector with a matrix or a matrix with a"}, {"start": 543.5600000000001, "end": 544.5600000000001, "text": " matrix."}, {"start": 544.5600000000001, "end": 549.48, "text": " There's a lot of dot products between vectors, but ordered in a certain way to construct"}, {"start": 549.48, "end": 553.76, "text": " the elements of the output z one element at a time."}, {"start": 553.76, "end": 560.16, "text": " I know this was a lot, but in the next video, let's look at the general form of how a matrix-matrix"}, {"start": 560.16, "end": 565.48, "text": " multiplication is defined, and I hope that that will make all this clear as well."}, {"start": 565.48, "end": 585.88, "text": " Let's go on to the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=7GFKFng9gyM
4.16 Vectorization (optional) | Matrix multiplication rules-- [Machine Learning - Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
So, let's take a look at the general form of how you multiply two matrices together. And then in the last video after this one, we'll take this and apply it to the vectorized implementation of a neural network. Let's dive in. Here's a matrix A, which is a two by three matrix, because it has two rows and three columns. As before, I'd encourage you to think of the columns of this matrix as three vectors, A1, A2, and A3. And what we're going to do is take A transpose and multiply that with a matrix W. First, what is A transpose? Well, A transpose is obtained by taking the first column of A and laying it on the side like this, then taking the second column of A and laying it on the side like this, and then the third column of A and laying it on the side like that. And so these rows are now A1 transpose, A2 transpose, and A3 transpose. Next, here's a matrix W. I encourage you to think of W as vectors W1, W2, W3, and W4 stacked together like so. Let's look at how you then compute A transpose times W. Now, notice that I've also used slightly different shades of orange to denote the different columns of A, where the same shade corresponds to numbers that we think of as grouped together into a vector. And that same shade is used to indicate the different rows of A transpose, because the different rows of A transpose are A1 transpose, A2 transpose, and A3 transpose. In a similar way, I've used different shades to denote the different columns of W, because the numbers of the same shade of blue are the ones that are grouped together to form the vectors W1, W2, W3, or W4. Now, let's look at how you can compute A transpose times W. I'm going to draw vertical bars with the different shades of blue, and horizontal bars with the different shades of orange, to indicate which elements of Z, that is A transpose W, are influenced or affected by the different rows of A transpose, and which are influenced or affected by the different columns of W.
So, for example, let's look at the first column of W. That's W1, as indicated by the lightest shade of blue here. W1 will influence, or correspond to, this first column of Z, shown here with the same lightest shade of blue. And the values of the second column of W, that is W2, as indicated by the second lightest shade of blue, will affect the values computed in the second column of Z, and so on for the third and fourth columns. Correspondingly, let's look at A transpose. A1 transpose is the first row of A transpose, as indicated by the lightest shade of orange, and A1 transpose will affect, influence, or correspond to the values in the first row of Z. And A2 transpose will influence the second row of Z, and A3 transpose will influence or correspond to the third row of Z. So, let's figure out how to compute the matrix Z, which is going to be a 3 by 4 matrix, so we have 12 numbers altogether. Let's start off and figure out how to compute the number in the first row and the first column of Z, this upper-left-most element here. Because this is the first row and the first column, corresponding to the lightest shade of orange and the lightest shade of blue, the way you compute it is to grab the first row of A transpose and the first column of W and take their inner product, or dot product. And so this number is going to be 1, 2 dot product with 3, 4, which is 1 times 3 plus 2 times 4, which is equal to 11. Let's look at a second example. How would you compute this number, this element of Z? This is in the third row, row 1, row 2, row 3. So this is row 3, and the second column, column 1, column 2. So, to compute the number in row 3, column 2 of Z, you would now grab row 3 of A transpose and column 2 of W and dot product those together. Notice that this corresponds to the darkest shade of orange and the second lightest shade of blue. And to compute this, this is 0.1 times 5 plus 0.2 times 6, which is 0.5 plus 1.2, which is equal to 1.7.
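Picking out a single element of Z, as done above for row 3, column 2, amounts to dotting one row of A transpose with one column of W. A sketch in NumPy using the values stated in the lecture (the fourth column of W isn't given in the transcript, so only the first three columns appear here):

```python
import numpy as np

# Values read off the lecture's example; the fourth column of W
# isn't stated in the transcript, so it is omitted.
A = np.array([[1.0, -1.0, 0.1],
              [2.0, -2.0, 0.2]])   # 2x3: columns a1, a2, a3
W = np.array([[3.0, 5.0, 7.0],
              [4.0, 6.0, 8.0]])    # 2x3: columns w1, w2, w3

AT = A.T                           # 3x2: rows are a1^T, a2^T, a3^T

# Row 3, column 2 of Z (index [2, 1]): a3^T dot w2 = 0.1*5 + 0.2*6, about 1.7
z_32 = AT[2] @ W[:, 1]
# Row 2, column 3 of Z (index [1, 2]): a2^T dot w3 = -1*7 + -2*8 = -23
z_23 = AT[1] @ W[:, 2]
print(z_23)   # -23.0
```

Slicing a row of `AT` and a column of `W` and dotting them is exactly the "grab row i, grab column j" recipe from the lecture.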
So, to compute the number in row 3, column 2 of Z, you grab the third row, row 3, of A transpose and column 2 of W. Let's look at one more example, and let's see if you can figure this one out. This is row 2, column 3 of the matrix Z. Why don't you take a look and see if you can figure out which row and which column to grab and dot product together, and therefore what is the number that will go in this element of the matrix. Maybe you got that you should be grabbing row 2 of A transpose and column 3 of W, and when you dot product those together, you have A2 transpose W3 equals negative 1 times 7 plus negative 2 times 8, which is negative 7 plus negative 16, which is equal to negative 23. And so that's how you compute this element of the matrix Z. And it turns out that if you do this for every element of the matrix Z, then you can compute all of the numbers in this matrix, which turns out to look like that. Feel free to pause the video if you want, pick any element, and double check that the formula we've been going through gives you the right value for Z. I just want to point out one last interesting requirement for multiplying matrices together, which is that A transpose here is a 3 by 2 matrix, because it has 3 rows and 2 columns, and W here is a 2 by 4 matrix, because it has 2 rows and 4 columns. One requirement in order to multiply two matrices together is that this number must match that number, and that's because you can only take dot products between vectors that are the same length. So you can take these dot products because you can take the inner product between a vector of length 2 only with another vector of length 2. You can't take the inner product between a vector of length 2 and a vector of length 3, for example. And that's why matrix multiplication is valid only if the number of columns of the first matrix, that is A transpose here, is equal to the number of rows of the second matrix, that is the number of rows of W here.
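The element-by-element recipe, together with the requirement that the inner dimensions match, can be written out as a plain (and deliberately unvectorized) Python loop. This is just to make the definition concrete, not how you'd implement it in practice; as the lecture emphasizes, the vectorized matmul version is far faster:

```python
def matmul_by_hand(AT, W):
    """Multiply AT (m x n) by W (n x p), one dot product per entry."""
    m, n = len(AT), len(AT[0])
    n2, p = len(W), len(W[0])
    # You can only dot vectors of the same length, so the number of
    # columns of AT must equal the number of rows of W.
    assert n == n2, "inner dimensions must match"
    Z = [[0] * p for _ in range(m)]
    for i in range(m):        # row i of AT...
        for j in range(p):    # ...dotted with column j of W
            Z[i][j] = sum(AT[i][k] * W[k][j] for k in range(n))
    return Z

AT = [[1, 2], [-1, -2], [0.1, 0.2]]   # 3x2: A transpose from the example
W  = [[3, 5, 7], [4, 6, 8]]           # 2x3: the first three columns of W
Z  = matmul_by_hand(AT, W)            # 3x3
print(Z[0])   # [11, 17, 23]
```

Each entry of the result is one row-times-column dot product, exactly the procedure walked through on the slide.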
So that when you take dot products during this process, you're taking dot products of vectors of the same size. And then the other observation is that for the output Z equals A transpose W, the dimensions of Z are 3 by 4. And so the output of this multiplication will have the same number of rows as A transpose and the same number of columns as W. And so that too is a property of matrix multiplication. So that's matrix multiplication. All these videos are optional, so thank you for sticking with me through these. And if you're interested, later in this week there are also some purely optional quizzes to let you practice some more of these calculations yourself as well. So with that, let's take what we've learned about matrix multiplication and apply it back to the vectorized implementation of a neural network. I have to say, the first time I understood the vectorized implementation, I thought it was actually really cool. I had been implementing neural networks for a while myself without the vectorized implementation, and when I finally understood it and implemented it that way for the first time, it ran blazingly fast, much faster than anything I'd ever done before. And I thought, wow, I wish I had figured this out earlier. The vectorized implementation is a little bit complicated, but it makes neural networks run much faster. So let's take a look at that in the next video.
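The two shape rules just stated, that the inner dimensions must match and that the result takes its row count from the first matrix and its column count from the second, can be seen directly in NumPy:

```python
import numpy as np

AT = np.ones((3, 2))   # stands in for A transpose: 3 rows, 2 columns
W  = np.ones((2, 4))   # 2 rows, 4 columns

# Valid: inner dimensions match (2 == 2), and the result takes its
# row count from AT and its column count from W.
Z = AT @ W
print(Z.shape)   # (3, 4)

# Invalid: (3, 2) times (3, 4) would dot length-2 vectors with
# length-3 vectors, so NumPy refuses.
try:
    np.ones((3, 2)) @ np.ones((3, 4))
except ValueError:
    print("shapes do not match")
```

The same checks apply to the forward-prop code from earlier in the week: matmul raises a ValueError whenever the layer's input width and weight matrix rows disagree.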
[{"start": 0.0, "end": 8.6, "text": " So, let's take a look at the general form of how you multiply two matrices together."}, {"start": 8.6, "end": 13.92, "text": " And then in the last video after this one, we'll take this and apply it to the vectorized"}, {"start": 13.92, "end": 16.68, "text": " implementation of a neural network."}, {"start": 16.68, "end": 17.68, "text": " Let's dive in."}, {"start": 17.68, "end": 25.48, "text": " Here's a matrix A, which is a two by three matrix, because it has two rows and three"}, {"start": 25.48, "end": 26.48, "text": " columns."}, {"start": 26.48, "end": 33.6, "text": " As before, I'd encourage you to think of the columns of this matrix as three vectors, vectors"}, {"start": 33.6, "end": 37.6, "text": " A1, A2, and A3."}, {"start": 37.6, "end": 45.04, "text": " And what we're going to do is take A transpose and multiply that with a matrix W."}, {"start": 45.04, "end": 46.480000000000004, "text": " The first, what is A transpose?"}, {"start": 46.480000000000004, "end": 52.56, "text": " Well, A transpose is obtained by taking the first column of A and laying it on the side"}, {"start": 52.56, "end": 57.2, "text": " like this, and then taking the second column of A and laying it on the side like this,"}, {"start": 57.2, "end": 60.88, "text": " and then the third column of A and laying it on the side like that."}, {"start": 60.88, "end": 68.2, "text": " And so these rows are now A1 transpose, A2 transpose, and A3 transpose."}, {"start": 68.2, "end": 80.48, "text": " Next, here's a matrix W. I encourage you to think of W as vectors W1, W2, W3, and W4 stacked"}, {"start": 80.48, "end": 82.34, "text": " together as so."}, {"start": 82.34, "end": 88.36, "text": " Let's look at how you then compute A transpose times W. 
Now, notice that I've also used slightly"}, {"start": 88.36, "end": 97.08, "text": " different shades of orange to denote the different columns of A, where the same shade corresponds"}, {"start": 97.08, "end": 101.68, "text": " to numbers that we think of as grouped together into a vector."}, {"start": 101.68, "end": 107.0, "text": " And that same shade is used to indicate different rows of A transpose, because the different"}, {"start": 107.0, "end": 112.28, "text": " rows of A transpose are A1 transpose, A2 transpose, and A3 transpose."}, {"start": 112.28, "end": 118.4, "text": " And in a similar way, I've used different shades to denote the different columns of W,"}, {"start": 118.4, "end": 123.32, "text": " because the numbers of the same shade of blue are the ones that are grouped together to"}, {"start": 123.32, "end": 128.08, "text": " form the vectors W1, or W2, or W3, or W4."}, {"start": 128.08, "end": 136.54, "text": " Now, let's look at how you can compute A transpose times W. I'm going to draw vertical bars"}, {"start": 136.54, "end": 142.06, "text": " with the different shades of blue, and horizontal bars with the different shades of orange to"}, {"start": 142.06, "end": 152.56, "text": " indicate which elements of Z, that is A transpose W, are influenced or affected by the different"}, {"start": 152.56, "end": 159.32, "text": " rows of A transpose, and which are influenced or affected by the different columns of W."}, {"start": 159.32, "end": 165.68, "text": " So, for example, let's look at the first column of W. 
So, that's W1, as indicated by the"}, {"start": 165.68, "end": 168.72, "text": " lightest shade of blue here."}, {"start": 168.72, "end": 179.1, "text": " So, W1 will influence or will correspond to this first column of Z, shown here, by this"}, {"start": 179.1, "end": 181.16, "text": " lightest shade of blue."}, {"start": 181.16, "end": 188.52, "text": " And the values of this second column of W, that is W2, as indicated by this second lightest"}, {"start": 188.52, "end": 195.48000000000002, "text": " shade of blue, will affect the values computed in the second column of Z, and so on."}, {"start": 195.48, "end": 198.44, "text": " For the third and fourth columns."}, {"start": 198.44, "end": 203.51999999999998, "text": " Correspondingly, let's look at A transpose."}, {"start": 203.51999999999998, "end": 209.88, "text": " A1 transpose is the first row of A transpose, as indicated by the lightest shade of orange,"}, {"start": 209.88, "end": 216.23999999999998, "text": " and A1 transpose will affect or influence or correspond to the values in the first row"}, {"start": 216.23999999999998, "end": 224.14, "text": " of Z. And A2 transpose will influence the second row of Z, and A3 transpose will influence"}, {"start": 224.14, "end": 226.88, "text": " or correspond to this third row of Z."}, {"start": 226.88, "end": 234.79999999999998, "text": " So, let's figure out how to compute the matrix Z, which is going to be a 3 by 4 matrix."}, {"start": 234.79999999999998, "end": 237.51999999999998, "text": " So, we have 12 numbers altogether."}, {"start": 237.51999999999998, "end": 244.07999999999998, "text": " Let's start off and figure out how to compute the number in the first row in the first column"}, {"start": 244.07999999999998, "end": 247.48, "text": " of Z. 
So, this upper left most element here."}, {"start": 247.48, "end": 251.51999999999998, "text": " Because this is the first row in the first column corresponding to the lightest shade"}, {"start": 251.52, "end": 256.68, "text": " of orange and the lightest shade of blue, the way you compute that is to grab the first"}, {"start": 256.68, "end": 262.96000000000004, "text": " row of A transpose and the first column of W and take their inner product or the dot"}, {"start": 262.96000000000004, "end": 263.96000000000004, "text": " product."}, {"start": 263.96000000000004, "end": 272.64, "text": " And so, this number is going to be 1, 2 dot product with 3, 4, which is 1 times 3 plus"}, {"start": 272.64, "end": 277.0, "text": " 2 times 4, which is equal to 11."}, {"start": 277.0, "end": 278.84000000000003, "text": " Let's look at a second example."}, {"start": 278.84, "end": 284.35999999999996, "text": " How would you compute this number, this element of Z?"}, {"start": 284.35999999999996, "end": 288.0, "text": " So, this is in the third row, row 1, row 2, row 3."}, {"start": 288.0, "end": 292.2, "text": " So, this is row 3 and the second column, column 1, column 2."}, {"start": 292.2, "end": 303.28, "text": " So, to compute the number in row 3, column 2 of Z, you would now grab row 3 of A transpose"}, {"start": 303.28, "end": 307.76, "text": " and column 2 of W and dot product those together."}, {"start": 307.76, "end": 312.71999999999997, "text": " Notice that this corresponds to the darkest shade of orange and the second lightest shade"}, {"start": 312.71999999999997, "end": 314.2, "text": " of blue."}, {"start": 314.2, "end": 324.84, "text": " And to compute this, this is 0.1 times 5 plus 0.2 times 6, which is 0.5 plus 1.2, which"}, {"start": 324.84, "end": 326.52, "text": " is equal to 1.7."}, {"start": 326.52, "end": 333.15999999999997, "text": " So, to compute the number in row 3, column 2 of Z, you grab the third row, row 3 of A"}, {"start": 333.15999999999997, 
"end": 336.92, "text": " transpose and column 2 of W."}, {"start": 336.92, "end": 342.44, "text": " Let's look at one more example and let's see if you can figure this one out."}, {"start": 342.44, "end": 348.0, "text": " This is row 2, column 3 of the matrix Z."}, {"start": 348.0, "end": 354.44, "text": " Why don't you take a look and see if you can figure out which row and which column to grab"}, {"start": 354.44, "end": 360.56, "text": " to dot product together and therefore what is the number that will go in this element"}, {"start": 360.56, "end": 362.44, "text": " of this matrix."}, {"start": 362.44, "end": 370.4, "text": " Maybe you got that you should be grabbing row 2 of A transpose and column 3 of W and"}, {"start": 370.4, "end": 377.64, "text": " when you dot product that together, you have A2 transpose W3 is negative 1 times 7 plus"}, {"start": 377.64, "end": 384.72, "text": " negative 2 times 8, which is negative 7 plus negative 16, which is equal to negative 23."}, {"start": 384.72, "end": 388.6, "text": " And so that's how you compute this element of the matrix Z."}, {"start": 388.6, "end": 393.8, "text": " And it turns out if you do this for every element of the matrix Z, then you can compute"}, {"start": 393.8, "end": 399.04, "text": " all of the numbers in this matrix, which turns out to look like that."}, {"start": 399.04, "end": 403.0, "text": " Feel free to pause the video if you want and pick any element and double check that the"}, {"start": 403.0, "end": 408.68, "text": " formula we've been going through gives you the right value for Z."}, {"start": 408.68, "end": 416.72, "text": " I just want to point out one last interesting requirement for multiplying matrices together,"}, {"start": 416.72, "end": 423.72, "text": " which is that X transpose here is a 3 by 2 matrix because it has 3 rows and 2 columns"}, {"start": 423.72, "end": 430.28000000000003, "text": " and W here is a 2 by 4 matrix because it has 2 rows and 4 columns."}, {"start": 
430.28000000000003, "end": 437.44000000000005, "text": " One requirement in order to multiply 2 matrices together is that this number must match that"}, {"start": 437.44000000000005, "end": 443.92, "text": " number and that's because you can only take dot products between vectors that are the"}, {"start": 443.92, "end": 444.92, "text": " same length."}, {"start": 444.92, "end": 450.96000000000004, "text": " So you can take the dot product between a vector with 2 numbers and that's because you"}, {"start": 450.96000000000004, "end": 456.76, "text": " can take the inner product between a vector of length 2 only with another vector of length"}, {"start": 456.76, "end": 457.76, "text": " 2."}, {"start": 457.76, "end": 461.72, "text": " You can't take the inner product between a vector of length 2 with a vector of length"}, {"start": 461.72, "end": 463.72, "text": " 3, for example."}, {"start": 463.72, "end": 470.0, "text": " And that's why matrix multiplication is valid only if the number of columns of the first"}, {"start": 470.0, "end": 476.32, "text": " matrix, that is A transpose here, is equal to the number of rows of the second matrix,"}, {"start": 476.32, "end": 479.56, "text": " that is the number of rows of W here."}, {"start": 479.56, "end": 484.36, "text": " So that when you take dot products during this process, you're taking dot products of"}, {"start": 484.36, "end": 487.26, "text": " vectors of the same size."}, {"start": 487.26, "end": 495.08, "text": " And then the other observation is that the output Z equals A transpose W, the dimensions"}, {"start": 495.08, "end": 497.9, "text": " of Z is 3 by 4."}, {"start": 497.9, "end": 505.23999999999995, "text": " And so the output of this multiplication will have the same number of rows as X transpose"}, {"start": 505.23999999999995, "end": 513.52, "text": " and the same number of columns as W. 
And so that too is another property of matrix multiplication."}, {"start": 513.52, "end": 516.4, "text": " So that's matrix multiplication."}, {"start": 516.4, "end": 520.92, "text": " All these videos are optional, so thank you for sticking with me through these."}, {"start": 520.92, "end": 526.04, "text": " And if you're interested, later in this week, there are also some purely optional quizzes"}, {"start": 526.04, "end": 530.48, "text": " to let you practice some more of these calculations yourself as well."}, {"start": 530.48, "end": 535.0, "text": " So with that, let's take what we've learned about matrix multiplication and apply it back"}, {"start": 535.0, "end": 537.88, "text": " to the vectorized implementation of a neural network."}, {"start": 537.88, "end": 542.5999999999999, "text": " I have to say, the first time I understood the vectorized implementation, I thought it"}, {"start": 542.5999999999999, "end": 544.7199999999999, "text": " was actually really cool."}, {"start": 544.7199999999999, "end": 550.4399999999999, "text": " I've been implementing neural networks for a while myself without the vectorized implementation."}, {"start": 550.4399999999999, "end": 555.36, "text": " And when I finally understood the vectorized implementation and implemented it that way"}, {"start": 555.36, "end": 560.52, "text": " for the first time, it ran blazingly much faster than anything I've ever done before."}, {"start": 560.52, "end": 563.8000000000001, "text": " And I thought, wow, I wish I had figured this out earlier."}, {"start": 563.8000000000001, "end": 568.24, "text": " The vectorized implementation, it is a little bit complicated, but it makes neural networks"}, {"start": 568.24, "end": 570.16, "text": " run much faster."}, {"start": 570.16, "end": 586.52, "text": " So let's take a look at that in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=m7xcF9jXLpc
4.17 Vectorization (optional) | Matrix multiplication code-- [Machine Learning - Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
So, without further ado, let's jump into the vectorized implementation of a neural network. We'll look at the code that you have seen in an earlier video, and hopefully, MatMul, that is, that matrix multiplication calculation, will make more sense. Let's jump in. You saw previously how you can take the matrix A and compute A transpose times W, resulting in this matrix here, Z. In code, if this is the matrix A, a numpy array with the elements corresponding to what I wrote on top, then A transpose, which I'm going to write as AT, is going to be this matrix here, with, again, the columns of A now laid out in rows instead. And by the way, instead of setting up AT this way, another way to compute AT in numpy would be to write AT = A.T. That's the transpose function, which takes the columns of the matrix and lays them out as rows. In code, here's how you initialize the matrix W; this is another 2D numpy array. And then to compute Z equals A transpose times W, you would write Z = np.matmul(AT, W), and that will compute this matrix Z over here, giving you this result down here. And by the way, if you read other people's code, sometimes you'll see Z = AT @ W. The @ symbol is an alternative way of calling the MatMul function, although I find using np.matmul to be clearer, and so in the code you see in this class, we just use the MatMul function like this, rather than the @ symbol. So let's look at what a vectorized implementation of forward prop looks like. I'm going to set A transpose to be equal to the input feature values 200 and 17. So these are just the usual input feature values: 200 degrees, roasting coffee for 17 minutes. So this is a 1 by 2 matrix, and I'm going to take the parameters w1, w2, and w3 and stack them in columns like this to form this matrix capital W. And the values b1, b2, b3, I'm going to put into a 1 by 3 matrix, that is, this matrix B, as follows.
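The A.T and np.matmul calls described above can be sketched as follows. The entries of A are the ones from the matrix-multiplication example in the earlier segment, and the fourth column of W is a made-up placeholder:

```python
import numpy as np

A = np.array([[1.0, -1.0, 0.1],
              [2.0, -2.0, 0.2]])
AT = A.T   # the columns of A laid out as rows, same as typing the array by hand

W = np.array([[3.0, 5.0, 7.0, 9.0],
              [4.0, 6.0, 8.0, 10.0]])

Z1 = np.matmul(AT, W)   # the style used in this class
Z2 = AT @ W             # the @ operator: an equivalent way to call matmul
print(np.array_equal(Z1, Z2))   # True: both compute the same matrix product
```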
And it turns out that if you were to compute Z equals A transpose W plus B, that will result in these three numbers. And that's computed by taking the input feature values, multiplying them by the first column, and then adding b1 to get 165; taking these feature values, dot producting with the second column, that is, the weights w2, and adding b2 to get negative 531; and these feature values dot producted with the weights w3 plus b3 to get 900. Feel free to pause the video if you wish to double-check these calculations, but this gives you the values of z11, z12, and z13. And then finally, the function g applies the sigmoid function to these three numbers element-wise; that is, it applies the sigmoid function to 165, to negative 531, and to 900, so you end up with A equals g of this matrix Z, which is 1, 0, 1. And it's 1, 0, 1 because sigmoid of 165 is so close to 1 that, up to numerical round-off, it's basically 1, and these values are basically 0 and 1. Now let's look at how you implement this in code. A transpose is equal to this 1 by 2 array of 200 and 17. The matrix W is this 2 by 3 matrix, and B is this 1 by 3 matrix. And so the way you can implement forward prop in a layer is with a dense function whose inputs are A transpose, W, and B: Z equals matmul of A transpose and W, plus B. So that just implements this line of code. And then A out, that is, the output of this layer, is equal to g, the activation function, applied element-wise to this matrix Z. And you return A out, and that gives you this value. In case you're comparing this slide with the slides a few videos back, there is just one little difference, which is that by convention, in the way this is implemented in TensorFlow, rather than calling this variable XT, we call it just A, and rather than calling this variable AT, we call it A_in, which is why this too is a correct implementation of the code.
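The dense computation just described can be sketched in plain numpy. The parameter values W and b below are hypothetical placeholders (the transcript does not list the actual coffee-roasting weights), so the z values here will not be the 165, negative 531, and 900 from the lecture:

```python
import numpy as np

def g(z):
    """Sigmoid activation, applied element-wise."""
    return 1 / (1 + np.exp(-z))

def dense(AT, W, b):
    """Forward prop for one dense layer: Z = AT W + b, A_out = g(Z)."""
    Z = np.matmul(AT, W) + b
    return g(Z)

AT = np.array([[200.0, 17.0]])      # 1x2 input: temperature, duration
W = np.array([[0.1, -0.2, 0.3],     # hypothetical weights w1, w2, w3 in columns
              [1.0, 2.0, -3.0]])
b = np.array([[-1.0, 1.0, 2.0]])    # hypothetical biases b1, b2, b3

a_out = dense(AT, W, b)
print(a_out.shape)   # (1, 3): one sigmoid activation per unit
```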
And there is a convention in TensorFlow that individual examples are actually laid out in rows in the matrix X, rather than in the matrix X transpose, which is why the code implementation actually looks like this in TensorFlow. But this explains why, with just a few lines of code, you can implement forward prop in the neural network, and moreover get a huge speed bonus, because matrix multiplications such as matmul can be done very efficiently: modern computers and fast hardware are very good at them. That's the last video of this week. Thanks for sticking with me all the way through the end of these optional videos. For the rest of this week, I hope you also take a look at the quizzes and the practice labs, and also the optional labs, to exercise this material even more deeply. You now know how to do inference and forward prop in a neural network, which I think is really cool. So congratulations. After you have gone through the quizzes and the labs, please also come back, and in the next week we'll look at how to actually train a neural network. So I look forward to seeing you next week.
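The row convention mentioned above, one training example per row of X, is what lets a single matmul do forward prop for a whole batch of examples at once. A sketch with made-up inputs and hypothetical layer parameters:

```python
import numpy as np

def g(z):
    """Sigmoid activation, applied element-wise."""
    return 1 / (1 + np.exp(-z))

# TensorFlow convention: each row of X is one example.
X = np.array([[200.0, 17.0],    # example 1
              [120.0, 5.0],     # example 2 (made-up values)
              [425.0, 20.0]])   # example 3 (made-up values)
W = np.array([[0.01, -0.02],    # hypothetical 2x2 layer weights
              [0.10, 0.20]])
b = np.array([[-1.0, 1.0]])     # hypothetical biases, broadcast over the rows

A_out = g(np.matmul(X, W) + b)  # one matmul handles all three examples
print(A_out.shape)              # (3, 2): one row of activations per example
```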
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=jdjGdT_jR50
5.1 Neural Network Training | TensorFlow implementation --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
Welcome back to the second week of this course on advanced learning algorithms. Last week you learned how to carry out inference in a neural network. This week we're going to go over training of a neural network. I think being able to take your own data and train your own neural network on it is really fun. This week we'll look at how you could do that. Let's dive in. Let's continue with our running example of handwritten digit recognition, recognizing this image as a 0 or a 1. And here we're using the neural network architecture that you saw last week, where you have an input x that is the image, and then the first hidden layer with 25 units, the second hidden layer with 15 units, and then one output unit. If you are given a training set of examples comprising images x as well as the ground truth labels y, how would you train the parameters of this neural network? Let me go ahead and show you the code that you can use in TensorFlow to train this network. And then in the next few videos after this, we'll dive into details to explain what the code is actually doing. So this is the code you would write. This first part may look familiar from the previous week, where you are asking TensorFlow to sequentially string together these three layers of a neural network: the first hidden layer with 25 units and sigmoid activation, the second hidden layer, and then finally the output layer. So nothing new here relative to what you saw last week. The second step is that you have to ask TensorFlow to compile the model. And the key step in asking TensorFlow to compile the model is to specify what loss function you want to use. In this case, we'll use something that goes by the arcane name of sparse categorical cross-entropy. We'll say more in the next video about what this really is. And then, having specified the loss function, the third step is to call the fit function, which tells TensorFlow to fit the model that you specified in step one.
Using the loss or the cost function that you specified in step two, it fits the model to the data set X, y. And back in the first course, when we talked about gradient descent, we had to decide how many steps of gradient descent to run, or how long to run gradient descent. So epochs is a technical term for how many steps of a learning algorithm like gradient descent you may want to run. And that's it. Step one is to specify the model, which tells TensorFlow how to compute the output for inference. Step two compiles the model using a specific loss function. And step three is to train the model. So that's how you can train a neural network in TensorFlow. As usual, I hope that you'll be able to not just call these lines of code to train the model, but that you'll also understand what's actually going on behind these lines of code, so you don't just call it without really understanding what's going on. And I think this is important, because when you're running a learning algorithm, if it doesn't work initially, having that conceptual mental framework of what's really going on will help you debug whenever things don't work the way you expect. So with that, let's go on to the next video, where we'll dive more deeply into what these steps in the TensorFlow implementation are actually doing. I'll see you in the next video.
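The three steps above can be sketched as follows, assuming TensorFlow is installed. The data here is a tiny synthetic stand-in for the digit images, and the loss shown is binary cross-entropy, which the later video in this course uses for the 0/1 problem (the sparse categorical cross-entropy name mentioned above would be swapped in for a multiclass label):

```python
import numpy as np
import tensorflow as tf

# Tiny synthetic stand-in for the training set (real inputs would be image features).
X = np.random.rand(20, 4).astype(np.float32)
y = (np.random.rand(20) > 0.5).astype(np.float32)

# Step 1: specify the model (25, 15, 1 units, as in the text).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(25, activation='sigmoid'),
    tf.keras.layers.Dense(15, activation='sigmoid'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
# Step 2: compile the model with a loss function.
model.compile(loss=tf.keras.losses.BinaryCrossentropy())
# Step 3: fit the model; epochs = number of steps of the learning algorithm.
model.fit(X, y, epochs=5, verbose=0)
```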
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=jzaF4An03Oc
5.2 Neural Network Training | Training Details --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
Let's take a look at the details of what the TensorFlow code for training a neural network is actually doing. Let's dive in. Before looking at the details of training a neural network, let's recall how you trained a logistic regression model in the previous course. Step one of building a logistic regression model was to specify how to compute the output given the input feature x and the parameters w and b. In the first course, we said the logistic regression function predicts f of x equals g, the sigmoid function, applied to w dot product x plus b. So if z is the dot product of w and x, plus b, then f of x is 1 over 1 plus e to the negative z. That was the first step: to specify what is the input-to-output function of logistic regression, and that depends on both the input x and the parameters of the model. The second step we had to do to train the logistic regression model was to specify the loss function and also the cost function. So you may recall that the loss function said: if logistic regression outputs f of x and the ground truth label, the actual label in the training set, was y, then the loss on that single training example was negative y log f of x minus 1 minus y times log of 1 minus f of x. So this was a measure of how well logistic regression is doing on a single training example x, y. Given this definition of the loss function, we then defined the cost function. The cost function was a function of the parameters w and b, and it was just the average, that is, taking an average over all m training examples, of the loss function computed on the m training examples x1, y1 through xm, ym. And remember that in the convention we're using, the loss function is a function of the output of the learning algorithm and the ground truth label, as computed over a single training example, whereas the cost function J is an average of the loss function computed over your entire training set.
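The three quantities defined above, the sigmoid prediction f, the per-example loss, and the cost J, can be written out directly in numpy; the two-feature training set below is made up:

```python
import numpy as np

def f(x, w, b):
    """Logistic regression prediction: sigmoid of w . x + b."""
    z = np.dot(w, x) + b
    return 1 / (1 + np.exp(-z))

def loss(fx, y):
    """Loss on one example: -y log f(x) - (1 - y) log(1 - f(x))."""
    return -y * np.log(fx) - (1 - y) * np.log(1 - fx)

def cost(X, Y, w, b):
    """Cost J(w, b): average of the loss over all m training examples."""
    return np.mean([loss(f(x, w, b), y) for x, y in zip(X, Y)])

# Made-up training set with two features per example.
X = np.array([[1.0, 2.0], [2.0, 0.5], [-1.0, -1.0]])
Y = np.array([1, 1, 0])
print(cost(X, Y, np.array([0.5, -0.5]), 0.0))
```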
So that was step two of what we did when building up logistic regression. And then the third and final step to train the logistic regression model was to use an algorithm, specifically gradient descent, to minimize that cost function J of w, b as a function of the parameters w and b. And we minimized the cost J as a function of the parameters using gradient descent, where w is updated as w minus the learning rate alpha times the derivative of J with respect to w, and b, similarly, is updated as b minus the learning rate alpha times the derivative of J with respect to b. So with these three steps (step one, specifying how to compute the output given the input x and the parameters; step two, specifying the loss and cost; and step three, minimizing the cost function), we trained logistic regression. The same three steps are how we can train a neural network in TensorFlow. Now let's look at how these three steps map to training a neural network. We'll go over this in greater detail on the next three slides, but really briefly: step one, specifying how to compute the output given the input x and the parameters w and b, is done with this code snippet, which should be familiar from last week, specifying the neural network; and this was actually enough to specify the computations needed in forward propagation, or for the inference algorithm, for example. The second step is to compile the model and to tell it what loss you want to use. And here's the code that you use to specify this loss function, which is the binary cross-entropy loss function. And once you specify this loss, taking an average over the entire training set also gives you the cost function for the neural network. And then step three is to call a function to minimize the cost as a function of the parameters of the neural network. Let's look in greater detail at these three steps in the context of training a neural network.
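The gradient descent updates above can be sketched on the logistic cost. The gradient expressions below are the standard ones for this loss (dJ/dw averages (f(x) - y) times x over the examples), and the one-feature data set is made up:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Made-up binary classification data: one feature per example.
X = np.array([[0.5], [1.5], [-1.0], [-2.0]])
Y = np.array([1, 1, 0, 0])

w = np.zeros(1)
b = 0.0
alpha = 0.1                       # learning rate

for _ in range(200):              # number of gradient descent steps
    fx = sigmoid(X @ w + b)       # predictions on all m examples
    err = fx - Y                  # for this loss, the per-example error is f(x) - y
    dj_dw = X.T @ err / len(Y)    # derivative of J with respect to w
    dj_db = err.mean()            # derivative of J with respect to b
    w -= alpha * dj_dw            # w := w - alpha * dJ/dw
    b -= alpha * dj_db            # b := b - alpha * dJ/db

print(w[0] > 0)   # the learned weight separates the classes: positive x maps toward 1
```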
The first step is to specify how to compute the output given the input x and the parameters w and b. This code snippet specifies the entire architecture of the neural network: it tells you that there are 25 hidden units in the first hidden layer, then 15 in the next one, and then one output unit, and that we're using the sigmoid activation function. Based on this code snippet, we also know what the parameters are: w1, b1 of the first layer, the parameters of the second layer, and the parameters of the third layer. So this code snippet specifies the entire architecture of the neural network and therefore tells TensorFlow everything it needs in order to compute the output a3, or f of x, as a function of the input x and the parameters, which here we have written wl and bl. Let's go on to step two. In the second step, you have to specify what the loss function is, and that will also define the cost function we use to train the neural network. For the MNIST 0-versus-1 digit classification problem, which is a binary classification problem, the most common loss function by far is this one. It's actually the same loss function as we had for logistic regression: negative y log f of x minus one minus y times log of one minus f of x, where y is the ground truth label, sometimes also called the target label, and f of x is now the output of the neural network. In the terminology of TensorFlow, this loss function is called binary cross entropy, and the syntax is to ask TensorFlow to compile the neural network using this loss function. As a historical note, Keras was originally a library developed independently of TensorFlow; it was actually a totally separate project, but eventually it got merged into TensorFlow, which is why the loss is written as tf.keras.losses followed by the name of this loss function.
And by the way, I don't always remember the names of all the loss functions in TensorFlow; I just do a quick web search to find the right name and plug it into my code. Having specified the loss with respect to a single training example, TensorFlow knows that the cost you want to minimize is then the average, taken over all m training examples, of the loss on all of the training examples. And optimizing this cost function will result in fitting the neural network to your binary classification data. In case you want to solve a regression problem rather than a classification problem, you can also tell TensorFlow to compile your model using a different loss function. For example, if you have a regression problem and you want to minimize the squared error loss, then the loss, if your learning algorithm outputs f of x with a target or ground truth label of y, is one half of the squared error, and you can use this loss function in TensorFlow, which is the maybe more intuitively named mean squared error loss function; TensorFlow will then try to minimize the mean squared error. In this expression, I'm using J of capital W comma capital B to denote the cost function. The cost function is a function of all of the parameters in the neural network. So you can think of capital W as including W1, W2, and W3, all the W parameters in the entire neural network, and B as including B1, B2, and B3. If you are optimizing the cost function with respect to W and B, you'd be trying to optimize it with respect to all of the parameters in the neural network. Up on top, I had written f of x as the output of the neural network, but you can also write f of W, B of x if you want to emphasize that the output of the neural network depends on all the parameters in all the layers of the neural network. So that's the loss function and the cost function.
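As a quick numeric illustration of the regression case, here is the squared error loss and its average in plain Python, using this course's convention of a 1/2 factor in the per-example loss (note that TensorFlow's built-in MeanSquaredError does not include the 1/2 factor):

```python
def squared_error_loss(f, y):
    # loss on one example: (1/2) * (f(x) - y)^2
    return 0.5 * (f - y) ** 2

def mse_cost(preds, targets):
    # cost J: average of the per-example losses over the training set
    losses = [squared_error_loss(f, y) for f, y in zip(preds, targets)]
    return sum(losses) / len(losses)

# Example: predictions [1.0, 3.0] against targets [0.0, 1.0]
# give per-example losses 0.5 and 2.0, so the cost is 1.25.
```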
In TensorFlow, this is called the binary cross entropy loss function. Where does that name come from? It turns out that in statistics, this function is called the cross entropy loss function, so that's what cross entropy means. The word binary just re-emphasizes that this is a binary classification problem, because each image is either a zero or a one. Finally, you will ask TensorFlow to minimize the cost function. You might remember the gradient descent algorithm from the first course. If you are using gradient descent to train the parameters of a neural network, then you will repeatedly, for every layer l and for every unit j, update wlj as wlj minus the learning rate alpha times the partial derivative of the cost function J of W, B with respect to that parameter, and similarly for the parameters b as well. After doing, say, 100 iterations of gradient descent, hopefully you get to a good value of the parameters. In order to use gradient descent, the key thing you need to compute is these partial derivative terms. What TensorFlow does, and in fact what is standard in neural network training, is to use an algorithm called backpropagation to compute these partial derivative terms. TensorFlow can do all of this for you; it implements backpropagation within the function called fit. So all you have to do is call model.fit with x, y as your training set and tell it to do so for 100 iterations, or 100 epochs. In fact, what you'll see later is that TensorFlow can use an algorithm that is even a little bit faster than gradient descent; you'll see more about that later this week as well. Now, I know that we're relying heavily on the TensorFlow library to implement a neural network. One pattern I've seen across multiple ideas is that as the technology evolves, libraries become more mature, and most engineers will use libraries rather than implement code from scratch.
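To make the update rule concrete, here is a toy run of gradient descent on a single parameter. This is only an illustration of the repeated update w := w - alpha * dJ/dw, not what model.fit literally executes (fit also uses backpropagation to obtain the derivatives):

```python
def gradient_descent(dj_dw, w0, alpha=0.1, iterations=100):
    # repeatedly apply the update w := w - alpha * dJ/dw
    w = w0
    for _ in range(iterations):
        w = w - alpha * dj_dw(w)
    return w

# Minimize J(w) = (w - 3)^2, whose derivative is 2 * (w - 3).
w_final = gradient_descent(lambda w: 2.0 * (w - 3.0), w0=0.0)
# After 100 iterations, w_final sits very close to the minimum at w = 3.
```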
There have been many other examples of this in the history of computing. Many decades ago, programmers had to implement their own sorting function from scratch. But now sorting libraries are so mature that you'll probably call someone else's sorting function rather than implement it yourself, unless you're taking a computing class that asks you to do it as an exercise. And today, if you want to compute the square root of a number, like the square root of 7, well, once programmers had to write their own code to compute this, but now pretty much everyone just calls a library to take square roots, or to do matrix operations such as multiplying two matrices together. So when deep learning was younger and less mature, many developers, including me, were implementing things from scratch using Python or C++ or some other language. But today deep learning libraries have matured enough that most developers will use these libraries, and in fact most commercial implementations of neural networks today use a library like TensorFlow or PyTorch. But as I've mentioned, it's still useful to understand how they work under the hood, so that if something unexpected happens, which still does with today's libraries, you have a better chance of knowing how to fix it. Now that you know how to train a basic neural network, also called a multilayer perceptron, there are some things you can change about the neural network that will make it even more powerful. In the next video, let's take a look at how you can swap in different activation functions as an alternative to the sigmoid activation function we've been using. This will make your neural networks work much better. So let's go take a look at that in the next video.
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=TQ388ISSoI4
5.3 Activation Functions | Alternatives to the sigmoid activation --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
So far, we've been using the sigmoid activation function in all the nodes, in the hidden layers, and in the output layer. We started that way because we were building up neural networks by taking logistic regression and creating a lot of logistic regression units and stringing them together. But if you use other activation functions, your neural network can become much more powerful. Let's take a look at how to do that. Recall the demand prediction example from last week, where given price, shipping cost, marketing, and material, you would try to predict whether something is highly affordable, whether there's good awareness and high perceived quality, and based on that, try to predict whether it's a top seller. But this assumes that awareness is binary: either people are aware or they are not. It seems, though, that the degree to which possible buyers are aware of the t-shirt you're selling may not be binary. They can be a little bit aware, somewhat aware, extremely aware, or it could have gone completely viral. So rather than modeling awareness as a binary number, 0 or 1, or trying to estimate the probability of awareness as a number between 0 and 1, maybe awareness should be allowed to be any non-negative number, because awareness can take any non-negative value, from 0 up to very, very large numbers. So whereas previously we had used this equation to calculate the activation of that second hidden unit estimating awareness, where g was the sigmoid function and thus goes between 0 and 1, if you want to allow a_2^[1] to potentially take on much larger positive values, we can instead swap in a different activation function. It turns out that a very common choice of activation function in neural networks is this function. It looks like this: if z is less than 0, then g of z is 0 to the left, and to the right of 0 it's a straight line at 45 degrees.
So when z is greater than or equal to 0, g of z is just equal to z; that is the right half of this diagram. The mathematical equation for this is g of z equals max of 0, z. Feel free to verify for yourself that max of 0, z results in this curve that I've drawn here. And if a_2^[1] is g of z for this value of z, then a, the activation value, can now take on 0 or any non-negative value. This activation function has a name. It goes by the name ReLU, with this funny capitalization, and ReLU stands for, again, a somewhat arcane term, rectified linear unit. Don't worry too much about what rectified means and what linear unit means; this was just the name the authors gave to this particular activation function when they came up with it. But most people in deep learning just say ReLU to refer to this g of z. More generally, you have a choice of what to use for g of z, and sometimes we'll use a different choice than the sigmoid activation function. Here are the most commonly used activation functions. You saw the sigmoid activation function, g of z equals the sigmoid of z. On the last slide, we just looked at the ReLU, or rectified linear unit, g of z equals max of 0, z. There's one other activation function worth mentioning, called the linear activation function, which is just g of z equals z. Sometimes if you use the linear activation function, people will say we're not using any activation function, because if a is g of z, where g of z equals z, then a is just equal to w dot x plus b, say, and it's as if there was no g in there at all. So when you are using this linear activation function g of z, sometimes people will say, well, we're not using any activation function. In this class, though, I will refer to using the linear activation function rather than no activation function. But if you hear someone else use that terminology, that's what they mean.
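You can do that verification of max(0, z) directly in a couple of lines (a plain NumPy sketch):

```python
import numpy as np

def relu(z):
    # g(z) = max(0, z): zero for negative z, identity for non-negative z
    return np.maximum(0, z)

# Negative inputs map to 0; non-negative inputs pass through unchanged,
# so the output can be 0 or any non-negative value, however large.
outputs = relu(np.array([-2.0, -0.5, 0.0, 1.5, 100.0]))
```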
It just refers to the linear activation function. These three are probably by far the most commonly used activation functions in neural networks. Later this week, we'll touch on a fourth one, called the softmax activation function. With these activation functions, you'll be able to build a rich variety of powerful neural networks. So when building a neural network, for each neuron, do you want to use the sigmoid activation function, the ReLU activation function, or a linear activation function? How do you choose between these different activation functions? Let's take a look at that in the next video.
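Side by side, the three activation functions described here can be written as follows (a small NumPy sketch for comparison; the ranges in the comments are the point of the video):

```python
import numpy as np

def sigmoid(z):
    # output is always strictly between 0 and 1
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # output is 0 or any non-negative value
    return np.maximum(0, z)

def linear(z):
    # g(z) = z: output can be any real number ("no activation function")
    return z

z = np.array([-5.0, 0.0, 5.0])
# sigmoid(z) stays in (0, 1); relu(z) clips negatives to 0; linear(z) is z itself.
```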
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=orElYjWScBw
5.4 Activation Functions | Choosing activation functions--[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
Let's take a look at how you can choose the activation function for different neurons in your neural network. We'll start with some guidance for how to choose it for the output layer. It turns out that depending on what the target label or the ground truth label Y is, there will be one fairly natural choice for the activation function for the output layer. And we'll then go and look at the choice of the activation function also for the hidden layers of your neural network. Let's take a look. You can choose different activation functions for different neurons in your neural network. And when considering the activation function for the output layer, it turns out that there will often be one fairly natural choice depending on what is the target or the ground truth label Y. Specifically, if you are working on a classification problem where Y is either 0 or 1, so a binary classification problem, then the sigmoid activation function will almost always be the most natural choice because then the neural network learns to predict the probability that Y is equal to 1, just like we had for logistic regression. So my recommendation is if you're working on a binary classification problem, use sigmoid at the output layer. Alternatively, if you're solving a regression problem, then you might choose a different activation function. For example, if you're trying to predict how tomorrow's stock price will change compared to today's stock price, well, it can go up or down. And so in this case, Y would be a number that can be either positive or negative. And in that case, I would recommend you use the linear activation function. Why is that? Well, that's because then the output of your neural network, f of x, which is equal to a3 in the example above, would be g applied to z3. And with the linear activation function, g of z can take on either positive or negative values. So Y can be positive or negative, use the linear activation function. 
And finally, if Y can only take on non-negative values, such as if you're predicting the price of a house, that can never be negative. Then the most natural choice would be the ReLU activation function. Because as you see here, this activation function only takes on non-negative values, either zero or positive values. So when choosing the activation function to use for your output layer, usually depending on what is the label, why you're trying to predict, there'll be one fairly natural choice. And in fact, the guidance on this slide is how I pretty much always choose my activation function as well for the output layer of a neural network. How about the hidden layers of a neural network? It turns out that the ReLU activation function is by far the most common choice in how neural networks are trained by many, many practitioners today. Even though we had initially described neural networks using the sigmoid activation function, and in fact, in the early history of the development of neural networks, people use sigmoid activation functions in many places, the field has evolved to use ReLU much more often and sigmoids hardly ever. With the one exception that you do use a sigmoid activation function in the output layer if you have a binary classification problem. So why is that? Well, there are a few reasons. First, if you compare the ReLU and the sigmoid activation functions, the ReLU is a bit faster to compute because it just requires computing max of 0, z, whereas the sigmoid requires taking an exponentiation and an inverse and so on, and so it's a little bit less efficient. But the second reason, which turns out to be even more important, is that the ReLU function kind of goes flat only in one part of the graph, here on the left, it's completely flat. Whereas the sigmoid activation function, it kind of goes flat in two places. It goes flat to the left of the graph and it goes flat to the right of the graph. 
And if you're using gradient descent to train a neural network, then when you have a function that is flat in a lot of places, gradient descent will be really slow. I know that gradient descent optimizes the cost function J of W, B rather than the activation function, but the activation function is a piece of what goes into computing the cost function, and that results in more places in the cost function J of W, B that are flat as well, with a smaller gradient, and it slows down learning. I know that that was just an intuitive explanation, but researchers have found that using the ReLU activation function can cause your neural network to learn a bit faster as well, which is why for most practitioners, if you're trying to decide what activation function to use for the hidden layer, the ReLU activation function has become now by far the most common choice. In fact, if I'm building a neural network, this is how I choose activation functions for the hidden layers as well. So to summarize, here's what I recommend in terms of how you choose the activation functions for your neural network. For the output layer, use a sigmoid if you have a binary classification problem, linear if Y is a number that can take on positive or negative values, or use ReLU if Y can take on only non-negative values, that is, zero or positive values. Then for the hidden layers, I would recommend just using ReLU as a default activation function. And in TensorFlow, this is how you would implement it. Rather than saying activation equals sigmoid as we had previously, you can, for the hidden layers — that's the first hidden layer and the second hidden layer — ask TensorFlow to use the ReLU activation function. And then for the output layer, in this example, I've asked it to use the sigmoid activation function. But if you wanted to use the linear activation function instead, that's the syntax for it. Or if you wanted to use the ReLU activation function, that shows the syntax for it. 
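The flat-region argument can be checked numerically: the sigmoid's derivative, g'(z) = g(z)(1 − g(z)), shrinks toward 0 for large |z| (flat on both sides), while the ReLU's derivative stays at 1 for every positive z and is flat only on the left. A small sketch, with arbitrary sample values of z:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    # derivative of the sigmoid: g(z) * (1 - g(z)); near 0 for large |z|
    s = sigmoid(z)
    return s * (1.0 - s)

def relu_grad(z):
    # derivative of max(0, z): 0 on the flat left half, 1 on the right half
    return np.where(z > 0, 1.0, 0.0)

for z in [-10.0, 0.0, 10.0]:
    print(z, sigmoid_grad(z), relu_grad(np.array(z)))
```

At z = ±10 the sigmoid's gradient is on the order of 5e-5 — both tails are flat — whereas the ReLU still contributes a gradient of 1 on the positive side, matching the intuition about faster learning.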
With this richer set of activation functions, you'd be well positioned to build much more powerful neural networks than ones using only the sigmoid activation function. By the way, if you look at the research literature, you sometimes hear of authors using even other activation functions, such as the tanh activation function, the leaky ReLU activation function, or the swish activation function. Every few years, researchers sometimes come up with another interesting activation function, and sometimes they do work a little bit better. For example, I've used the leaky ReLU activation function a few times in my work, and sometimes it works a little bit better than the ReLU activation function you learned about in this video. But I think for the most part, and for the vast majority of applications, what you learned about in this video will be good enough. Of course, if you want to learn more about other activation functions, feel free to look on the internet, and there are just a small handful of cases where these other activation functions could be even more powerful as well. With that, I hope you also enjoy practicing these ideas, these activation functions, in the optional labs and in the practice labs. But this raises yet another question. Why do we even need activation functions at all? Why don't we just use the linear activation function, or use no activation function anywhere? It turns out this does not work at all. And in the next video, let's take a look at why that's the case, and why activation functions are so important for getting your neural networks to work.
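The TensorFlow pattern described above might look like the following sketch. The layer sizes (25 and 15 units) are assumptions for illustration, not taken from the transcript; the `activation=` strings are the standard Keras syntax:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# ReLU in the hidden layers, sigmoid at the output for binary classification
model = Sequential([
    Dense(units=25, activation='relu'),     # first hidden layer
    Dense(units=15, activation='relu'),     # second hidden layer
    Dense(units=1, activation='sigmoid'),   # output layer: binary classification
])

# For a regression target that can be positive or negative, the output layer
# would instead be Dense(units=1, activation='linear'); for a non-negative
# target such as a house price, Dense(units=1, activation='relu').
```

Swapping only the output layer's `activation` string is all it takes to follow the output-layer guidance from the previous slide.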
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=fK5YzGIc2u8
5.5 Activation Functions | Why do we need activation functions? --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms.
Let's take a look at why neural networks need activation functions and why they just don't work if we were to use the linear activation function in every neuron in the neural network. Recall this demand prediction example. What would happen if we were to use a linear activation function for all of the nodes in this neural network? It turns out that this big neural network will become no different than just linear regression. And so this would defeat the entire purpose of using a neural network, because it would then just not be able to fit anything more complex than the linear regression model that we learned about in the first course. Let's illustrate this with a simpler example. Let's look at the example of a neural network where the input x is just a number, and we have one hidden unit with parameters w1 and b1 that outputs a1, which is here just a number. And then the second layer is the output layer, and it also has just one output unit with parameters w2 and b2, and that outputs a2, which is also just a number, just a scalar, which is the output of the neural network f of x. Let's see what this neural network would do if we were to use the linear activation function g of z equals z everywhere. So to compute a1 as a function of x, the neural network would use a1 equals g of w1 times x plus b1, but g of z is equal to z, so this is just w1 times x plus b1. Then a2 is equal to w2 times a1 plus b2, because g of z equals z. And let me take this expression for a1 and substitute it in there. So that becomes w2 times w1 x plus b1, plus b2. And if we simplify, this becomes w2 w1 times x, plus w2 b1 plus b2. And it turns out that if I were to set w equals w2 times w1 and set b equals this quantity over here, then what we've just shown is that a2 is equal to wx plus b. So a2 is just a linear function of the input x. And rather than using a neural network with one hidden layer and one output layer, we might as well have just used a linear regression model. 
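The algebra above — that a linear layer feeding a linear layer collapses to a single linear function with w = w2·w1 and b = w2·b1 + b2 — can be verified numerically. The parameter values here are arbitrary examples:

```python
import numpy as np

# one hidden unit, one output unit, linear activation g(z) = z everywhere
w1, b1 = 2.0, -1.0   # hidden-layer parameters (arbitrary example values)
w2, b2 = 3.0, 0.5    # output-layer parameters (arbitrary example values)

def two_layer_linear(x):
    a1 = w1 * x + b1      # g(z) = z, so a1 is just z1
    a2 = w2 * a1 + b2     # again g(z) = z
    return a2

# the equivalent single linear model: w = w2 * w1, b = w2 * b1 + b2
w = w2 * w1
b = w2 * b1 + b2

# the two-layer network and the one-line model agree at every input
for x in np.linspace(-5.0, 5.0, 11):
    assert np.isclose(two_layer_linear(x), w * x + b)

print(w, b)  # 6.0 -2.5
```

No matter how the four parameters are chosen, the composed model is always some wx + b, which is exactly why stacking linear layers buys nothing over plain linear regression.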
If you're familiar with linear algebra, this result comes from the fact that a linear function of a linear function is itself a linear function. And this is why having multiple layers in a neural network doesn't let the neural network compute any more complex features or learn anything more complex than just a linear function. So in the general case, if you had a neural network with multiple layers like this, and say you were to use a linear activation function for all the hidden layers and also use a linear activation function for the output layer, then it turns out this model will compute an output that is completely equivalent to linear regression. The output a4 can be expressed as a linear function of the input features x, plus b. Or alternatively, if we were to still use a linear activation function for all the hidden layers, for these three hidden layers here, but we were to use a logistic activation function for the output layer, then it turns out you can show that this model becomes equivalent to logistic regression. And a4 in this case can be expressed as 1 over 1 plus e to the negative wx plus b, for some values of w and b. And so this big neural network doesn't do anything that you can't also do with logistic regression. That's why a common rule of thumb is: don't use the linear activation function in the hidden layers of your neural network. In fact, I recommend typically using the ReLU activation function, which should do just fine. So that's why a neural network needs activation functions other than just the linear activation function everywhere. So far, you've learned to build neural networks for binary classification problems, where y is either 0 or 1, as well as for regression problems, where y can take negative or positive values, or maybe just positive and non-negative values. In the next video, I'd like to share with you a generalization of what you've seen so far for classification. 
In particular, when y doesn't just take on two values, but may take on three or four or 10 or even more categorical values. Let's take a look at how you can build a neural network for that type of classification problem.
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=hfskCwks_X8
5.6 Multiclass Classification | Multiclass --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
Multi-class classification refers to classification problems where you can have more than just two possible output labels, so not just zero or one. Let's take a look at what that means. For the handwritten digit classification problems we've looked at so far, we were just trying to distinguish between the handwritten digits zero and one. But if you're trying to read postal codes or zip codes on an envelope, well, there are actually 10 possible digits you might want to recognize. Or alternatively, in the first course you saw the example of trying to classify whether a patient may have any of three or five different possible diseases; that too would be a multi-class classification problem. One thing I've worked on a lot is visual defect inspection of parts manufactured in the factory, where you might look at a picture of a pill that a pharmaceutical company has manufactured and try to figure out: does it have a scratch defect, or a discoloration defect, or a chip defect? And this would again be multiple classes of multiple different types of defects that you could classify this pill as having. So a multi-class classification problem is still a classification problem in that Y can take on only a small number of discrete categories, it's not any number, but now Y can take on more than just two possible values. So whereas previously for binary classification you may have had a data set like this with features X1 and X2, in which case logistic regression would fit a model to estimate what's the probability of Y being 1 given the features X, because Y was either 0 or 1. With multi-class classification problems you would instead have a data set that maybe looks like this, where we have four classes: the O's represent one class, the X's represent another class, the triangles represent the third class, and the squares represent the fourth class.
And instead of just estimating the chance of Y being equal to 1, we'll now want to estimate what's the chance of Y being equal to 1, or what's the chance of Y being equal to 2, or what's the chance of Y being equal to 3, or the chance of Y being equal to 4. And it turns out that the algorithm you learn about in the next video can learn a decision boundary that maybe looks like this, that divides the space X1 and X2 into four categories rather than just two categories. So that's the definition of the multi-class classification problem. In the next video we'll look at the softmax regression algorithm, which is a generalization of the logistic regression algorithm, and using that you'll be able to carry out multi-class classification problems. And after that we'll take softmax regression and fit it into a new neural network so that you'll also be able to train a neural network to carry out multi-class classification problems. Let's go on to the next video.
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=pzxxgEZkdLM
5.7 Multiclass Classification | Softmax --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
The softmax regression algorithm is a generalization of logistic regression, which is a binary classification algorithm, to the multi-class classification context. Let's take a look at how it works. Recall that logistic regression applies when y can take on two possible output values, either 0 or 1. And the way it computes its output is, you would first calculate z equals w dot product with x plus b, and then you would compute what I'm going to call here a equals g of z, which is the sigmoid function applied to z. And we interpreted this as logistic regression's estimate of the probability of y being equal to 1, given those input features x. Now, quick quiz question. If the probability of y equals 1 is 0.71, then what is the probability that y is equal to 0? Well, the chance of y being 1 and the chance of y being 0 have got to add up to 1, right? So if there's a 71% chance of it being 1, there has to be a 29%, or a 0.29, chance of it being equal to 0. So to embellish logistic regression a little bit in order to set us up for the generalization to softmax regression, I'm going to think of logistic regression as actually computing two numbers. First, a1, which is this quantity that we had previously of the chance of y being equal to 1, given x. And second, I'm going to think of logistic regression as also computing a2, which is 1 minus this, which is just the chance of y being equal to 0, given the input features x. And so a1 and a2, of course, have to add up to 1. Let's now generalize this to softmax regression. And I'm going to do this with a concrete example of when y can take on four possible outputs. So y can take on the values 1, 2, 3, or 4. Here's what softmax regression will do. It will compute z1 as w1 dot product with x plus b1, and then z2 equals w2 dot product with x plus b2, and so on for z3 and z4. Here w1, w2, w3, w4, as well as b1, b2, b3, b4, are the parameters of softmax regression. Next, here's the formula for softmax regression.
We'll compute a1 equals e to the z1 divided by e to the z1 plus e to the z2 plus e to the z3 plus e to the z4. And a1 will be interpreted as the algorithm's estimate of the chance of y being equal to 1, given the input features x. Then the formula for softmax regression will compute a2 equals e to the z2 divided by the same denominator, e to the z1 plus e to the z2 plus e to the z3 plus e to the z4, and we'll interpret a2 as the algorithm's estimate of the chance that y is equal to 2, given the input features x. And similarly for a3, where here the numerator is now e to the z3 divided by the same denominator; that's the estimated chance of y being equal to 3. And similarly, a4 takes on this expression. Whereas on the left we wrote down the specification for the logistic regression model, these equations on the right are our specification for the softmax regression model. It has parameters w1 through w4 and b1 through b4, and if you can learn appropriate choices for all these parameters, then this gives you a way of predicting what's the chance of y being 1, 2, 3 or 4, given a set of input features x. Quick quiz: let's say you run softmax regression on a new input x, and you find that a1 is 0.30, a2 is 0.20, a3 is 0.15. What do you think a4 will be? Why don't you take a look at this quiz and see if you can figure out the right answer. So you might have realized that because the chances of y taking on the values 1, 2, 3 or 4 have to add up to 1, a4, the chance of y being equal to 4, has to be 0.35, which is 1 minus 0.3 minus 0.2 minus 0.15. So here I wrote down the formulas for softmax regression in the case of 4 possible outputs, and let's now write down the formula for the general case for softmax regression. In the general case, y can take on n possible values, so y can be 1, 2, 3 and so on up to n.
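These four-class formulas can be sketched numerically. The parameter values below are invented for illustration: the rows of W are w1 through w4, b holds b1 through b4, and each aj is e to the zj divided by the sum of all four exponentials, so the four a's are non-negative and add up to 1.

```python
import numpy as np

def softmax(z):
    """a_j = e^{z_j} / (e^{z_1} + ... + e^{z_n})."""
    e = np.exp(z - np.max(z))   # subtracting the max is a standard stability trick
    return e / e.sum()

# Illustrative parameters for 4 classes on 2 input features.
W = np.array([[ 0.5, -1.0],
              [ 1.2,  0.3],
              [-0.7,  0.8],
              [ 0.1,  0.1]])
b = np.array([0.0, -0.5, 0.2, 0.1])
x = np.array([1.0, 2.0])

z = W @ x + b          # z_j = w_j . x + b_j, for j = 1..4
a = softmax(z)         # the model's estimates of P(y = j | x)
assert np.isclose(a.sum(), 1.0)
```

Because the a's always sum to 1, knowing any three of them pins down the fourth, which is exactly what the quiz above relies on.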
In that case, softmax regression will compute zj equals wj dot product with x plus bj, where now the parameters of softmax regression are w1, w2 through wn, as well as b1, b2 through bn. And then finally it will compute aj equals e to the zj divided by the sum from k equals 1 to n of e to the z sub k. Here I'm using another variable k to index the summation, because j refers to a specific fixed number, like j equals 1. aj is interpreted as the model's estimate that y is equal to j, given the input features x. And notice that by construction of this formula, if you add up a1, a2 all the way through an, these numbers will always end up adding up to 1. So we've specified how you would compute the softmax regression model. And I won't prove it in this video, but it turns out that if you apply softmax regression with n equals 2, so there are only two possible output classes, then softmax regression ends up computing basically the same thing as logistic regression. The parameters end up being a little bit different, but it ends up reducing to a logistic regression model. And that's why the softmax regression model is a generalization of logistic regression. Having defined how softmax regression computes its outputs, let's now take a look at how to specify the cost function for softmax regression. Recall for logistic regression, this is what we had. We said z is equal to this, and then I wrote earlier that a1 is g of z, which was interpreted as the probability that y is equal to 1. And we also wrote a2 as the probability that y is equal to 0. So previously we had written the loss of logistic regression as negative y log a1 minus 1 minus y log 1 minus a1. But 1 minus a1 is also equal to just a2, because a2 is 1 minus a1 according to this expression over here. So I can rewrite, or simplify, the loss for logistic regression a little bit to be negative y log a1 minus 1 minus y log of a2.
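The n equals 2 reduction mentioned above can be verified directly: with two classes, a1 equals e^{z1}/(e^{z1}+e^{z2}) = 1/(1+e^{-(z1-z2)}), which is a sigmoid applied to z1 minus z2, i.e. logistic regression with w equal to w1 minus w2 and b equal to b1 minus b2. The parameter values in this sketch are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Illustrative two-class softmax parameters.
w1, b1 = np.array([ 0.4, -0.2]), 0.1
w2, b2 = np.array([-0.3,  0.5]), -0.2
x = np.array([1.5, 2.0])

z = np.array([w1 @ x + b1, w2 @ x + b2])
a = softmax(z)

# Two-class softmax collapses to logistic regression on (w1 - w2, b1 - b2):
assert np.isclose(a[0], sigmoid(z[0] - z[1]))
assert np.isclose(a[0] + a[1], 1.0)
```

Only the difference z1 minus z2 matters, which is why the two-class parameters "end up being a little bit different" from logistic regression's while computing the same probabilities.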
And in other words, the loss if y is equal to 1 is negative log a1, and if y is equal to 0, then the loss is negative log a2. And then, same as before, the cost function for all the parameters in the model is the average loss, averaged over the entire training set. So that was the cost function for logistic regression. Let's write down the cost function that is conventionally used for softmax regression. Recall that these are the equations we use for softmax regression. The loss we're going to use for softmax regression is just this: if the algorithm outputs a1 through an and the ground truth label is y, then if y equals 1, the loss is negative log a1, so it's negative log of the probability that it thought y was equal to 1. Or if y is equal to 2, then the loss I'm going to define as negative log a2. So if y is equal to 2, the loss of the algorithm on this example is negative log of the probability it thought y was equal to 2. And so on, all the way down to: if y is equal to n, then the loss is negative log of an. And to illustrate what this is doing, if y is equal to j, then the loss is negative log of aj, and that's what this function looks like. Negative log of aj is a curve that looks like this. And so if aj was very close to 1, then you'd be on this part of the curve and the loss would be very small. But if it thought, say, aj had only a 50% chance, then the loss gets a little bit bigger. And the smaller aj is, the bigger the loss. And so this incentivizes the algorithm to make aj as large as possible, as close to 1 as possible, because whatever the actual value of y was, you want the algorithm to say, hopefully, that the chance of y being that value was pretty large. Notice that in this loss function, y in each training example can take on only one value. And so you end up computing this negative log of aj only for one value of aj, which is whatever was the actual value of y equals j in that particular training example.
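A small sketch of this loss (often called cross-entropy; the z values here are invented for the example): only the probability assigned to the actual label enters the loss, and the more probability the model puts on the right class, the smaller negative log aj is.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # max-subtraction for numerical stability
    return e / e.sum()

def loss(a, y):
    """Softmax cross-entropy: -log(a_y), with y a 1-indexed class label."""
    return -np.log(a[y - 1])

a = softmax(np.array([2.0, 1.0, 0.1, -1.0]))   # so a1 > a2 > a3 > a4
# Only the true class's probability matters; higher probability, lower loss:
assert loss(a, 1) < loss(a, 2) < loss(a, 3) < loss(a, 4)
```

This matches the curve described above: as aj approaches 1 the loss approaches 0, and as aj shrinks toward 0 the loss blows up.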
For example, if y was equal to 2, you end up computing negative log of a2, but not any of the other negative log of a1 or the other terms here. So that's the form of the model, as well as the cost function for softmax regression. And if you were to train this model, you can start to build multi-class classification algorithms. And what we'd like to do next is take this softmax regression model and fit it into a neural network so that you're going to do something even better, which is to train a neural network for multi-class classification. Let's go through that in the next video.
multi-class classification"}, {"start": 675.36, "end": 677.24, "text": " algorithms."}, {"start": 677.24, "end": 682.16, "text": " And what we'd like to do next is take this softmax regression model and fit it into a"}, {"start": 682.16, "end": 687.12, "text": " neural network so that you're going to do something even better, which is to train a"}, {"start": 687.12, "end": 690.48, "text": " neural network for multi-class classification."}, {"start": 690.48, "end": 701.16, "text": " Let's go through that in the next video."}]
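The softmax regression formulas spelled out in this segment (zj = wj dot x + bj, aj = e to the zj over the sum of e to the zk, and the loss negative log of a_y for the true class y) can be sketched in plain Python. The parameter values below are made-up toy numbers, not anything from the course:

```python
import math

def softmax_regression(x, W, b):
    # z_j = w_j . x + b_j for each of the n classes
    z = [sum(w_ji * x_i for w_ji, x_i in zip(w_j, x)) + b_j
         for w_j, b_j in zip(W, b)]
    # a_j = e^{z_j} / sum_k e^{z_k}; by construction the a_j add up to 1
    exp_z = [math.exp(z_j) for z_j in z]
    total = sum(exp_z)
    return [e / total for e in exp_z]

def loss(a, y):
    # The loss is -log a_y: only the entry for the true class contributes.
    return -math.log(a[y])

# Toy example: n = 4 classes, 2 input features (made-up parameters).
W = [[1.0, -0.5], [0.3, 0.8], [-1.2, 0.1], [0.4, 0.4]]
b = [0.1, -0.2, 0.0, 0.3]
x = [0.5, 1.5]

a = softmax_regression(x, W, b)
print(a, sum(a))    # four probabilities that always add up to 1
print(loss(a, 2))   # -log of the estimated chance of the class at index 2
```

With n = 2 classes this reduces to logistic regression up to a reparameterization of the weights, which is the sense in which softmax regression generalizes it.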
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=24QO9iNXvWs
5.8 Multiclass Classification | Neural Network with Softmax output --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
In order to build a neural network that can carry out multi-class classification, we're going to take the Softmax regression model and put it into essentially the output layer of a neural network. Let's take a look at how to do that. Previously, when we were doing handwritten digit recognition with just two classes, we used a neural network with this architecture. If you now want to do handwritten digit classification with 10 classes, all the digits from 0 to 9, then we're going to change this neural network to have 10 output units, like so. And this new output layer will be a Softmax output layer. So sometimes we'll say this neural network has a Softmax output or that this output layer is a Softmax layer. And the way forward propagation works in this neural network is, given an input x, a1 gets computed exactly the same as before, and then a2, the activations for the second hidden layer, also get computed exactly the same as before. And we now have to compute the activations for this output layer. That is a3. This is how it works. If you have 10 output classes, we will compute z1, z2, through z10 using these expressions. So this is actually very similar to what we had previously for the formula you used to compute z. z1 is w1 dot product with a2, the activations from the previous layer, plus b1, and so on, for z1 through z10. Then a1 is equal to e to the z1 divided by e to the z1 plus dot dot dot plus up to e to the z10. And that's our estimate of the chance that y is equal to 1. And similarly for a2, and similarly all the way up to a10. And so this gives you your estimates of the chance of y being equal to 1, 2, and so on up through the 10th possible label for y. And just for completeness, if you want to indicate that these are the quantities associated with layer 3, technically I should add these superscript 3s there.
It does make the notation a little bit more cluttered, but this makes explicit that this is, for example, the z1 value for layer 3, and these are the parameters associated with the first unit of layer 3 of this neural network. And with this, your softmax output layer now gives you estimates of the chance of y being any of these 10 possible output labels. I do want to mention that the softmax layer, or sometimes also called the softmax activation function, is a little bit unusual in one respect compared to the other activation functions we've seen so far like sigmoid, ReLU, and linear, which is that when we're looking at sigmoid or ReLU or linear activation functions, a1 was a function of z1 and a2 was a function of z2 and only z2. In other words, to obtain the activation values, we could apply the activation function g, be it sigmoid or ReLU or something else, element-wise to z1 and z2 and so on to get each of a1, a2, a3, and a4. But with the softmax activation function, notice that a1 is a function of z1 and z2 and z3 all the way up to z10. So each of these activation values depends on all of the values of z. And this is a property that's a bit unique to the softmax output or the softmax activation function. To state it differently: if you want to compute a1 through a10, that is a function of z1 all the way up to z10 simultaneously. And this is unlike the other activation functions we've seen so far. Finally, let's look at how you would implement this in TensorFlow. If you want to implement the neural network that I've shown here on this slide, this is the code to do so. Similar as before, there are three steps to specifying and training the model. The first step is to tell TensorFlow to sequentially string together three layers.
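The point that each softmax activation depends on every z, while sigmoid or ReLU act element-wise, is easy to check numerically. A small sketch with made-up values:

```python
import numpy as np

def softmax(z):
    # a_j = e^{z_j} / sum_k e^{z_k}: each a_j depends on the whole vector z.
    exp_z = np.exp(z - z.max())   # shifting by max(z) leaves the result unchanged
    return exp_z / exp_z.sum()

def relu(z):
    # Element-wise: each output depends only on its own z_j.
    return np.maximum(z, 0.0)

z = np.array([1.0, 2.0, 3.0])
z_bumped = np.array([2.0, 2.0, 3.0])   # only z1 was changed

print(relu(z), relu(z_bumped))         # ReLU: only the first entry differs
print(softmax(z), softmax(z_bumped))   # softmax: every entry differs
```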
The first layer is 25 units with the ReLU activation function, the second layer, 15 units with the ReLU activation function, and then the third layer, because there are now 10 output units, you want to output a1 through a10, so there are now 10 output units, and we'll tell TensorFlow to use the softmax activation function. And the cost function that you saw in the last video, TensorFlow calls that the sparse categorical cross entropy function. So I know this name is a bit of a mouthful, whereas for logistic regression, we had the binary cross entropy function. Here we're using the sparse categorical cross entropy function, and what sparse categorical refers to is that you still classify y into categories, so it's categorical, it takes on values from 1 to 10, and sparse refers to the fact that y can only take on one of these 10 values, so each image is either 0 or 1 or 2 or so on up to 9, you're not going to see a picture that is simultaneously the number 2 and the number 7, so sparse refers to the fact that each digit is in only one of these categories. So that's why the loss function that you saw in the last video is called, in TensorFlow, the sparse categorical cross entropy loss function. And then the code for training the model is just the same as before, and if you use this code, you can train a neural network on a multi-class classification problem. Just one important note, if you use this code exactly as I've written here, it will work, but don't actually use this code, because it turns out that in TensorFlow, there's a better version of the code that makes TensorFlow work better. So even though the code shown in this slide works, don't use this code the way I've written it here, because in a later video this week, you'll see a different version, which is actually the recommended way of implementing this and will work better, but we'll take a look at that in a later video.
There's a different version of the code that will make TensorFlow able to compute these probabilities much more accurately. Let's take a look at that in the next video, which will also show you the actual code that I recommend you use if you're training a softmax neural network. Let's go on to the next video.
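As a framework-free illustration of the forward pass described in these videos (25 ReLU units, then 15 ReLU units, then a 10-unit softmax output), here is a NumPy sketch. The weights are random placeholders, and the 400-feature input size is an assumption based on the earlier digit example; in Keras this architecture corresponds roughly to Sequential with Dense(25, 'relu'), Dense(15, 'relu'), Dense(10, 'softmax'):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    exp_z = np.exp(z - z.max())
    return exp_z / exp_z.sum()

# Layer sizes from the lecture: 25 -> 15 -> 10; the 400-feature input
# (a 20x20 digit image) is an assumption, not stated in this clip.
n_in = 400
W1, b1 = rng.normal(size=(25, n_in)) * 0.01, np.zeros(25)
W2, b2 = rng.normal(size=(15, 25)) * 0.01, np.zeros(15)
W3, b3 = rng.normal(size=(10, 15)) * 0.01, np.zeros(10)

x = rng.normal(size=n_in)      # stand-in for one input image
a1 = relu(W1 @ x + b1)         # hidden layer 1, computed as before
a2 = relu(W2 @ a1 + b2)        # hidden layer 2, computed as before
a3 = softmax(W3 @ a2 + b3)     # softmax output layer: 10 probabilities

print(a3.sum())                # sums to 1 (up to float round-off)
print(a3.argmax())             # the predicted digit class
```

Per the caveat in the lecture, a real TensorFlow implementation would instead use a linear output layer and compute the loss from the logits, as the improved version in the next video does.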
[{"start": 0.0, "end": 7.12, "text": " In order to build a neural network that can carry out multi-class classification, we're"}, {"start": 7.12, "end": 13.24, "text": " going to take the Softmax regression model and put it into essentially the output layer"}, {"start": 13.24, "end": 14.24, "text": " of a neural network."}, {"start": 14.24, "end": 16.92, "text": " Let's take a look at how to do that."}, {"start": 16.92, "end": 22.98, "text": " Previously, when we were doing handwritten digit recognition with just two clauses, we"}, {"start": 22.98, "end": 26.580000000000002, "text": " used a neural network with this architecture."}, {"start": 26.58, "end": 34.519999999999996, "text": " If you now want to do handwritten digit classification with 10 clauses, all the digits from 0 to"}, {"start": 34.519999999999996, "end": 42.519999999999996, "text": " 9, then we're going to change this neural network to have 10 output units, like so."}, {"start": 42.519999999999996, "end": 47.04, "text": " And this new output layer will be a Softmax output layer."}, {"start": 47.04, "end": 51.8, "text": " So sometimes we'll say this neural network has a Softmax output or that this output layer"}, {"start": 51.8, "end": 55.0, "text": " is a Softmax layer."}, {"start": 55.0, "end": 61.08, "text": " And the way forward propagation works in this neural network is, given an input x,"}, {"start": 61.08, "end": 67.2, "text": " a1 gets computed exactly the same as before, and then a2, deactivations for the second"}, {"start": 67.2, "end": 72.6, "text": " hidden layer, also get computed exactly the same as before."}, {"start": 72.6, "end": 77.6, "text": " And we now have to compute deactivations for this output layer."}, {"start": 77.6, "end": 79.8, "text": " That is a3."}, {"start": 79.8, "end": 81.8, "text": " This is how it works."}, {"start": 81.8, "end": 89.64, "text": " If you have 10 output clauses, we will compute z1, z2, through z10 using these expressions."}, {"start": 89.64, "end": 
94.64, "text": " So this is actually very similar to what we had previously for the formula you used to"}, {"start": 94.64, "end": 95.64, "text": " compute z."}, {"start": 95.64, "end": 104.4, "text": " z1 is w1.product with a2, deactivations from the previous layer, plus b1, and so on, for"}, {"start": 104.4, "end": 107.2, "text": " z1 through z10."}, {"start": 107.2, "end": 116.24000000000001, "text": " Then a1 is equal to e to the z1 divided by e to the z1 plus dot dot dot plus up to e"}, {"start": 116.24000000000001, "end": 117.8, "text": " to the z10."}, {"start": 117.8, "end": 122.76, "text": " And that's our estimate of the chance that y is equal to 1."}, {"start": 122.76, "end": 129.52, "text": " And similarly for a2, and similarly all the way up to a10."}, {"start": 129.52, "end": 135.44, "text": " And so this gives you your estimates of the chance of y being equal to 1, 2, and so on"}, {"start": 135.44, "end": 139.84, "text": " up through the 10th possible label for y."}, {"start": 139.84, "end": 145.72, "text": " And just for completeness, if you want to indicate that these are the quantities associated"}, {"start": 145.72, "end": 151.3, "text": " with layer 3, technically I should add these superscript 3s there."}, {"start": 151.3, "end": 155.44, "text": " It does make the notation a little bit more cluttered, but this makes explicit that this"}, {"start": 155.44, "end": 165.07999999999998, "text": " is, for example, the z31 value and this is the parameters associated with the first unit"}, {"start": 165.08, "end": 169.16000000000003, "text": " of layer 3 of this neural network."}, {"start": 169.16000000000003, "end": 176.44, "text": " And with this, your softmax output layer now gives you estimates of the chance of y being"}, {"start": 176.44, "end": 180.10000000000002, "text": " any of these 10 possible output labels."}, {"start": 180.10000000000002, "end": 185.04000000000002, "text": " I do want to mention that the softmax layer, or sometimes also 
called the softmax activation"}, {"start": 185.04000000000002, "end": 191.0, "text": " function, it is a little bit unusual in one respect compared to the other activation functions"}, {"start": 191.0, "end": 196.68, "text": " we've seen so far like sigmoid, ReLU, and linear, which is that when we're looking at sigmoid"}, {"start": 196.68, "end": 206.4, "text": " or ReLU or linear activation functions, a1 was a function of z1 and a2 was a function"}, {"start": 206.4, "end": 210.04, "text": " of z2 and only z2."}, {"start": 210.04, "end": 215.48, "text": " In other words, to obtain the activation values, we could apply the activation function g,"}, {"start": 215.48, "end": 220.96, "text": " be it sigmoid or ReLU or something else, element-wise to z1 and z2 and so on to get any of the"}, {"start": 220.96, "end": 224.88, "text": " a1 and a2 and a3 and a4."}, {"start": 224.88, "end": 232.88, "text": " But with the softmax activation function, notice that a1 is a function of z1 and z2"}, {"start": 232.88, "end": 235.56, "text": " and z3 all the way up to z10."}, {"start": 235.56, "end": 240.96, "text": " So each of these activation values depends on all of the values of z."}, {"start": 240.96, "end": 246.24, "text": " And this is a property that's a bit unique to the softmax output or the softmax activation"}, {"start": 246.24, "end": 247.24, "text": " function."}, {"start": 247.24, "end": 254.68, "text": " I'll state it differently, if you want to compute a1 through a10, that is a function"}, {"start": 254.68, "end": 261.24, "text": " of z1 all the way up to z10 simultaneously."}, {"start": 261.24, "end": 264.72, "text": " And this is unlike the other activation functions we've seen so far."}, {"start": 264.72, "end": 270.32, "text": " Finally, let's look at how you would implement this in TensorFlow."}, {"start": 270.32, "end": 276.16, "text": " If you want to implement the neural network that I've shown here on this slide, this is"}, {"start": 276.16, "end": 
278.32000000000005, "text": " the code to do so."}, {"start": 278.32000000000005, "end": 282.20000000000005, "text": " Similar as before, there are three steps to specifying and training the model."}, {"start": 282.20000000000005, "end": 288.20000000000005, "text": " The first step is to tell TensorFlow to sequentially string together three layers."}, {"start": 288.20000000000005, "end": 294.48, "text": " First layer is this 25 units with ReLU activation function, second layer, 15 units with ReLU"}, {"start": 294.48, "end": 299.56, "text": " activation function, and then the third layer, because there are now 10 output units, you"}, {"start": 299.56, "end": 304.92, "text": " want to output a1 through a10, so there are now 10 output units, and we'll tell TensorFlow"}, {"start": 304.92, "end": 309.2, "text": " to use the softmax activation function."}, {"start": 309.2, "end": 316.04, "text": " And the cost function that you saw in the last video, TensorFlow calls that the sparse"}, {"start": 316.04, "end": 320.40000000000003, "text": " categorical cross entropy function."}, {"start": 320.40000000000003, "end": 326.0, "text": " So I know this name is a bit of a mouthful, whereas for logistic regression, we had the"}, {"start": 326.0, "end": 328.92, "text": " binary cross entropy function."}, {"start": 328.92, "end": 335.40000000000003, "text": " Here we're using the sparse categorical cross entropy function, and what sparse categorical"}, {"start": 335.40000000000003, "end": 342.20000000000005, "text": " refers to is that you still classify y into categories, so it's categorical, it takes"}, {"start": 342.20000000000005, "end": 349.6, "text": " on values from 1 to 10, and sparse refers to that y can only take on one of these 10"}, {"start": 349.6, "end": 354.76, "text": " values, so each image is either 0 or 1 or 2 or so on up to 9, you're not going to see"}, {"start": 354.76, "end": 360.28, "text": " a picture that is simultaneously the number 2 and the number 7, so 
sparse refers to that"}, {"start": 360.28, "end": 364.0, "text": " each digit is only one of these categories."}, {"start": 364.0, "end": 369.12, "text": " So that's why the loss function that you saw in the last video is called, in TensorFlow,"}, {"start": 369.12, "end": 373.8, "text": " the sparse categorical cross entropy loss function."}, {"start": 373.8, "end": 379.59999999999997, "text": " And then the code for training the model is just the same as before, and if you use this"}, {"start": 379.6, "end": 385.48, "text": " code, you can train a neural network on a multi-class classification problem."}, {"start": 385.48, "end": 391.08000000000004, "text": " Just one important note, if you use this code exactly as I've written here, it will work,"}, {"start": 391.08000000000004, "end": 396.28000000000003, "text": " but don't actually use this code, because it turns out that in TensorFlow, there's a"}, {"start": 396.28000000000003, "end": 401.16, "text": " better version of the code that makes TensorFlow work better."}, {"start": 401.16, "end": 406.48, "text": " So even though the code shown in this slide works, don't use this code the way I've written"}, {"start": 406.48, "end": 411.56, "text": " it here, because in a later video this week, you'll see a different version, there's actually"}, {"start": 411.56, "end": 416.12, "text": " the recommended version of implementing this that will work better, but we'll take a look"}, {"start": 416.12, "end": 418.70000000000005, "text": " at that in a later video."}, {"start": 418.70000000000005, "end": 425.76, "text": " So now you know how to train a neural network with a softmax output layer with one caveat."}, {"start": 425.76, "end": 430.96000000000004, "text": " There's a different version of the code that will make TensorFlow able to compute these"}, {"start": 430.96000000000004, "end": 433.84000000000003, "text": " probabilities much more accurately."}, {"start": 433.84, "end": 438.32, "text": " Let's take a look at 
that in the next video, which will also show you the actual code that"}, {"start": 438.32, "end": 442.23999999999995, "text": " I recommend you use if you're training a softmax neural network."}, {"start": 442.24, "end": 464.68, "text": " Let's go on to the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=Izt9Bn8HLUM
5.9 Multiclass Classification | Improved implementation of softmax --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
The implementation that you saw in the last video of a neural network with a softmax layer will work okay, but there's an even better way to implement it. Let's take a look at what can go wrong with that implementation and also how to make it better. Let me show you two different ways of computing the same quantity in a computer. Option one, we can set x equals to 2 over 10,000. Option two, we can set x equals 1 plus 1 over 10,000 minus 1 minus 1 over 10,000. What I'd like you to do is first compute this, then compute this, and take the difference. And if you simplify this expression, this turns out to be equal to 2 over 10,000. Let me illustrate this in this notebook. So first, let's set x equals 2 over 10,000 and print the result to a lot of decimal points of accuracy. Okay, that looks pretty good. Second, let me set x equals, where I'm going to insist on computing 1 plus 1 over 10,000 and then subtracting from that 1 minus 1 over 10,000, and let's print that out. And oh, okay, this looks a little bit off. As if there's some round off error. Because a computer has only a finite amount of memory to store each number, called a floating point number in this case, depending on how you decide to compute the value 2 over 10,000, the result can have more or less numerical round off error. And it turns out that while the way we have been computing the cost function for softmax is correct, there's a different way of formulating it that reduces these numerical round off errors, leading to more accurate computations within TensorFlow. Let me first explain this in a little bit more detail using logistic regression. And then we will show how these ideas apply to improving our implementation of softmax. So first, let me illustrate these ideas using logistic regression. And then we'll move on to show how to improve your implementation of softmax as well.
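The notebook demo described here is easy to reproduce in any language with IEEE-754 floating point, for example:

```python
# Option 1: compute 2 / 10,000 directly.
x1 = 2.0 / 10_000
print(f"{x1:.18f}")

# Option 2: compute (1 + 1/10,000) - (1 - 1/10,000), which algebraically
# is the same 2 / 10,000 -- but the two numbers near 1 each get rounded,
# and the subtraction cancels their leading digits, exposing that error.
x2 = (1 + 1 / 10_000) - (1 - 1 / 10_000)
print(f"{x2:.18f}")

print(x1 == x2)   # False: the two results differ in the last few bits
```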
Recall that for logistic regression, if you want to compute the loss function for a given example, you would first compute this output activation A, which is g of z, or 1 over 1 plus e to the negative z. And then you compute the loss using this expression over here. And in fact, this is what the code would look like for a logistic output layer with this binary cross entropy loss. And for logistic regression, this works okay. And usually the numerical round off errors aren't that bad. But it turns out that if you allow TensorFlow to not have to compute A as an intermediate term, but instead, if you tell TensorFlow that the loss is this expression down here, and all I've done is I've taken A and expanded it into this expression down here, then TensorFlow can rearrange terms in this expression and come up with a more numerically accurate way to compute this loss function. And so whereas the original procedure was like insisting on computing, as an intermediate value, 1 plus 1 over 10,000, and another intermediate value, 1 minus 1 over 10,000, and then manipulating these two to get 2 over 10,000, this original implementation was insisting on explicitly computing A as an intermediate quantity. But instead, by specifying this expression at the bottom directly as the loss function, it gives TensorFlow more flexibility in terms of how to compute this and whether or not it wants to compute A explicitly. And so the code you can use to do this is shown here. And what this does is it sets the output layer to just use a linear activation function, and it puts both the activation function, 1 over 1 plus e to the negative z, as well as this cross entropy loss into the specification of the loss function over here. And that's what this from_logits equals true argument causes TensorFlow to do. And in case you're wondering what the logits are, it's basically this number z.
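To make concrete what folding the sigmoid into the loss buys you, here is a hand-rolled comparison. The rearranged formula max(z, 0) - y*z + log(1 + e^(-|z|)) is a standard numerically stable identity for this loss, not something quoted from the lecture:

```python
import math

def bce_naive(z, y):
    # First form a = 1 / (1 + e^{-z}) explicitly, then the loss
    # -y*log(a) - (1-y)*log(1-a). For large z, a rounds to exactly 1.0
    # and log(1 - a) blows up.
    a = 1.0 / (1.0 + math.exp(-z))
    return -y * math.log(a) - (1 - y) * math.log(1 - a)

def bce_from_logits(z, y):
    # The same loss with the sigmoid folded in and terms rearranged:
    # max(z, 0) - y*z + log(1 + e^{-|z|}).
    return max(z, 0.0) - y * z + math.log1p(math.exp(-abs(z)))

print(bce_naive(2.0, 1), bce_from_logits(2.0, 1))   # agree for moderate z
print(bce_from_logits(40.0, 0))                     # fine: about 40
# bce_naive(40.0, 0) would raise "math domain error" on log(1 - 1.0).
```

For moderate z the two agree to many digits; at z = 40 with y = 0, the naive version computes a = 1.0 exactly and then fails on log(1 - a), while the from-logits form stays accurate.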
So TensorFlow will compute z as an intermediate value, but it can rearrange terms to compute this more accurately. One downside of this code is it becomes a little bit less legible, but this causes TensorFlow to have a little bit less numerical round off error. Now in the case of logistic regression, either of these implementations actually works okay. But the numerical round off errors can get worse when it comes to softmax. Now let's take this idea and apply it to softmax regression. Recall what you saw in the last video was you compute the activations as follows. The activations are g of z1 through z10, where a1, for example, is e to the z1 divided by the sum of the e to the zj's. And then the loss was this, depending on what the actual value of y is: negative log of aj for one of the aj's. And so this was the code that we had to do this computation in two separate steps. But once again, if you instead specify that the loss, if y is equal to 1, is negative log of this formula, and so on, if y is equal to 10, is this formula, then this gives TensorFlow the ability to rearrange terms and compute this in a more numerically accurate way. Just to give you some intuition for why TensorFlow might want to do this, it turns out if one of the z's is a very negative number, then e to that very negative z becomes very, very small. Or if one of the z's is a very large number, then e to the z can become a very, very large number. And by rearranging terms, TensorFlow can avoid some of these very small or very large numbers and therefore come up with a more accurate computation for the loss function. So the code for doing this is shown here. In the output layer, we're now just using a linear activation function. So the output layer just computes z1 through z10. And this whole computation of the loss is then captured in the loss function over here, where again, we have the from_logits equals true parameter.
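The rearrangement TensorFlow can do for the softmax loss is essentially the log-sum-exp trick: negative log of aj equals logsumexp(z) minus zj, and subtracting max(z) inside the exponentials leaves the result unchanged while keeping every exponent at most 0. A sketch of the idea (TensorFlow's actual internals may differ):

```python
import numpy as np

def softmax_loss_naive(z, y):
    # Forms the probabilities explicitly; np.exp overflows once any z_j
    # is large (above roughly 709 for float64).
    a = np.exp(z) / np.exp(z).sum()
    return -np.log(a[y])

def softmax_loss_from_logits(z, y):
    # -log a_y = logsumexp(z) - z_y; shifting every z_j by max(z) keeps
    # all exponents <= 0, so nothing overflows and the result is identical.
    m = z.max()
    return m + np.log(np.exp(z - m).sum()) - z[y]

z_small = np.array([1.0, 2.0, 3.0])
print(softmax_loss_naive(z_small, 2), softmax_loss_from_logits(z_small, 2))

z_big = np.array([1000.0, 1001.0, 1002.0])
print(softmax_loss_from_logits(z_big, 2))   # same loss as z_small: the z's
                                            # only differ by a constant shift
# softmax_loss_naive(z_big, 2) would overflow to inf/nan here.
```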
So once again, these two pieces of code do pretty much the same thing, except that the version that is recommended is more numerically accurate, although unfortunately, it is a little bit harder to read as well. So if you're reading someone else's code and you see this and you wonder what's going on, it's actually equivalent to the original implementation, at least in concept, except that it is more numerically accurate. The numerical round off errors for logistic regression aren't that bad, but it is recommended that you use this implementation down at the bottom instead. And conceptually, this code does the same thing as the first version that you had previously, except that it is a little bit more numerically accurate. Although the downside is it's maybe just a little bit harder to interpret as well. Now, there's just one more detail, which is that we've now changed the neural network to use a linear activation function rather than a softmax activation function. And so the neural network's final layer no longer outputs these probabilities a1 through a10; it is instead outputting z1 through z10. And I didn't talk about it in the case of logistic regression, but if you were combining the output logistic function with the loss function, then for logistic regression, you also have to change the code this way to take the output value and map it through the logistic function in order to actually get the probability. So you now know how to do multiclass classification with a softmax output layer, and also how to do it in a numerically stable way. Before wrapping up multiclass classification, I want to share with you one other type of classification problem called a multi-label classification problem. Let's talk about that in the next video.
The implementation that you saw in the last video of a neural network with a softmax layer will work okay, but there's an even better way to implement it. Let's take a look at what can go wrong with that implementation, and also how to make it better. Let me show you two different ways of computing the same quantity in a computer. Option one: we can set x equal to 2 over 10,000. Option two: we can set x equal to 1 plus 1 over 10,000, minus 1 minus 1 over 10,000, where you first compute the first term, then compute the second term, and take the difference. If you simplify this expression, it turns out to be equal to 2 over 10,000. Let me illustrate this in a notebook. First, let's set x equal to 2 over 10,000 and print the result to a lot of decimal points of accuracy. Okay, that looks pretty good. Second, let me set x by insisting on computing 1 plus 1 over 10,000, then subtracting from it 1 minus 1 over 10,000, and print that out. And, okay, this looks a little bit off, as if there's some round-off error. Because a computer has only a finite amount of memory to store each number (a floating point number in this case), depending on how you decide to compute the value 2 over 10,000, the result can have more or less numerical round-off error. It turns out that while the way we have been computing the cost function for softmax is correct, there's a different way of formulating it that reduces these numerical round-off errors, leading to more accurate computations within TensorFlow. Let me first explain this in a little more detail using logistic regression, and then we'll move on to show how these ideas apply to improving our implementation of softmax.
Recall that for logistic regression, if you want to compute the loss function for a given example, you first compute the output activation a, which is g of z, or 1 over 1 plus e to the negative z, and then compute the loss using this expression over here. In fact, this is what the code would look like for a logistic output layer with this binary cross-entropy loss, and for logistic regression it works okay; usually the numerical round-off errors aren't that bad. But it turns out that if you allow TensorFlow to not have to compute a as an intermediate term, and instead tell TensorFlow that the loss is this expression down here (all I've done is take a and expand it into this expression), then TensorFlow can rearrange terms in this expression and come up with a more numerically accurate way to compute the loss function. Whereas the original procedure was like insisting on computing, as one intermediate value, 1 plus 1 over 10,000, and as another intermediate value, 1 minus 1 over 10,000, and then manipulating these two to get 2 over 10,000, the original implementation insisted on explicitly computing a as an intermediate quantity. By specifying the expression at the bottom directly as the loss function instead, you give TensorFlow more flexibility in how to compute it, and whether or not to compute a explicitly. The code you can use to do this is shown here. It sets the output layer to just use a linear activation function, and it puts both the activation function, 1 over 1 plus e to the negative z, and the cross-entropy loss into the specification of the loss function; that's what the from_logits=True argument causes TensorFlow to do. In case you're wondering what the logits are, they're basically these numbers z. TensorFlow will still compute z as an intermediate value, but it can rearrange terms so that the loss is computed more accurately. One downside of this code is that it becomes a little bit less legible, but it gives TensorFlow a little bit less numerical round-off error. In the case of logistic regression, either of these implementations actually works okay, but the numerical round-off errors can get worse when it comes to softmax.
Now let's take this idea and apply it to softmax regression. Recall from the last video that you compute the activations as follows: the activations are g of z1 through z10, where a1, for example, is e to the z1 divided by the sum of the e to the zj's, and the loss, depending on the actual value of y, is negative log of aj for one of the aj's. This was the code that did this computation in two separate steps. But once again, if you instead specify that the loss, if y is equal to 1, is negative log of this formula, and so on, down to, if y is equal to 10, this formula, then this gives TensorFlow the ability to rearrange terms and compute the loss in a more numerically accurate way. To give you some intuition for why TensorFlow might want to do this: if one of the z's is a very large negative number, then e to the z becomes very, very small, and if one of the z's is a very large positive number, then e to the z becomes a very, very large number. By rearranging terms, TensorFlow can avoid some of these very small or very large intermediate values, and therefore come up with a more accurate computation of the loss function. The code for doing this is shown here. In the output layer, we're now just using a linear activation function, so the output layer just computes z1 through z10, and the whole computation of the loss is captured in the loss function over here, where again we have the from_logits=True parameter. Once again, these two pieces of code do pretty much the same thing, except that the recommended version is more numerically accurate, although unfortunately it is a little bit harder to read as well. So if you're reading someone else's code and you see this and wonder what's going on, it's equivalent in concept to the original implementation, just more numerically accurate. Even though the numerical round-off errors for logistic regression aren't that bad, it is recommended that you use this implementation at the bottom instead.
There's just one more detail. We've now changed the neural network to use a linear activation function rather than a softmax activation function, so the neural network's final layer no longer outputs the probabilities a1 through a10; it outputs z1 through z10 instead. I didn't talk about it in the case of logistic regression, but if you were combining the output logistic function with the loss function there, then you would also have to change the code in this way, taking the output value and mapping it through the logistic function in order to actually get the probability. So you now know how to do multiclass classification with a softmax output layer, and also how to do it in a numerically stable way. Before wrapping up multiclass classification, I want to share with you one other type of classification problem, called a multi-label classification problem. Let's talk about that in the next video.
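The round-off example and the benefit of working directly with the logits can be reproduced in plain Python. This is a sketch, not the lecture's notebook or TensorFlow's actual internals: `naive_softmax_loss` and `stable_softmax_loss` are illustrative names, and the rearrangement shown is the standard log-sum-exp trick, the kind of term rearrangement described above:

```python
import math

# Two algebraically equal ways to compute 2/10,000; in floating point
# they give slightly different answers, as in the notebook demo.
x1 = 2.0 / 10000.0
x2 = (1.0 + 1.0 / 10000.0) - (1.0 - 1.0 / 10000.0)

def naive_softmax_loss(z, y):
    """Cross-entropy loss computed via the explicit intermediate
    activations a_j = e^{z_j} / sum_k e^{z_k} (the two-step version)."""
    exps = [math.exp(zj) for zj in z]
    total = sum(exps)
    a = [e / total for e in exps]
    return -math.log(a[y])

def stable_softmax_loss(z, y):
    """The same loss with the terms rearranged (log-sum-exp trick):
    -log(e^{z_y} / sum_j e^{z_j}) = log(sum_j e^{z_j - m}) + m - z_y
    with m = max(z), so no e^{z_j} ever overflows."""
    m = max(z)
    return math.log(sum(math.exp(zj - m) for zj in z)) + m - z[y]
```

For moderate logits the two functions agree to many decimal places, but for something like z = [1000, 0, 0] the naive version overflows inside math.exp while the stable version returns a small finite loss.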
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=1yv8_S9Srcg
5.10 Multi-label Classification | Classification with multiple outputs (Optional) --[ML | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
You've learned about multi-class classification, where the output label y can be any one of two or potentially many more than two possible categories. There's a different type of classification problem called a multi-label classification problem, which is where, associated with each image, there could be multiple labels. Let me show you what I mean by that. If you're building a self-driving car or maybe a driver assistance system, then given a picture of what's in front of your car, you may want to ask the question, like, is there a car or at least one car, or is there a bus, or is there a pedestrian, or are there any pedestrians? In this case, there is a car, there is no bus, and there is at least one pedestrian, or in this second image, no cars, no buses, and yes pedestrians, and yes car, yes bus, and no pedestrians. So these are examples of multi-label classification problems because associated with a single input image x are three different labels corresponding to whether or not there are any cars, buses, or pedestrians in the image. So in this case, the target output y is actually a vector of three numbers, and this is as distinct from multi-class classification where, for, say, handwritten digit classification, y was just a single number even if that number could take on 10 different possible values. So how do you build a neural network for multi-label classification? One way to go about it is to just treat this as three completely separate machine learning problems. You could build one neural network to decide are there any cars, a second one to detect buses, and a third one to detect pedestrians, and that's actually not an unreasonable approach. Here's the first neural network to detect cars, second one to detect buses, third one to detect pedestrians, but there's another way to do this, which is to train a single neural network to simultaneously detect all three of cars, buses, and pedestrians, which is if your neural network architecture looks like this. 
That's your input x. The first hidden layer outputs a1, the second hidden layer outputs a2, and the final output layer in this case has three output neurons and outputs a3, which is a vector of three numbers. Because we're solving three binary classification problems (is there a car, is there a bus, is there a pedestrian), you can use a sigmoid activation function for each of these three nodes in the output layer. So a3 in this case will be a31, a32, and a33, corresponding to whether or not the learning algorithm thinks there's a car, and/or a bus, and/or pedestrians in the image. So that's it for multi-label classification. Multi-class classification and multi-label classification are sometimes confused with each other, which is why I wanted to explicitly share in this video what multi-label classification is, so that depending on your application you can choose the right tool for the job you want to do. And that wraps up this section on multi-class and multi-label classification. In the next video, we'll start to look at some more advanced neural network concepts, including an optimization algorithm that is even better than gradient descent. So let's take a look at that algorithm in the next video, because it'll help your learning algorithms learn much faster.
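The output stage just described can be sketched in a few lines of plain Python (illustrative names, not the lecture's code): each of the three output values z gets its own sigmoid and its own 0.5 threshold, so any subset of the labels can be on at once, unlike softmax, where the probabilities compete and sum to 1.

```python
import math

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + math.exp(-z))

def multilabel_predict(z):
    """z = [z_car, z_bus, z_pedestrian] from the output layer.
    Each entry gets its own independent sigmoid and its own 0.5
    threshold, giving three separate binary decisions."""
    probs = [sigmoid(zj) for zj in z]
    return [p >= 0.5 for p in probs]
```

So multilabel_predict([2.0, -1.0, 0.5]) reports car and pedestrian but no bus, something a softmax output could never do, since softmax picks probabilities that sum to 1 across the classes.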
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=yo6aW-D7sCM
5.11 Additional Neural Network Concepts | Advanced Optimization --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms.
Gradient descent is an optimization algorithm that is widely used in machine learning and was the foundation of many algorithms like linear regression and logistic regression and early implementations of neural networks. But it turns out that there are now some other optimization algorithms for minimizing the cost function that are even better than gradient descent. In this video, we'll take a look at an algorithm that can help you train your neural network much faster than gradient descent. Recall that this is the expression for one step of gradient descent. A parameter wj is updated as wj minus the learning rate alpha times this partial derivative term. How can we make this work even better? In this example, I plotted the cost function j using a contour plot comprising these ellipses. And so the minimum of this cost function is at the center of these ellipses down here. Now if you were to start gradient descent down here, one step of gradient descent, if alpha is small, may take you a little bit in that direction, then another step, then another step, then another step, then another step. And you notice that every single step of gradient descent is pretty much going in the same direction. And if you see this to be the case, you might wonder, well, why don't we make alpha bigger? Can we have an algorithm to automatically increase alpha to just make it take bigger steps and get to the minimum faster? There's an algorithm called the Adam algorithm that can do that. If it sees that the learning rate is too small, and we are just taking tiny little steps in a similar direction over and over, we should just make the learning rate alpha bigger. In contrast, here again is the same cost function. If we were starting here and had a relatively big learning rate alpha, then maybe one step of gradient descent takes us here, and the second step takes us here, third step, and the fourth step, and the fifth step, and the sixth step. 
And if you see gradient descent doing this, oscillating back and forth, you'd be tempted to say, well, why don't we make the learning rate smaller? The Adam algorithm can also do that automatically, and with a smaller learning rate you can then take a smoother path toward the minimum of the cost function. So depending on how gradient descent is proceeding, sometimes you wish you had a bigger learning rate alpha, and sometimes you wish you had a smaller learning rate alpha. The Adam algorithm can adjust the learning rate automatically. Adam stands for Adaptive Moment Estimation. Don't worry too much about what the name means; it's just what the authors called this algorithm. But interestingly, the Adam algorithm doesn't use a single global learning rate alpha; it uses a different learning rate for every single parameter of your model. So if you have parameters w1 through w10, as well as b, then it actually has 11 learning rate parameters: alpha 1, alpha 2, all the way through alpha 10 for w1 through w10, as well as, I'll call it, alpha 11 for the parameter b. The intuition behind the Adam algorithm is this: if a parameter wj or b seems to keep on moving in roughly the same direction (this is what we saw in the first example on the previous slide), let's increase the learning rate for that parameter and go faster in that direction. Conversely, if a parameter keeps oscillating back and forth (this is what you saw in the second example on the previous slide), then let's not have it keep bouncing back and forth; let's reduce alpha j for that parameter a little bit. The details of how Adam does this are a bit complicated and beyond the scope of this course, but if you take some more advanced deep learning classes later, you may learn more about the details of the Adam algorithm. But in code, this is how you would implement it.
The model is exactly the same as before. And the way you compile the model is very similar to what we had before, except that we now add one extra argument to the compile function: we specify that the optimizer we want to use is tf.keras.optimizers.Adam. The Adam optimization algorithm does need some default initial learning rate alpha, and in this example I've set that initial learning rate to be 10 to the negative 3. But when you're using the Adam algorithm in practice, it's worth trying a few values for this initial default global learning rate: try some larger and some smaller values to see what gives you the fastest learning performance. Compared to the original gradient descent algorithm that you learned in the previous course, though, the Adam algorithm, because it can adapt the learning rate a bit automatically, is more robust to the exact choice of learning rate that you pick, although it is still worth tuning this parameter a little bit to see if you can get somewhat faster learning. So that's it for the Adam optimization algorithm. It typically works much faster than gradient descent, and it's become a de facto standard in how practitioners train their neural networks. So if you're trying to decide what optimization algorithm to use to train your neural network, a safe choice would be to just use the Adam optimization algorithm, and most practitioners today will use Adam rather than the original gradient descent algorithm. With this, I hope that your learning algorithms will be able to learn much more quickly. Now, in the next couple of videos, I'd like to touch on some more advanced concepts for neural networks, and in particular, in the next video, let's take a look at some alternative layer types.
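The per-parameter update that the video leaves as "beyond the scope of this course" can still be sketched. The following is the standard published Adam update rule for a single scalar parameter, written in plain Python; it is not code from the lecture, and in a real model the optimizer keeps one (m, v) pair per parameter, which is what gives each parameter its own effective learning rate:

```python
def adam_step(w, grad, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single parameter w.

    m and v are running estimates of the gradient's first and second
    moments; t is the 1-based step count used for bias correction.
    When recent gradients keep pointing the same way, the ratio
    m_hat / sqrt(v_hat) stays near 1 and steps stay at full size alpha;
    when gradients oscillate, m_hat averages toward 0 and steps shrink.
    Returns the updated (w, m, v)."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    w = w - alpha * m_hat / (v_hat ** 0.5 + eps)
    return w, m, v
```

Running this in a loop on a toy cost like (w - 3)^2 walks w toward 3. In Keras you never write this loop yourself; you just pass the optimizer at compile time, e.g. optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), as in the lecture's code.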
[{"start": 0.0, "end": 8.8, "text": " Gradient descent is an optimization algorithm that is widely used in machine learning and"}, {"start": 8.8, "end": 14.56, "text": " was the foundation of many algorithms like linear regression and logistic regression"}, {"start": 14.56, "end": 18.56, "text": " and early implementations of neural networks."}, {"start": 18.56, "end": 23.42, "text": " But it turns out that there are now some other optimization algorithms for minimizing the"}, {"start": 23.42, "end": 27.42, "text": " cost function that are even better than gradient descent."}, {"start": 27.42, "end": 32.400000000000006, "text": " In this video, we'll take a look at an algorithm that can help you train your neural network"}, {"start": 32.400000000000006, "end": 34.760000000000005, "text": " much faster than gradient descent."}, {"start": 34.760000000000005, "end": 40.120000000000005, "text": " Recall that this is the expression for one step of gradient descent."}, {"start": 40.120000000000005, "end": 47.24, "text": " A parameter wj is updated as wj minus the learning rate alpha times this partial derivative"}, {"start": 47.24, "end": 48.64, "text": " term."}, {"start": 48.64, "end": 51.56, "text": " How can we make this work even better?"}, {"start": 51.56, "end": 58.760000000000005, "text": " In this example, I plotted the cost function j using a contour plot comprising these ellipses."}, {"start": 58.760000000000005, "end": 65.2, "text": " And so the minimum of this cost function is at the center of these ellipses down here."}, {"start": 65.2, "end": 72.0, "text": " Now if you were to start gradient descent down here, one step of gradient descent, if"}, {"start": 72.0, "end": 76.28, "text": " alpha is small, may take you a little bit in that direction, then another step, then"}, {"start": 76.28, "end": 79.72, "text": " another step, then another step, then another step."}, {"start": 79.72, "end": 85.24, "text": " And you notice that every single step of gradient 
descent is pretty much going in the same direction."}, {"start": 85.24, "end": 91.08, "text": " And if you see this to be the case, you might wonder, well, why don't we make alpha bigger?"}, {"start": 91.08, "end": 95.6, "text": " Can we have an algorithm to automatically increase alpha to just make it take bigger"}, {"start": 95.6, "end": 99.08, "text": " steps and get to the minimum faster?"}, {"start": 99.08, "end": 104.52, "text": " There's an algorithm called the Adam algorithm that can do that."}, {"start": 104.52, "end": 110.11999999999999, "text": " If it sees that the learning rate is too small, and we are just taking tiny little steps in"}, {"start": 110.11999999999999, "end": 116.32, "text": " a similar direction over and over, we should just make the learning rate alpha bigger."}, {"start": 116.32, "end": 120.96, "text": " In contrast, here again is the same cost function."}, {"start": 120.96, "end": 126.28, "text": " If we were starting here and had a relatively big learning rate alpha, then maybe one step"}, {"start": 126.28, "end": 130.6, "text": " of gradient descent takes us here, and the second step takes us here, third step, and"}, {"start": 130.6, "end": 134.12, "text": " the fourth step, and the fifth step, and the sixth step."}, {"start": 134.12, "end": 138.76, "text": " And if you see gradient descent doing this, it's oscillating back and forth, you'd be"}, {"start": 138.76, "end": 142.4, "text": " tempted to say, well, why don't we make the learning rate smaller?"}, {"start": 142.4, "end": 145.88, "text": " And the Adam algorithm can also do that automatically."}, {"start": 145.88, "end": 151.42000000000002, "text": " And with a smaller learning rate, you can then take a more smooth path toward the minimum"}, {"start": 151.42000000000002, "end": 153.32, "text": " of the cost function."}, {"start": 153.32, "end": 158.44, "text": " So depending on how gradient descent is proceeding, sometimes you wish you had a bigger learning"}, {"start": 
158.44, "end": 164.16, "text": " rate alpha, and sometimes you wish you had a smaller learning rate alpha."}, {"start": 164.16, "end": 168.04, "text": " So the Adam algorithm can adjust the learning rate automatically."}, {"start": 168.04, "end": 174.48, "text": " Adam stands for Adaptive Moment Estimation, or ADAM."}, {"start": 174.48, "end": 178.84, "text": " And don't worry too much about what this name means, it's just what the authors had called"}, {"start": 178.84, "end": 180.64, "text": " this algorithm."}, {"start": 180.64, "end": 186.12, "text": " But interestingly, the Adam algorithm doesn't use a single global learning rate alpha, it"}, {"start": 186.12, "end": 190.96, "text": " uses a different learning rates for every single parameter of your model."}, {"start": 190.96, "end": 197.16, "text": " So if you have parameters w1 through w10, as well as b, then it actually has 11 learning"}, {"start": 197.16, "end": 204.0, "text": " rate parameters, alpha 1, alpha 2, all the way through alpha 10, for w1 through w10,"}, {"start": 204.0, "end": 209.04000000000002, "text": " as well as I'll call it alpha 11 for the parameter b."}, {"start": 209.04000000000002, "end": 216.08, "text": " And the intuition behind the Adam algorithm is if a parameter wj or b seems to keep on"}, {"start": 216.08, "end": 221.84, "text": " moving in roughly the same direction, this is what we saw on the first example on the"}, {"start": 221.84, "end": 222.84, "text": " previous slide."}, {"start": 222.84, "end": 227.16000000000003, "text": " But if it seems to keep on moving in roughly the same direction, let's increase the learning"}, {"start": 227.16000000000003, "end": 228.36, "text": " rate for that parameter."}, {"start": 228.36, "end": 230.68, "text": " Let's go faster in that direction."}, {"start": 230.68, "end": 236.12, "text": " Conversely, if a parameter keeps oscillating back and forth, this is what you saw in the"}, {"start": 236.12, "end": 241.84, "text": " second example 
on the previous slide, then let's not have it keep on oscillating or bouncing"}, {"start": 241.84, "end": 248.28, "text": " back and forth, let's reduce alpha j for that parameter a little bit."}, {"start": 248.28, "end": 254.12, "text": " The details of how Adam does this are a bit complicated and beyond the scope of this course."}, {"start": 254.12, "end": 258.16, "text": " But if you take some more advanced deep learning classes later, you may learn more about the"}, {"start": 258.16, "end": 260.96, "text": " details of this Adam algorithm."}, {"start": 260.96, "end": 264.08, "text": " But in code, this is how you would implement it."}, {"start": 264.08, "end": 267.46, "text": " The model is exactly the same as before."}, {"start": 267.46, "end": 272.91999999999996, "text": " And the way you compile the model is very similar to what we had before, except that"}, {"start": 272.91999999999996, "end": 279.84, "text": " we now add one extra argument to the compile function, which is that we specify that the"}, {"start": 279.84, "end": 287.0, "text": " optimizer you want to use is tf.keris.optimizers.theAdamOptimizer."}, {"start": 287.0, "end": 293.12, "text": " So the Adam optimization algorithm does need some default initial learning rate alpha."}, {"start": 293.12, "end": 300.36, "text": " And in this example, I've set that initial learning rate to be 10 to the negative 3."}, {"start": 300.36, "end": 304.68, "text": " But when you're using the Adam algorithm in practice, it's worth trying a few values for"}, {"start": 304.68, "end": 308.0, "text": " this initial, this default global learning rate."}, {"start": 308.0, "end": 313.92, "text": " Try some larger and some smaller values to see what gives you the fastest learning performance."}, {"start": 313.92, "end": 319.1, "text": " Compared to the original gradient descent algorithm that you had learned in the previous"}, {"start": 319.1, "end": 325.6, "text": " course though, the Adam algorithm, because it can 
adapt the learning rate a bit automatically,"}, {"start": 325.6, "end": 330.04, "text": " it is more robust to the exact choice of learning rate that you pick."}, {"start": 330.04, "end": 334.24, "text": " Though it is still worth tuning this parameter a little bit to see if you can get somewhat"}, {"start": 334.24, "end": 335.92, "text": " faster learning."}, {"start": 335.92, "end": 339.18, "text": " So that's it for the Adam optimization algorithm."}, {"start": 339.18, "end": 345.32000000000005, "text": " It typically works much faster than gradient descent, and it's become a de facto standard"}, {"start": 345.32000000000005, "end": 348.74, "text": " in how practitioners train their neural networks."}, {"start": 348.74, "end": 352.84000000000003, "text": " So if you're trying to decide what learning algorithm to use, what optimization algorithm"}, {"start": 352.84000000000003, "end": 358.48, "text": " to use to train your neural network, a safe choice would be to just use the Adam optimization"}, {"start": 358.48, "end": 359.98, "text": " algorithm."}, {"start": 359.98, "end": 364.04, "text": " And most practitioners today will use Adam rather than the original gradient descent"}, {"start": 364.04, "end": 365.54, "text": " algorithm."}, {"start": 365.54, "end": 371.52, "text": " And with this, I hope that your learning algorithms will be able to learn much more quickly."}, {"start": 371.52, "end": 378.0, "text": " Now, in the next couple of videos, I'd like to touch on some more advanced concepts for"}, {"start": 378.0, "end": 379.0, "text": " neural networks."}, {"start": 379.0, "end": 408.92, "text": " And in particular, in the next video, let's take a look at some alternative layer types."}]
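The transcript says the details of Adam are beyond the scope of the course, but the standard per-parameter update (bias-corrected moving averages of the gradient and of its square, as in the original Adam paper) can be sketched in a few lines of NumPy. The function name and the toy quadratic objective below are illustrative choices, not from the lecture:

```python
import numpy as np

def adam_step(w, grad, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: per-parameter steps from bias-corrected moment estimates."""
    m = beta1 * m + (1 - beta1) * grad       # moving average of the gradient
    v = beta2 * v + (1 - beta2) * grad**2    # moving average of the squared gradient
    m_hat = m / (1 - beta1**t)               # bias correction for early steps
    v_hat = v / (1 - beta2**t)
    # a parameter that keeps moving in the same direction has |m_hat| close to
    # sqrt(v_hat), so its effective step stays near alpha; an oscillating
    # parameter has m_hat near zero relative to sqrt(v_hat), so it steps less
    w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy problem: minimize f(w) = w1^2 + w2^2, whose gradient is 2*w.
w = np.array([5.0, -3.0])
m, v = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 2001):
    w, m, v = adam_step(w, 2.0 * w, m, v, t, alpha=0.1)
```

After a couple of thousand steps both parameters settle near the minimum at zero, even though the single default alpha was not tuned per parameter by hand.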
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=54TxZZpK5Ok
5.12 Additional Neural Network Concepts | Additional Layer Types --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
All the neural network layers we've used so far have been the dense layer type in which every neuron in a layer gets as its inputs all the activations from the previous layer. And it turns out that just using the dense layer type, you can actually build some pretty powerful learning algorithms. And to help you build further intuition about what neural networks can do, it turns out that there are some other types of layers as well with other properties. In this video, I'd like to briefly touch on this and give you an example of a different type of neural network layer. Let's take a look. To recap, in the dense layer that we've been using, the activation of a neuron in, say, the second hidden layer is a function of every single activation value from the previous layer, A1. But it turns out that for some applications, someone designing a neural network may choose to use a different type of layer. One other layer type that you may see in some work is called a convolutional layer. Let me illustrate this with an example. So what I'm showing on the left is the input X, which is a handwritten digit 9. And what I'm going to do is construct a hidden layer, which will compute different activations as functions of this input image X. But here's something I can do. For the first hidden unit, which I've drawn in blue, rather than saying this neuron can look at all the pixels in this image, I might say this neuron can only look at the pixels in this little rectangular region. The second neuron, which I'm going to illustrate in magenta, is also not going to look at the entire input image X. Instead, it's only going to look at the pixels in a limited region of the image. And so on for the third neuron and the fourth neuron, and so on and so forth, down to the last neuron, which maybe looks only at that region of the image. So why might you want to do this? Why wouldn't you let every neuron look at all the pixels, but instead look at only some of the pixels? 
Well, some of the benefits are, first, it speeds up computation. And a second advantage is that a neural network that uses this type of layer, called a convolutional layer, can need less training data. Or alternatively, it can also be less prone to overfitting. You heard me talk a bit about overfitting in the previous course, and this is something that we'll dive into in greater detail next week as well, when we talk about practical tips for using learning algorithms. And this type of layer, where each neuron only looks at a region of the input image, is called a convolutional layer. It was a researcher, Yann LeCun, who had figured out a lot of the details of how to get convolutional layers to work and popularized their use. Let me illustrate a convolutional layer in more detail. And if you have multiple convolutional layers in a neural network, sometimes that's called a convolutional neural network. To illustrate the convolutional layer or convolutional neural network, on this slide, instead of a 2D image input, I'm going to use a one-dimensional input. And the motivating example I'm going to use is classification of EKG signals, or electrocardiograms. So if you put two electrodes on your chest, you will record voltages that look like this that correspond to your heartbeat. This is actually something that my Stanford research group did research on. We were actually reading EKG signals that look like this to try to diagnose if a patient may have a heart issue. So an EKG signal, an electrocardiogram, ECG in some places, EKG in some places, is just a list of numbers corresponding to the height of this curve at different points in time. So you may have, say, 100 numbers corresponding to the height of this curve at 100 different points in time. 
And the learning task is, given this time series, given this EKG signal, to classify, say, whether this patient has a heart disease or some diagnosable heart condition. Here's what a convolutional neural network might do. So I'm going to take the EKG signal and rotate it 90 degrees to lay it on its side. And so we have here 100 inputs, X1, X2, all the way through X100, like so. And when I construct the first hidden layer, instead of having the first hidden unit take as input all 100 numbers, let me have the first hidden unit look at only X1 through X20. So that corresponds to looking at just a small window of this EKG signal. The second hidden unit, shown in a different color here, will look at X11 through X30, so it looks at a different window in this EKG signal. And the third hidden unit looks at another window, X21 through X40, and so on. And the final hidden unit in this example will look at X81 through X100, so it looks at a small window toward the end of this EKG time series. So this is a convolutional layer, because each unit in this layer looks at only a limited window of the input. Now this layer of the neural network has nine units. The next layer can also be a convolutional layer. So in the second hidden layer, let me architect my first unit not to look at all nine activations from the previous layer, but to look at, say, just the first five activations from the previous layer. And then my second unit in this second hidden layer may look at just another five numbers, say A3 to A7. And the third and final hidden unit in this layer will only look at A5 through A9. And then maybe finally, these activations, A2, are fed as inputs to a sigmoid unit that does look at all three of these values of A2 in order to make a binary classification regarding the presence or absence of heart disease. 
So this is an example of a neural network with the first hidden layer being a convolutional layer, the second hidden layer also being a convolutional layer, and then the output layer being a sigmoid layer. And it turns out that with convolutional layers, you have many architectural choices, such as how big is the window of inputs that a single neuron should look at, and how many neurons should each layer have. And by choosing those architectural parameters effectively, you can build new versions of neural networks that can be even more effective than the dense layer for some applications. To recap, that's it for the convolutional layer and convolutional neural networks. I'm not going to go deeper into convolutional networks in this class, and you don't need to know anything about them to do the homeworks and finish this class successfully. But I hope that you find this additional intuition that neural networks can have other types of layers as well to be useful. And in fact, if you sometimes hear about the latest cutting edge architectures, like a transformer model or an LSTM or an attention model, a lot of this research in neural networks even today pertains to researchers trying to invent new types of layers for neural networks and plugging these different types of layers together as building blocks to form even more complex and hopefully more powerful neural networks. So that's it for the required videos for this week. Thank you and congrats on sticking with me all the way through this. And I look forward to seeing you next week also, where we'll start to talk about practical advice for how you can build machine learning systems. I hope that the tips you learn next week will help you become much more effective at building useful machine learning systems. And I look forward also to seeing you next week.
[{"start": 0.0, "end": 8.6, "text": " All the neural network layers we've used so far have been the dense layer type in which"}, {"start": 8.6, "end": 15.58, "text": " every neuron in a layer gets as its inputs all the activations from the previous layer."}, {"start": 15.58, "end": 20.16, "text": " And it turns out that just using the dense layer type, you can actually build some pretty"}, {"start": 20.16, "end": 22.6, "text": " powerful learning algorithms."}, {"start": 22.6, "end": 28.16, "text": " And to help you build further intuition about what neural networks can do, it turns out"}, {"start": 28.16, "end": 32.480000000000004, "text": " that there are some other types of layers as well with other properties."}, {"start": 32.480000000000004, "end": 37.96, "text": " In this video, I'd like to briefly touch on this and give you an example of a different"}, {"start": 37.96, "end": 39.56, "text": " type of neural network layer."}, {"start": 39.56, "end": 41.16, "text": " Let's take a look."}, {"start": 41.16, "end": 49.44, "text": " To recap, in the dense layer that we've been using, the activation of a neuron in, say,"}, {"start": 49.44, "end": 55.74, "text": " the second hidden layer is a function of every single activation value from the previous"}, {"start": 55.74, "end": 63.32, "text": " layer of A1. 
But it turns out that for some applications, someone designing a neural network"}, {"start": 63.32, "end": 67.32000000000001, "text": " may choose to use a different type of layer."}, {"start": 67.32000000000001, "end": 73.8, "text": " One other layer type that you may see in some work is called a convolutional layer."}, {"start": 73.8, "end": 76.26, "text": " Let me illustrate this with an example."}, {"start": 76.26, "end": 82.64, "text": " So what I'm showing on the left is the input X, which is a handwritten digit 9."}, {"start": 82.64, "end": 88.28, "text": " And what I'm going to do is construct a hidden layer, which will compute different activations"}, {"start": 88.28, "end": 91.52, "text": " as functions of this input image X."}, {"start": 91.52, "end": 93.64, "text": " But here's something I can do."}, {"start": 93.64, "end": 99.5, "text": " For the first hidden unit, which I've drawn in blue, rather than saying this neuron can"}, {"start": 99.5, "end": 105.86, "text": " look at all the pixels in this image, I might say this neuron can only look at the pixels"}, {"start": 105.86, "end": 108.88, "text": " in this little rectangular region."}, {"start": 108.88, "end": 113.19999999999999, "text": " The second neuron, which I'm going to illustrate in magenta, is also not going to look at the"}, {"start": 113.19999999999999, "end": 119.39999999999999, "text": " entire input image X. 
Instead, it's only going to look at the pixels in a limited region"}, {"start": 119.39999999999999, "end": 121.03999999999999, "text": " of the image."}, {"start": 121.03999999999999, "end": 128.24, "text": " And so on for the third neuron and the fourth neuron, and so on and so forth, down to the"}, {"start": 128.24, "end": 135.07999999999998, "text": " last neuron, which maybe looks only at that region of the image."}, {"start": 135.07999999999998, "end": 136.84, "text": " So why might you want to do this?"}, {"start": 136.84, "end": 141.6, "text": " Why won't you let every neuron look at all the pixels, but instead look at only some"}, {"start": 141.6, "end": 142.6, "text": " of the pixels?"}, {"start": 142.6, "end": 149.24, "text": " Well, some of the benefits are, first, it speeds up computation."}, {"start": 149.24, "end": 154.28, "text": " And second advantage is that a neural network that uses this type of layer called a convolutional"}, {"start": 154.28, "end": 157.72, "text": " layer can need less training data."}, {"start": 157.72, "end": 162.2, "text": " Or alternatively, it can also be less prone to overfitting."}, {"start": 162.2, "end": 166.64000000000001, "text": " You'd heard me talk a bit about overfitting in the previous course, but this is something"}, {"start": 166.64, "end": 171.48, "text": " that we'll dive into greater detail on next week as well, when we talk about practical"}, {"start": 171.48, "end": 175.55999999999997, "text": " tips for using learning algorithms."}, {"start": 175.55999999999997, "end": 183.23999999999998, "text": " And this type of layer, where each neuron only looks at a region of the input image,"}, {"start": 183.23999999999998, "end": 186.32, "text": " is called a convolutional layer."}, {"start": 186.32, "end": 190.72, "text": " It was a researcher, Yann LeCun, who had figured out a lot of the details of how to"}, {"start": 190.72, "end": 195.23999999999998, "text": " get convolutional layers to work and popularize 
their use."}, {"start": 195.24, "end": 201.32000000000002, "text": " Let me illustrate in more detail a convolutional layer."}, {"start": 201.32000000000002, "end": 206.10000000000002, "text": " And if you have multiple convolutional layers in a neural network, sometimes that's called"}, {"start": 206.10000000000002, "end": 209.24, "text": " a convolutional neural network."}, {"start": 209.24, "end": 214.12, "text": " To illustrate the convolutional layer or convolutional neural network, on this slide, I'm going to"}, {"start": 214.12, "end": 221.20000000000002, "text": " use instead of a 2D image input, I'm going to use a one dimensional input."}, {"start": 221.2, "end": 229.28, "text": " And the motivating example I'm going to use is classification of EKG signals or electrocardiograms."}, {"start": 229.28, "end": 234.64, "text": " So if you put two electrodes on your chest, you will record voltages that look like this"}, {"start": 234.64, "end": 237.51999999999998, "text": " that correspond to your heartbeat."}, {"start": 237.51999999999998, "end": 241.83999999999997, "text": " This is actually something that my Stanford research group did research on."}, {"start": 241.83999999999997, "end": 247.79999999999998, "text": " We're actually reading EKG signals that actually look like this to try to diagnose if a patient"}, {"start": 247.79999999999998, "end": 250.5, "text": " may have a heart issue."}, {"start": 250.5, "end": 257.8, "text": " So an EKG signal, an electrocardiogram, ECG in some places, EKG in some places, is just"}, {"start": 257.8, "end": 263.94, "text": " a list of numbers corresponding to the height of the surface at different points in time."}, {"start": 263.94, "end": 269.68, "text": " So you may have, say, 100 numbers corresponding to the height of this curve at 100 different"}, {"start": 269.68, "end": 272.52, "text": " points of time."}, {"start": 272.52, "end": 279.68, "text": " And the learning task is, given this time series, given this EKG 
signal, to classify,"}, {"start": 279.68, "end": 286.56, "text": " say whether this patient has a heart disease or some diagnosable heart condition, here's"}, {"start": 286.56, "end": 289.84000000000003, "text": " what a convolutional neural network might do."}, {"start": 289.84000000000003, "end": 294.48, "text": " So I'm going to take the EKG signal and rotate it 90 degrees to lay it on the side."}, {"start": 294.48, "end": 301.16, "text": " And so we have here 100 inputs, X1, X2, all the way through X100, like so."}, {"start": 301.16, "end": 308.52, "text": " And when I construct the first hidden layer, instead of having the first hidden unit take"}, {"start": 308.52, "end": 315.0, "text": " as input all 100 numbers, let me have the first hidden unit look at only X1 through"}, {"start": 315.0, "end": 316.32, "text": " X20."}, {"start": 316.32, "end": 322.03999999999996, "text": " So that corresponds to looking at just a small window of this EKG signal."}, {"start": 322.03999999999996, "end": 328.12, "text": " The second hidden layer, shown in a different color here, will look at X11 through X30,"}, {"start": 328.12, "end": 332.35999999999996, "text": " so it looks at a different window in this EKG signal."}, {"start": 332.35999999999996, "end": 337.32, "text": " And the third hidden layer looks at another window, X21 through X40, and so on."}, {"start": 337.32, "end": 344.24, "text": " And the final hidden unit in this example will look at X81 through X100, so it looks"}, {"start": 344.24, "end": 349.76, "text": " at a small window toward the end of this EKG time series."}, {"start": 349.76, "end": 354.84, "text": " So this is a convolutional layer, because each unit in this layer looks at only a limited"}, {"start": 354.84, "end": 357.84, "text": " window of the input."}, {"start": 357.84, "end": 363.58, "text": " Now this layer of the neural network has nine units."}, {"start": 363.58, "end": 368.52, "text": " The next layer can also be a convolutional 
layer."}, {"start": 368.52, "end": 377.96, "text": " So in the second hidden layer, let me architect my first unit, not to look at all nine activations"}, {"start": 377.96, "end": 384.09999999999997, "text": " from the previous layer, but to look at, say, just the first five activations from the previous"}, {"start": 384.09999999999997, "end": 385.4, "text": " layer."}, {"start": 385.4, "end": 392.84, "text": " And then my second unit in this second hidden layer may look at just another five numbers,"}, {"start": 392.84, "end": 395.08, "text": " say A3 to A7."}, {"start": 395.08, "end": 401.52, "text": " And the third and final hidden unit in this layer will only look at A5 through A9."}, {"start": 401.52, "end": 409.67999999999995, "text": " And then maybe finally, these activations, A2, gets inputs to a sigmoid unit that does"}, {"start": 409.67999999999995, "end": 417.12, "text": " look at all three of these values of A2 in order to make a binary classification regarding"}, {"start": 417.12, "end": 420.03999999999996, "text": " presence or absence of heart disease."}, {"start": 420.04, "end": 425.04, "text": " So this is an example of a neural network with the first hidden layer being a convolutional"}, {"start": 425.04, "end": 429.56, "text": " layer, the second hidden layer also being a convolutional layer, and then the output"}, {"start": 429.56, "end": 432.68, "text": " layer being a sigmoid layer."}, {"start": 432.68, "end": 438.1, "text": " And it turns out that with convolutional layers, you have many architectural choices, such"}, {"start": 438.1, "end": 443.04, "text": " as how big is the window of inputs that a single neuron should look at, and how many"}, {"start": 443.04, "end": 445.94, "text": " neurons should each layer have."}, {"start": 445.94, "end": 451.71999999999997, "text": " And by choosing those architectural parameters effectively, you can build new versions of"}, {"start": 451.71999999999997, "end": 457.24, "text": " neural networks that 
can be even more effective than the dense layer for some applications."}, {"start": 457.24, "end": 462.48, "text": " To recap, that's it for the convolutional layer and convolutional neural networks."}, {"start": 462.48, "end": 467.2, "text": " I'm not going to go deeper into convolutional networks in this class, and you don't need"}, {"start": 467.2, "end": 472.28, "text": " to know anything about them to do the homeworks and finish this class successfully."}, {"start": 472.28, "end": 477.76, "text": " But I hope that you find this additional intuition that neural networks can have other types"}, {"start": 477.76, "end": 480.52, "text": " of layers as well to be useful."}, {"start": 480.52, "end": 485.64, "text": " And in fact, if you sometimes hear about the latest cutting edge architectures, like a"}, {"start": 485.64, "end": 492.03999999999996, "text": " transformer model or an LSTM or an attention model, a lot of this research in neural networks"}, {"start": 492.03999999999996, "end": 498.47999999999996, "text": " even today pertains to researchers trying to invent new types of layers for neural networks"}, {"start": 498.48, "end": 503.40000000000003, "text": " and plugging these different types of layers together as building blocks to form even more"}, {"start": 503.40000000000003, "end": 507.84000000000003, "text": " complex and hopefully more powerful neural networks."}, {"start": 507.84000000000003, "end": 511.0, "text": " So that's it for the required videos for this week."}, {"start": 511.0, "end": 515.16, "text": " Thank you and congrats on sticking with me all the way through this."}, {"start": 515.16, "end": 520.86, "text": " And I look forward to seeing you next week also, where we'll start to talk about practical"}, {"start": 520.86, "end": 525.0, "text": " advice for how you can build machine learning systems."}, {"start": 525.0, "end": 530.48, "text": " I hope that the tips you learn next week will help you become much more effective at 
building"}, {"start": 530.48, "end": 532.72, "text": " useful machine learning systems."}, {"start": 532.72, "end": 555.72, "text": " And I look forward also to seeing you next week."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=Y66jLs9ubsY
6.1 Advice for applying machine learning | Deciding what to try next -[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
Hi and welcome back. By now you've seen a lot of different learning algorithms including linear regression, logistic regression, even deep learning or neural networks, and next week you'll see decision trees as well. So you now have a lot of powerful tools of machine learning, but how do you use these tools effectively? I've seen teams sometimes take six months to build a machine learning system that I think a more skilled team could have taken or done in just a couple of weeks. And the efficiency of how quickly you can get a machine learning system to work well will depend to a large part on how well you can repeatedly make good decisions about what to do next in the course of a machine learning project. So in this week I hope to share with you a number of tips on how to make decisions about what to do next in a machine learning project that I hope will end up saving you a lot of time. So let's take a look at some advice on how to build machine learning systems. Let's start with an example. Say you've implemented regularized linear regression to predict housing prices. So you have the usual cost function for your learning algorithm, squared error plus this regularization term. But if you train the model and find that it makes unacceptably large errors in its predictions, what do you try next? When you're building a machine learning algorithm, there are usually a lot of different things you could try. For example, you could decide to get more training examples since it seems like having more data should help, right? Or maybe you think maybe you have too many features, you could try a smaller set of features. Or maybe you want to get additional features such as find additional properties of the houses to toss into your data. And maybe that will help it to do better. Or you might take the existing features x1, x2, and so on and try adding polynomial features like x1 squared, x2 squared, x1 x2 and so on. 
Or you might wonder if the value of lambda is chosen well and you might say, maybe it's too big, I want to decrease it. Or you may say, oh, maybe it's too small, I want to try increasing it. So on any given machine learning application, it will often turn out that some of these things could be fruitful and some of these things not fruitful. And a key to being effective at how you build a machine learning algorithm will be if you can find a way to make good choices about where to invest your time. For example, I have seen teams spend literally many, many months collecting more training examples, thinking that more training data has got to help. But it turns out sometimes it helps a lot and sometimes it doesn't. So in this week, you learn about how to carry out a set of diagnostics. And by diagnostic, I mean a test you can run to gain insight into what is or isn't working with a learning algorithm to gain guidance into improving performance. And some of these diagnostics will tell you things like, is it worth weeks or even months collecting more training data? Because if it is, then you can then go ahead and make the investment to get more data, which will hopefully lead to improved performance. Or if it isn't, then running that diagnostic could have saved you months of time. And one thing you see this week as well is that diagnostics can take time to implement, but running them can be a very good use of your time. So this week, we'll spend a lot of time talking about different diagnostics you can use to give you guidance on how to improve your learning algorithm's performance. But first, let's take a look at how to evaluate the performance of your learning algorithm. Let's go do that in the next video.
[{"start": 0.0, "end": 4.0600000000000005, "text": " Hi and welcome back."}, {"start": 4.0600000000000005, "end": 8.8, "text": " By now you've seen a lot of different learning algorithms including linear regression, logistic"}, {"start": 8.8, "end": 14.200000000000001, "text": " regression, even deep learning or neural networks, and next week you'll see decision trees as"}, {"start": 14.200000000000001, "end": 15.200000000000001, "text": " well."}, {"start": 15.200000000000001, "end": 20.14, "text": " So you now have a lot of powerful tools of machine learning, but how do you use these"}, {"start": 20.14, "end": 22.12, "text": " tools effectively?"}, {"start": 22.12, "end": 27.080000000000002, "text": " I've seen teams sometimes take six months to build a machine learning system that I think"}, {"start": 27.08, "end": 32.04, "text": " a more skilled team could have taken or done in just a couple of weeks."}, {"start": 32.04, "end": 36.04, "text": " And the efficiency of how quickly you can get a machine learning system to work well"}, {"start": 36.04, "end": 40.8, "text": " will depend to a large part on how well you can repeatedly make good decisions about what"}, {"start": 40.8, "end": 44.16, "text": " to do next in the course of a machine learning project."}, {"start": 44.16, "end": 48.84, "text": " So in this week I hope to share with you a number of tips on how to make decisions about"}, {"start": 48.84, "end": 53.32, "text": " what to do next in a machine learning project that I hope will end up saving you a lot of"}, {"start": 53.32, "end": 54.44, "text": " time."}, {"start": 54.44, "end": 59.64, "text": " So let's take a look at some advice on how to build machine learning systems."}, {"start": 59.64, "end": 61.68, "text": " Let's start with an example."}, {"start": 61.68, "end": 66.12, "text": " Say you've implemented regularized linear regression to predict housing prices."}, {"start": 66.12, "end": 71.72, "text": " So you have the usual cost function for 
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=uNNx1Czrt1w
6.2 Evaluating and choosing models | Evaluating a model -[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content, please subscribe and give the video a blue thumbs up. Take heart!
Let's say you've trained a machine learning model. How do you evaluate that model's performance? You'll find that having a systematic way to evaluate performance will also help paint a clearer path for how to then improve that performance. So let's take a look at how to evaluate a model. Let's take the example of learning to predict housing prices as a function of the size. Let's say you've trained a model to predict housing prices as a function of the size x, and that the model is a fourth order polynomial, so its features are x, x squared, x cubed, and x to the fourth. Because we fit a fourth order polynomial to a training set with five data points, this fits the training data really well. But we don't like this model very much, because even though the model fits the training data well, we think it will fail to generalize to new examples that aren't in the training set. So when you are predicting prices with just a single feature, the size of the house, you could plot the model like this, and we could see that the curve is very wiggly, so we know this probably isn't a good model. But if you were fitting a model with even more features, say we had x1, the size of the house, the number of bedrooms, the number of floors of the house, and also the age of the home in years, then it becomes much harder to plot f, because f is now a function of x1 through x4. And how do you plot a four dimensional function? So in order to tell if your model is doing well, especially for applications where you have more than one or two features, which makes it difficult to plot f of x, we need some more systematic way to evaluate how well your model is doing. Here's a technique that you can use. If you have a training set, and this is a small training set with just 10 examples listed here, rather than taking all your data to train the parameters w and b of the model, you can instead split the training set into two subsets. I'm going to draw a line here, and let's put 70% of the data into the first part.
And I'm going to call that the training set. And the second part of the data, let's say 30% of the data, I'm going to put into a test set. And what we're going to do is train the model's parameters on the training set, on this first 70% or so of the data, and then we'll test its performance on this test set. In notation, I'm going to use x1, y1, same as before, to denote the training examples, going all the way through xm train, ym train. So in this little example, we would have seven training examples. And to introduce one new piece of notation, I'm going to use m subscript train; m train is the number of training examples, which in this small data set is seven. So the subscript train just emphasizes that we're looking at the training set portion of the data. And for the test set, I'm going to use the notation x1 subscript test, y1 subscript test to denote the first test example, and this goes all the way to xm test subscript test, ym test subscript test. And m test is the number of test examples, which in this case is three. And it's not uncommon to split your data set according to maybe a 70-30 split or an 80-20 split, with most of your data going into the training set and a smaller fraction going into the test set. So in order to train a model and evaluate it, this is what it would look like if you're using linear regression with a squared error cost. Start off by fitting the parameters by minimizing the cost function j of wb. So this is the usual cost function: minimize over wb this squared error cost, plus the regularization term, lambda over 2m times the sum of the wj squared. And then to tell how well this model is doing, you would compute j test of wb, which is equal to the average error on the test set. And that's just equal to 1 over 2 times m test, that's the number of test examples, times a sum over all the examples, from i equals 1 to the number of test examples, of the squared error on each of the test examples, like so.
So it's the prediction on the i-th test example's input minus the actual price of the house on the i-th test example, squared. And notice that the test error formula, j test, does not include that regularization term. And this will give you a sense of how well your learning algorithm is doing. One other quantity that's often useful to compute as well is the training error, which is a measure of how well your learning algorithm is doing on the training set. So let me define j train of wb to be equal to the average over the training set, 1 over 2m, or rather 1 over 2m subscript train, of the sum over your training set of this squared error term. And once again, this does not include the regularization term, unlike the cost function that you were minimizing to fit the parameters. So in a model like the one we saw earlier in this video, j train of wb will be low, because the average error on your training examples will be zero or very close to zero, so j train will be very close to zero. But if you had a few additional examples in your test set that the algorithm had not trained on, then those test examples might look like these, and there would be a large gap between what the algorithm is predicting as the estimated housing price and the actual value of those housing prices. And so j test will be high. So seeing that j test is high on this model gives you a way to realize that even though it does great on the training set, it's actually not so good at generalizing to new examples, to new data points that were not in the training set. So that was regression with a squared error cost. Now let's take a look at how you'd apply this procedure to a classification problem, for example, if you were classifying between handwritten digits that are either zero or one. So same as before, you fit the parameters by minimizing the cost function to find the parameters wb.
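The regression procedure described so far, split the data 70/30, fit on the training portion, then compare j train with j test, can be sketched in Python. Everything below (the made-up data, the variable names, and the use of NumPy's polyfit as the fitting routine) is an illustrative assumption, not code from the lecture; in practice you would also shuffle the data before splitting:

```python
import numpy as np

def j_error(coeffs, xs, ys):
    """Average squared error, 1/(2m) * sum((f(x) - y)^2), with no regularization."""
    return float(np.sum((np.polyval(coeffs, xs) - ys) ** 2) / (2 * len(ys)))

# Small made-up data set of 10 (size, price) examples, as in the lecture.
x = np.array([1.0, 1.2, 1.5, 1.8, 2.0, 2.3, 2.5, 2.8, 3.0, 3.2])
y = np.array([300., 310., 340., 380., 400., 430., 450., 490., 500., 520.])

# 70/30 split: first 70% into the training set, the rest into the test set.
m_train = int(0.7 * len(x))                   # 7 training examples
x_train, y_train = x[:m_train], y[:m_train]
x_test,  y_test  = x[m_train:], y[m_train:]

# Fit the parameters on the training set only, then compute both errors.
w = np.polyfit(x_train, y_train, deg=1)       # a simple linear model
j_train = j_error(w, x_train, y_train)
j_test  = j_error(w, x_test,  y_test)
# A j_test much larger than j_train would signal a failure to generalize.
```

Note that `j_error` deliberately omits the regularization term, matching the definitions of j train and j test above.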
For example, if you were training logistic regression, then this would be the cost function j of wb, where this is the usual logistic loss function, plus also the regularization term. And to compute the test error, j test is then the average, over your test examples, that's that 30% of the data that wasn't in the training set, of the logistic loss on your test set. And the training error, which you can also compute using this formula, is the average logistic loss on your training data, the data that the algorithm was using to minimize the cost function j of wb. Now, what I've described here will work okay for figuring out if your learning algorithm is doing well, by seeing how well it's doing in terms of test error. When applying machine learning to classification problems, there's actually one other definition of j test and j train that is maybe even more commonly used, which is, instead of using the logistic loss to compute the test error and the training error, to instead measure what's the fraction of the test set and the fraction of the training set that the algorithm has misclassified. So specifically, on the test set, you can have the algorithm make a prediction, one or zero, on every test example. So recall, y hat, we would predict as one if f of x is greater than or equal to 0.5, and zero if it's less than 0.5. And you can then count up, in the test set, the fraction of examples where y hat is not equal to the actual ground truth label y in the test set. So concretely, if you were classifying handwritten digits zero or one in this binary classification task, then j test would be the fraction of the test set where zero was classified as one or one classified as zero. And similarly, j train is the fraction of the training set that has been misclassified. Taking a data set and splitting it into a training set and a separate test set gives you a way to systematically evaluate how well your learning algorithm is doing.
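The misclassification-fraction versions of j test and j train just described can be sketched as follows. The threshold at 0.5 is from the lecture; the function name and the toy numbers are made up for illustration:

```python
import numpy as np

def misclassification_fraction(f_x, y):
    """Fraction of examples where y_hat (f(x) thresholded at 0.5) differs from y."""
    y_hat = (f_x >= 0.5).astype(int)   # predict 1 if f(x) >= 0.5, else 0
    return float(np.mean(y_hat != y))

# Hypothetical model outputs f(x) and ground truth labels on a small test set.
f_test = np.array([0.9, 0.2, 0.6, 0.4])
y_test = np.array([1,   0,   0,   1])

j_test = misclassification_fraction(f_test, y_test)  # 2 of 4 misclassified -> 0.5
```

The same function applied to the training portion of the data gives j train.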
By computing both j test and j train, you can now measure how it's doing on the test set and on the training set. This procedure is one step toward being able to automatically choose what model to use for a given machine learning application. For example, if you're trying to predict housing prices, should you fit a straight line to your data, or fit a second order polynomial, or a third order or fourth order polynomial? It turns out that with one further refinement to the idea you saw in this video, you'll be able to have an algorithm help you to automatically make that type of decision. Let's take a look at how to do that in the next video.
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=KwM_IYQ_I-8
6.3 Evaluating and choosing models | Model selection and training/cross validation/test sets-ML Ng
Second Course: Advanced Learning Algorithms. If you liked the content, please subscribe and give the video a blue thumbs up. Take heart!
In the last video, you saw how to use a test set to evaluate the performance of a model. Let's make one further refinement to that idea in this video, which will allow you to use the technique to automatically choose a good model for your machine learning algorithm. One thing we've seen is that once the model's parameters w and b have been fit to the training set, the training error may not be a good indicator of how well the algorithm will do, or how well it will generalize to new examples that were not in the training set. And in particular, for this example, the training error will be pretty much zero, and that's likely much lower than the actual generalization error, by which I mean the average error on new examples that were not in the training set. What you saw in the last video is that j test, the performance of the algorithm on examples it's not trained on, will be a better indicator of how well the model will likely do on new data, by which I mean other data that's not in the training set. Let's take a look at how this affects how we might use a test set to choose a model for a given machine learning application. If you're fitting a function to predict housing prices, or some other regression problem, one model you might consider is a linear model like this. This is a first order polynomial, and I'm going to use d equals 1 on this slide to denote fitting a degree one, or first order, polynomial. If you were to fit a model like this to your training set, you'd get some parameters w and b, and you could then compute j test to estimate how well this will generalize to new data. And on this slide, I'm going to use w1, b1, with a superscript there, to denote that these are the parameters you get if you were to fit a first order polynomial, a degree one, d equals 1, polynomial. Now you might also consider fitting a second order polynomial, or quadratic model. So this is the model.
And if you were to fit this to your training set, you would get some parameters w2, b2, and you could then similarly evaluate those parameters on your test set and get j test of w2, b2, which would give you a sense of how well the second order polynomial does. And you can go on to try d equals 3, that's a third order or degree three polynomial that looks like this, fit its parameters, and similarly get j test. And you might keep doing this until, say, you try up to a 10th order polynomial and end up with j test of w10, b10, which gives you a sense of how well the 10th order polynomial is doing. One procedure you could then try, though this turns out not to be the best procedure, is to look at all of these j tests and see which one gives you the lowest value. Say you find that j test for the fifth order polynomial, for w5, b5, turns out to be the lowest. If that's the case, then you might decide that the fifth order polynomial, d equals 5, does best, and choose that model for your application. And if you want to estimate how well this model performs, one thing you could do, though this turns out to be a slightly flawed procedure, is to report the test set error, j test of w5, b5. The reason this procedure is flawed is that j test of w5, b5 is likely to be an optimistic estimate of the generalization error. In other words, it is likely to be lower than the actual generalization error. And the reason is that, in the procedure we talked about on this slide, we basically fit one extra parameter, which is d, the degree of polynomial, and we chose this parameter using the test set. On the previous slide, we saw that if you were to fit w, b to the training data, then the training error would be an overly optimistic estimate of the generalization error. And it turns out, too, that if we were to choose the parameter d using the test set, then the test set error j test is now an overly optimistic, that is, lower than actual, estimate of the generalization error.
So the procedure on this particular slide is flawed, and I don't recommend using it. Instead, if you want to automatically choose a model, such as deciding what degree polynomial to use, here's how you modify the training and testing procedure in order to carry out model selection, where by model selection I mean choosing amongst different models, such as these 10 different models that you might contemplate using for your machine learning application. The way we'll modify the procedure is, instead of splitting your data into just two subsets, the training set and the test set, we're going to split your data into three different subsets, which we're going to call the training set, the cross validation set, and then also the test set. So using our example from before of these 10 training examples, we might split it up, putting 60% of the data into the training set. The notation we'll use for the training set portion will be the same as before, except that now m train, the number of training examples, will be 6. And we might put 20% of the data into the cross validation set, and the notation I'm going to use is x cv of 1, y cv of 1 for the first cross validation example, where cv stands for cross validation, all the way down to x cv of m cv and y cv of m cv. So here m cv, which equals 2 in this example, is the number of cross validation examples. And then finally, we have the test set, same as before, so x1 test through xm test and y1 test through ym test, where m test here is equal to 2; this is the number of test examples. We'll see on the next slide how to use the cross validation set. So the way we'll modify the procedure is, you've already seen the training set and the test set, and we're going to introduce a new subset of the data called the cross validation set. The name cross validation refers to the fact that this is an extra data set we're going to use to check, or cross check, the validity, or really the accuracy, of different models.
I don't think it's a great name, but that is what people in machine learning have come to call this extra data set. You may also hear people call this the validation set for short; it's just fewer syllables than cross validation. Or in some applications, people also call this the development set, which means basically the same thing. Or, for short, sometimes you'll hear people call this the dev set, but all of these terms mean the same thing as cross validation set. I personally use the term dev set most often because it's the shortest, fastest way to say it, but cross validation is probably used a little bit more often by machine learning practitioners. So armed with these three subsets of the data, the training set, the cross validation set, and the test set, you can then compute the training error, the cross validation error, and the test error using these three formulas, where, as usual, none of these terms includes the regularization term that is included in the training objective. And the new term in the middle, the cross validation error, is just the average, over your m cv cross validation examples, of the squared error. This term, in addition to being called the cross validation error, is also commonly called the validation error for short, or even the development set error, or the dev error. Armed with these three measures of learning algorithm performance, this is how you can then go about carrying out model selection. With the 10 models, same as earlier on this slide, with d equals 1, d equals 2, all the way up to a 10th degree or 10th order polynomial, you can fit the parameters w1, b1. But instead of evaluating these parameters on your test set, you would instead evaluate them on your cross validation set and compute j cv of w1, b1. And similarly, for the second model, you get j cv of w2, b2, and so on, all the way down to j cv of w10, b10. Then, in order to choose a model, you would look at which model has the lowest cross validation error.
And concretely, let's say that JCV of W4, B4 is lowest. Then what that means is you would pick this fourth order polynomial as the model you will use for this application. Finally, if you want to report out an estimate of the generalization error, of how well this model will do on new data, you would do so using that third subset of your data, the test set, and you report out J test of W4, B4. And you notice that throughout this entire procedure, you had fit these parameters using the training set, and you then chose the parameter D, or chose the degree of polynomial, using the cross validation set. And so up until this point, you've not fit any parameters, either W or B or D, to the test set. And that's why J test in this example will be a fair estimate of the generalization error of this model that has parameters W4, B4. So this gives a better procedure for model selection. And it lets you automatically make a decision like what order polynomial to choose for your linear regression model. This model selection procedure also works for choosing among other types of models. For example, choosing a neural network architecture. If you are fitting a model for handwritten digit recognition, you might consider three models like these, maybe even a larger set of models than just three, but here are a few different neural networks: small, somewhat larger, and then even larger. To help you decide how many layers your neural network should have, and how many hidden units per layer, you can then train all three of these models and end up with parameters W1, B1 for the first model, W2, B2 for the second model, and W3, B3 for the third model. And you can then evaluate each neural network's performance using JCV, computed on your cross validation set.
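The selection loop just described, fit each candidate degree on the training set, score it on the cross validation set, keep the lowest JCV, could be sketched with NumPy's polynomial fitting (the function name and the use of `np.polyfit` are my choices for illustration):

```python
import numpy as np

def select_polynomial_degree(x_train, y_train, x_cv, y_cv, max_degree=10):
    """Fit polynomials of degree d = 1..max_degree on the training set only,
    then pick the degree whose cross validation error JCV is lowest."""
    best_d, best_jcv, best_coeffs = None, np.inf, None
    for d in range(1, max_degree + 1):
        coeffs = np.polyfit(x_train, y_train, d)     # parameters fit on training set
        cv_pred = np.polyval(coeffs, x_cv)
        j_cv = np.sum((cv_pred - y_cv) ** 2) / (2 * len(x_cv))
        if j_cv < best_jcv:
            best_d, best_jcv, best_coeffs = d, j_cv, coeffs
    return best_d, best_coeffs
```

Only after this choice is made would you touch the test set, reporting J test of the winning model as the generalization estimate.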
And since this is a classification problem, the most common choice would be to compute JCV as the fraction of cross validation examples that the algorithm has misclassified. You would compute this using all three models and then pick the model with the lowest cross validation error. So if, in this example, this one has the lowest cross validation error, you would then pick the second neural network and use the parameters trained on this model. And finally, if you want to report out an estimate of the generalization error, you then use the test set to estimate how well the neural network that you just chose will do. So in machine learning practice, it's considered best practice to make all the decisions you want to make regarding your learning algorithm, such as how to fit parameters or what degree polynomial to use, by looking only at the training set and the cross validation set, and to not use the test set at all to make decisions about your model. Only after you've made all those decisions do you finally take the model you have designed and evaluate it on your test set. That procedure ensures that you haven't accidentally fit anything to the test set, so that your test set remains a fair, not overly optimistic, estimate of the generalization error of your algorithm. So it's considered best practice in machine learning that if you have to make decisions about your model, such as fitting parameters or choosing the model architecture, such as the neural network architecture or the degree of polynomial if you're fitting linear regression, to make all those decisions only using your training set and your cross validation set, and to not look at the test set at all while you're still making decisions regarding your learning algorithm.
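For classification, the JCV just described is simply the misclassification fraction. A hedged sketch (the function names are illustrative; `models` here stands in for your trained candidate networks' prediction functions):

```python
import numpy as np

def misclassification_error(predict, x, y):
    """Fraction of examples on which the model's predicted label
    differs from the true label y."""
    return float(np.mean(predict(x) != y))

def pick_lowest_cv_error(models, x_cv, y_cv):
    """Return the index of the candidate model with the lowest JCV,
    computed on the cross validation set only."""
    errors = [misclassification_error(m, x_cv, y_cv) for m in models]
    return int(np.argmin(errors))
```

The test set is not consulted here; it is held back to report the chosen model's generalization error afterwards.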
And because you haven't made any decisions using the test set, that ensures that your test set is a fair and not overly optimistic estimate of how well your model will generalize to new data. So that's model selection. And this is actually a very widely used procedure. I use this all the time to automatically choose what model to use for a given machine learning application. Now earlier this week, I mentioned running diagnostics to decide how to improve the performance of a learning algorithm. Now that you have a way to evaluate learning algorithms and even automatically choose a model, let's dive more deeply into examples of some diagnostics. The most powerful diagnostic that I know of and that I use for a lot of machine learning applications is one called bias and variance. Let's take a look at what that means in the next video.
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=YB61HDL7EzE
6.4 Bias and variance | Diagnosing bias and variance -[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content, please subscribe and give it a thumbs up. Take heart!
The typical workflow of developing a machine learning system is that you have an idea and you train a model, and you almost always find that it doesn't work as well as you wish yet. When I'm training a machine learning model, it pretty much never works that well the first time. And so key to the process of building a machine learning system is how to decide what to do next in order to improve its performance. I've found across many different applications that looking at the bias and variance of a learning algorithm gives you very good guidance on what to try next. Let's take a look at what this means. You might remember this example from the first course on linear regression, where given this data set, if you were to fit a straight line to it, it doesn't do that well. And we said that this algorithm has high bias, or that it underfits this data set. Or if you were to fit a fourth-order polynomial, then it has high variance, or it overfits. And in the middle, if you fit a quadratic polynomial, then it looks pretty good, and I said that was just right. Because this is a problem with just a single feature x, we could plot the function f and look at it like this. But if you had more features, you can't plot f and visualize whether it's doing well as easily. So instead of trying to look at plots like this, a more systematic way to diagnose, or to find out, if your algorithm has high bias or high variance will be to look at the performance of your algorithm on the training set and on the cross validation set. In particular, let's look at the example on the left. If you were to compute J train, how well does the algorithm do on the training set? Not that well. So let's say J train here would be high, because there are actually pretty large errors between the examples and the actual predictions of the model. And how about Jcv? So Jcv would be if you had a few new examples, maybe examples like that, that the algorithm had not previously seen.
And here, the algorithm also doesn't do that well on examples that it had not previously seen, so Jcv would also be high. And one characteristic of an algorithm with high bias, something that is underfitting, is that it's not even doing that well on the training set. And so when J train is high, that gives you a strong indicator that this algorithm has high bias. Let's now look at the example on the right. If you were to compute J train, how well is this doing on the training set? Well, it's actually doing great on the training set; it fits the training data really well. So J train here will be low. But if you were to evaluate this model on other houses not in the training set, then you'd find that Jcv, the cross validation error, will be quite high. And so a characteristic signature, or characteristic cue, that your algorithm has high variance will be if Jcv is much higher than J train. In other words, it does much better on data it has seen than on data it has not seen. And this turns out to be a strong indicator that your algorithm has high variance. And again, the point of what we're doing is that by computing J train and Jcv, and seeing if J train is high, or if Jcv is much higher than J train, this gives you a sense, even if you can't plot the function f, of whether your algorithm has high bias or high variance. And finally, the case in the middle: if you look at J train, it's pretty low, since it's doing quite well on the training set. And if you were to look at a few new examples, like those from, say, your cross validation set, you find that Jcv is also pretty low. And so J train not being too high indicates this doesn't have a high bias problem, and Jcv not being much worse than J train indicates that it doesn't have a high variance problem either, which is why this model, the quadratic model, seems to be a pretty good one for this application. Let me share with you another view of bias and variance.
So to summarize: when d equals one, for a linear polynomial, J train was high and Jcv was high. When d equals four, J train was low, but Jcv was high. And when d equals two, both were pretty low. Let's now take a different view on bias and variance. And in particular, on the next slide, I'd like to show you how J train and Jcv vary as a function of the degree of the polynomial you're fitting. So let me draw a figure where the horizontal axis will be the degree of polynomial that we're fitting to the data. Over on the left will correspond to a small value of d, like d equals one, which corresponds to fitting a straight line. And over to the right will correspond to, say, d equals four or even higher values of d, where we're fitting a high order polynomial. So if you were to plot J train of W, b as a function of the degree of polynomial, what you find is that as you fit a higher and higher degree polynomial, here I'm assuming we're not using regularization, the training error will tend to go down. Because when you have a very simple linear function, it doesn't fit the training data that well; when you fit a quadratic function, or a third order polynomial, or a fourth order polynomial, it fits the training data better and better. So as the degree of polynomial increases, J train will typically go down. Next, let's look at Jcv, which measures how well the model does on data that it did not get to fit to. What we saw was that when d equals one, when the degree of polynomial was very low, Jcv was pretty high, because the model underfit, so it didn't do well on the cross validation set. And on the right as well, when the degree of polynomial is very large, say four, it doesn't do well on the cross validation set either, and so Jcv is also high. But if d was in between, say a second order polynomial, then it actually did much better.
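The behavior being described, J train falling as the degree grows while Jcv is high at both extremes, can be traced numerically (a sketch; the helper name and the use of `np.polyfit` are my choices, not from the lecture):

```python
import numpy as np

def errors_vs_degree(x_train, y_train, x_cv, y_cv, degrees):
    """For each polynomial degree d, fit on the training set and record
    J_train (tends to fall as d grows) and J_cv (typically U-shaped)."""
    j_train, j_cv = [], []
    for d in degrees:
        coeffs = np.polyfit(x_train, y_train, d)
        j_train.append(np.sum((np.polyval(coeffs, x_train) - y_train) ** 2) / (2 * len(x_train)))
        j_cv.append(np.sum((np.polyval(coeffs, x_cv) - y_cv) ** 2) / (2 * len(x_cv)))
    return j_train, j_cv
```

Plotting the two returned lists against `degrees` would reproduce the curves on the slide: training error sliding down, cross validation error dipping and then rising again.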
So if you were to vary the degree of polynomial, you'd actually get a curve that looks like this, which comes down and then goes back up. If the degree of polynomial is too low, the model underfits and so doesn't do well on the cross validation set; if it is too high, it overfits and also doesn't do well on the cross validation set. It's only when it's somewhere in the middle that it's just right, which is why the second order polynomial in our example ends up with a lower cross validation error and neither high bias nor high variance. So to summarize, how do you diagnose bias and variance in your learning algorithm? If your learning algorithm has high bias, or has underfit the data, the key indicator will be that Jtrain is high. That corresponds to the leftmost portion of the curve, which is where Jtrain is high, and usually Jtrain and JCV will be close to each other. And how do you diagnose if you have high variance? Well, the key indicator for high variance will be if JCV is much greater than Jtrain. This double greater-than sign in math means much greater than: the single sign means greater, and the double sign means much greater. The rightmost portion of the plot is where JCV is much greater than Jtrain. Usually Jtrain will be pretty low, but the key indicator is whether JCV is much greater than Jtrain, and that's what happens when we had fit a very high order polynomial to this small dataset. And even though we've just seen bias and variance separately, it turns out that in some cases it's possible to simultaneously have high bias and high variance. You won't see this happen that much for linear regression, but if you're training a neural network, there are some applications where, unfortunately, you have high bias and high variance. One way to recognize that situation is that Jtrain is high, so you're not doing that well on the training set, but, even worse, the cross validation error is much larger still than the training error.
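The degree sweep described above can be sketched in a few lines of NumPy. This is an illustrative example on synthetic data whose true relation is quadratic, so we'd expect the cross validation error to come down and go back up, bottoming out at a small degree; the data and degree range are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data whose true relation is quadratic, plus noise.
x = rng.uniform(-2.0, 2.0, 80)
y = 1.5 * x**2 - x + 0.5 + 0.3 * rng.normal(size=x.size)

x_train, y_train = x[:50], y[:50]
x_cv, y_cv = x[50:], y[50:]

degrees = list(range(1, 7))
j_train, j_cv = [], []
for d in degrees:
    w = np.polyfit(x_train, y_train, deg=d)
    j_train.append(float(np.mean((np.polyval(w, x_train) - y_train) ** 2)))
    j_cv.append(float(np.mean((np.polyval(w, x_cv) - y_cv) ** 2)))

# Model selection: pick the degree with the lowest cross validation error.
best_d = degrees[int(np.argmin(j_cv))]
print("chosen degree:", best_d)
```

As the lecture describes, `j_train` only goes down as d grows, while `j_cv` traces the U-shaped curve, so the chosen degree sits away from the underfitting end.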
The notion of simultaneously high bias and high variance doesn't really arise for linear models applied to one-dimensional data, but to give intuition about what it looks like, it would be as if, for part of the input, you had a very complicated model that overfits, so it overfits part of the input; but then, for some reason, for other parts of the input it doesn't even fit the training data well, and so it underfits that part of the input. In this example, which looks artificial because it's a single-feature input, the model fits the training set really well and overfits in part of the input, while not even fitting the training data well, and underfitting, in another part of the input. That's how, in some applications, you can unfortunately end up with both high bias and high variance. The indicator for that will be that the algorithm does poorly on the training set, and the cross validation error is even much worse than the training error. For most learning applications, you probably have primarily a high bias or a high variance problem, rather than both at the same time, but it is possible sometimes to have both at the same time. So I know that there's a lot to process, there are a lot of concepts on these slides, but the key takeaways are: high bias means it's not even doing well on the training set, and high variance means it does much worse on the cross-validation set than on the training set. Whenever I'm training a machine learning algorithm, I will almost always try to figure out to what extent the algorithm has a high bias or underfitting problem versus a high variance or overfitting problem, and this will give good guidance, as we'll see later this week, on how you can improve the performance of the algorithm. But first, let's take a look at how regularization affects the bias and variance of a learning algorithm, because that will help you better understand when you should use regularization. Let's take a look at that in the next video.
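The key takeaways can be condensed into a small helper function. The thresholds below are purely illustrative assumptions; as a later video in this course discusses, in practice you would judge "high" relative to a baseline level of performance:

```python
def diagnose(j_train, j_cv, baseline=0.0, tol=0.1):
    """Rough bias/variance diagnosis from training and cross validation error.

    `baseline` is the error level considered acceptable (e.g. human-level
    performance) and `tol` is an illustrative gap threshold; both are
    assumptions for this sketch, not part of the course's formal recipe.
    """
    high_bias = (j_train - baseline) > tol   # not even doing well on the training set
    high_variance = (j_cv - j_train) > tol   # much worse on data it has not seen
    if high_bias and high_variance:
        return "high bias and high variance"
    if high_bias:
        return "high bias (underfitting)"
    if high_variance:
        return "high variance (overfitting)"
    return "about right"
```

For example, `diagnose(3.0, 3.05)` reports underfitting, while `diagnose(0.05, 2.0)` reports overfitting.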
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=2Ji4Upc606c
6.5 Bias and variance | Regularization and bias/variance -[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms.
You saw in the last video how different choices of the degree of polynomial d affect the bias and variance of your learning algorithm, and therefore its overall performance. In this video, let's take a look at how regularization, specifically the choice of the regularization parameter lambda, affects the bias and variance, and therefore the overall performance, of the algorithm. This, it turns out, will be helpful when you want to choose a good value of lambda, the regularization parameter, for your algorithm. Let's take a look. In this example, I'm going to use a fourth order polynomial, but we're going to fit this model using regularization, where the value of lambda is the regularization parameter that controls how much you trade off keeping the parameters w small versus fitting the training data well. Let's start with the example of setting lambda to be a very large value, say lambda equals 10,000. If you were to do so, you would end up fitting a model that looks roughly like this, because if lambda were very, very large, then the algorithm is highly motivated to keep these parameters w very small, and so you end up with w1, w2, really all of these parameters, very close to zero. The model ends up being f of x approximately equal to b, a constant value, which is why you end up with a model like this. This model clearly has high bias and it underfits the training data, because it doesn't even do well on the training set, and J train is large. Let's take a look at the other extreme, where you set lambda to be a very small value; in fact, let's go to the extreme of setting lambda equal to zero. With that choice of lambda, there is no regularization, so we're just fitting a fourth order polynomial with no regularization, and you end up with the curve that you saw previously that overfits the data. What we saw previously was that when you have a model like this, J train is small, but JCV is much larger than J train.
JCV is large, and so this indicates that we have high variance and the model overfits the data. It would be nice if you had some intermediate value of lambda, not so large as 10,000 but not so small as zero, so that hopefully you get a model that looks like this, that is just right and fits the data well, with small J train and small JCV. If you are trying to decide what is a good value of lambda to use for the regularization parameter, cross-validation gives you a way to do so as well. Let's take a look at how we could do so. Just as a reminder, the problem we're addressing is: if you're fitting a fourth order polynomial, so that's the model, and you're using regularization, how can you choose a good value of lambda? This would be a procedure similar to what you saw for choosing the degree of polynomial d using cross-validation. Specifically, let's say we try to fit a model using lambda equals zero; we would minimize the cost function using lambda equals zero and end up with some parameters w1, b1, and you can then compute the cross-validation error, JCV of w1, b1. Now let's try a different value of lambda. Let's say you try lambda equals 0.01; then again, minimizing the cost function gives you a second set of parameters, w2, b2, and you can also see how well that does on the cross-validation set, and so on. Let's keep trying other values of lambda, and in this example, I'm going to try doubling it to lambda equals 0.02, which would give you JCV of w3, b3, and so on. Then let's double it again, and again. After doubling a number of times, you end up with lambda approximately equal to 10, and that would give you parameters w12, b12, and JCV of w12, b12. By trying out a large range of possible values for lambda, fitting parameters using those different regularization parameters, and then evaluating the performance on the cross-validation set, you can then try to pick what is the best value for the regularization parameter.
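The two extremes of lambda can be sketched with a closed-form regularized least squares fit on degree-four polynomial features. This is an illustrative reconstruction, not the course's code; the synthetic data and the `ridge_fit` helper are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training data for a fourth order polynomial fit with regularization.
x = rng.uniform(0.0, 2.0, 20)
y = np.sin(2.0 * x) + 0.2 * rng.normal(size=x.size)

# Design matrix with columns [1, x, x^2, x^3, x^4]; column 0 plays the role of b.
X = np.vander(x, 5, increasing=True)

def ridge_fit(X, y, lam):
    """Closed-form regularized least squares. The intercept (column 0) is left
    unpenalized, matching the course convention of regularizing w but not b."""
    penalty = lam * np.eye(X.shape[1])
    penalty[0, 0] = 0.0
    return np.linalg.solve(X.T @ X + penalty, X.T @ y)

w_large = ridge_fit(X, y, lam=10_000.0)  # huge lambda: w shrinks toward 0, f(x) ~ b
w_zero = ridge_fit(X, y, lam=0.0)        # lambda = 0: plain unregularized fit

j_train_large = float(np.mean((X @ w_large - y) ** 2))
j_train_zero = float(np.mean((X @ w_zero - y) ** 2))
print(j_train_large, j_train_zero)
```

With lambda huge, the non-intercept weights collapse toward zero and the training error is large (underfitting); with lambda zero, the unregularized fit drives the training error down, consistent with the lecture's two extremes.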
Concretely, if in this example you find that JCV of w5, b5 has the lowest value of all of these different cross-validation errors, you might then decide to pick this value for lambda, and so use w5, b5 as the chosen parameters. And finally, if you want to report out an estimate of the generalization error, you would then report out the test set error, J test of w5, b5. To further hone intuition about what this algorithm is doing, let's take a look at how training error and cross-validation error vary as a function of the parameter lambda. In this figure, I've changed the x-axis again; notice that the x-axis here is annotated with the value of the regularization parameter lambda. If we look at the extreme of lambda equals zero, here on the left, that corresponds to not using any regularization, and that's where we would end up with this very wiggly curve if lambda were small or even zero. In that case, we have a high variance model, and so J train is going to be small and JCV is going to be large, because the model does great on the training data but does much worse on the cross-validation data. The extreme on the right, with very large values of lambda, say lambda equals 10,000, ends up fitting a model that looks like that. This has high bias; it underfits the data, and it turns out J train will be high and JCV will be high as well. In fact, if you were to look at how J train varies as a function of lambda, you'd find that J train goes up like this, because in the optimization cost function, the larger lambda is, the more the algorithm is trying to keep w squared small, that is, the more weight is given to the regularization term, and thus the less attention is paid to actually doing well on the training set. This term on the left is J train, so the more the algorithm is trying to keep the parameters small, the less good a job it does on minimizing the training error.
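The doubling schedule and the pick-the-lowest-JCV step can be sketched as follows; again the synthetic data and the `ridge_fit` helper are illustrative assumptions, not the course's code:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data for choosing lambda in a regularized fourth order polynomial fit.
x = rng.uniform(0.0, 2.0, 60)
y = np.sin(2.0 * x) + 0.2 * rng.normal(size=x.size)
X = np.vander(x, 5, increasing=True)  # columns [1, x, x^2, x^3, x^4]

X_train, y_train = X[:40], y[:40]
X_cv, y_cv = X[40:], y[40:]

def ridge_fit(X, y, lam):
    """Closed-form regularized least squares; the intercept b is unpenalized."""
    penalty = lam * np.eye(X.shape[1])
    penalty[0, 0] = 0.0
    return np.linalg.solve(X.T @ X + penalty, X.T @ y)

# The lecture's doubling schedule: 0, then 0.01, 0.02, 0.04, ..., up to about 10.
lambdas = [0.0] + [0.01 * 2**k for k in range(11)]
j_train, j_cv = [], []
for lam in lambdas:
    w = ridge_fit(X_train, y_train, lam)
    j_train.append(float(np.mean((X_train @ w - y_train) ** 2)))
    j_cv.append(float(np.mean((X_cv @ w - y_cv) ** 2)))

# Pick the lambda whose cross validation error is lowest.
best_lam = lambdas[int(np.argmin(j_cv))]
print("chosen lambda:", best_lam)
```

Consistent with the lecture's plot, the training error can only go up (never down) as lambda grows, while the cross validation error picks out an intermediate value.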
So that's why, as lambda increases, the training error J train will tend to increase. Now, how about the cross-validation error? It turns out the cross-validation error will look like this, because we've seen that if lambda is too small or too large, then the model doesn't do well on the cross-validation set: it either overfits, here on the left, or underfits, here on the right. There will be some intermediate value of lambda that causes the algorithm to perform best, and what cross-validation is doing is trying out a lot of different values of lambda. This is what we saw on the last slide: try out lambda equals 0, lambda equals 0.01, lambda equals 0.02, try out a lot of different values of lambda, evaluate the cross-validation error at a lot of these different points, and then hopefully pick a value that has low cross-validation error; this will hopefully correspond to a good model for your application. If you compare this diagram to the one that we had in the previous video, where the horizontal axis was the degree of polynomial, these two diagrams look a little bit, not mathematically and not in any formal way, but a little bit, like mirror images of each other. That's because when you're varying the degree of polynomial, the left part of that curve corresponded to underfitting and high bias, and the right part corresponded to overfitting and high variance, whereas in this one, high variance is on the left and high bias is on the right. That's why these two images are a little bit like mirror images of each other, but in both cases, cross-validation, by evaluating different values, can help you choose a good value of d or a good value of lambda. So that's how the choice of the regularization parameter lambda affects the bias and variance and overall performance of your algorithm, and you've also seen how you can use cross-validation to make a good choice for the regularization parameter lambda.
Now, so far, we've talked about how a high training set error, high J train, is indicative of high bias, and how a high cross-validation error JCV, specifically one much higher than J train, is indicative of a variance problem. But what do these words high, or much higher, actually mean? Let's take a look at that in the next video, where we'll look at how you can examine the numbers J train and JCV and judge whether they're high or low. It turns out that one further refinement of these ideas, that is, establishing a baseline level of performance for your learning algorithm, will make it much easier for you to look at these numbers, J train and JCV, and judge whether they're high or low. Let's take a look at what all this means in the next video.
JCV of w3, b3, and so on."}, {"start": 276.16, "end": 278.72, "text": " Let's double the gain and double the gain."}, {"start": 278.72, "end": 283.6, "text": " After doubling this number of times, you end up with lambda approximately equal to 10,"}, {"start": 283.6, "end": 295.04, "text": " and that would give you parameters w12, b12, and JCV w12, b12, and by trying out the large"}, {"start": 295.04, "end": 300.24, "text": " range of possible values for lambda, fitting parameters using those different regularization"}, {"start": 300.24, "end": 305.36, "text": " parameters and then evaluating the performance on the cross-validation set, you can then"}, {"start": 305.36, "end": 309.64000000000004, "text": " try to pick what is the best value for the regularization parameter."}, {"start": 309.64000000000004, "end": 319.36, "text": " Concretely, if in this example, you find that JCV of w5, b5 has the lowest value of all"}, {"start": 319.36, "end": 325.12, "text": " of these different cross-validation errors, you might then decide to pick this value for"}, {"start": 325.12, "end": 331.84000000000003, "text": " lambda and so use w5, b5 as the chosen parameters."}, {"start": 331.84000000000003, "end": 336.96000000000004, "text": " And finally, if you want to report out an estimate of the generalization error, you"}, {"start": 336.96000000000004, "end": 343.2, "text": " would then report out the test set error J test of w5, b5."}, {"start": 343.2, "end": 347.76, "text": " To further hone intuition about what this algorithm is doing, let's take a look at how"}, {"start": 347.76, "end": 354.28, "text": " training error and cross-validation error vary as a function of the parameter lambda."}, {"start": 354.28, "end": 357.76, "text": " So in this figure, I've changed the x-axis again."}, {"start": 357.76, "end": 364.0, "text": " Notice that the x-axis here is annotated with the value of the regularization parameter"}, {"start": 364.0, "end": 366.59999999999997, "text": " 
lambda."}, {"start": 366.59999999999997, "end": 374.44, "text": " And if we look at the extreme of lambda equals zero, here on the left, that corresponds to"}, {"start": 374.44, "end": 379.16, "text": " not using any regularization and so that's where we want to end up with this very wiggly"}, {"start": 379.16, "end": 384.12, "text": " curve if lambda was small or was even zero."}, {"start": 384.12, "end": 391.84, "text": " And in that case, we have a high variance model and so J train is going to be small"}, {"start": 391.84, "end": 397.72, "text": " and JCV is going to be large because it does great on the training data but does much worse"}, {"start": 397.72, "end": 400.56, "text": " on the cross-validation data."}, {"start": 400.56, "end": 406.28000000000003, "text": " This extreme on the right with very large values of lambda, say lambda equals 10,000,"}, {"start": 406.28000000000003, "end": 409.48, "text": " ends up with fitting a model that looks like that."}, {"start": 409.48, "end": 416.04, "text": " So this has high bias, it underfits the data and it turns out J train will be high and"}, {"start": 416.04, "end": 420.2, "text": " JCV will be high as well."}, {"start": 420.2, "end": 426.52, "text": " And in fact, if you were to look at how J train varies as a function of lambda, you"}, {"start": 426.52, "end": 433.52, "text": " find that J train will go up like this because in the optimization cost function, the larger"}, {"start": 433.52, "end": 439.08, "text": " lambda is, the more the algorithm is trying to keep w squared small, that is the more"}, {"start": 439.08, "end": 444.47999999999996, "text": " weight is given to this regularization term and thus the less attention is paid to actually"}, {"start": 444.47999999999996, "end": 446.35999999999996, "text": " doing well on the training set."}, {"start": 446.35999999999996, "end": 451.0, "text": " Right, this term on the left is J train, so the more it's trying to keep the parameters"}, {"start": 451.0, 
"end": 455.96, "text": " small, the less good a job it does on minimizing the training error."}, {"start": 455.96, "end": 463.12, "text": " So that's why as lambda increases, the training error J train will tend to increase like so."}, {"start": 463.12, "end": 465.71999999999997, "text": " Now how about the cross-validation error?"}, {"start": 465.71999999999997, "end": 471.71999999999997, "text": " Turns out the cross-validation error will look like this because we've seen that if"}, {"start": 471.71999999999997, "end": 477.35999999999996, "text": " lambda is too small or too large, then it doesn't do well on the cross-validation set."}, {"start": 477.35999999999996, "end": 485.15999999999997, "text": " It either overfits here on the left or underfits here on the right and there'll be some intermediate"}, {"start": 485.16, "end": 490.68, "text": " value of lambda that causes the algorithm to perform best."}, {"start": 490.68, "end": 496.8, "text": " And what cross-validation is doing is trying out a lot of different values of lambda."}, {"start": 496.8, "end": 500.8, "text": " This is what we saw on the last slide, try out lambda equals 0, lambda equals 0.01, lambda"}, {"start": 500.8, "end": 506.64000000000004, "text": " equals 0.02, try out a lot of different values of lambda and evaluate the cross-validation"}, {"start": 506.64000000000004, "end": 515.0400000000001, "text": " error at a lot of these different points and then hopefully pick a value that has low cross-validation"}, {"start": 515.04, "end": 521.52, "text": " error and this will hopefully correspond to a good model for your application."}, {"start": 521.52, "end": 527.4399999999999, "text": " If you compare this diagram to the one that we had in the previous video where the horizontal"}, {"start": 527.4399999999999, "end": 533.7199999999999, "text": " axis was the degree of polynomial, these two diagrams look a little bit, not mathematically"}, {"start": 533.7199999999999, "end": 539.76, "text": " 
and not in any formal way, but they look a little bit like mirror images of each other."}, {"start": 539.76, "end": 545.56, "text": " And that's because when you're fitting a degree of polynomial, the left part of this curve"}, {"start": 545.56, "end": 551.56, "text": " corresponded to overfitting and high bias, the right part corresponded to underfitting"}, {"start": 551.56, "end": 558.58, "text": " and high variance, whereas in this one, high variance was on the left and high bias was"}, {"start": 558.58, "end": 559.58, "text": " on the right."}, {"start": 559.58, "end": 563.4399999999999, "text": " But that's why these two images are a little bit like mirror images of each other."}, {"start": 563.4399999999999, "end": 569.12, "text": " But in both cases, cross-validation, evaluating different values can help you choose a good"}, {"start": 569.12, "end": 572.28, "text": " value of t or a good value of lambda."}, {"start": 572.28, "end": 577.26, "text": " So that's how the choice of regularization parameter lambda affects the bias and variance"}, {"start": 577.26, "end": 580.28, "text": " and overall performance of your algorithm."}, {"start": 580.28, "end": 585.42, "text": " And you've also seen how you can use cross-validation to make a good choice for the regularization"}, {"start": 585.42, "end": 588.0, "text": " parameter lambda."}, {"start": 588.0, "end": 594.14, "text": " Now so far, we've talked about how having a high training set error, high J train is"}, {"start": 594.14, "end": 600.3199999999999, "text": " indicative of high bias and how having a high cross-validation error, JCV, specifically"}, {"start": 600.3199999999999, "end": 606.9399999999999, "text": " if it's much higher than J train, how that's indicative of a variance problem."}, {"start": 606.9399999999999, "end": 611.04, "text": " But what do these words high or much higher actually mean?"}, {"start": 611.04, "end": 614.64, "text": " Let's take a look at that in the next video where we'll 
look at how you can look at the"}, {"start": 614.64, "end": 619.48, "text": " numbers J train and JCV and judge if it's high or low."}, {"start": 619.48, "end": 624.24, "text": " And it turns out that one further refinement of these ideas, that is establishing a baseline"}, {"start": 624.24, "end": 627.88, "text": " level of performance for your learning algorithm, will make it much easier for you to look at"}, {"start": 627.88, "end": 633.08, "text": " these numbers J train, JCV and judge if they're high or low."}, {"start": 633.08, "end": 650.0400000000001, "text": " Let's take a look at what all this means in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=8Rl_2WQbmlc
6.6 Bias and variance | Establishing a baseline level of performance --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content, please subscribe and leave a thumbs up. Take heart!
Let's look at some concrete numbers for what JTrain and JCV might be and see how you can judge if a learning algorithm has high bias or high variance. For the examples in this video, I'm going to use as a running example the application of speech recognition, which is something I've worked on multiple times over the years. Let's take a look. A lot of users doing web search on a mobile phone will use speech recognition rather than type on the tiny keyboards on their phones, because speaking to a phone is often faster than typing. So typical audio that a web search engine would get would be like this: What is today's weather? Or like this: Coffee shops near me. And it's the job of the speech recognition algorithm to output the transcripts, "what is today's weather" or "coffee shops near me". Now, if you were to train a speech recognition system, you could measure the training error, where the training error means the percentage of audio clips in your training set that the algorithm does not transcribe correctly in their entirety. Let's say the training error for this data set is 10.8%, meaning that it transcribes 89.2% of your training set perfectly but makes some mistake on 10.8% of it. And if you were to also measure your speech recognition algorithm's performance on a separate cross-validation set, let's say it gets 14.8% error. If you were to look at these numbers, it looks like the training error is really high: it got 10% wrong, and the cross-validation error is higher still. Getting 10% of even your training set wrong seems pretty high, and that 10% error might lead you to conclude the algorithm has to have high bias, because it's not doing well on your training set. But it turns out that when analyzing speech recognition, it's useful to also measure one other thing, which is the human level of performance. In other words, how well can even humans transcribe speech accurately from these audio clips?
And concretely, let's say that you measure how well fluent speakers can transcribe audio clips, and you find that human level performance achieves 10.6% error. Why is human level error so high? It turns out that for web search, there are a lot of audio clips that sound like this. I'm going to navigate to somewhere on the... And there's a lot of noisy audio where really no one can accurately transcribe what was said because of the noise in the audio. And if even a human makes 10.6% error, then it seems difficult to expect a learning algorithm to do much better. And so in order to judge if the training error is high, it turns out to be more useful to see if the training error is much higher than the human level of performance. In this example, the training error is just 0.2% worse than human level. Given that humans are actually really good at recognizing speech, I think if I can build a speech recognition system that achieves 10.6% error, matching human performance, I'd be pretty happy. So it's just doing a little bit worse than humans. But in contrast, the gap or the difference between JCV and JTrain is much larger. There's actually a 4% gap there. So whereas previously we had said maybe 10.8% error means this is high bias, when we benchmark it to human level performance, we see that the algorithm is actually doing quite well on the training set. But the bigger problem is that the cross validation error is much higher than the training error, which is why I would conclude that this algorithm actually has more of a variance problem than a bias problem.
Because humans are really good at understanding speech data or processing images or understanding text, human level performance is often a good benchmark when you are using unstructured data such as audio, images or text. Another way to establish a baseline level of performance is to measure some competing algorithm, maybe a previous implementation that someone else built or even a competitor's algorithm, if you can measure that. Or sometimes you might guess based on prior experience. If you have access to this baseline level of performance, that is, the level of error you can reasonably hope to get to, or the desired level of performance that you want your algorithm to get to, then when judging if an algorithm has high bias or variance, you would look at the baseline level of performance, the training error, and the cross validation error. The two key quantities to measure are then: what is the difference between the training error and the baseline level that you hope to get to? Here this is 0.2%, and if this gap is large, you would say you have a high bias problem. You would then also look at the gap between your training error and your cross validation error, and if this is high, you would conclude you have a high variance problem. That's why in this example, we concluded we have a high variance problem. Now let's look at the second example. If the baseline level of performance, that is, human level performance, and the training error and cross validation error look like this, then the first gap is 4.4%. So there's actually a big gap: the training error is much higher than what humans can do and what we hope to get to, whereas the cross validation error is just a little bit bigger than the training error. If your training error and cross validation error look like this, I would say this algorithm has high bias.
And so by looking at these numbers, training error and cross validation error, you can get a sense, intuitively or informally, of the degree to which your algorithm has a high bias or high variance problem. Just to summarize: the gap between the first two numbers (baseline and training error) gives you a sense of whether you have a high bias problem, and the gap between the last two numbers (training error and cross validation error) gives you a sense of whether you have a high variance problem. Sometimes the baseline level of performance could be 0%; if your goal is to achieve perfect performance, then the baseline level of performance could be 0%. But for some applications, like the speech recognition application where some audio is just noisy, the baseline level of performance could be much higher than 0%. The method described on this slide will give you a better read on whether your algorithm suffers from bias or variance. And by the way, it is possible for your algorithm to have high bias and high variance. Concretely, if you get numbers like these, then the gap between the baseline and the training error is large, that would be 4.6%, and the gap between the training error and cross validation error is also large, 4.2%. If it looks like this, you would conclude that your algorithm has high bias and high variance, although hopefully this won't happen that often for your learning applications. So to summarize, we've seen that looking at whether your training error is large is a way to tell if your algorithm has high bias. But on applications where the data is sometimes just noisy and it is infeasible or unrealistic to ever expect to get to zero error, it's useful to establish this baseline level of performance, so that rather than just asking, is my training error large, you can ask, is my training error large relative to what I hope I could get to eventually, such as relative to what humans can do on the task?
And that gives you a more accurate read on how far away you are in terms of a training error from where you hope to get to. And then similarly, looking at whether your cross validation error is much larger than your training error gives you a sense of whether or not your algorithm may have a high variance problem as well. And in practice, this is how I often will look at these numbers to judge if my learning algorithm has a high bias or high variance problem. Now to further hone our intuition about how a learning algorithm is doing, there's one other thing that I found useful to think about, which is the learning curve. Let's take a look at what that means in the next video.
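The diagnosis rule described in this video can be captured in a few lines. A minimal sketch: the 2.0% cutoff is my illustrative threshold, not a number from the lecture, and the specific training/CV values in the second and third calls are reconstructed so that the gaps match the 4.4%, 4.6% and 4.2% figures quoted above.

```python
def diagnose(baseline, j_train, j_cv, threshold=2.0):
    """All errors in percent. Compare the two key gaps:
      baseline -> training error       (large gap => high bias)
      training -> cross-validation     (large gap => high variance)
    threshold=2.0 is an illustrative cutoff, not from the course."""
    high_bias = (j_train - baseline) > threshold
    high_variance = (j_cv - j_train) > threshold
    if high_bias and high_variance:
        return "high bias and high variance"
    if high_bias:
        return "high bias"
    if high_variance:
        return "high variance"
    return "neither"

# The speech-recognition examples; baseline = 10.6% human-level error.
print(diagnose(10.6, 10.8, 14.8))  # gaps 0.2 / 4.0 -> high variance
print(diagnose(10.6, 15.0, 15.5))  # gaps 4.4 / 0.5 -> high bias
print(diagnose(10.6, 15.2, 19.4))  # gaps 4.6 / 4.2 -> both
```

In practice you would tune such a threshold per application (or just eyeball the gaps, as the lecture does) rather than hard-code one number.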
[{"start": 0.0, "end": 8.0, "text": " Let's look at some concrete numbers for what JTrain and JCV might be and see how you can"}, {"start": 8.0, "end": 12.280000000000001, "text": " judge if a learning algorithm has high bias or high variance."}, {"start": 12.280000000000001, "end": 17.36, "text": " And for the examples in this video, I'm going to use as a running example the application"}, {"start": 17.36, "end": 22.2, "text": " of speech recognition, which is something I've worked on multiple times over the years."}, {"start": 22.2, "end": 23.2, "text": " Let's take a look."}, {"start": 23.2, "end": 29.64, "text": " A lot of users doing web search on a mobile phone will use speech recognition rather than"}, {"start": 29.64, "end": 34.44, "text": " type on the tiny keyboards on their phones because speaking to a phone is often faster"}, {"start": 34.44, "end": 36.44, "text": " than typing."}, {"start": 36.44, "end": 42.52, "text": " And so typical audio that a web search engine would get would be like this."}, {"start": 42.52, "end": 44.44, "text": " What is today's weather?"}, {"start": 44.44, "end": 45.44, "text": " Or like this."}, {"start": 45.44, "end": 47.16, "text": " Coffee shops near me."}, {"start": 47.16, "end": 51.0, "text": " And there's the job of the speech recognition algorithm to output the transcripts, what"}, {"start": 51.0, "end": 54.72, "text": " is today's weather or coffee shops near me."}, {"start": 54.72, "end": 61.04, "text": " Now if you were to train a speech recognition system and measure the training error, and"}, {"start": 61.04, "end": 66.76, "text": " the training error means what's the percentage of audio clips in your training set that the"}, {"start": 66.76, "end": 70.88, "text": " algorithm does not transcribe correctly in its entirety."}, {"start": 70.88, "end": 77.36, "text": " Let's say the training error for this data set is 10.8%, meaning that it transcribes"}, {"start": 77.36, "end": 85.28, "text": " it perfectly for 
89.2% of your training set, but makes some mistake in 10.8% of your training"}, {"start": 85.28, "end": 86.76, "text": " set."}, {"start": 86.76, "end": 91.8, "text": " And if you were to also measure your speech recognition algorithm's performance on a separate"}, {"start": 91.8, "end": 97.96000000000001, "text": " cross-validation set, let's say it gets 14.8% error."}, {"start": 97.96000000000001, "end": 103.24, "text": " So if you were to look at these numbers, it looks like the training error is really high,"}, {"start": 103.24, "end": 109.52, "text": " it got 10% wrong, and then the cross-validation error is higher, but getting 10% of even your"}, {"start": 109.52, "end": 112.8, "text": " training set wrong, that seems pretty high."}, {"start": 112.8, "end": 117.6, "text": " And it seems like that 10% error would lead you to conclude it has to have high bias because"}, {"start": 117.6, "end": 119.75999999999999, "text": " it's not doing well on your training set."}, {"start": 119.75999999999999, "end": 125.47999999999999, "text": " But it turns out that when analyzing speech recognition, it's useful to also measure one"}, {"start": 125.47999999999999, "end": 130.72, "text": " other thing, which is what is the human level of performance."}, {"start": 130.72, "end": 136.72, "text": " In other words, how well can even humans transcribe speech accurately from these audio clips?"}, {"start": 136.72, "end": 144.24, "text": " And concretely, let's say that you measure how well fluent speakers can transcribe audio"}, {"start": 144.24, "end": 150.32, "text": " clips and you find that it is 10.6%, and you find that human level performance achieves"}, {"start": 150.32, "end": 153.36, "text": " 10.6% error."}, {"start": 153.36, "end": 157.16, "text": " Why is human level error so high?"}, {"start": 157.16, "end": 163.0, "text": " It turns out that for web search, there are a lot of audio clips that sound like this."}, {"start": 163.0, "end": 166.56, "text": " I'm going to 
navigate to somewhere on the..."}, {"start": 166.56, "end": 171.6, "text": " And there's a lot of noisy audio where really no one can accurately transcribe what was"}, {"start": 171.6, "end": 175.01999999999998, "text": " said because of the noise in the audio."}, {"start": 175.01999999999998, "end": 182.48, "text": " And if even a human makes 10.6% error, then it seems difficult to expect a learning algorithm"}, {"start": 182.48, "end": 184.0, "text": " to do much better."}, {"start": 184.0, "end": 190.08, "text": " And so in order to judge if the training error is high, it turns out to be more useful to"}, {"start": 190.08, "end": 195.88, "text": " see if the training error is much higher than a human level of performance."}, {"start": 195.88, "end": 200.12, "text": " And in this example, it does just 0.2% worse than humans."}, {"start": 200.12, "end": 203.64, "text": " Given that humans are actually really good at recognizing speech, I think if I can build"}, {"start": 203.64, "end": 208.76, "text": " a speech recognition system that achieves 10.6% error matching human performance, I'd"}, {"start": 208.76, "end": 209.76, "text": " be pretty happy."}, {"start": 209.76, "end": 213.0, "text": " So it's just doing a little bit worse than humans."}, {"start": 213.0, "end": 219.56, "text": " But in contrast, the gap or the difference between JCV and JTrain is much larger."}, {"start": 219.56, "end": 222.48, "text": " There's actually a 4% gap there."}, {"start": 222.48, "end": 230.88, "text": " So whereas previously we had said maybe 10.8% error means this is high bias, when we benchmark"}, {"start": 230.88, "end": 235.12, "text": " it to human level performance, we see that the algorithm is actually doing quite well"}, {"start": 235.12, "end": 236.88, "text": " on the training set."}, {"start": 236.88, "end": 243.4, "text": " But the bigger problem is the cross validation error is much higher than the training error,"}, {"start": 243.4, "end": 249.04, "text": " which 
is why I would conclude that this algorithm actually has more of a variance problem than"}, {"start": 249.04, "end": 251.0, "text": " a bias problem."}, {"start": 251.0, "end": 259.76, "text": " So it turns out when judging if the training error is high, it's often useful to establish"}, {"start": 259.76, "end": 262.04, "text": " a baseline level of performance."}, {"start": 262.04, "end": 266.52, "text": " And by baseline level of performance, I mean, what is the level of error you can reasonably"}, {"start": 266.52, "end": 271.2, "text": " hope your learning algorithm to eventually get to?"}, {"start": 271.2, "end": 276.88, "text": " And one common way to establish a baseline level of performance is to measure how well"}, {"start": 276.88, "end": 279.46, "text": " humans can do on this task."}, {"start": 279.46, "end": 284.71999999999997, "text": " Because humans are really good at understanding speech data or processing images or understanding"}, {"start": 284.71999999999997, "end": 291.03999999999996, "text": " text, human level performance is often a good benchmark when you are using unstructured data"}, {"start": 291.03999999999996, "end": 294.78, "text": " such as audio, images or text."}, {"start": 294.78, "end": 300.15999999999997, "text": " Another way to estimate a baseline level of performance is if there's some competing algorithm,"}, {"start": 300.15999999999997, "end": 306.23999999999995, "text": " maybe a previous implementation that someone else implemented or even a competitor's algorithm"}, {"start": 306.23999999999995, "end": 311.4, "text": " to establish a baseline level of performance, if you can measure that."}, {"start": 311.4, "end": 316.96, "text": " Or sometimes you might guess based on prior experience."}, {"start": 316.96, "end": 322.76, "text": " And if you have access to this baseline level of performance, that is, what is the level"}, {"start": 322.76, "end": 327.12, "text": " of error you can reasonably hope to get to or what is the 
design level of performance"}, {"start": 327.12, "end": 329.56, "text": " that you want your algorithm to get to?"}, {"start": 329.56, "end": 335.59999999999997, "text": " Then when judging, if an algorithm has high bias or variance, you would look at the baseline"}, {"start": 335.59999999999997, "end": 340.36, "text": " level of performance and the training error and the cross validation error."}, {"start": 340.36, "end": 346.2, "text": " And the two key quantities to measure are then what is the difference between training"}, {"start": 346.2, "end": 350.4, "text": " error and the baseline level that you hope to get to?"}, {"start": 350.4, "end": 356.58, "text": " So this is 0.2 and if this is large, then you would say you have a high bias problem"}, {"start": 356.58, "end": 362.28, "text": " and you would then also look at this gap between your training error and your cross validation"}, {"start": 362.28, "end": 363.28, "text": " error."}, {"start": 363.28, "end": 367.47999999999996, "text": " And if this is high, then you would conclude you have a high variance problem."}, {"start": 367.47999999999996, "end": 372.64, "text": " And that's why in this example, we concluded we have a high variance problem."}, {"start": 372.64, "end": 376.47999999999996, "text": " Whereas, let's look at the second example."}, {"start": 376.48, "end": 381.08000000000004, "text": " If the baseline level of performance, that is, human level performance and training error"}, {"start": 381.08000000000004, "end": 388.44, "text": " and cross validation error look like this, then this first gap is 4.4%."}, {"start": 388.44, "end": 389.76, "text": " And so there's actually a big gap."}, {"start": 389.76, "end": 394.8, "text": " The training error is much higher than what humans can do and what we hope to get to."}, {"start": 394.8, "end": 399.12, "text": " Whereas the cross validation error is just a little bit bigger than the training error."}, {"start": 399.12, "end": 404.04, "text": " And 
so if your training error and cross validation error look like this, I would say this algorithm"}, {"start": 404.04, "end": 406.72, "text": " has high bias."}, {"start": 406.72, "end": 411.32, "text": " And so by looking at these numbers, training error and cross validation error, you can"}, {"start": 411.32, "end": 417.04, "text": " get a sense, intuitively or informally, of the degree to which your algorithm has a high"}, {"start": 417.04, "end": 419.92, "text": " bias or high variance problem."}, {"start": 419.92, "end": 425.74, "text": " And so just to summarize, this gap between these first two numbers gives you a sense"}, {"start": 425.74, "end": 431.72, "text": " of whether you have a high bias problem and the gap between these two numbers give you"}, {"start": 431.72, "end": 435.32000000000005, "text": " a sense of whether you have a high variance problem."}, {"start": 435.32000000000005, "end": 438.48, "text": " And sometimes the baseline level of performance could be 0%."}, {"start": 438.48, "end": 442.64000000000004, "text": " If your goal is to achieve perfect performance, then the baseline level of performance could"}, {"start": 442.64000000000004, "end": 444.48, "text": " be 0%."}, {"start": 444.48, "end": 448.32000000000005, "text": " But for some applications, like the speech recognition application, where some audio"}, {"start": 448.32000000000005, "end": 452.84000000000003, "text": " is just noisy, then the baseline level of performance could be much higher than 0."}, {"start": 452.84000000000003, "end": 457.24, "text": " And the method described on this slide will give you a better read in terms of whether"}, {"start": 457.24, "end": 460.36, "text": " your algorithm suffers from bias or variance."}, {"start": 460.36, "end": 465.72, "text": " And by the way, it is possible for your algorithm to have high bias and high variance."}, {"start": 465.72, "end": 472.64, "text": " So concretely, if you get numbers like these, then the gap between the 
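The comparisons described above, training error against a baseline and cross-validation error against training error, can be sketched as a tiny helper. This is my own illustrative code, not the course's, and the 1-percentage-point gap threshold is a hypothetical choice:

```python
# Illustrative sketch (my own, not from the course): turning the two error
# comparisons into a simple diagnosis. The gap threshold is hypothetical.
def diagnose(baseline_err, train_err, cv_err, gap=0.01):
    problems = []
    if train_err - baseline_err > gap:  # large gap to baseline => high bias
        problems.append("high bias")
    if cv_err - train_err > gap:        # large train-to-cv gap => high variance
        problems.append("high variance")
    return problems or ["looks OK"]

# Gaps like the lecture's example (4.6 points between baseline and training
# error, 4.2 points between training and cross-validation error) flag both:
print(diagnose(0.106, 0.152, 0.194))  # ['high bias', 'high variance']
```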
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=m0QgVaFS6O4
6.7 Bias and variance | Learning curves --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content, please subscribe and leave a little blue thumbs-up. Keep at it!
Learning curves are a way to help understand how your learning algorithm is doing as a function of the amount of experience it has, where by experience I mean, for example, the number of training examples it has. Let's take a look. Let me plot learning curves for a model that fits a second-order polynomial, that is, a quadratic function, like so. And I'm going to plot both JCV, the cross-validation error, as well as JTrain, the training error. So on this figure, the horizontal axis is going to be MTrain. That is the training set size, or the number of examples that the algorithm can learn from. And on the vertical axis, I'm going to plot the error. And by error, I mean either JCV or JTrain. So let's start by plotting the cross-validation error. It will look something like this. So that's what JCV of w,b will look like. And it's maybe no surprise that as MTrain, the training set size, gets bigger, then you learn a better model. And so the cross-validation error goes down. Now, let's plot JTrain of w,b, that is, what the training error looks like as the training set size gets bigger. It turns out that the training error will actually look like this: as the training set size gets bigger, the training error actually increases. Let's take a look at why this is the case. We'll start with an example of when you have just a single training example. Well, if you were to fit a quadratic model to this, you can easily fit a straight line or a curve and your training error will be zero. How about if you have two training examples like this? Well, you can again fit a straight line and achieve zero training error. In fact, if you have three training examples, the quadratic function can still fit this very well and get pretty much zero training error. But now, if your training set gets a little bit bigger, say you have four training examples, then it gets a little bit harder to fit all four examples perfectly.
And you may get a curve that looks like this: it fits the data pretty well, but is a little bit off in a few places here and there. And so when you have increased the training set size to four, the training error has actually gone up a little bit. How about when you have five training examples? Well, again, you can fit it pretty well, but it gets even a little bit harder to fit all of them perfectly. And when you have an even larger training set, it just gets harder and harder to fit every single one of your training examples perfectly. So to recap, when you have a very small number of training examples, like one or two or even three, it's relatively easy to get zero or very small training error. But when you have a larger training set, it's harder for a quadratic function to fit all the training examples perfectly, which is why as the training set gets bigger, the training error increases. Notice one other thing about these curves: the cross-validation error will typically be higher than the training error, because you fit the parameters to the training set, and so you expect to do at least a little bit better, or when M is small maybe even a lot better, on the training set than on the cross-validation set. Let's now take a look at what the learning curves will look like for an algorithm with high bias versus one with high variance. Let's start with the high bias, or underfitting, case. Recall that an example of high bias would be if you're fitting a linear function to a curve that looks like this. If you were to plot the training error, then the training error will go up like so, as you'd expect. And in fact, this curve of training error may start to flatten out, or plateau, meaning flatten out after a while. And that's because as you get more and more training examples, when you're fitting the simple linear function, your model doesn't actually change that much more.
You're fitting a straight line, and even as you get more and more examples, there's just not that much more to change, which is why the average training error flattens out after a while. And similarly, your cross-validation error will come down and also flatten out after a while, so JCV is again higher than JTrain, but JCV will tend to look like that. And it's because beyond a certain point, even as you get more and more examples, not much is going to change about the straight line you're fitting. It's just too simple a model to be fitting to this much data, which is why both of these curves, JCV and JTrain, tend to flatten out after a while. And if you had a measure of that baseline level of performance, such as human-level performance, then it will tend to be a value that is lower than your JTrain and your JCV. So human-level performance may look like this. And there's a big gap between the baseline level of performance and JTrain, which was our indicator for this algorithm having high bias. That is, one could hope to be doing much better if only we could fit a more complex function than just a straight line. Now, one interesting thing about this plot is you can ask: what do you think will happen if you could have a much bigger training set? What would it look like if we could increase M even further, beyond the right edge of this plot? Well, you can imagine that if you were to extend both of these curves to the right, they both sort of flatten out, and both of them will probably just continue to be flat like that. No matter how far you extend to the right of this plot, these two curves will never somehow find a way to dip down to this human level of performance; they just keep on being flat like this pretty much forever, no matter how large the training set gets.
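This flattening-out can be reproduced numerically. The following is my own sketch, not code from the course: it fits a straight line (a high-bias model) to data generated from a noisy quadratic, and both errors plateau well above zero as the training set grows.

```python
# My own illustrative sketch (not the course's code): a high-bias model
# (a straight line) fit to data whose true relationship is quadratic.
import numpy as np

rng = np.random.default_rng(0)

def errors_for_m(m, n_cv=200):
    # m training examples from a noisy quadratic target.
    x_tr = rng.uniform(0, 4, m)
    y_tr = x_tr**2 + rng.normal(0, 0.5, m)
    # A cross-validation set drawn from the same distribution.
    x_cv = rng.uniform(0, 4, n_cv)
    y_cv = x_cv**2 + rng.normal(0, 0.5, n_cv)
    # Degree-1 polynomial = straight line: too simple for this data.
    w, b = np.polyfit(x_tr, y_tr, 1)
    j_train = np.mean((w * x_tr + b - y_tr) ** 2) / 2
    j_cv = np.mean((w * x_cv + b - y_cv) ** 2) / 2
    return j_train, j_cv

# Both errors level off at roughly the same value, well above zero,
# no matter how large the training set gets.
for m in [5, 20, 100, 1000]:
    j_tr, j_cv = errors_for_m(m)
    print(f"m={m:5d}  J_train={j_tr:.2f}  J_cv={j_cv:.2f}")
```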
So that gives this conclusion, maybe a little bit surprising: if a learning algorithm has high bias, getting more training data will not by itself help that much. And I know that we're used to thinking that having more data is good. But if your algorithm has high bias, then if the only thing you do is throw more training data at it, that by itself will not ever let you bring down the error rate that much. And that's because, really, no matter how many more examples you add to this figure, the straight line you're fitting just isn't going to get that much better. And that's why, before investing a lot of effort into collecting more training data, it's worth checking if your learning algorithm has high bias, because if it does, then you probably need to do some other things other than just throw more training data at it. Let's now take a look at what the learning curve looks like for a learning algorithm with high variance. You might remember that if you were to fit a fourth-order polynomial with small lambda, say, or even lambda equals zero, then you get a curve that looks like this. And even though it fits the training data very well, it doesn't generalize. Let's now look at what a learning curve might look like in this high variance scenario. JTrain will be going up as the training set size increases, so you get a curve that looks like this, and JCV will be much higher. So your cross-validation error is much higher than your training error. And the fact that there's a huge gap here is what can tell you that this high-variance model is doing much better on the training set than it's doing on your cross-validation set. If you were to plot a baseline level of performance, such as human-level performance, you may find that it turns out to be here, and that JTrain can sometimes be even lower than the human level of performance, or maybe human-level performance is a little bit lower than this.
But when you're overfitting the training set, you may be able to fit the training set so well as to have an unrealistically low error, such as zero error in this example over here, which is actually better than how well humans would be able to predict housing prices, or whatever the application you're working on is. But again, the signal for high variance is whether JCV is much higher than JTrain. And when you have high variance, then increasing the training set size could help a lot. In particular, if we could extrapolate these curves to the right, increasing MTrain, then the training error will continue to go up, but the cross-validation error hopefully will come down and approach JTrain. And so in this scenario, it might be possible, just by increasing the training set size, to lower the cross-validation error and to get your algorithm to perform better and better. And this is unlike the high bias case, where if the only thing you do is get more training data, that won't actually help your learning algorithm's performance much. So to summarize, if a learning algorithm suffers from high variance, then getting more training data is indeed likely to help. Because, extrapolating to the right of this curve, you see that you can expect JCV to keep on coming down. And in this example, just getting more training data allows the algorithm to go from this relatively high cross-validation error to get much closer to human-level performance. You can see that if you were to add a lot more training examples and continue to fit a fourth-order polynomial, then you can get a better fourth-order polynomial fit to this data than the very wiggly curve up on top. So if you're building a machine learning application, you could plot the learning curve if you want. That is, you can take different subsets of your training set.
And even if you have, say, a thousand training examples, you could train a model on just a hundred training examples and look at the training error and the cross-validation error, then train a model on 200 examples, holding out 800 examples and just not using them for now, and plot JTrain and JCV, and so on and repeat, and plot out what the learning curve looks like. And if you were to visualize it that way, then that could be another way for you to see if your learning curve looks more like a high bias or a high variance one. Plotting learning curves like this is something I've done, but one downside is that it is computationally quite expensive to train so many different models using different-size subsets of your training set. So in practice, it isn't done that often. But nonetheless, I find that having this mental visual picture in my head of what the learning curve looks like sometimes helps me to think through what I think my learning algorithm is doing and whether it has high bias or high variance. So I know we've gone through a lot about bias and variance. Let's go back to our earlier example: if you have trained a model for housing price prediction, how do bias and variance help you decide what to do next? Let's go back to that earlier example, which I hope will now make a lot more sense to you. Let's do that in the next video.
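The subset procedure just described, training on the first 100, 200, and so on examples and recording both errors each time, might be sketched like this. This is my own illustrative code with a made-up data set, not the course's:

```python
# My own sketch of the subset procedure (not the course's code): train on
# growing prefixes of the training set and record J_train and J_cv.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data set: 1,000 training examples plus 250 for cross-validation.
x = rng.uniform(0, 4, 1250)
y = np.sin(x) + rng.normal(0, 0.1, 1250)
x_train, y_train = x[:1000], y[:1000]
x_cv, y_cv = x[1000:], y[1000:]

def half_mse(coeffs, xs, ys):
    # Squared-error cost, averaged and halved as in the course's J.
    return np.mean((np.polyval(coeffs, xs) - ys) ** 2) / 2

learning_curve = []
for m in range(100, 1001, 100):                       # 100, 200, ..., 1000
    coeffs = np.polyfit(x_train[:m], y_train[:m], 4)  # fourth-order fit
    learning_curve.append(
        (m,
         half_mse(coeffs, x_train[:m], y_train[:m]),  # J_train on the subset
         half_mse(coeffs, x_cv, y_cv)))               # J_cv on held-out data

for m, j_tr, j_cv in learning_curve:
    print(f"m={m:4d}  J_train={j_tr:.4f}  J_cv={j_cv:.4f}")
```

Plotting the two error columns against m gives the learning curve; as the text notes, the expense is that each row requires training a separate model.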
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=00gn1isOd70
6.8 Bias and variance | Deciding what to try next revisited --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content, please subscribe and leave a little blue thumbs-up. Keep at it!
You've seen how by looking at JTrain and JCV, that is the training error and the cross-validation error, or maybe even plotting a learning curve, you can try to get a sense of whether your learning algorithm has high bias or high variance. This is a procedure I routinely do when I'm training a learning algorithm. I'll often look at the training error and the cross-validation error to try to decide if my algorithm has high bias or high variance. It turns out this will help you make better decisions about what to try next in order to improve the performance of your learning algorithm. Let's look at an example. This is actually the example that you had seen earlier. If you've implemented regularized linear regression on predicting housing prices, but your algorithm makes unacceptably large errors in its predictions, what do you try next? And these were the six ideas that we had when we had looked over this slide earlier. Get more training examples, try a smaller set of features, additional features, and so on. It turns out that each of these six items either helps fix a high variance or a high bias problem. And in particular, if your learning algorithm has high bias, three of these techniques will be useful. If your learning algorithm has high variance, then a different three of these techniques will be useful. Let's see if we can figure out which is which. First one is get more training examples. We saw in the last video that if your algorithm has high bias, then if the only thing we do is get more training data, that by itself probably won't help that much. But in contrast, if your algorithm had high variance, say it was overfitting to a very small training set, then getting more training examples will help a lot. So this first option of getting more training examples helps to fix a high variance problem. How about the other five? Do you think you can figure out which of the remaining five fix high bias or high variance problems? 
I'm going to go through the rest of them in this video in a minute, but if you want, feel free to pause the video and see if you can think through these five other things by yourself. So feel free to pause the video... and, just kidding, that was me pausing, not your video pausing. But seriously, if you want, go ahead and pause the video and think through it, or not, and we'll go over these in a minute. How about trying a smaller set of features? Sometimes if your learning algorithm has too many features, then it gives the algorithm too much flexibility to fit very complicated models. This is a little bit like if you had x squared, x cubed, x to the fourth, x to the fifth, and so on. And if only you were to eliminate a few of these, then your model won't be so complex and won't have such high variance. So if you suspect that your algorithm has a lot of features that are not actually relevant or helpful to predicting a housing price, or if you suspect that you have even somewhat redundant features, then eliminating or reducing the number of features will help reduce the flexibility of your algorithm to overfit the data. And so this is a tactic that will help you to fix high variance. Conversely, getting additional features, that is, adding additional features, is kind of the opposite of going to a smaller set of features. This will help you to fix a high bias problem. As a concrete example, if you were trying to predict the price of a house just based on the size, but it turns out that the price of a house also really depends on the number of bedrooms and on the number of floors and on the age of the house, then the algorithm will never do that well unless you add in those additional features. So that's a high bias problem, because you just can't do that well on the training set when you only know the size.
It's only when you tell the algorithm how many bedrooms there are, how many floors there are, and what the age of the house is that it finally has enough information to do better on the training set. And so adding additional features is a way to fix a high bias problem. Adding polynomial features is a little bit like adding additional features. So if your linear function, a straight line, can't fit the training set that well, then adding additional polynomial features could help you do better on the training set. And helping you do better on the training set is a way to fix a high bias problem. Next, decreasing lambda means using a lower value for the regularization parameter. That means we're going to pay less attention to the regularization term and more attention to the data-fit term, to try to do better on the training set. And again, that helps you to fix a high bias problem. And finally, increasing lambda is the opposite of decreasing it, and it makes sense if you're overfitting the training set, putting too much attention into fitting the training set at the expense of generalizing to new examples. Increasing lambda would force the algorithm to fit a smoother, maybe less wiggly function, and you use this to fix a high variance problem. So I realize that this was a lot of stuff on the slide. But the takeaways I hope you have are: if you find that your algorithm has high variance, then the two main ways to fix that are either get more training data or simplify your model. And by simplify your model, I mean either get a smaller set of features or increase the regularization parameter lambda, so your algorithm has less flexibility to fit very complex, very wiggly curves. Conversely, if your algorithm has high bias, then that means it's not doing well even on the training set.
So if that's the case, the main fixes are to make your model more powerful or to give it more flexibility to fit more complex or more wiggly functions. And so some ways to do that are to give it additional features or add these polynomial features or to decrease the regularization parameter lambda. By the way, in case you're wondering if you should fix high bias by reducing the training set size, that doesn't actually help. If you reduce the training set size, you will fit the training set better, but that tends to worsen your cross validation error and the performance of your learning algorithm. So don't randomly throw away training examples just to try to fix a high bias problem. One of my PhD students from Stanford, many years after he'd already graduated from Stanford, once said to me that while he was studying at Stanford, he learned about bias and variance and felt like he got it, he understood it. But that subsequently, after many years of work experience at a few different companies, he realized that bias and variance is one of those concepts that takes a short time to learn, but takes a lifetime to master. Those were his exact words. And I think bias and variance is one of those very powerful ideas. When I'm training learning algorithms, I almost always try to figure out if it is high bias or high variance. But the way you go about addressing it systematically is something that I think you will keep on getting better at through repeated practice. But you'll find, I think, that understanding these ideas will help you be much more effective at how you decide what to try next when developing a learning algorithm. Now I know that we did go through a lot in this video. And if you feel like, boy, there's just a lot of stuff here, it's okay, don't worry about it. 
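As an illustration of the procedure summarized above, a small helper could map JTrain and JCV to a diagnosis and the matching remedies from the slide. This is a sketch, not code from the course; the function name and the `gap` threshold are arbitrary choices for illustration:

```python
def diagnose(j_train, j_cv, baseline, gap=0.5):
    """Rough bias/variance diagnosis from J_train and J_cv.

    High bias: J_train is well above the baseline (e.g. human-level) error.
    High variance: J_cv is well above J_train.
    The `gap` margin of 0.5 is an arbitrary illustrative threshold.
    """
    high_bias = (j_train - baseline) > gap
    high_variance = (j_cv - j_train) > gap
    fixes = []
    if high_bias:        # make the model more powerful
        fixes += ["get additional features", "add polynomial features",
                  "decrease lambda"]
    if high_variance:    # simplify the model or get more data
        fixes += ["get more training examples", "try a smaller set of features",
                  "increase lambda"]
    return high_bias, high_variance, fixes

# J_train close to the baseline but J_cv much larger -> a high variance problem.
print(diagnose(j_train=1.1, j_cv=3.0, baseline=1.0))
```

A real diagnosis would of course look at learning curves too, but even this crude comparison captures the decision logic from the slide.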
Later this week in the practice labs and practice quizzes, we'll also have additional opportunities to go over these ideas, so that you can get additional practice with thinking about bias and variance of different learning algorithms. So if it seems like a lot right now, it's okay. You get to practice these ideas later this week and hopefully deepen your understanding of them at that time. Before moving on, bias and variance are also very useful when thinking about how to train a neural network. So in the next video, let's take a look at these concepts applied to neural network training. Let's go on to the next video.
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=Hso8Qq3-arc
6.9 Bias and variance | Bias/variance and neural networks --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content, please subscribe and give the video a thumbs up. Take heart!
We've seen that high bias or high variance are both bad in the sense that they hurt the performance of your algorithm. One of the reasons that neural networks have been so successful is that neural networks, together with the idea of big data, or hopefully having large datasets, are giving us new ways to address both high bias and high variance. Let's take a look. You saw that if you're fitting different-order polynomials to a dataset, then if you were to fit a linear model like this on the left, you have a pretty simple model that can have high bias, whereas if you were to fit a complex model, then you might suffer from high variance. And there's this trade-off between bias and variance. In our example, it was choosing a second-order polynomial that helps you make a trade-off and pick a model with the lowest possible cross-validation error. And so before the days of neural networks, machine learning engineers talked a lot about this bias-variance trade-off, in which you had to balance the complexity, that is, the degree of polynomial or the regularization parameter lambda, to make bias and variance both not be too high. And if you hear machine learning engineers talk about the bias-variance trade-off, this is what they're referring to, where if you have too simple a model, you have high bias; too complex a model, high variance; and you have to find a trade-off between these two bad things to find hopefully the best possible outcome. But it turns out that neural networks offer us a way out of this dilemma of having to trade off bias and variance, with some caveats. It turns out that large neural networks, when trained on small to moderate-sized datasets, are low-bias machines. And what I mean by that is, if you make your neural network large enough, you can almost always fit your training set well, so long as your training set is not enormous.
And what this means is this gives us a new recipe to try to reduce bias or reduce variance as needed, without needing to really trade off between the two of them. So let me share with you a simple recipe that isn't always applicable, but if it applies, can be very powerful for getting an accurate model using a neural network, which is: first, train your algorithm on your training set, and then ask, does it do well on the training set? So measure JTrain and see if it is high. And by high, I mean, for example, relative to human-level performance or some baseline level of performance. And if it is not doing well, then you have a high bias problem, high training set error. And one way to reduce bias is to just use a bigger neural network. And by bigger neural network, I mean either more hidden layers or more hidden units per layer. And you can then keep on going through this loop and make your neural network bigger and bigger until it does well on the training set, meaning it achieves a level of error on your training set that is roughly comparable to the target level of error you hope to get to, which could be human-level performance. After it does well on the training set, so the answer to that question is yes, you would then ask, does it do well on the cross-validation set? In other words, does it have high variance? And if the answer is no, then you can conclude that the algorithm has high variance, because it does well on the training set but does not do well on the cross-validation set. So that big gap in JCV and JTrain indicates you probably have a high variance problem. And if you have a high variance problem, then one way to try to fix it is to get more data. So you get more data, go back and retrain the model, and just double check that it does well on the training set. If not, use a bigger network; if it does, see whether it does well on the cross-validation set, and if not, get more data.
You can keep on going round and round this loop until eventually it does well on the cross-validation set, and then you're probably done, because now you have a model that does well on the cross-validation set and hopefully will also generalize to new examples. Now of course there are limitations to the application of this recipe. Training a bigger neural network does reduce bias, but at some point it does get computationally expensive. That's why the rise of neural networks has been really assisted by the rise of very fast computers, especially GPUs, or graphics processing units. This is hardware traditionally used to speed up computer graphics, but it turns out to be very useful for speeding up neural networks as well. But even with hardware accelerators, beyond a certain point the neural networks get so large and take so long to train that it becomes infeasible. And then of course the other limitation is more data. Sometimes you can only get so much data, and beyond a certain point it's hard to get much more. But I think this recipe explains a lot of the rise of deep learning in the last several years: for applications where you do have access to a lot of data, being able to train large neural networks allows you to eventually get pretty good performance on a lot of applications. One thing that was implicit in this slide, and that may not have been obvious, is that as you're developing a learning algorithm, sometimes you find that you have high bias, in which case you do things like increase the size of the neural network. But then after you increase the neural network, you may find that you have high variance, in which case you might do other things, like collect more data. So during the hours or days or weeks you're developing a machine learning algorithm, at different points you may have high bias or high variance, and it can change.
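The loop just described could be sketched as follows, with stand-in callables for training, measuring error, growing the network, and collecting data. All names and the toy error model are illustrative assumptions, not the course's code:

```python
def training_recipe(train, j_train, j_cv, target, bigger_network, get_more_data,
                    budget=20):
    """Sketch of the recipe from this video: fix high bias first (bigger
    network), then high variance (more data), until both J_train and J_cv
    reach the target level of error (e.g. human-level performance)."""
    for _ in range(budget):
        train()
        if j_train() > target:        # high bias -> use a bigger network
            bigger_network()
        elif j_cv() > target:         # high variance -> get more data
            get_more_data()
        else:
            return "done"             # does well on both sets
    return "out of budget"

# Toy simulation: training error shrinks as the network grows, and the
# train/CV gap shrinks as the training set grows.
state = {"units": 1, "m": 1}
result = training_recipe(
    train=lambda: None,                                  # stand-in for training
    j_train=lambda: 2.0 / state["units"],
    j_cv=lambda: 2.0 / state["units"] + 4.0 / state["m"],
    target=1.5,
    bigger_network=lambda: state.update(units=2 * state["units"]),
    get_more_data=lambda: state.update(m=2 * state["m"]),
)
print(result)   # -> done
```

In the toy run, the loop grows the network once to fix bias and then doubles the data a few times to fix variance, mirroring the order of checks in the video.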
It's knowing whether your algorithm has high bias or high variance at that time that can help guide what you should be trying next. When you're training a neural network, one thing that people have asked me before is: hey Andrew, if my neural network is too big, will that create a high variance problem? It turns out that a large neural network with well-chosen regularization will usually do as well as or better than a smaller one. And so, for example, if you have a small neural network like this and you were to switch to a much larger neural network like this, you would think that the risk of overfitting goes up significantly. But it turns out that if you were to regularize this larger neural network appropriately, then this larger neural network usually will do at least as well as or better than the smaller one, so long as the regularization is chosen appropriately. So another way of saying this is that it almost never hurts to go to a larger neural network so long as you regularize appropriately, with one caveat, which is that when you train a larger neural network, it does become more computationally expensive. So the main way it hurts is that it will slow down your training and your inference process. And very briefly, to regularize a neural network, this is what you do. If the cost function for your neural network is the average loss, where the loss here could be squared error or logistic loss, then the regularization term for a neural network looks pretty much like what you would expect: lambda over 2m times the sum of w squared, where this is the sum over all weights w in the neural network. And similar to regularization for linear regression and logistic regression, we usually don't regularize the parameters b in a neural network, although in practice it makes very little difference whether you do so or not.
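The cost function just described could be sketched in NumPy as follows; the function and variable names are mine, and this is a sketch of the formula rather than the course's implementation:

```python
import numpy as np

def regularized_nn_cost(losses, weight_matrices, lam):
    """J = (1/m) * sum(losses) + (lambda / (2m)) * sum of all w squared.

    losses: per-example loss values (squared error or logistic loss).
    weight_matrices: one W array per layer; the b parameters are not
    regularized, so they don't appear in the second term.
    """
    m = len(losses)
    average_loss = np.sum(losses) / m
    reg_term = (lam / (2 * m)) * sum(float(np.sum(W ** 2))
                                     for W in weight_matrices)
    return average_loss + reg_term

# Two examples with loss 1 each, a single 1x1 weight matrix with w = 2,
# and lambda = 1: J = (1/2)*2 + (1/4)*4 = 2.0
print(regularized_nn_cost([1.0, 1.0], [np.array([[2.0]])], lam=1.0))
```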
And the way you would implement regularization in TensorFlow is, recall that this was the code for implementing an unregularized handwritten digit classification model. We create three layers like so, with a number of hidden units and an activation, and then create a sequential model with the three layers. If you want to add regularization, then you would just add this extra term, kernel_regularizer equals L2 and then 0.01, where that's the value of lambda. TensorFlow actually lets you choose different values of lambda for different layers, although for simplicity you can choose the same value of lambda for all the weights in all of the different layers, as follows. And then this will allow you to implement regularization in your neural network. So to summarize, two takeaways I hope you have from this video are: one, it hardly ever hurts to have a larger neural network so long as you regularize appropriately. One caveat being that having a larger neural network can slow down your algorithm, so maybe that's the one way it hurts, but it shouldn't hurt your algorithm's performance for the most part. And in fact, it could even help it significantly. And second, so long as your training set isn't too large, then a neural network, especially a large neural network, is often a low-bias machine. It just fits very complicated functions very well, which is why when I'm training neural networks, I find that I'm often fighting variance problems rather than bias problems, at least if the neural network is large enough. So the rise of deep learning has really changed the way that machine learning practitioners think about bias and variance. Having said that, even when you're training a neural network, measuring bias and variance and using that to guide what you do next is often a very helpful thing to do. So that's it for bias and variance.
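The TensorFlow code being narrated here might look roughly like the sketch below. The layer sizes (25, 15, 1) and the 400-dimensional input are assumptions based on the course's earlier handwritten-digit example, not a verbatim copy of the slide:

```python
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import L2

# Three Dense layers as before, now each with an L2 kernel regularizer;
# the 0.01 plays the role of lambda. TensorFlow would let you pick a
# different lambda per layer, but here it's the same for simplicity.
model = Sequential([
    Dense(25, activation="relu", kernel_regularizer=L2(0.01)),
    Dense(15, activation="relu", kernel_regularizer=L2(0.01)),
    Dense(1, activation="sigmoid", kernel_regularizer=L2(0.01)),
])
model.build(input_shape=(None, 400))   # e.g. 20x20 pixel input images
```

Note that `kernel_regularizer` only touches the weight matrices, matching the convention above of not regularizing the b parameters (those would need `bias_regularizer`).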
Let's go on to the next video where we'll take all the ideas we've learned and see how they fit in to the development process of machine learning systems. And I hope that we'll tie a lot of these pieces together to give you practical advice for how to quickly move forward in the development of your machine learning systems.
308.88, "text": " useful for speeding up neural networks as well."}, {"start": 308.88, "end": 313.32, "text": " But even with hardware accelerators, beyond a certain point, the neural networks are still"}, {"start": 313.32, "end": 316.98, "text": " large and take so long to train, it becomes infeasible."}, {"start": 316.98, "end": 321.28000000000003, "text": " And then of course the other limitation is more data."}, {"start": 321.28000000000003, "end": 326.12, "text": " Sometimes you can only get so much data and beyond a certain point, it's hard to get much"}, {"start": 326.12, "end": 327.12, "text": " more data."}, {"start": 327.12, "end": 333.48, "text": " But I think this recipe explains a lot of the rise of deep learning in the last several"}, {"start": 333.48, "end": 338.28000000000003, "text": " years, which is for applications where you do have access to a lot of data."}, {"start": 338.28000000000003, "end": 345.26, "text": " Then being able to train large neural networks allows you to eventually get pretty good performance"}, {"start": 345.26, "end": 347.12, "text": " on a lot of applications."}, {"start": 347.12, "end": 352.94, "text": " One thing that was implicit in this slide that may not have been obvious is that as"}, {"start": 352.94, "end": 358.56, "text": " you're developing a learning algorithm, sometimes you find that you have high bias, in which"}, {"start": 358.56, "end": 361.08, "text": " case you do things like increase the neural network."}, {"start": 361.08, "end": 365.48, "text": " But then after you increase the neural network, you may find that you have high variance,"}, {"start": 365.48, "end": 369.52, "text": " in which case you might do other things like collect more data."}, {"start": 369.52, "end": 375.79999999999995, "text": " Then during the hours or days or weeks you're developing a machine learning algorithm, at"}, {"start": 375.79999999999995, "end": 379.88, "text": " different points you may have high bias or high variance 
and it can change."}, {"start": 379.88, "end": 384.71999999999997, "text": " But it's depending on whether your algorithm has high bias or high variance at that time"}, {"start": 384.71999999999997, "end": 389.4, "text": " that that can help give guidance for what you should be trying next."}, {"start": 389.4, "end": 394.28, "text": " When you're training a neural network, one thing that people have asked me before is,"}, {"start": 394.28, "end": 401.52, "text": " hey Andrew, whether my neural network is too big, will that create a high variance problem?"}, {"start": 401.52, "end": 408.59999999999997, "text": " It turns out that a large neural network with well-chosen regularization will usually do"}, {"start": 408.59999999999997, "end": 411.84, "text": " as well or better than a smaller one."}, {"start": 411.84, "end": 418.94, "text": " And so, for example, if you have a small neural network like this and you were to switch to"}, {"start": 418.94, "end": 424.7, "text": " a much larger neural network like this, you would think that the risk of overfitting goes"}, {"start": 424.7, "end": 426.7, "text": " up significantly."}, {"start": 426.7, "end": 432.36, "text": " But it turns out that if you were to regularize this larger neural network appropriately,"}, {"start": 432.36, "end": 438.24, "text": " then this larger neural network usually will do at least as well or better than the smaller"}, {"start": 438.24, "end": 442.56, "text": " one, so long as the regularization is chosen appropriately."}, {"start": 442.56, "end": 448.12, "text": " So another way of saying this is that it almost never hurts to go to a larger neural network"}, {"start": 448.12, "end": 450.98, "text": " so long as you regularize appropriately."}, {"start": 450.98, "end": 456.0, "text": " With one caveat, which is that when you train a larger neural network, it does become more"}, {"start": 456.0, "end": 457.68, "text": " computationally expensive."}, {"start": 457.68, "end": 463.08, "text": " 
So the main way it hurts is it will slow down your training and your inference process."}, {"start": 463.08, "end": 469.52, "text": " And very briefly, to regularize a neural network, this is what you do."}, {"start": 469.52, "end": 475.24, "text": " If the cost function for your neural network is the average loss, and so the loss here"}, {"start": 475.24, "end": 482.08, "text": " could be squared error or logistic loss, then the regularization term for a neural network"}, {"start": 482.08, "end": 489.56, "text": " looks like pretty much what you expect is lambda over 2m times the sum of w squared,"}, {"start": 489.56, "end": 494.12, "text": " where this is the sum over all weights w in the neural network."}, {"start": 494.12, "end": 499.2, "text": " And similar to regularization for linear regression and logistic regression, we usually don't"}, {"start": 499.2, "end": 503.92, "text": " regularize the parameters b in a neural network, although in practice it makes very little"}, {"start": 503.92, "end": 506.88, "text": " difference whether you do so or not."}, {"start": 506.88, "end": 512.74, "text": " And the way you would implement regularization in TensorFlow is, recall that this was the"}, {"start": 512.74, "end": 519.0600000000001, "text": " code for implementing an un-regularized handwritten digit classification model."}, {"start": 519.0600000000001, "end": 524.9200000000001, "text": " We create three layers like so with number of hidden units, activation, and then create"}, {"start": 524.9200000000001, "end": 528.02, "text": " a sequential model with the three layers."}, {"start": 528.02, "end": 535.12, "text": " If you want to add regularization, then you would just add this extra term kernel regularizer"}, {"start": 535.12, "end": 541.12, "text": " equals L2 and then 0.01, where that's the value of lambda."}, {"start": 541.12, "end": 545.52, "text": " TensorFlow actually lets you choose different values of lambda for different layers, although"}, {"start": 
545.52, "end": 550.8, "text": " for simplicity, you can choose the same value of lambda for all the weights in all of the"}, {"start": 550.8, "end": 552.76, "text": " different layers as follows."}, {"start": 552.76, "end": 558.36, "text": " And then this will allow you to implement regularization in your neural network."}, {"start": 558.36, "end": 564.04, "text": " So to summarize, two takeaways I hope you have from this video are, one, it hardly ever"}, {"start": 564.04, "end": 570.16, "text": " hurts to have a larger neural network so long as you regularize appropriately."}, {"start": 570.16, "end": 574.64, "text": " One caveat being that having a larger neural network can slow down your out-of-room, so"}, {"start": 574.64, "end": 578.4399999999999, "text": " maybe that's the one way it hurts, but it shouldn't hurt your out-of-room's performance"}, {"start": 578.4399999999999, "end": 579.4399999999999, "text": " for the most part."}, {"start": 579.44, "end": 583.0400000000001, "text": " And in fact, it could even help it significantly."}, {"start": 583.0400000000001, "end": 589.1600000000001, "text": " And second, so long as your training set isn't too large, then a neural network, especially"}, {"start": 589.1600000000001, "end": 592.62, "text": " a large neural network, is often a low-bias machine."}, {"start": 592.62, "end": 598.1600000000001, "text": " It just fits very complicated functions very well, which is why when I'm training neural"}, {"start": 598.1600000000001, "end": 603.36, "text": " networks, I find that I'm often fighting various problems rather than bias problems, at least"}, {"start": 603.36, "end": 605.72, "text": " if the neural network is large enough."}, {"start": 605.72, "end": 610.28, "text": " So the rise of deep learning has really changed the way that machine learning practitioners"}, {"start": 610.28, "end": 612.48, "text": " think about bias and variance."}, {"start": 612.48, "end": 616.5, "text": " Having said that, even when 
you're training a neural network, measuring bias and variance"}, {"start": 616.5, "end": 623.32, "text": " and using that to guide what you do next is often a very hopeful thing to do."}, {"start": 623.32, "end": 626.32, "text": " So that's it for bias and variance."}, {"start": 626.32, "end": 630.4, "text": " Let's go on to the next video where we'll take all the ideas we've learned and see how"}, {"start": 630.4, "end": 634.36, "text": " they fit in to the development process of machine learning systems."}, {"start": 634.36, "end": 639.76, "text": " And I hope that we'll tie a lot of these pieces together to give you practical advice for"}, {"start": 639.76, "end": 668.3199999999999, "text": " how to quickly move forward in the development of your machine learning systems."}]
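The recipe in the segments above — first check training-set error against a baseline, then check the gap to cross-validation error — can be sketched as a simple decision function. This is a minimal sketch: the 0.05 thresholds and the function name are illustrative assumptions, not values from the lecture.

```python
def diagnose(j_train, j_cv, baseline):
    """Suggest the next step in the bias/variance recipe.

    j_train  -- error on the training set
    j_cv     -- error on the cross-validation set
    baseline -- target error level (e.g. human-level performance)

    The 0.05 thresholds are illustrative, not from the lecture.
    """
    if j_train - baseline > 0.05:
        # High bias: the network can't even fit the training set well,
        # so try a bigger network (more layers or more units per layer).
        return "bigger network"
    if j_cv - j_train > 0.05:
        # High variance: big gap between J_cv and J_train,
        # so try to get more data.
        return "more data"
    # Does well on both sets: probably done.
    return "done"
```

In practice you would loop: retrain after each change and call a check like this again until it reports done, keeping in mind the lecture's caveats that bigger networks get computationally expensive and more data is not always available.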
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=CndPUzZpZ38
6.10 Machine Learning development process | Iterative loop of ML development --[ML | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
In the next few videos, I'd like to share with you what it's like to go through the process of developing a machine learning system so that when you are doing so yourself, hopefully you'll be in a position to make great decisions at many stages of the machine learning development process. Let's take a look first at the iterative loop of machine learning development. This is what developing a machine learning model will often feel like. First, you decide on what is the overall architecture of your system. And that means choosing your machine learning model as well as deciding what data to use, maybe picking the hyperparameters, and so on. Then, given those decisions, you would implement and train a model. And as I've mentioned before, when you train a model for the first time, it will almost never work as well as you want it to. The next step that I recommend then is to implement or to look at a few diagnostics, such as looking at the bias and variance of your algorithm, as well as something we'll see in the next video called error analysis. And based on the insights from the diagnostics, you can then make decisions like: do you want to make your neural network bigger, or change the lambda regularization parameter, or maybe add more data, or add or subtract features? And then you go around this loop again with your new choice of architecture. And it will often take multiple iterations through this loop until you get to the performance that you want. Let's look at an example of building an email spam classifier. I think many of us passionately hate email spam. And this is a problem that I worked on years ago and also was involved in starting an anti-spam conference once years ago. The example on the left is what a highly spammy email might look like: deal of the week, buy now, Rolex watches. And spammers will sometimes deliberately misspell words like watches, medicine, and mortgages in order to try to trip up a spam recognizer.
And in contrast, this email on the right is an actual email I once got from my younger brother Alfred about getting together for Christmas. So how do you build a classifier to recognize spam versus non-spam emails? One way to do so would be to train a supervised learning algorithm where the input features x will be the features of an email and the output label y will be one or zero depending on whether it's spam or not spam. So this application is an example of text classification, because you're taking a text document, that is, an email, and trying to classify it as either spam or non-spam. One way to construct the features of the email would be to say, take the top 10,000 words in the English language or in some other dictionary and use them to define features x1, x2, through x10,000. So for example, given this email on the right, if the list of words we have is a, Andrew, buy, deal, discount, and so on, then we would set these features to be, say, zero or one depending on whether or not that word appears. So the word a does not appear. The word Andrew does appear. The word buy does appear. Deal does, discount does not, and so on. And so you can construct 10,000 features of this email. And there are many ways to construct the feature vector. Another way would be to let these numbers not just be one or zero, but actually count the number of times a given word appears in the email. So if buy appears twice, maybe you want to set this to two, but setting it to just one or zero actually works decently well. Given these features, you can then train a classification algorithm such as a logistic regression model or a neural network to predict y given these features x. After you've trained your initial model, if it doesn't work as well as you wish, you will quite likely have multiple ideas for improving the learning algorithm's performance. For example, it's always tempting to collect more data.
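The feature construction just described can be sketched in a few lines of Python. The tiny vocabulary and sample email below are illustrative stand-ins for the lecture's top-10,000-word list, and the function name is an assumption:

```python
def email_features(email_text, vocabulary, binary=True):
    """Map an email to a feature vector over a fixed vocabulary.

    With binary=True each feature is 1 or 0 for whether the word
    appears; with binary=False it is the word's count, as in the
    lecture's variant.
    """
    words = email_text.lower().split()
    counts = {w: words.count(w) for w in vocabulary}
    if binary:
        return [1 if counts[w] > 0 else 0 for w in vocabulary]
    return [counts[w] for w in vocabulary]

# Tiny illustrative vocabulary (the lecture uses the top 10,000 words).
vocab = ["a", "andrew", "buy", "deal", "discount"]
x = email_features("buy buy rolex deal from andrew", vocab)
# x -> [0, 1, 1, 1, 0]: "buy" appears (twice, but the binary feature is 1),
# "andrew" and "deal" appear, "a" and "discount" do not.
```

The resulting vector x is what you would feed to a logistic regression model or a neural network to predict y.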
In fact, I have friends that have worked on very large scale honeypot projects, and these are projects that create a large number of fake email addresses and try deliberately to get these fake email addresses into the hands of spammers so that when they send spam email to these fake addresses, well, we know these are spam messages. And so this is a way to get a lot of spam data. Or you might decide to work on developing more sophisticated features based on the email routing. Email routing refers to the sequence of compute servers, sometimes around the world, that the email has gone through on its way to reach you. And emails actually have what's called email header information. That is information that keeps track of how the email has traveled across different servers, across different networks to find its way to you. And sometimes the path that an email has traveled can help tell you if it was sent by a spammer or not. Or you might work on coming up with more sophisticated features from the email body, that is, the text of the email. So in the features I talked about last time, discounting and discount might be treated as different words, but maybe they should be treated as the same word. Or you might decide to come up with algorithms to detect misspellings or deliberate misspellings like watches, medicine, and mortgages. And this too could help you decide if an email is spammy. So given all of these and possibly even more ideas, how can you decide which of these ideas are more promising to work on? Because choosing the more promising paths forward can speed up your project easily 10 times compared to if you were to somehow choose some of the less promising directions. For example, we've already seen that if your algorithm has high bias rather than high variance, then spending months and months on a honeypot project may not be the most fruitful direction. But if your algorithm has high variance, then collecting more data could help a lot.
So during the iterative loop of machine learning development, you may have many ideas for how to modify the model or the data. And coming up with different diagnostics can give you a lot of guidance on what choices for the model or data or other parts of the architecture could be most promising to try. In the last several videos, we've already talked about bias and variance. In the next video, I'd like to start describing to you the error analysis process, which is a second key set of ideas for gaining insights about what architecture choices might be fruitful. So that's the iterative loop of machine learning development. And using the example of building a spam classifier, let's take a look at what error analysis looks like. We'll talk about that in the next video.
[{"start": 0.0, "end": 7.0, "text": " In the next few videos, I'd like to share with you what it's like to go through the"}, {"start": 7.0, "end": 12.8, "text": " process of developing a machine learning system so that when you are doing so yourself, hopefully"}, {"start": 12.8, "end": 18.32, "text": " you'll be in a position to make great decisions at many stages of the machine learning development"}, {"start": 18.32, "end": 19.32, "text": " process."}, {"start": 19.32, "end": 24.92, "text": " Let's take a look first at the iterative loop of machine learning development."}, {"start": 24.92, "end": 28.72, "text": " This is what developing a machine learning model will often feel like."}, {"start": 28.72, "end": 33.72, "text": " First, you decide on what is the overall architecture of your system."}, {"start": 33.72, "end": 39.239999999999995, "text": " And that means choosing your machine learning model as well as deciding what data to use,"}, {"start": 39.239999999999995, "end": 42.36, "text": " maybe picking the hyperparameters and so on."}, {"start": 42.36, "end": 48.4, "text": " Then, given those decisions, you would implement and train a model."}, {"start": 48.4, "end": 52.480000000000004, "text": " And as I've mentioned before, when you train a model for the first time, it will almost"}, {"start": 52.480000000000004, "end": 56.08, "text": " never work as well as you want it to."}, {"start": 56.08, "end": 62.0, "text": " The next step that I recommend then is to implement or to look at a few diagnostics,"}, {"start": 62.0, "end": 66.16, "text": " such as looking at the bias and variance of your algorithm, as well as something we'll"}, {"start": 66.16, "end": 69.96, "text": " see in the next video called error analysis."}, {"start": 69.96, "end": 75.48, "text": " And based on the insights from the diagnostics, you can then make decisions like, do you want"}, {"start": 75.48, "end": 82.32, "text": " to make your neural network bigger or change the lambda 
regularization parameter or maybe"}, {"start": 82.32, "end": 86.6, "text": " add more data or add more features or subtract features."}, {"start": 86.6, "end": 91.6, "text": " And then you go around this loop again with your new choice of architecture."}, {"start": 91.6, "end": 96.3, "text": " And it will often take multiple iterations through this loop until you get to the performance"}, {"start": 96.3, "end": 98.11999999999999, "text": " that you want."}, {"start": 98.11999999999999, "end": 102.32, "text": " Let's look at an example of building an email spam classifier."}, {"start": 102.32, "end": 106.19999999999999, "text": " I think many of us passionately hate email spam."}, {"start": 106.19999999999999, "end": 112.28, "text": " And this is a problem that I worked on years ago and also was involved in starting an app"}, {"start": 112.28, "end": 115.56, "text": " anti-spam conference once years ago."}, {"start": 115.56, "end": 119.6, "text": " The example on the left is what a highly spammy email might look like."}, {"start": 119.6, "end": 126.36, "text": " Day of the week, buy now Rolex watches and spammers will sometimes deliberately misspell"}, {"start": 126.36, "end": 133.56, "text": " words like these watches, medicine, and mortgages in order to try to trip up a spam recognizer."}, {"start": 133.56, "end": 138.28, "text": " And in contrast, this email on the right is an actual email I once got from my younger"}, {"start": 138.28, "end": 142.0, "text": " brother Alfred about getting together for Christmas."}, {"start": 142.0, "end": 150.52, "text": " So how do you build a classifier to recognize spam versus non spam emails?"}, {"start": 150.52, "end": 157.34, "text": " One way to do so would be to train a supervised learning algorithm where the input features"}, {"start": 157.34, "end": 164.4, "text": " x will be the features of an email and the output label y will be one or zero depending"}, {"start": 164.4, "end": 167.48, "text": " on whether a spam or 
not spam."}, {"start": 167.48, "end": 174.28, "text": " So one way this application is an example of text classification because you're taking"}, {"start": 174.28, "end": 180.51999999999998, "text": " a text document that is an email and try to classify it as either spam or non spam."}, {"start": 180.51999999999998, "end": 186.04, "text": " One way to construct the features of the email would be to say, take the top 10,000 words"}, {"start": 186.04, "end": 193.83999999999997, "text": " in the English language or in some other dictionary and use them to define features x1 x2 through"}, {"start": 193.83999999999997, "end": 196.33999999999997, "text": " x 10,000."}, {"start": 196.34, "end": 203.96, "text": " So for example, given this email on the right, if the list of words we have is a, Andrew,"}, {"start": 203.96, "end": 214.14000000000001, "text": " buy, deal, discount, and so on, then given the email on the right, we would set these"}, {"start": 214.14000000000001, "end": 219.42000000000002, "text": " features to be say zero or one depending on whether or not that word appears."}, {"start": 219.42000000000002, "end": 221.64000000000001, "text": " So the word a does not appear."}, {"start": 221.64000000000001, "end": 223.8, "text": " The word Andrew does appear."}, {"start": 223.8, "end": 227.04000000000002, "text": " The word buy does appear."}, {"start": 227.04000000000002, "end": 230.22, "text": " Deal does, discount does not, and so on."}, {"start": 230.22, "end": 234.92000000000002, "text": " And so you can construct 10,000 features of this email."}, {"start": 234.92000000000002, "end": 238.22, "text": " And there are many ways to construct the feature vector."}, {"start": 238.22, "end": 243.28, "text": " Another way would be to let these numbers not just be one or zero, but actually count"}, {"start": 243.28, "end": 246.68, "text": " the number of times a given word appears in the email."}, {"start": 246.68, "end": 252.04000000000002, "text": " So if buy 
appears twice, maybe you want to set this to two, but setting it to just one"}, {"start": 252.04, "end": 256.0, "text": " or zero actually works decently well."}, {"start": 256.0, "end": 262.03999999999996, "text": " Given these features, you can then train a classification algorithm such as a logistic"}, {"start": 262.03999999999996, "end": 269.32, "text": " regression model or a neural network to predict y given these features x."}, {"start": 269.32, "end": 275.12, "text": " After you've trained your initial model, if it doesn't work as well as you wish, you will"}, {"start": 275.12, "end": 280.03999999999996, "text": " quite likely have multiple ideas for improving the learning algorithm's performance."}, {"start": 280.04, "end": 283.64000000000004, "text": " For example, it's always tempting to collect more data."}, {"start": 283.64000000000004, "end": 288.64000000000004, "text": " In fact, I have friends that have worked on very large scale honeypot projects, and these"}, {"start": 288.64000000000004, "end": 294.12, "text": " are projects that create a large number of fake email addresses and tries deliberately"}, {"start": 294.12, "end": 299.56, "text": " to get these fake email addresses into the hands of spammers so that when they send spam"}, {"start": 299.56, "end": 303.40000000000003, "text": " email to these fake emails, well, we know these are spam messages."}, {"start": 303.40000000000003, "end": 307.32000000000005, "text": " And so this is a way to get a lot of spam data."}, {"start": 307.32, "end": 312.4, "text": " Or you might decide to work on developing more sophisticated features based on the email"}, {"start": 312.4, "end": 313.4, "text": " routing."}, {"start": 313.4, "end": 320.76, "text": " Email routing refers to the sequence of compute service, sometimes around the world, that"}, {"start": 320.76, "end": 324.44, "text": " the email has gone through on its way to reach you."}, {"start": 324.44, "end": 328.96, "text": " And emails actually 
have what's called email header information."}, {"start": 328.96, "end": 333.76, "text": " That is information that keeps track of how the email has traveled across different servers,"}, {"start": 333.76, "end": 337.48, "text": " across different networks to find its way to you."}, {"start": 337.48, "end": 343.32, "text": " And sometimes the path that an email has traveled can help tell you if it was sent by a spammer"}, {"start": 343.32, "end": 345.48, "text": " or not."}, {"start": 345.48, "end": 351.15999999999997, "text": " Or you might work on coming up with more sophisticated features from the email body that is the text"}, {"start": 351.15999999999997, "end": 352.58, "text": " of the email."}, {"start": 352.58, "end": 358.78, "text": " So in the features I talked about last time, discounting and discount might be treated as"}, {"start": 358.78, "end": 364.47999999999996, "text": " different words, but maybe they should be treated as the same words."}, {"start": 364.47999999999996, "end": 369.55999999999995, "text": " Or you might decide to come up with algorithms to detect misspellings or deliberate misspellings"}, {"start": 369.55999999999995, "end": 372.71999999999997, "text": " like watches, medicine, and mortgage."}, {"start": 372.71999999999997, "end": 376.82, "text": " And this too could help you decide if an email is spammy."}, {"start": 376.82, "end": 382.47999999999996, "text": " So given all of these and possibly even more ideas, how can you decide which of these ideas"}, {"start": 382.47999999999996, "end": 384.76, "text": " are more promising to work on?"}, {"start": 384.76, "end": 390.48, "text": " Because choosing the more promising paths forward can speed up your project easily 10"}, {"start": 390.48, "end": 395.8, "text": " times compared to if you were to somehow choose some of the less promising directions."}, {"start": 395.8, "end": 401.44, "text": " For example, we've already seen that if your algorithm has high bias rather than high 
variance,"}, {"start": 401.44, "end": 406.64, "text": " then spending months and months on a honeypot project may not be the most fruitful direction."}, {"start": 406.64, "end": 410.9, "text": " But if your algorithm has high variance, then collecting more data could help a lot."}, {"start": 410.9, "end": 415.64, "text": " So during the interlude of machine learning development, you may have many ideas for how"}, {"start": 415.64, "end": 418.79999999999995, "text": " to modify the model or the data."}, {"start": 418.79999999999995, "end": 423.85999999999996, "text": " And it will be coming up with different diagnostics that could give you a lot of guidance on what"}, {"start": 423.85999999999996, "end": 429.64, "text": " choices for the model or data or other parts of the architecture could be most promising"}, {"start": 429.64, "end": 431.59999999999997, "text": " to try."}, {"start": 431.59999999999997, "end": 436.28, "text": " In the last several videos, we've already talked about bias and variance."}, {"start": 436.28, "end": 442.67999999999995, "text": " In the next video, I'd like to start describing to you the error analysis process, which is"}, {"start": 442.67999999999995, "end": 450.7, "text": " a second key set of ideas for gaining insights about what architecture choices might be fruitful."}, {"start": 450.7, "end": 454.84, "text": " So that's the iterative loop of machine learning development."}, {"start": 454.84, "end": 460.0, "text": " And using the example of building a spam classifier, let's take a look at what error analysis looks"}, {"start": 460.0, "end": 461.0, "text": " like."}, {"start": 461.0, "end": 468.0, "text": " We'll talk about that in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=-EKKWG6CXRQ
6.11 Machine Learning development process | Error analysis --[ML | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
In terms of the most important ways to help you run diagnostics to choose what to try next to improve your learning algorithm's performance, I would say bias and variance is probably the most important idea, and error analysis would probably be second on my list. So let's take a look at what this means. Concretely, let's say you have m_cv equals 500 cross-validation examples and your algorithm misclassifies 100 of these 500 cross-validation examples. The error analysis process just refers to manually looking through these 100 examples and trying to gain insights into where the algorithm is going wrong. Specifically, what I would often do is find a set of examples that the algorithm has misclassified from the cross-validation set and try to group them into common themes or common properties or common traits. So for example, if you notice that quite a lot of the misclassified spam emails are pharmaceutical sales, trying to sell medicines or drugs, then I would actually go through these examples and count up by hand how many of the misclassified emails are pharmaceutical spam, and say there are 21 emails that are pharmaceutical spam. Or if you suspect that deliberate misspellings may be tripping up your spam classifier, then I would also go through and just count up how many of the misclassified examples had a deliberate misspelling. Let's say I find three out of 100. Or looking through the email routing info, I find seven have unusual email routing, and 18 are emails trying to steal passwords, or phishing emails. Spammers will sometimes also, instead of writing the spam message in the email body, create an image and then write the spammy message inside the image that appears in the email. It's a bit harder for a learning algorithm to figure out what's going on. And so maybe some of those emails are these embedded-image spam.
If you end up with these counts, then that tells you that pharmaceutical spam and emails trying to steal passwords, or phishing emails, seem to be huge problems. Whereas deliberate misspellings, well, it is a problem, but a smaller one. In particular, what this analysis tells you is that even if you were to build really sophisticated algorithms to find deliberate misspellings, it would only solve three out of 100 of your misclassified examples. So the net impact seems like it may not be that large. That doesn't mean it's not worth doing, but when you're prioritizing what to do, you might therefore decide not to prioritize this as highly. And by the way, I'm telling this story because I once actually spent a lot of time building algorithms to find deliberate misspellings in spam emails, only to realize much later that the net impact was actually quite small. So this is one example where I wish I had done a more careful error analysis before spending a lot of time myself trying to find these deliberate misspellings. Just a couple of notes on this process. These categories can be overlapping, or in other words, they're not mutually exclusive. So for example, there can be a pharmaceutical spam email that also has unusual routing, or an email that has deliberate misspellings and is also trying to carry out a phishing attack. So one email can be counted in multiple categories. And in this example, I had said that the algorithm misclassifies 100 examples and that we'll look at all 100 examples manually. If you have a larger cross-validation set, say 5,000 cross-validation examples, and the algorithm misclassifies, say, 1,000 of them, then, depending on the team size and how much time you have to work on this project, you may not have the time to manually look at all 1,000 examples that the algorithm misclassifies.
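The tallying workflow just described can be sketched in a few lines of Python. Everything here is hypothetical illustration: the example IDs and category tags stand in for the hand-labels you would assign while reviewing misclassified cross-validation examples, and one example can carry several tags since the categories overlap.

```python
from collections import Counter

# Hypothetical hand-tagged misclassified examples from the
# cross-validation set; tags can overlap (one email, many categories).
misclassified = [
    {"id": 1, "tags": ["pharma"]},
    {"id": 2, "tags": ["pharma", "unusual_routing"]},
    {"id": 3, "tags": ["phishing", "misspelling"]},
    {"id": 4, "tags": ["phishing"]},
    {"id": 5, "tags": ["embedded_image"]},
]

# Count every tag across all examples; overlapping tags are counted
# once per example they appear on.
counts = Counter(tag for ex in misclassified for tag in ex["tags"])

# Sort so the biggest problem areas appear first.
for tag, n in counts.most_common():
    print(f"{tag}: {n} of {len(misclassified)} misclassified examples")
```

Sorting by count surfaces the largest error categories first, which is exactly the prioritization signal error analysis is meant to produce.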
In that case, I will often randomly sample a subset of usually around 100, maybe a couple of hundred examples, because that's the amount you can look through in a reasonable amount of time. And hopefully, looking through around 100 examples will give you enough statistics about the most common types of errors, and therefore where it may be most fruitful to focus your attention. And so after this analysis, if you find that a lot of errors are pharmaceutical spam emails, then this might give you some ideas or inspiration for things to do next. For example, you may decide to collect more data, but not more data of everything; instead, try to find more data of pharmaceutical spam emails specifically, so that the learning algorithm can do a better job recognizing these pharmaceutical spams. Or you may decide to come up with some new features that are related to, say, specific names of drugs or specific names of pharmaceutical products that the spammers are trying to sell, in order to help your learning algorithm become better at recognizing this type of pharma spam. Similarly, this analysis might inspire you to make specific changes to the algorithm relating to detecting phishing emails. For example, you might look at the URLs in the email and write special code to come up with extra features that check whether the email is linking to suspicious URLs. Or again, you may decide to get more data of phishing emails specifically, in order to help your learning algorithm do a better job at recognizing them. So the point of this error analysis is that by manually examining a set of examples that your algorithm is misclassifying or mislabeling, you create inspiration for what might be useful to try next. And sometimes it can also tell you that certain types of errors are sufficiently rare that they aren't worth as much of your time to try to fix. So returning to this list, a bias-variance analysis should tell you if collecting more data is helpful or not.
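When the misclassified set is too large to review in full, the random-sampling step can be sketched as follows. The IDs and set sizes are made up for illustration; the point is that error-category proportions in a random sample of around 100 estimate those of the full misclassified set.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical: indices of the 1,000 examples the algorithm misclassified
# out of a 5,000-example cross-validation set.
misclassified_ids = list(range(1000))

# Reviewing all 1,000 by hand is too slow, so sample ~100 at random;
# random.sample draws without replacement, so there are no duplicates.
sample = random.sample(misclassified_ids, k=100)
print(len(sample))
```

You would then hand-review only the sampled examples and tally their error categories as in the earlier counting step.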
Based on our error analysis in the example we just went through, it looks like more sophisticated email features could help, but only a bit, whereas more sophisticated features to detect pharma spam or phishing emails could help a lot, and detecting misspellings would not help nearly as much. So in general, I've found both the bias-variance diagnostic as well as carrying out this form of error analysis to be really helpful for screening, or deciding, which changes to the model are more promising to try out next. Now, one limitation of error analysis is that it's much easier to do for problems that humans are good at. So you can look at an email, see that it's a spam email, and ask why the algorithm got it wrong. Error analysis can be a bit harder for tasks that even humans aren't good at. For example, if you're trying to predict what ads someone will click on on a website, well, I can't predict what someone will click on, so error analysis there actually tends to be more difficult. But when you can apply error analysis, it can be extremely helpful for focusing attention on the more promising things to try. And that in turn can easily save you months of otherwise fruitless work. In the next video, I'd like to dive deeper into the problem of adding data. When you train a learning algorithm, sometimes you decide there's high variance and you want to get more data for it. And there are some techniques that can make how you add data much more efficient. So let's take a look at that, so that hopefully you'll be armed with some good ways to get more data for your learning application.
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=YQNWuC69uqI
6.12 Machine Learning development process | Adding data --[ML | Andrew Ng]
Second Course: Advanced Learning Algorithms.
In this video, I'd like to share with you some tips for adding data or collecting more data, or sometimes even creating more data, for your machine learning application. Just a heads up that this and the next few videos will seem a little bit like a grab bag of different techniques, and I apologize if it seems a little bit grab-baggy. That's because machine learning applications are different. Machine learning is applied to so many different problems, and for some, humans are great at creating labels; for some you can get more data, and for some you can't. That's why different applications actually sometimes call for slightly different techniques. But I hope in this and the next few videos to share with you some of the techniques that I've found to be most useful for different applications. Not every one of them will apply to every single application, but I hope many of them will be useful for many of the applications that you'll be working on as well. So let's take a look at some tips for how to add data for your application. When training machine learning algorithms, it feels like we always wish we had even more data, almost all the time. So sometimes it's tempting to say, let's just get more data of everything. But trying to get more data of all types can be slow and expensive. Instead, an alternative way of adding data might be to focus on adding more data of the types where error analysis has indicated it might help. On the previous slide, we saw that if error analysis revealed that pharma spam was a large problem, then you may decide to have a more targeted effort, not to get more data of everything under the sun, but instead to focus on getting more examples of pharma spam. At a more modest cost, this could let you add just the emails you need to help your learning algorithm get smarter at recognizing pharma spam.
And so one example of how you might do this: if you have a lot of unlabeled email data, say, emails sitting around that no one has bothered to label yet as spam or non-spam, you might be able to go to that unlabeled data and ask your labelers to quickly skim through it and find more examples specifically of pharma-related spam. This could boost your learning algorithm's performance much more than just trying to add more data of all sorts of emails. The more general pattern I hope you take away from this is: if you have some ways to add more data of everything, that's okay, nothing wrong with that. But if error analysis has indicated that there are certain subsets of the data that the algorithm is doing particularly poorly on, and that you want to improve performance on, then getting more data of just the types where you want it to do better, be it more examples of pharmaceutical spam or more examples of phishing spam or something else, could be a more efficient way to add just a little bit of data but boost your algorithm's performance by quite a lot. Beyond getting your hands on brand new training examples (x, y), there's another technique that's widely used, especially for images and audio data, that can increase your training set size significantly. This technique is called data augmentation, and what we're going to do is take an existing training example and use it to create a new training example. For example, suppose you're trying to recognize the letters from A to Z for an OCR, optical character recognition, problem, so not just the digits 0 to 9 but also the letters from A to Z. Given an image like this, you might decide to create a new training example by rotating the image a bit, or by enlarging the image a bit, or by shrinking it a little bit, or by changing the contrast of the image. These are examples of distortions to the image that don't change the fact that this is still the same letter of the alphabet, that this is still the letter A.
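As a rough illustration of these label-preserving distortions, here is a minimal pure-Python sketch on a toy grayscale "image" stored as a list of pixel rows. This is only a sketch; a real augmentation pipeline would use an image library such as Pillow or a framework's built-in image ops, and the 2x2 "letter" below is made up.

```python
# Minimal label-preserving augmentations for a toy grayscale image,
# stored as a list of pixel rows with values in 0-255.
def rotate_90(img):
    # Rotate clockwise: new row i is old column i, read bottom-to-top.
    return [list(row) for row in zip(*img[::-1])]

def adjust_contrast(img, factor):
    # Scale pixel values around mid-gray (128), clamped to [0, 255].
    return [[max(0, min(255, int(128 + factor * (p - 128)))) for p in row]
            for row in img]

def mirror(img):
    # Horizontal flip; label-preserving only for symmetric characters!
    return [row[::-1] for row in img]

original = [[0, 255], [255, 0]]  # a made-up 2x2 "letter"
augmented = [rotate_90(original),
             adjust_contrast(original, 0.5),
             mirror(original)]
# One labeled example has become several, all carrying the same label.
```

Each transformed image keeps the original's label, which is what lets augmentation multiply the training set without any new labeling effort.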
And for some letters, but not others, you can also take the mirror image of the letter and it still looks like the letter A; but this only applies to some letters. These would all be ways of taking a training example (x, y) and applying a distortion or transformation to the input x in order to come up with another example that has the same label. By doing this, you're telling the algorithm that the letter A rotated a bit, or enlarged a bit, or shrunk a little bit is still the letter A. And creating additional examples like this helps the learning algorithm do a better job learning how to recognize the letter A. For a more advanced example of data augmentation, you can also take the letter A and place a grid on top of it. By introducing random warping of this grid, you can take the letter A and create a much richer library of examples of the letter A. This process of distorting these examples has then turned one image, one example, into many training examples that you can feed to the learning algorithm, to help it learn more robustly what the letter A is. This idea of data augmentation also works for speech recognition. Let's say for a voice search application, you have an original audio clip that sounds like this: "What is today's weather?" One way you can apply data augmentation to speech data would be to take noisy background audio like this. For example, this is what the sound of a crowd sounds like. It turns out that if you take these two audio clips, the first one and the crowd noise, and you add them together, then you end up with an audio clip that sounds like this: "What is today's weather?" You've just created an audio clip that sounds like someone saying "What is today's weather?" with a noisy crowd in the background. Or in fact, if you were to take a different background noise, say someone in a car, this is what background noise of a car sounds like.
If you were to add the original audio clip to the car noise, then you get this: "What is today's weather?" It sounds like the original audio clip, but as if the speaker were saying it from a car. A more advanced data augmentation step would be to make the original audio sound like it was recorded on a bad cell phone connection, like this: "What is today's weather?" So we've seen how you can take one audio clip and turn it into three training examples here: one with crowd background noise, one with car background noise, and one as if it was recorded on a bad cell phone connection. In the times I've worked on speech recognition systems, this was actually a really critical technique for artificially increasing the size of the training data I had, in order to build a more accurate speech recognizer. One tip for data augmentation is that the changes or distortions you make to the data should be representative of the types of noise or distortions in the test set. So for example, if you take the letter A and warp it like this, it still looks like examples of letters you might see out there that you would like to recognize. Or for audio, adding background noise or a bad cell phone connection, if that's representative of what you expect to hear in the test set, would be a helpful way to carry out data augmentation on your audio data. In contrast, it's usually not that helpful to add purely random, meaningless noise to your data. For example, here I've taken the letter A and added per-pixel noise, where if x_i is the intensity or the brightness of pixel i, and I were to just add noise to each pixel, I'd end up with images that look like this. To the extent that this isn't representative of what you see in the test set, because you don't often get images like this in the test set, this is actually going to be less helpful.
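A minimal sketch of the noise-mixing idea, assuming audio clips are plain lists of samples at the same sample rate. The toy "voice" and "noise" signals below are made up; real pipelines operate on numpy arrays or audio tensors and would trim or loop the noise to match the clip length.

```python
import math

def mix(clean, noise, noise_gain=0.3):
    # Add scaled background noise sample-by-sample; zip truncates to
    # the shorter clip, so align lengths in a real pipeline.
    return [c + noise_gain * n for c, n in zip(clean, noise)]

# Hypothetical toy signals: a 440 Hz sine "voice" at 8 kHz, and a
# deterministic pseudo-random "crowd" noise in roughly [-1, 1).
clean = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
noise = [((t * 2654435761) % 1000) / 500.0 - 1.0 for t in range(8000)]

with_crowd = mix(clean, noise)        # same utterance + loud background
with_car = mix(clean, noise, 0.15)    # same utterance, quieter background
# One recorded utterance has become three training examples (original,
# crowd-noise version, car-noise version) with the same transcript label.
```

As the video notes, the key is that the mixed-in noise should resemble what the deployed system will actually hear, not arbitrary random perturbation.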
So one way to think about data augmentation is: how can you modify, or warp, or distort, or add noise to your data, but in a way such that what you get is still quite similar to what you have in your test set? Because that's what the learning algorithm will ultimately need to do well on. Now, whereas data augmentation takes an existing training example and modifies it to create another training example, there's one other technique, which is data synthesis, in which you make up brand new examples from scratch; not by modifying an existing example, but by creating brand new examples. So take the example of photo OCR. Photo OCR, or photo optical character recognition, refers to the problem of looking at an image like this and automatically having a computer read the text that appears in the image. So there's a lot of text in this image. How can you train an OCR algorithm to read text from an image like this? Well, when you look closely at what the letters in this image look like, they actually look like this. This is real data from a photo OCR task. One key step of the photo OCR task is to be able to look at a little image like this and recognize the letter in the middle. So this one has a T in the middle, this one has the letter L in the middle, this one has the letter C in the middle, and so on. One way to create artificial data for this task: if you go to your computer's text editor, you'll find that it has a lot of different fonts. What you can do is take these fonts, type out random text in your text editor, and screenshot it using different colors and different contrasts and many different fonts, and you get synthetic data like that on the right. The images on the left were real data from real pictures taken out in the world. The images on the right are synthetically generated using fonts on the computer, and they actually look pretty realistic.
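The font-based synthesis loop can be sketched as follows. This is only the scaffolding: each "render" here is just a record of randomly chosen parameters (the font names are assumptions), and a real implementation would rasterize the character with an image library such as Pillow's ImageFont/ImageDraw.

```python
import random
import string

random.seed(0)  # reproducible for illustration

# Assumed font names; a real system would enumerate installed fonts.
FONTS = ["Arial", "Courier", "Georgia", "Verdana"]

def synthesize_example():
    # Pick the label first, then random rendering parameters, so every
    # synthesized input comes with a known-correct label for free.
    char = random.choice(string.ascii_uppercase)
    render = {
        "char": char,
        "font": random.choice(FONTS),
        "contrast": random.uniform(0.5, 1.5),
        "fg_color": tuple(random.randrange(256) for _ in range(3)),
    }
    return render, char  # (input description, label) pair

# Arbitrarily many labeled examples, generated entirely from scratch.
dataset = [synthesize_example() for _ in range(1000)]
```

Because the label is chosen before rendering, synthesis sidesteps the labeling bottleneck entirely; the hard work goes into making the renders look realistic enough to match real photos.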
So with synthetic data generation like this, you can generate a very large number of images or examples for your photo OCR task. It can be a lot of work to write the code to generate realistic-looking synthetic data for a given application, but when you spend the time to do so, it can sometimes help you generate a very large amount of data for your application and give your algorithm a huge boost in performance. Synthetic data generation has been used mostly for computer vision tasks, and less for other applications; not that much for audio tasks, for example. All the techniques you've seen in this video relate to finding ways to engineer the data used by your system. In the way that machine learning has developed over the last several decades, most machine learning researchers' attention was on the conventional model-centric approach, and here's what I mean. A machine learning system, or AI system, includes both code to implement your algorithm or your model, as well as the data that you train the algorithm on. Over the last few decades, most researchers doing machine learning research would download the data set and hold the data fixed while they focused on improving the code or the algorithm or the model. Thanks to that paradigm of machine learning research, I find that today the algorithms we have access to, such as linear regression, logistic regression, neural networks, and also decision trees, which you'll see next week, are algorithms that are already very good and will work well for many applications. So sometimes it can be more fruitful to spend more of your time taking a data-centric approach, in which you focus on engineering the data used by your algorithm. This can be anything from collecting more data, to collecting more data of just pharmaceutical spam if that's what error analysis told you to do, to using data augmentation to generate more images or more audio, to using data synthesis to just create more training examples.
Sometimes that focus on the data can be an efficient way to help your learning algorithm improve its performance. So I hope that this video gives you a set of tools to be efficient and effective in how you add more data to get your learning algorithm to work better. Now, there are also some applications where you just don't have that much data and it's really hard to get more. It turns out there's a technique called transfer learning, which can apply in that setting to give your learning algorithm's performance a huge boost. The key idea is to take data from a totally different, barely related task; using a neural network, there are sometimes ways to use that data from a very different task to get your algorithm to do better on your application. It doesn't apply to everything, but when it does, it can be very powerful. Let's take a look in the next video at how transfer learning works.
the image a bit or by enlarging the image a bit or by shrinking a little bit or by changing"}, {"start": 248.6, "end": 251.12, "text": " the contrast of the image."}, {"start": 251.12, "end": 256.4, "text": " And these are examples of distortions to the image that don't change the fact that this"}, {"start": 256.4, "end": 264.0, "text": " is still the alphabet, that this is still the letter A. And for some letters, but not"}, {"start": 264.0, "end": 270.44, "text": " others, you can also take the mirror image of the letter and it still looks like the"}, {"start": 270.44, "end": 273.64, "text": " letter A, but this only applies to some letters."}, {"start": 273.64, "end": 284.16, "text": " But these would be ways of taking a training example x y and applying a distortion or transformation"}, {"start": 284.16, "end": 290.76, "text": " to the input x in order to come up with another example that has the same label."}, {"start": 290.76, "end": 296.03999999999996, "text": " And by doing this, you're telling the algorithm that the letter A rotated a bit or enlarged"}, {"start": 296.03999999999996, "end": 297.68, "text": " a bit or shrunk a little bit."}, {"start": 297.68, "end": 303.7, "text": " It is still the letter A. And creating additional examples like this helps the learning algorithm"}, {"start": 303.7, "end": 307.86, "text": " do a better job learning how to recognize the letter A."}, {"start": 307.86, "end": 314.36, "text": " For a more advanced example of data augmentation, you can also take the letter A and place a"}, {"start": 314.36, "end": 321.88, "text": " grid on top of it. And by introducing random warping of this grid, you can take the letter"}, {"start": 321.88, "end": 328.2, "text": " A and introduce warping of the letter A to create a much richer library of examples of"}, {"start": 328.2, "end": 334.44, "text": " the letter A. 
And this process of distorting these examples then has turned one image or"}, {"start": 334.44, "end": 340.32, "text": " one example into here, training examples that you can feed to the learning algorithm to"}, {"start": 340.32, "end": 342.56, "text": " help it learn more robustly."}, {"start": 342.56, "end": 344.2, "text": " What is the letter A?"}, {"start": 344.2, "end": 351.32, "text": " This idea of data augmentation also works for speech recognition. Let's say for a voice"}, {"start": 351.32, "end": 356.32, "text": " search application, you have an original audio clip that sounds like this."}, {"start": 356.32, "end": 358.12, "text": " What is today's weather?"}, {"start": 358.12, "end": 365.2, "text": " One way you can apply data augmentation to speech data would be to take noisy background"}, {"start": 365.2, "end": 375.96, "text": " audio like this. For example, this is what the sound of a crowd sounds like. And it turns"}, {"start": 375.96, "end": 381.88, "text": " out that if you take these two audio clips, the first one and the crowd noise and you"}, {"start": 381.88, "end": 387.44, "text": " add them together, then you end up with an audio clip that sounds like this."}, {"start": 387.44, "end": 389.48, "text": " What is today's weather?"}, {"start": 389.48, "end": 394.96, "text": " And you just created an audio clip that sounds like someone saying, what's the weather today?"}, {"start": 394.96, "end": 400.64, "text": " But they're saying it around a noisy crowd in the background. Or in fact, if you were"}, {"start": 400.64, "end": 406.12, "text": " to take a different background noise, say someone in a car, this is what background"}, {"start": 406.12, "end": 414.32, "text": " noise of a car sounds like. 
And you were to add the original audio clip to the car noise,"}, {"start": 414.32, "end": 416.96, "text": " then you get this."}, {"start": 416.96, "end": 419.35999999999996, "text": " What is today's weather?"}, {"start": 419.35999999999996, "end": 424.24, "text": " And it sounds like the original audio clip, but as if the speaker was saying it from a"}, {"start": 424.24, "end": 430.76, "text": " car. And the more advanced data augmentation step would be if you make the original audio"}, {"start": 430.76, "end": 434.88, "text": " sound like you're recording on a bad cell phone connection like this."}, {"start": 434.88, "end": 437.0, "text": " What is today's weather?"}, {"start": 437.0, "end": 442.44, "text": " And so we've seen how you can take one audio clip and turn it into three training examples"}, {"start": 442.44, "end": 447.32, "text": " here. One with crowd background noise, one with car background noise, and one as if it"}, {"start": 447.32, "end": 452.56, "text": " was recorded on a bad cell phone connection. And the times I've worked on speech recognition"}, {"start": 452.56, "end": 458.36, "text": " systems, this was actually a really critical technique for increasing artificially the size"}, {"start": 458.36, "end": 462.88, "text": " of the training data I had to build a more accurate speech recognizer."}, {"start": 462.88, "end": 469.16, "text": " One tip for data augmentation is that the changes or the distortions you make to the"}, {"start": 469.16, "end": 475.52, "text": " data should be representative of the types of noise or distortions in the test set. So"}, {"start": 475.52, "end": 481.12, "text": " for example, if you take the letter A and warp it like this, this still looks like examples"}, {"start": 481.12, "end": 486.84000000000003, "text": " of letters you might see out there that you would like to recognize. 
Or for audio, adding"}, {"start": 486.84000000000003, "end": 491.96, "text": " background noise or bad cell phone connection, if that's representative of what you expect"}, {"start": 491.96, "end": 498.52, "text": " to hear in the test set, then these would be helpful ways to carry out data augmentation"}, {"start": 498.52, "end": 501.12, "text": " on your audio data."}, {"start": 501.12, "end": 506.44, "text": " In contrast, it's usually not that helpful to add purely random meaningless noise to"}, {"start": 506.44, "end": 514.08, "text": " audio data. For example, here I've taken the letter A and I've added per pixel noise, where"}, {"start": 514.08, "end": 520.32, "text": " if XI is the intensity or the brightness of pixel I, if I were to just add noise to each"}, {"start": 520.32, "end": 525.96, "text": " pixel, they end up with images that look like this. But if to the extent that this isn't"}, {"start": 525.96, "end": 529.84, "text": " that representative of what you see in the test set, because you don't often get images"}, {"start": 529.84, "end": 534.0, "text": " like this in the test set, this is actually going to be less helpful."}, {"start": 534.0, "end": 540.28, "text": " So one way to think about data augmentation is how can you modify or warp or distort or"}, {"start": 540.28, "end": 545.96, "text": " make more noise in your data, but in a way so that what you get is still quite similar"}, {"start": 545.96, "end": 551.52, "text": " to what you have in your test set, because that's what the learning algorithm will ultimately"}, {"start": 551.52, "end": 553.92, "text": " end up doing well on."}, {"start": 553.92, "end": 559.96, "text": " Now whereas data augmentation takes an existing training example and modifies it to create"}, {"start": 559.96, "end": 564.96, "text": " another training example, there's one other technique which is data synthesis, in which"}, {"start": 564.96, "end": 570.6800000000001, "text": " you make up brand new examples from 
scratch, not by modifying an existing example, but"}, {"start": 570.6800000000001, "end": 573.9200000000001, "text": " by creating brand new examples."}, {"start": 573.9200000000001, "end": 581.12, "text": " So take the example of photo OCR. Photo OCR or photo optical character recognition refers"}, {"start": 581.12, "end": 587.1600000000001, "text": " to the problem of looking at an image like this and automatically having a computer read"}, {"start": 587.16, "end": 592.9599999999999, "text": " the text that appears in this image. So there's a lot of text in this image."}, {"start": 592.9599999999999, "end": 601.0, "text": " How can you train an OCR algorithm to read text from an image like this? Well, when you"}, {"start": 601.0, "end": 606.04, "text": " look closely at what the letters in this image looks like, they actually look like this."}, {"start": 606.04, "end": 612.16, "text": " So this is real data from a photo OCR task. And one key step of the photo OCR task is"}, {"start": 612.16, "end": 616.5799999999999, "text": " to be able to look at the little image like this and recognize the letter at the middle."}, {"start": 616.58, "end": 623.4000000000001, "text": " So this has T in the middle, this has the letter L in the middle, this has the letter"}, {"start": 623.4000000000001, "end": 626.32, "text": " C in the middle, and so on."}, {"start": 626.32, "end": 633.0400000000001, "text": " So one way to create artificial data for this task is if you go to your computer's text"}, {"start": 633.0400000000001, "end": 640.9200000000001, "text": " editor, you find that it has a lot of different fonts. And what you can do is take these fonts"}, {"start": 640.92, "end": 647.8, "text": " and basically type out random text in your text editor and screenshot it using different"}, {"start": 647.8, "end": 654.4399999999999, "text": " colors and different contrasts and very different fonts. 
And you get synthetic data like that"}, {"start": 654.4399999999999, "end": 660.28, "text": " on the right. The images on the left were real data from real pictures taken out in"}, {"start": 660.28, "end": 667.0799999999999, "text": " the world. The images on the right are synthetically generated using fonts on the computer. And"}, {"start": 667.08, "end": 673.84, "text": " it actually looks pretty realistic. So with synthetic data generation like this, you can"}, {"start": 673.84, "end": 680.8000000000001, "text": " generate a very large number of images or examples for your photo OCR task. It can be"}, {"start": 680.8000000000001, "end": 686.0400000000001, "text": " a lot of work to write the code to generate realistic looking synthetic data for a given"}, {"start": 686.0400000000001, "end": 692.84, "text": " application. But when you spend time to do so, it can sometimes help you generate a very"}, {"start": 692.84, "end": 698.52, "text": " large amount of data for your application and give you a huge boost to your average"}, {"start": 698.52, "end": 704.8000000000001, "text": " performance. Synthetic data generation has been used most probably for computer vision"}, {"start": 704.8000000000001, "end": 710.96, "text": " tasks and less for other applications, not that much for audio tasks as well. All the"}, {"start": 710.96, "end": 717.0, "text": " techniques you've seen in this video relate to finding ways to engineer the data used"}, {"start": 717.0, "end": 722.8000000000001, "text": " by your system. In the way that machine learning has developed over the last several decades,"}, {"start": 722.8, "end": 729.4799999999999, "text": " many decades, most machine learning researchers attention was on the conventional model centric"}, {"start": 729.4799999999999, "end": 735.76, "text": " approach. And here's what I mean. 
A machine learning system on AI system includes both"}, {"start": 735.76, "end": 740.9599999999999, "text": " code to implement your algorithm or your model as well as the data that you train the algorithm"}, {"start": 740.9599999999999, "end": 748.04, "text": " on. And over the last few decades, most researchers doing machine learning research with download"}, {"start": 748.04, "end": 753.52, "text": " the data set and hold the data fix while they focus on improving the code or the algorithm"}, {"start": 753.52, "end": 762.52, "text": " or the model. Thanks to that paradigm of machine learning research, I find that today the algorithms"}, {"start": 762.52, "end": 767.36, "text": " we have access to, such as linear regression, logistic regression, neural networks, also"}, {"start": 767.36, "end": 772.28, "text": " decision trees, which you see next week, there are algorithms that are already very good"}, {"start": 772.28, "end": 779.4, "text": " and will work well for many applications. And so sometimes it can be more fruitful to"}, {"start": 779.4, "end": 786.12, "text": " spend more of your time taking a data centric approach in which you focus on engineering"}, {"start": 786.12, "end": 792.1999999999999, "text": " the data used by your algorithm. And this can be anything from collecting more data"}, {"start": 792.1999999999999, "end": 796.64, "text": " to collecting more data just on pharmaceutical spam, if that's what error analysis told you"}, {"start": 796.64, "end": 802.72, "text": " to do, to using data augmentation to generate more images or more audio or using data synthesis"}, {"start": 802.72, "end": 808.64, "text": " to just create more training examples. And sometimes that focus on the data can be an"}, {"start": 808.64, "end": 814.56, "text": " efficient way to help your learning algorithm improve its performance. 
So I hope that this"}, {"start": 814.56, "end": 821.0, "text": " video gives you a set of tools to be efficient and effective in how you add more data to"}, {"start": 821.0, "end": 826.68, "text": " get your learning algorithm to work better. Now, there are also some applications where"}, {"start": 826.68, "end": 831.76, "text": " you just don't have that much data and it's really hard to get more data. It turns out"}, {"start": 831.76, "end": 836.92, "text": " that there's a technique called transfer learning, which could apply in that setting to give"}, {"start": 836.92, "end": 841.96, "text": " your learning algorithm's performance a huge boost. And the key idea is to take data from"}, {"start": 841.96, "end": 847.68, "text": " a totally different, fairly related task. But using a neural network, there's sometimes"}, {"start": 847.68, "end": 853.1999999999999, "text": " ways to use that data from a very different task to get your algorithm to do better on"}, {"start": 853.1999999999999, "end": 858.88, "text": " your application. Doesn't apply to everything, but when it does, it can be very powerful."}, {"start": 858.88, "end": 878.88, "text": " Let's take a look in the next video at how transfer learning works."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=QAZroxyW4oo
6.13 Machine Learning development process |Transfer learning using data from a different task -ML Ng
Second Course: Advanced Learning Algorithms. If you liked the content, please subscribe and give it a blue thumbs-up. Keep it up!
For an application where you don't have that much data, transfer learning is a wonderful technique that lets you use data from a different task to help on your application. This is one of those techniques that I use very frequently. Let's take a look at how transfer learning works. Let's say you want to recognize the handwritten digits from zero through nine, but you don't have that much labeled data of these handwritten digits. Here's what you can do. Say you find a very large data set of one million images of pictures of cats, dogs, cars, people, and so on, with a thousand classes. You can then start by training a neural network on this large data set of a million images with a thousand different classes, and train the algorithm to take as input an image X and learn to recognize any of these 1,000 different classes. In this process, you end up learning parameters for the first layer of the neural network, W1, B1, for the second layer, W2, B2, and so on: W3, B3, W4, B4, and W5, B5 for the output layer. To apply transfer learning, what you do is make a copy of this neural network, where you would keep the parameters W1, B1, W2, B2, W3, B3, and W4, B4. But for the last layer, you would eliminate the output layer and replace it with a much smaller output layer with just 10 rather than 1,000 output units. These 10 output units will correspond to the classes zero, one, through nine that you want your neural network to recognize. Notice that the parameters W5, B5 can't be copied over, because the dimension of this layer has changed. So you need to come up with new parameters W5, B5 that you train from scratch rather than copy over from the previous neural network.
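The copy-and-replace step just described can be sketched with plain NumPy arrays. This is an illustrative sketch, not the course's code: the layer sizes and variable names are made up, and each layer is represented as just a (W, b) pair.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-trained network: layer sizes 784 -> 128 -> 64 -> 32 -> 16 -> 1000.
sizes = [784, 128, 64, 32, 16, 1000]
pretrained = [(rng.standard_normal((m, n)) * 0.01, np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]

# Transfer learning: copy W1..W4, b1..b4, but the old output layer (W5, b5)
# maps to 1000 classes, so it cannot be reused for a 10-class problem.
new_net = [(W.copy(), b.copy()) for (W, b) in pretrained[:-1]]

# Replace the output layer with a freshly initialized 16 -> 10 layer,
# trained from scratch rather than copied over.
n_in = pretrained[-1][0].shape[0]       # 16 units feeding the output layer
new_net.append((rng.standard_normal((n_in, 10)) * 0.01, np.zeros(10)))
```

The key point the sketch makes concrete: every layer except the last keeps the pre-trained values, and only the new 16 -> 10 output layer starts from fresh random parameters.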
In transfer learning, what you can do is use the parameters from the first four layers, really all the layers except the final output layer, as a starting point for the parameters, and then run an optimization algorithm such as gradient descent or the Adam optimization algorithm with the parameters initialized using the values from the neural network up on top. So in detail, there are two options for how you can train this neural network's parameters. Option one is to train only the output layer's parameters. You would take the parameters W1, B1, W2, B2, through W4, B4 as the values from the network on top, hold them fixed without even bothering to change them, and use an algorithm like stochastic gradient descent or the Adam optimization algorithm to update only W5, B5 to lower the usual cost function that you use for learning to recognize these digits zero to nine from a small training set of these digits zero to nine. So that's option one. Option two would be to train all the parameters in the network, including W1, B1, W2, B2, all the way through W5, B5, but the first four layers' parameters would be initialized using the values that you had trained up on top. If you have a very, very small training set, then option one might work a little bit better, but if you have a training set that's a little bit larger, then option two might work a little bit better. This algorithm is called transfer learning because the intuition is that by learning to recognize cats, dogs, cars, people, and so on, it will hopefully have learned some plausible sets of parameters for the earlier layers for processing image inputs. Then by transferring these parameters to the new neural network, the new neural network starts off with its parameters in a much better place, so that with just a little bit of further learning, hopefully it can end up at a pretty good model.
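Option one and option two differ only in which layers a gradient step is allowed to update. Here is a hedged sketch of that difference, with made-up names and dummy gradients standing in for real backpropagation:

```python
import numpy as np

def sgd_step(params, grads, lr, trainable):
    """One gradient-descent step; `trainable` marks which layers to update.

    Illustrative only: option one freezes everything except the output
    layer, option two updates every layer starting from the pre-trained
    values.
    """
    return [
        (W - lr * gW, b - lr * gb) if t else (W, b)
        for (W, b), (gW, gb), t in zip(params, grads, trainable)
    ]

rng = np.random.default_rng(1)
params = [(rng.standard_normal((4, 4)), np.zeros(4)) for _ in range(3)]
grads = [(np.ones((4, 4)), np.ones(4)) for _ in range(3)]   # dummy gradients

# Option one: only the (new) output layer is trainable.
opt1 = sgd_step(params, grads, lr=0.1, trainable=[False, False, True])

# Option two: all layers trainable, initialized from the pre-trained values.
opt2 = sgd_step(params, grads, lr=0.1, trainable=[True, True, True])
```

In a real framework the same choice is usually expressed by marking layers as frozen (e.g. excluding their parameters from the optimizer) rather than by an explicit mask like this.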
These two steps, first training on a large data set and then tuning the parameters further on a smaller data set, go by the names of supervised pre-training and fine-tuning. Supervised pre-training is the first step, where you train the neural network on a very large data set of, say, a million images for a not-quite-related task. The second step is called fine-tuning, where you take the parameters that you had initialized or gotten from supervised pre-training and then run gradient descent further to fine-tune the weights to suit the specific application of handwritten digit recognition that you may have. And so if you have a small data set, even just tens, hundreds, thousands, or tens of thousands of images of the handwritten digits, being able to learn from these million images of a not-quite-related task can actually help your learning algorithm's performance a lot. One nice thing about transfer learning as well is that maybe you don't need to be the one to carry out supervised pre-training. For a lot of neural networks, there will already be researchers that have trained a neural network on a large image data set and posted the trained neural network on the internet, freely licensed for anyone to download and use. And what that means is that rather than carrying out the first step yourself, you can just download a neural network that someone else may have spent weeks training, then replace the output layer with your own output layer and carry out either option one or option two to fine-tune the neural network that someone else has already carried out supervised pre-training on, and just do a little bit of fine-tuning to quickly get a neural network that performs well on your task.
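The pre-train-then-fine-tune pipeline can be sketched end to end with a frozen "body" and a freshly initialized softmax output layer trained by gradient descent. This is an illustrative NumPy sketch on synthetic data, not the course's code; the fixed random `body` matrix stands in for downloaded pre-trained layers, and all sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for downloaded, frozen pre-trained layers: a fixed matrix
# mapping 784 raw inputs to 16 features (hypothetical sizes).
body = rng.standard_normal((784, 16)) * 0.1
X = rng.standard_normal((100, 784))          # synthetic inputs
y = rng.integers(0, 10, size=100)            # synthetic digit labels 0..9
features = np.maximum(X @ body, 0.0)         # frozen forward pass (ReLU)

# Fine-tune only a new 16 -> 10 softmax output layer (option one).
W = np.zeros((16, 10))
b = np.zeros(10)

def loss_and_grads(W, b):
    logits = features @ W + b
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(len(y)), y]).mean()        # cross-entropy
    p[np.arange(len(y)), y] -= 1.0                        # dL/dlogits
    p /= len(y)
    return loss, features.T @ p, p.sum(axis=0)

loss_before, _, _ = loss_and_grads(W, b)      # ln(10) for an untrained head
for _ in range(500):
    _, gW, gb = loss_and_grads(W, b)
    W -= 0.02 * gW
    b -= 0.02 * gb
loss_after, _, _ = loss_and_grads(W, b)       # lower: the head has adapted
```

Only the small output layer is trained here; everything upstream is held fixed, which is exactly what makes fine-tuning cheap even when your own data set is small.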
Downloading a pre-trained model that someone else has trained and provided for free is one of those techniques where, by building on each other's work in the machine learning community, we can all get much better results thanks to the generosity of other researchers that have pre-trained and posted their neural networks online. But why does transfer learning even work? How could you possibly take parameters obtained by recognizing cats, dogs, cars, and people and use that to help you recognize something as different as handwritten digits? Here's some intuition behind it. If you are training a neural network to detect, say, different objects from images, then the first layer of the neural network may learn to detect edges in the image. We can think of these as somewhat low-level features of the image. Each of these squares is a visualization of what a single neuron has learned to detect: it has learned to group together pixels to find edges in an image. The next layer of the neural network then learns to group together edges to detect corners. Each of these is a visualization of what one neuron may have learned to detect; it has learned to detect simple corner-like shapes like these. The next layer of the neural network may have learned to detect somewhat more complex but still generic shapes, like basic curves or small shapes like these. By learning to detect lots of different images, you're teaching the neural network to detect edges, corners, and basic shapes. And that's why by training a neural network to detect things as diverse as cats and dogs and cars and people, you are helping it learn to detect these pretty generic features of images. Finding edges, corners, curves, and basic shapes is useful for many other computer vision tasks, such as recognizing handwritten digits.
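The "early layers learn edge detectors" intuition can be made concrete with a hand-written edge filter. The sketch below is illustrative, not from the course: it applies a Sobel-style vertical-edge filter (fixed, not learned) with a minimal convolution loop, showing the kind of computation a trained first-layer neuron ends up performing.

```python
import numpy as np

def convolve2d_valid(image, kernel):
    """Minimal 'valid' 2-D cross-correlation, written out for clarity."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter, like the detectors early layers tend to learn.
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

# Image: dark left half, bright right half -> one vertical edge in the middle.
image = np.concatenate([np.zeros((8, 4)), np.ones((8, 4))], axis=1)
response = convolve2d_valid(image, sobel_x)
# The response is strong only in the columns straddling the edge,
# and zero over the flat regions.
```

Because a filter like this responds to edges regardless of what object produced them, the feature it computes transfers from cats and cars to handwritten digits.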
One restriction of pre-training, though, is that the input type X has to be the same for the pre-training and the fine-tuning steps. So if the final task you want to solve is a computer vision task, then the pre-training step also has to be a neural network trained on the same type of input, namely an image of the desired dimensions. Conversely, if your goal is to build a speech recognition system to process audio, then a neural network pre-trained on images probably won't do much good on audio. Instead, you want a neural network pre-trained on audio data that you then fine-tune on your own audio data set. And the same goes for other types of applications. You can pre-train a neural network on text data, and if your application has the same feature input X of text data, then you can fine-tune that neural network on your own data. To summarize, these are the two steps for transfer learning. Step one is to download a neural network with parameters that have been pre-trained on a large data set with the same input type as your application, and that input type could be images, audio, text, or something else. Or if you don't want to download a neural network, maybe you can train your own; but in practice, if you're using images, say, it's much more common to download someone else's pre-trained neural network. Step two is to further train, or fine-tune, the network on your own data. I've found that if you can get a neural network pre-trained on a large data set, say a million images, then sometimes you can use a much smaller data set, maybe a thousand images, maybe even smaller, to fine-tune the neural network on your own data and get pretty good results. I've sometimes trained neural networks on as few as 50 images that worked quite well using this technique, when the network has already been pre-trained on a much larger data set. This technique isn't a panacea.
You can't get every application to work with just 50 images, but it does help a lot when the data set you have for your application isn't that large. And by the way, if you've heard of advanced techniques in the news like GPT-3 or BERT, or neural networks pre-trained on ImageNet, those are actually examples of neural networks that someone else has pre-trained on a very large image data set or text data set that can then be fine-tuned on other applications. If you haven't heard of GPT-3 or BERT or ImageNet, don't worry about it; but if you have, those have been successful applications of transfer learning in the machine learning literature. One of the things I like about transfer learning is that it's been one of the ways that the machine learning community has shared ideas and code and even parameters with each other. Thanks to the researchers that have pre-trained large neural networks and posted the parameters on the internet, freely for anyone else to download and use, anyone can take models others have pre-trained and fine-tune them on a potentially much smaller data set. In machine learning, all of us often end up building on the work of each other, and that open sharing of ideas, of code, of trained parameters is one of the ways that the machine learning community, all of us collectively, manage to do much better work than any single person by themselves can. And so I hope that you, joining the machine learning community, will someday maybe find a way to contribute back to this community as well. So that's it for transfer learning. I hope you find this technique useful. In the next video, I'd like to share with you some thoughts on the full cycle of a machine learning project. So when building a machine learning system, what are all the steps that are worth thinking about?
Let's take a look at that in the next video.
[{"start": 0.0, "end": 5.68, "text": " For an application where you don't have that much data,"}, {"start": 5.68, "end": 8.92, "text": " transfer learning is a wonderful technique that lets you use"}, {"start": 8.92, "end": 13.02, "text": " data from a different task to help on your application."}, {"start": 13.02, "end": 16.14, "text": " This is one of those techniques that I use very frequently."}, {"start": 16.14, "end": 19.32, "text": " Let's take a look at how transfer learning works."}, {"start": 19.32, "end": 22.54, "text": " Here's how transfer learning works."}, {"start": 22.54, "end": 24.76, "text": " Let's say you want to recognize"}, {"start": 24.76, "end": 28.72, "text": " the handwritten digits from zero through nine,"}, {"start": 28.72, "end": 32.82, "text": " but you don't have that much labeled data of these handwritten digits."}, {"start": 32.82, "end": 34.48, "text": " Here's what you can do."}, {"start": 34.48, "end": 39.68, "text": " Say you find a very large data set of one million images,"}, {"start": 39.68, "end": 42.16, "text": " of pictures of cats, dogs,"}, {"start": 42.16, "end": 45.76, "text": " cars, people, and so on, thousand classes."}, {"start": 45.76, "end": 49.2, "text": " You can then start by training a neural network on"}, {"start": 49.2, "end": 53.56, "text": " this large data set of a million images with a thousand different classes,"}, {"start": 53.56, "end": 57.0, "text": " and train the algorithm to take as input an image X,"}, {"start": 57.0, "end": 61.4, "text": " and learn to recognize any of these 1,000 different classes."}, {"start": 61.4, "end": 66.72, "text": " In this process, you end up learning parameters for the first layer of the neural network,"}, {"start": 66.72, "end": 71.0, "text": " W1, B1, for the second layer, W2, B2, and so on."}, {"start": 71.0, "end": 73.92, "text": " W3, B3, W4, B4, and W5,"}, {"start": 73.92, "end": 76.44, "text": " B5 for the upper layer."}, {"start": 76.44, "end": 79.2, "text": " 
To apply transfer learning,"}, {"start": 79.2, "end": 83.06, "text": " what you do is then make a copy of this neural network,"}, {"start": 83.06, "end": 87.32000000000001, "text": " where you would keep the parameters W1, B1,"}, {"start": 87.32000000000001, "end": 92.36, "text": " W2, B2, W3, B3, and W4, B4."}, {"start": 92.36, "end": 94.68, "text": " But for the last layer,"}, {"start": 94.68, "end": 101.04, "text": " you would eliminate the upper layer and replace it with a much smaller upper layer,"}, {"start": 101.04, "end": 107.24000000000001, "text": " with just 10 rather than 1,000 upper units."}, {"start": 107.24000000000001, "end": 110.76, "text": " These 10 upper units will correspond to the classes zero,"}, {"start": 110.76, "end": 114.68, "text": " one, through nine that you want your neural network to recognize."}, {"start": 114.68, "end": 116.96000000000001, "text": " Notice that the parameters W5,"}, {"start": 116.96000000000001, "end": 122.24000000000001, "text": " B5, they can't be copied over because the dimension of this layer has changed."}, {"start": 122.24000000000001, "end": 125.68, "text": " So you need to come up with new parameters,"}, {"start": 125.68, "end": 130.36, "text": " W5, B5, that you need to train from scratch rather than just"}, {"start": 130.36, "end": 134.6, "text": " copy it from the previous neural network."}, {"start": 134.6, "end": 138.64000000000001, "text": " In transfer learning, what you can do is use"}, {"start": 138.64, "end": 142.64, "text": " the parameters from the first four layers,"}, {"start": 142.64, "end": 146.6, "text": " really all the layers except the final upper layer as a starting point for"}, {"start": 146.6, "end": 151.04, "text": " the parameters and then run an optimization algorithm such as gradient descent,"}, {"start": 151.04, "end": 153.44, "text": " or the Adam optimization algorithm with"}, {"start": 153.44, "end": 158.23999999999998, "text": " the parameters initialized using the values 
from this neural network up on top."}, {"start": 158.23999999999998, "end": 164.48, "text": " So in detail, there are two options for how you can train this neural network's parameters."}, {"start": 164.48, "end": 168.04, "text": " Option one is you only train the upper layers parameters."}, {"start": 168.04, "end": 171.2, "text": " So you would take the parameters W1,"}, {"start": 171.2, "end": 173.23999999999998, "text": " B1, W2, B2, through W4,"}, {"start": 173.23999999999998, "end": 176.44, "text": " P4 as the values from on top and just hold them"}, {"start": 176.44, "end": 180.28, "text": " fixed and don't even bother to change them and use an algorithm like"}, {"start": 180.28, "end": 184.48, "text": " stochastic gradient descent or the Adam optimization algorithm to only update"}, {"start": 184.48, "end": 188.76, "text": " W5, B5 to lower the cost function,"}, {"start": 188.76, "end": 192.23999999999998, "text": " the usual cost function that you use for learning to recognize"}, {"start": 192.23999999999998, "end": 196.68, "text": " these digits zero to nine from a small training set of these digits zero to nine."}, {"start": 196.68, "end": 198.48000000000002, "text": " So there's option one."}, {"start": 198.48000000000002, "end": 201.92000000000002, "text": " Option two would be to train all the parameters in the network,"}, {"start": 201.92000000000002, "end": 204.28, "text": " including W1, B1, W2, B2,"}, {"start": 204.28, "end": 206.68, "text": " all the way through W5, B5."}, {"start": 206.68, "end": 209.52, "text": " But the first four layers parameters would be"}, {"start": 209.52, "end": 213.32, "text": " initialized using the values that you had trained on top."}, {"start": 213.32, "end": 216.96, "text": " If you have a very, very small training set,"}, {"start": 216.96, "end": 219.8, "text": " then option one might work a little bit better."}, {"start": 219.8, "end": 223.28, "text": " But if you have a training set that's a little bit larger,"}, 
{"start": 223.28, "end": 226.20000000000002, "text": " then option two might work a little bit better."}, {"start": 226.2, "end": 232.04, "text": " This algorithm is called transfer learning because the intuition is by"}, {"start": 232.04, "end": 235.6, "text": " learning to recognize cats, dogs, cars, people, and so on,"}, {"start": 235.6, "end": 239.72, "text": " it will hopefully have learned some plausible sets of parameters for"}, {"start": 239.72, "end": 243.35999999999999, "text": " the earlier layers for processing image inputs."}, {"start": 243.35999999999999, "end": 247.35999999999999, "text": " Then by transferring these parameters to the new neural network,"}, {"start": 247.35999999999999, "end": 252.2, "text": " the new neural network starts off with the parameters in a much better place."}, {"start": 252.2, "end": 255.04, "text": " So that with just a little bit of further learning,"}, {"start": 255.04, "end": 258.08, "text": " hopefully it can end up at a pretty good model."}, {"start": 258.08, "end": 262.03999999999996, "text": " These two steps are first training on a large data set and"}, {"start": 262.03999999999996, "end": 266.4, "text": " then tuning the parameters further on a smaller data set."}, {"start": 266.4, "end": 270.84, "text": " Go by the name of supervised pre-training for the step on top."}, {"start": 270.84, "end": 275.15999999999997, "text": " That's when you train the neural network on a very large data set of say a million"}, {"start": 275.15999999999997, "end": 278.44, "text": " images, not quite a related task."}, {"start": 278.44, "end": 282.24, "text": " And then the second step is called fine tuning,"}, {"start": 282.24, "end": 285.2, "text": " where you take the parameters that you had initialized or"}, {"start": 285.2, "end": 289.88, "text": " gotten from supervised pre-training and then run gradient descent further to"}, {"start": 289.88, "end": 294.08, "text": " fine tune the ways to suit the specific application of"}, 
{"start": 294.08, "end": 296.72, "text": " handwritten digit recognition that you may have."}, {"start": 296.72, "end": 301.6, "text": " And so if you have a small data set, even tens or hundreds or thousands or"}, {"start": 301.6, "end": 304.96000000000004, "text": " just tens of thousands of images of the handwritten digits,"}, {"start": 304.96000000000004, "end": 309.52, "text": " being able to learn from this million images of a not quite related task can"}, {"start": 309.52, "end": 314.03999999999996, "text": " actually help your learning algorithms performance a lot."}, {"start": 314.03999999999996, "end": 319.59999999999997, "text": " One nice thing about transfer learning as well is maybe you don't need to be"}, {"start": 319.59999999999997, "end": 322.88, "text": " the one to carry out supervised pre-training."}, {"start": 322.88, "end": 326.79999999999995, "text": " For a lot of neural networks, there will already be researchers that have already"}, {"start": 326.79999999999995, "end": 332.91999999999996, "text": " trained a neural network on a large image and will have posted a trained"}, {"start": 332.91999999999996, "end": 338.4, "text": " neural networks on the internet, freely licensed for anyone to download and use."}, {"start": 338.4, "end": 342.47999999999996, "text": " And what that means is rather than carrying out the first step yourself,"}, {"start": 342.47999999999996, "end": 346.15999999999997, "text": " you can just download a neural network that someone else may have spent weeks"}, {"start": 346.15999999999997, "end": 351.0, "text": " training and then replace the upper layer with your own upper layer and"}, {"start": 351.0, "end": 356.15999999999997, "text": " carry out either option one or option two to fine tune a neural network that"}, {"start": 356.15999999999997, "end": 360.47999999999996, "text": " someone else has already carried out supervised pre-training on and just do a"}, {"start": 360.47999999999996, "end": 364.12, "text": " 
little bit of fine tuning to quickly be able to get to neural network that"}, {"start": 364.12, "end": 366.03999999999996, "text": " performs well on your task."}, {"start": 366.04, "end": 370.84000000000003, "text": " Downloading a pre-trained model that someone else has trained and provided for"}, {"start": 370.84000000000003, "end": 375.48, "text": " free is one of those techniques where by building on each other's work in the"}, {"start": 375.48, "end": 380.08000000000004, "text": " machine learning community, we can all get much better results by the generosity"}, {"start": 380.08000000000004, "end": 384.88, "text": " of other researchers that have pre-trained and posted their neural networks online."}, {"start": 384.88, "end": 387.88, "text": " But why does transfer learning even work?"}, {"start": 387.88, "end": 391.84000000000003, "text": " How could you possibly take parameters obtained by recognizing cats, dogs,"}, {"start": 391.84, "end": 396.79999999999995, "text": " cars, and people and use that to help you recognize something as different as"}, {"start": 396.79999999999995, "end": 398.88, "text": " handwritten digits?"}, {"start": 398.88, "end": 401.71999999999997, "text": " Here's some intuition behind it."}, {"start": 401.71999999999997, "end": 409.15999999999997, "text": " If you are training a neural network to detect, say, different objects from images,"}, {"start": 409.15999999999997, "end": 415.32, "text": " then the first layer of a neural network may learn to detect edges in the image."}, {"start": 415.32, "end": 418.84, "text": " We think of these as somewhat low-level features in the image,"}, {"start": 418.84, "end": 420.96, "text": " which is to detect edges."}, {"start": 420.96, "end": 424.84, "text": " Each of these squares is a visualization of what a single neuron has learned to"}, {"start": 424.84, "end": 430.71999999999997, "text": " detect, has learned to group together pixels to find edges in an image."}, {"start": 
430.71999999999997, "end": 435.08, "text": " The next layer of the neural network then learns to group together edges to detect"}, {"start": 435.08, "end": 436.84, "text": " corners."}, {"start": 436.84, "end": 441.88, "text": " And so each of these is a visualization of what one neuron may have learned to"}, {"start": 441.88, "end": 446.08, "text": " detect and has learned to detect little simple shapes like corner-like shapes"}, {"start": 446.08, "end": 447.28, "text": " like this."}, {"start": 447.28, "end": 451.15999999999997, "text": " And the next layer of the neural network may have learned to detect somewhat more"}, {"start": 451.15999999999997, "end": 456.28, "text": " complex but still generic shapes like basic curves or small little shapes like"}, {"start": 456.28, "end": 457.55999999999995, "text": " these."}, {"start": 457.55999999999995, "end": 463.32, "text": " And that's why by learning on detecting lots of different images, you're"}, {"start": 463.32, "end": 467.44, "text": " teaching the neural network to detect edges, corners, and basic shapes."}, {"start": 467.44, "end": 471.28, "text": " And that's why by training a neural network to detect things as diverse as"}, {"start": 471.28, "end": 475.88, "text": " cats and dogs and cars and people, you are helping it to learn to detect these"}, {"start": 475.88, "end": 479.76, "text": " pretty generic features of images."}, {"start": 479.76, "end": 485.96, "text": " And finding edges, corners, curves, basic shapes, this is useful for many other"}, {"start": 485.96, "end": 490.88, "text": " computer vision tasks such as recognizing handwritten digits."}, {"start": 490.88, "end": 496.36, "text": " One restriction of pre-training though is that the image type X has to be the"}, {"start": 496.36, "end": 499.71999999999997, "text": " same for the pre-training and the fine-tuning steps."}, {"start": 499.71999999999997, "end": 504.71999999999997, "text": " So if the final task you want to solve is a computer 
vision task, then the"}, {"start": 504.72, "end": 508.84000000000003, "text": " pre-training step also has to be a neural network trained on the same type of"}, {"start": 508.84000000000003, "end": 513.12, "text": " input, namely an image of the desired dimensions."}, {"start": 513.12, "end": 518.0, "text": " Conversely, if your goal is to build a speech recognition system to process"}, {"start": 518.0, "end": 522.8000000000001, "text": " audio, then a neural network pre-trained on images probably won't do much good on"}, {"start": 522.8000000000001, "end": 523.8000000000001, "text": " audio."}, {"start": 523.8000000000001, "end": 527.52, "text": " Instead, you want a neural network pre-trained on audio data that you then"}, {"start": 527.52, "end": 529.88, "text": " fine-tune on your own audio data set."}, {"start": 529.88, "end": 532.08, "text": " And the same for other types of applications."}, {"start": 532.08, "end": 537.1600000000001, "text": " You can pre-train a neural network on text data, and if your application has the"}, {"start": 537.1600000000001, "end": 542.1600000000001, "text": " same feature input X of text data, then you can fine-tune that neural network on"}, {"start": 542.1600000000001, "end": 543.5600000000001, "text": " your own data."}, {"start": 543.5600000000001, "end": 547.36, "text": " To summarize, these are the two steps for transfer learning."}, {"start": 547.36, "end": 553.24, "text": " Step one is download a neural network with parameters that have been pre-trained"}, {"start": 553.24, "end": 557.48, "text": " on a large data set with the same input type as your application."}, {"start": 557.48, "end": 561.2800000000001, "text": " And that input type could be images, audio, text, or something else."}, {"start": 561.28, "end": 565.4399999999999, "text": " Or if you don't want to download a neural network, maybe you can train your own."}, {"start": 565.4399999999999, "end": 570.28, "text": " But in practice, if you're using images, 
say, it's much more common to download"}, {"start": 570.28, "end": 572.6, "text": " someone else's pre-trained neural network."}, {"start": 572.6, "end": 579.1999999999999, "text": " Then further train or fine-tune the network on your own data."}, {"start": 579.1999999999999, "end": 583.3199999999999, "text": " And I found that if you can get a neural network pre-trained on a large data set,"}, {"start": 583.3199999999999, "end": 588.8, "text": " say a million images, then sometimes you can use a much smaller data set, maybe a"}, {"start": 588.8, "end": 593.4799999999999, "text": " thousand images, maybe even smaller, to fine-tune the neural network on your own"}, {"start": 593.4799999999999, "end": 596.4, "text": " data and get pretty good results."}, {"start": 596.4, "end": 601.3199999999999, "text": " And I've sometimes trained neural networks on as few as 50 images that work"}, {"start": 601.3199999999999, "end": 605.9599999999999, "text": " quite well using this technique when it has already been pre-trained on a much"}, {"start": 605.9599999999999, "end": 607.4799999999999, "text": " larger data set."}, {"start": 607.4799999999999, "end": 609.04, "text": " This technique isn't panacea."}, {"start": 609.04, "end": 613.92, "text": " You can't get every application to work just on 50 images, but it does help a lot"}, {"start": 613.92, "end": 618.8, "text": " when the data set you have for your application isn't that large."}, {"start": 618.8, "end": 623.8399999999999, "text": " And by the way, if you've heard of advanced techniques in the news like GPT-3"}, {"start": 623.8399999999999, "end": 630.3199999999999, "text": " or BERT or neural networks pre-trained on ImageNet, those are actually examples of"}, {"start": 630.3199999999999, "end": 634.8, "text": " neural networks that someone else has pre-trained on a very large image data"}, {"start": 634.8, "end": 639.36, "text": " set or a text data set that can then be fine-tuned on other applications."}, {"start": 
639.36, "end": 642.92, "text": " If you haven't heard of GPT-3 or BERT or ImageNet, don't worry about it, but if you"}, {"start": 642.92, "end": 644.04, "text": " have,"}, {"start": 644.04, "end": 647.8399999999999, "text": " those have been successful applications of pre-training in the machine learning"}, {"start": 647.8399999999999, "end": 648.56, "text": " literature."}, {"start": 657.56, "end": 661.0799999999999, "text": " One of the things I like about transfer learning is it's been one of the ways"}, {"start": 661.0799999999999, "end": 666.0, "text": " that the machine learning community has shared ideas and code and even parameters"}, {"start": 666.0, "end": 670.5999999999999, "text": " with each other, because thanks to the researchers that have pre-trained large"}, {"start": 670.6, "end": 675.08, "text": " neural networks and posted the parameters on the internet freely for anyone else"}, {"start": 675.08, "end": 680.12, "text": " to download and use, this empowers anyone to take models that were pre-trained and"}, {"start": 680.12, "end": 683.6800000000001, "text": " fine-tune them on potentially a much smaller data set."}, {"start": 683.6800000000001, "end": 688.5600000000001, "text": " In machine learning, all of us end up often building on the work of each other,"}, {"start": 688.5600000000001, "end": 694.76, "text": " and that open sharing of ideas, of code, of trained parameters is one of the ways"}, {"start": 694.76, "end": 699.36, "text": " that the machine learning community, all of us collectively, manage to do much"}, {"start": 699.36, "end": 702.6,
"text": " better work than any single person by themselves can."}, {"start": 702.6, "end": 706.84, "text": " And so I hope that you, joining the machine learning community, will someday"}, {"start": 706.84, "end": 710.2, "text": " maybe find a way to contribute back to this community as well."}, {"start": 710.2, "end": 713.36, "text": " So that's it for pre-training."}, {"start": 713.36, "end": 715.8000000000001, "text": " I hope you find this technique useful."}, {"start": 715.8000000000001, "end": 721.12, "text": " And in the next video, I'd like to share with you some thoughts on the full cycle"}, {"start": 721.12, "end": 723.52, "text": " of a machine learning project."}, {"start": 723.52, "end": 727.2, "text": " So when building a machine learning system, what are all the steps that are"}, {"start": 727.2, "end": 728.44, "text": " worth thinking about?"}, {"start": 728.44, "end": 730.44, "text": " Let's take a look at that in the next video."}]
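The transfer learning recipe transcribed above (copy the early layers, replace the output layer, then either freeze the copied layers, option 1, or fine-tune everything, option 2) can be sketched in plain numpy. This is a toy illustration, not code from the course: the single hidden layer standing in for layers 1 through 4, the layer sizes, the learning rate, and the random "pre-trained" weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Stand-ins for the copied, pre-trained layers W1..W4, B1..B4 (collapsed here
# into one hidden layer).  In practice these would come from training on the
# large million-image task, or from a downloaded pre-trained network.
W1 = rng.normal(0.0, 0.5, size=(64, 16))   # 64 input features -> 16 hidden units
b1 = np.zeros(16)

# New output layer with 10 units for the digit classes 0..9.  W5, B5 cannot be
# copied over (the old output layer had 1,000 units), so they start fresh.
W5 = rng.normal(0.0, 0.5, size=(16, 10))
b5 = np.zeros(10)

def forward(X):
    H = np.maximum(0.0, X @ W1 + b1)       # ReLU hidden layer
    return H, softmax(H @ W5 + b5)

def fine_tune(X, y, epochs=300, lr=0.1, train_all=False):
    """Option 1 (train_all=False): hold the copied layers fixed and train only
    the new output layer.  Option 2 (train_all=True): update every layer,
    initialized from the pre-trained values."""
    global W1, b1, W5, b5
    Y = np.eye(10)[y]                      # one-hot labels
    for _ in range(epochs):
        H, P = forward(X)
        dZ2 = (P - Y) / len(X)             # softmax + cross-entropy gradient
        dH = (dZ2 @ W5.T) * (H > 0)        # gradient through ReLU (pre-update W5), for option 2
        W5 -= lr * (H.T @ dZ2)
        b5 -= lr * dZ2.sum(axis=0)
        if train_all:                      # option 2 also moves the copied layers
            W1 -= lr * (X.T @ dH)
            b1 -= lr * dH.sum(axis=0)

# A small labeled set standing in for the handwritten digits.
X = rng.normal(size=(100, 64))
y = rng.integers(0, 10, size=100)
fine_tune(X, y, train_all=False)           # small data set, so option 1
probs = forward(X)[1]
acc = float((probs.argmax(axis=1) == y).mean())
```

With a somewhat larger labeled set, calling `fine_tune(X, y, train_all=True)` corresponds to option 2, where the copied parameters serve only as the starting point for gradient descent.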
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=ekahwgWO5PQ
6.14 Machine Learning development process | Full cycle of a machine learning project -[ML|Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
So far, we've talked a lot about how to train a model and also talked a bit about how to get data for your machine learning application. But when I'm building a machine learning system, I find that training a model is just part of the puzzle. In this video, I'd like to share with you what I think of as the full cycle of a machine learning project. That is, when you're building a valuable machine learning system, what are the steps to think about and plan for? Let's take a look. Let me use speech recognition as an example to illustrate the full cycle of a machine learning project. The first step of a machine learning project is to scope the project. In other words, decide what the project is and what you want to work on. So for example, I once decided to work on speech recognition for voice search, that is, doing web search by speaking to your mobile phone rather than typing into it. So that's project scoping. After deciding what to work on, you have to collect data. So decide what data you need to train your machine learning system and go and do the work to get the audio and get the transcripts or the labels for your data set. So that's data collection. After you have your initial data collection, you can then start to train the model. And so here you would train a speech recognition system and carry out error analysis and iteratively improve your model. And it's not at all uncommon, after you've started training the model, for an error analysis or for a bias-variance analysis to tell you that you might want to go back to collect more data, maybe collect more data of everything or just collect more data of a specific type where your error analysis tells you you want to improve the performance of your learning algorithm. For example, once when working on speech, I realized that my model was doing particularly poorly when there was car noise in the background.
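A common remedy for this kind of failure is data augmentation: mixing recorded background noise into clean speech clips at a chosen signal-to-noise ratio. The sketch below is illustrative only; the synthetic "speech" and "car noise" arrays, the helper name, and the 10 dB target are assumptions, not the actual pipeline used in the course.

```python
import numpy as np

def add_background_noise(clean, noise, snr_db):
    """Mix a noise clip into a clean clip at a target signal-to-noise ratio
    (in dB).  Both are 1-D float arrays at the same sample rate; the noise
    is tiled or truncated to match the clean clip's length."""
    reps = int(np.ceil(len(clean) / len(noise)))
    noise = np.tile(noise, reps)[: len(clean)]
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    # Choose a scale so that 10*log10(p_clean / p_scaled_noise) == snr_db.
    scale = np.sqrt(p_clean / (p_noise * 10.0 ** (snr_db / 10.0)))
    return clean + scale * noise

rng = np.random.default_rng(1)
speech = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))  # stand-in "speech"
car_noise = rng.normal(0.0, 0.3, size=4000)                  # stand-in car noise
augmented = add_background_noise(speech, car_noise, snr_db=10.0)
```

Running the same clean clip through this with many different noise recordings and SNR levels is one way to multiply a small speech data set.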
So if it sounded like someone was speaking in a car, my speech system performed poorly, so I decided to get more data, using data augmentation to get more speech data that sounds like it was recorded in a car, in order to improve the performance of my learning algorithm. So you go around this loop a few times: train the model, carry out error analysis, go back to collect more data, and maybe do this for a while until eventually you think the model is good enough to then deploy in a production environment. And what that means is you make it available for users to use. And when you deploy a system, you want to also make sure that you continue to monitor the performance of the system and to maintain the system in case the performance gets worse, to bring its performance back up. Deploying is more than just hosting your machine learning model on a server; I'll say a little bit more about why you need to maintain these machine learning systems on the next slide. But after this deployment, sometimes you realize that it's not working as well as you hoped, and you go back to train the model to improve it again or even go back and get more data. In fact, if you have permission from users to use data from your production deployment, sometimes that data from your working speech system can give you access to even more data with which to keep on improving the performance of your system. Now I think you have a sense of what scoping a project means, and we've talked a bunch about collecting data and training models in this course. But let me share with you a little bit more detail about what deploying in production might look like. After you've trained a high performing machine learning model, say a speech recognition model, a common way to deploy the model would be to take your machine learning model and implement it in a server, which I'm going to call an inference server, whose job it is to call your machine learning model, your trained model, in order to make predictions.
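A minimal sketch of such an inference server, using only Python's standard library, might look like this. The `/predict` route, the JSON payload shape, and the toy `transcribe` stand-in for the trained model are all invented for illustration; a production system would run a real model behind a hardened web framework rather than `http.server`.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def transcribe(audio_features):
    """Stand-in for the trained speech model.  Real code would run the
    neural network here; this toy just reports the input length."""
    return f"transcript for {len(audio_features)} frames"

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        x = json.loads(self.rfile.read(length))        # input x from the app
        y_hat = transcribe(x["audio"])                 # model prediction y hat
        body = json.dumps({"transcript": y_hat}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                      # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), InferenceHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The app's side of the API call: POST the recorded audio, get y hat back.
req = Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"audio": [0.1, 0.2, 0.3]}).encode(),
    headers={"Content-Type": "application/json"},
)
reply = json.loads(urlopen(req).read())
server.shutdown()
server.server_close()
```

The app never sees the model itself; it only exchanges the input x and the prediction over the API.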
Then if your team has implemented a mobile app, say a search application, then when the user talks to the mobile app, the mobile app can make an API call to pass the audio clip that was recorded to your inference server, and the inference server's job is to apply the machine learning model to it and then return the prediction of your model, which in this case would be the text transcript of what was said. So this would be a common way of implementing an application that calls, via an API, an inference server that has your model repeatedly make predictions based on the input x. So this would be a common pattern where, depending on the application that's implemented, you have an API call to give your learning algorithm the input x, and your machine learning model would then output the prediction, say y hat. To implement this, some software engineering may be needed to write all the code that does all of these things. And depending on whether your application needs to serve just a handful of users or millions of users, the amount of software engineering needed can be quite different. I've built software that serves just a handful of users on my laptop, and I've also built software that serves hundreds of millions of users, requiring significant data center resources. So depending on the scale of application needed, software engineering may be required to make sure that your inference server is able to make reliable and efficient predictions, hopefully at not too high a computational cost. Software engineering may be needed to manage scaling to a large number of users. You often want to log the data you're getting, both the inputs x as well as the predictions y hat, assuming that user privacy and consent allows you to store this data. And this data, if you have access to it, is also very useful for system monitoring.
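Logging x and y hat is what makes that kind of monitoring possible. The class below is a hypothetical sketch: the `DriftMonitor` name, the 0.5 confidence cutoff, and the rule of flagging retraining when the rolling rate of low-confidence predictions crosses a threshold are illustrative choices, not a standard API.

```python
from collections import deque

class DriftMonitor:
    """Tracks a rolling rate of low-confidence predictions and raises a flag
    when it drifts past a threshold: one simple signal that the data may have
    shifted and the model may need retraining and a model update."""

    def __init__(self, window=1000, threshold=0.2, min_confidence=0.5):
        self.recent = deque(maxlen=window)   # sliding window of recent requests
        self.threshold = threshold
        self.min_confidence = min_confidence

    def log(self, x, y_hat, confidence):
        # A real system would also persist x and y_hat (given user permission)
        # so they can become future training data.
        self.recent.append(confidence < self.min_confidence)

    @property
    def suspect_rate(self):
        return sum(self.recent) / len(self.recent) if self.recent else 0.0

    def needs_retraining(self):
        return self.suspect_rate > self.threshold

monitor = DriftMonitor(window=100, threshold=0.2)
for i in range(100):
    # Simulate drift: confidence drops once unfamiliar queries start arriving,
    # e.g. searches for newly famous names the model never saw in training.
    conf = 0.9 if i < 60 else 0.3
    monitor.log(x=f"query {i}", y_hat="some transcript", confidence=conf)
```

Here the last 40 of 100 logged predictions fall below the confidence cutoff, so the monitor's flag trips and signals that it may be time to retrain.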
For example, I once built a speech recognition system on a certain data set that I had, but when there were new celebrities that suddenly became well known, or elections caused new politicians to be elected, then people would search for these new names that were not in the training set and that my system did poorly on. And because we were monitoring the system, we were able to figure out when the data was shifting and the algorithm was becoming less accurate. And this allowed us to retrain the model and then to carry out a model update to replace the old model with a new one. So the deployment process can require some amount of software engineering. For some applications, if you're just running it on a laptop or on one or two servers, maybe not that much software engineering is needed. And depending on the team you're working on, it's possible that you build the machine learning model but a different team is responsible for deploying it. But there is a growing field in machine learning called MLOps; this stands for machine learning operations. And this refers to the practice of how to systematically build, deploy, and maintain machine learning systems: doing all of these things to make sure that your machine learning model is reliable, scales well, has good logs, is monitored, and that you have the opportunity to make updates to the model as appropriate to keep it running well. For example, if you're deploying your system to millions of people, you may want to make sure you have a highly optimized implementation so that the compute cost of serving millions of people is not too high. So in this and the last class, we spent a lot of time talking about how to train a machine learning model, and that is absolutely the critical piece to making sure you have a high performance system. And if you ever have to deploy a system to millions of people, these are some additional steps that you probably have to address.
You'll want to think about and plan for them at that point as well. Before moving on from the topic of the machine learning development process, there's one more set of ideas that I want to share with you that relates to the ethics of building machine learning systems. This is a crucial topic for many applications. So let's take a look at this in the next video.
[{"start": 0.0, "end": 7.640000000000001, "text": " So far, we've talked a lot about how to train a model and also talked a bit about how to"}, {"start": 7.640000000000001, "end": 11.32, "text": " get data for your machine learning application."}, {"start": 11.32, "end": 16.64, "text": " But when I'm building a machine learning system, I find that training a model is just part"}, {"start": 16.64, "end": 18.04, "text": " of the puzzle."}, {"start": 18.04, "end": 22.8, "text": " In this video, I'd like to share with you what I think of as the full cycle of a machine"}, {"start": 22.8, "end": 23.8, "text": " learning project."}, {"start": 23.8, "end": 29.12, "text": " That is, when you're building a valuable machine learning system, what are the steps to think"}, {"start": 29.12, "end": 30.36, "text": " about and plan for?"}, {"start": 30.36, "end": 31.96, "text": " Let's take a look."}, {"start": 31.96, "end": 36.4, "text": " Let me use speech recognition as an example to illustrate the full cycle of a machine"}, {"start": 36.4, "end": 38.6, "text": " learning project."}, {"start": 38.6, "end": 41.52, "text": " The first step of a machine learning project is to scope the project."}, {"start": 41.52, "end": 46.2, "text": " In other words, decide what is the project and what you want to work on."}, {"start": 46.2, "end": 51.28, "text": " So for example, I once decided to work on speech recognition for voice search, that"}, {"start": 51.28, "end": 56.16, "text": " is to do web search using speaking to your mobile phone rather than typing into your"}, {"start": 56.16, "end": 57.44, "text": " mobile phone."}, {"start": 57.44, "end": 59.839999999999996, "text": " So that's project scoping."}, {"start": 59.839999999999996, "end": 62.839999999999996, "text": " After deciding what to work on, you have to collect data."}, {"start": 62.839999999999996, "end": 67.6, "text": " So decide what data you need to train your machine learning system and go and do the"}, {"start": 67.6, 
"end": 72.2, "text": " work to get the audio and get the transcripts or the labels for your data set."}, {"start": 72.2, "end": 75.16, "text": " So that's data collection."}, {"start": 75.16, "end": 80.56, "text": " After you have your initial data collection, you can then start to train the model."}, {"start": 80.56, "end": 86.6, "text": " And so here you would train a speech recognition system and carry out error analysis and iteratively"}, {"start": 86.6, "end": 89.24, "text": " improve your model."}, {"start": 89.24, "end": 96.72, "text": " And it's not at all uncommon after you've started training the model for an error analysis"}, {"start": 96.72, "end": 101.08, "text": " or for a bias-variance analysis to tell you that you might want to go back to collect"}, {"start": 101.08, "end": 106.67999999999999, "text": " more data, maybe collect more data of everything or just collect more data of a specific type"}, {"start": 106.67999999999999, "end": 111.08, "text": " where your error analysis tells you you want to improve the performance of your learning"}, {"start": 111.08, "end": 112.36, "text": " algorithm."}, {"start": 112.36, "end": 117.24, "text": " For example, once when working on speech, I realized that my model was doing politically"}, {"start": 117.24, "end": 120.56, "text": " poorly when there was car noise in the background."}, {"start": 120.56, "end": 125.44, "text": " So if it sounded like someone was speaking in a car, my speech system performed poorly,"}, {"start": 125.44, "end": 131.6, "text": " decided to get more data, actually using data augmentation to get more speech data that"}, {"start": 131.6, "end": 137.48, "text": " sounds like it was a car in order to improve the performance of my learning algorithm."}, {"start": 137.48, "end": 141.84, "text": " So you go around this loop a few times, train the model, carry error analysis, go back to"}, {"start": 141.84, "end": 146.76, "text": " collect more data, maybe do this for a while until 
eventually you think the model is good"}, {"start": 146.76, "end": 150.0, "text": " enough to then deploy in a production environment."}, {"start": 150.0, "end": 154.56, "text": " And what that means is you make it available for users to use."}, {"start": 154.56, "end": 159.2, "text": " And when you deploy a system, you want to also make sure that you continue to monitor"}, {"start": 159.2, "end": 164.44, "text": " the performance of the system and to maintain the system in case the performance gets worse,"}, {"start": 164.44, "end": 166.52, "text": " to bring its performance back up."}, {"start": 166.52, "end": 171.72, "text": " Instead of just hosting your machine learning model on a server, I'll say a little bit more"}, {"start": 171.72, "end": 176.2, "text": " about why you need to maintain these machine learning systems on the next slide."}, {"start": 176.2, "end": 181.6, "text": " But after this deployment, sometimes you realize that it's not working as well as you hoped"}, {"start": 181.6, "end": 187.88, "text": " and you go back to train the model to improve it again or even go back and get more data."}, {"start": 187.88, "end": 195.12, "text": " In fact, if users and if you have permission to use data from your production deployment,"}, {"start": 195.12, "end": 201.12, "text": " sometimes that data from your working speech system can give you access to even more data"}, {"start": 201.12, "end": 204.68, "text": " with which to keep on improving the performance of your system."}, {"start": 204.68, "end": 208.76, "text": " Now I think you have a sense of what scoping a project means and we've talked a bunch about"}, {"start": 208.76, "end": 212.52, "text": " collecting data and training models in discourse."}, {"start": 212.52, "end": 217.28, "text": " But let me share with you a little bit more detail about what deploying in production"}, {"start": 217.28, "end": 219.70000000000002, "text": " might look like."}, {"start": 219.70000000000002, "end": 226.0, 
"text": " After you've trained a high performing machine learning model, say a speech recognition model,"}, {"start": 226.0, "end": 233.72, "text": " a common way to deploy the model would be to take your machine learning model and implement"}, {"start": 233.72, "end": 240.28, "text": " it in a server, which I'm going to call an inference server, whose job it is to call"}, {"start": 240.28, "end": 245.72, "text": " your machine learning model, your trained model, in order to make predictions."}, {"start": 245.72, "end": 252.28, "text": " Then if your team has implemented a mobile app, say a search application, then when the"}, {"start": 252.28, "end": 258.64, "text": " user talks to the mobile app, the mobile app can then make an API call to pass to your"}, {"start": 258.64, "end": 265.12, "text": " inference server the audio clip that was recorded and the inference service job is to apply"}, {"start": 265.12, "end": 271.4, "text": " the machine learning model to it and then return to it the prediction of your model,"}, {"start": 271.4, "end": 277.12, "text": " which in this case would be the text transcript of what was said."}, {"start": 277.12, "end": 284.0, "text": " So this would be a common way of implementing an application that calls via an API an inference"}, {"start": 284.0, "end": 289.92, "text": " server that has your model repeatedly make predictions based on the input x."}, {"start": 289.92, "end": 295.0, "text": " So this would be a common pattern where depending on the application that's implemented, you"}, {"start": 295.0, "end": 300.84000000000003, "text": " have an API call to give you learning algorithm the input x and your machine learning model"}, {"start": 300.84000000000003, "end": 305.26, "text": " would then output the prediction, say y hat."}, {"start": 305.26, "end": 311.44, "text": " To implement this, some software engineering may be needed to write all the code that does"}, {"start": 311.44, "end": 313.44, "text": " all of these 
things."}, {"start": 313.44, "end": 318.68, "text": " And depending on whether your application needs to serve just a few handful of users"}, {"start": 318.68, "end": 324.12, "text": " or millions of users, the amounts of software engineering needed can be quite different."}, {"start": 324.12, "end": 329.56, "text": " So I've built software that serve just a handful of users on my laptop and I've also built"}, {"start": 329.56, "end": 335.36, "text": " software that serves hundreds of millions of users requiring significant data center"}, {"start": 335.36, "end": 337.36, "text": " resources."}, {"start": 337.36, "end": 341.2, "text": " So depending on the scale of application needed software engineering may be needed to make"}, {"start": 341.2, "end": 347.72, "text": " sure that your inference server is able to make reliable and efficient predictions, hopefully"}, {"start": 347.72, "end": 350.72, "text": " at not too high a computational cost."}, {"start": 350.72, "end": 355.0, "text": " Software engineering may be needed to manage scaling to a large number of users."}, {"start": 355.0, "end": 359.88, "text": " You often want to log the data you're getting, both the inputs x as well as the predictions"}, {"start": 359.88, "end": 366.24, "text": " y hat, assuming that user privacy and consent allows you to store this data."}, {"start": 366.24, "end": 372.44, "text": " And this data, if you can access to it, is also very useful for system monitoring."}, {"start": 372.44, "end": 378.2, "text": " For example, I once built a speech recognition system on a certain data set that I had, but"}, {"start": 378.2, "end": 384.2, "text": " when there were new celebrities that suddenly became well known or elections caused new"}, {"start": 384.2, "end": 389.68, "text": " politicians to become elected, then people would search for these new names that were"}, {"start": 389.68, "end": 393.32, "text": " not in the training set and that my system did poorly on."}, {"start": 393.32, 
"end": 397.92, "text": " And it was because we're monitoring the system, it allowed us to figure out when the data"}, {"start": 397.92, "end": 401.41999999999996, "text": " was shifting and the algorithm was becoming less accurate."}, {"start": 401.41999999999996, "end": 409.14, "text": " And this allowed us to retrain the model and then to carry out a model update to replace"}, {"start": 409.14, "end": 412.48, "text": " the old model with a new one."}, {"start": 412.48, "end": 418.12, "text": " So the deployment process can require some amount of software engineering."}, {"start": 418.12, "end": 422.84000000000003, "text": " For some applications, if you're just running it on a laptop or on a one or two service,"}, {"start": 422.84000000000003, "end": 426.92, "text": " maybe not that much software engineering is needed."}, {"start": 426.92, "end": 431.84000000000003, "text": " And depending on the team you're working on, it's possible that you build the machine learning"}, {"start": 431.84000000000003, "end": 438.36, "text": " model, but there could be a different team responsible for deploying it."}, {"start": 438.36, "end": 444.24, "text": " But there is a growing field in machine learning called ML Ops, this stands for machine learning"}, {"start": 444.24, "end": 445.84000000000003, "text": " operations."}, {"start": 445.84000000000003, "end": 454.56, "text": " And this refers to the practice of how to systematically build and deploy and maintain"}, {"start": 454.56, "end": 459.2, "text": " machine learning systems to do all of these things to make sure that your machine learning"}, {"start": 459.2, "end": 466.28000000000003, "text": " model is reliable, scales well, has good logs, is monitored, and that you have the opportunity"}, {"start": 466.28000000000003, "end": 468.2, "text": " to make updates to the model."}, {"start": 468.2, "end": 471.0, "text": " As appropriate to keep it running well."}, {"start": 471.0, "end": 475.47999999999996, "text": " For 
example, if you're deploying your system to millions of people, you may want to make"}, {"start": 475.47999999999996, "end": 480.84, "text": " sure you have a highly optimized implementation so that the compute costs of serving millions"}, {"start": 480.84, "end": 484.15999999999997, "text": " of people is not too expensive."}, {"start": 484.15999999999997, "end": 488.76, "text": " So in this and the last class, we spent a lot of time talking about how to train a machine"}, {"start": 488.76, "end": 489.76, "text": " learning model."}, {"start": 489.76, "end": 495.08, "text": " And that is absolutely the critical piece to making sure you have a high performance"}, {"start": 495.08, "end": 496.08, "text": " system."}, {"start": 496.08, "end": 501.71999999999997, "text": " And if you ever have to deploy a system to millions of people, these are some additional"}, {"start": 501.71999999999997, "end": 504.88, "text": " steps that you probably have to address."}, {"start": 504.88, "end": 507.52, "text": " Think about an address at that point as well."}, {"start": 507.52, "end": 512.68, "text": " Before moving on from the topic of the machine learning development process, there's one"}, {"start": 512.68, "end": 517.46, "text": " more set of ideas that I want to share with you that relates to the ethics of building"}, {"start": 517.46, "end": 519.28, "text": " machine learning systems."}, {"start": 519.28, "end": 521.96, "text": " This is a crucial topic for many applications."}, {"start": 521.96, "end": 526.96, "text": " So let's take a look at this in the next video."}]
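The serving pattern described in these segments, where an app passes the input x to an inference server via an API call, gets back the model's prediction y hat, and logs both for monitoring, can be sketched in plain Python. This is a minimal toy sketch, not the course's code: the class names (`TranscribeModel`, `InferenceServer`), the dummy model, and the logging scheme are all illustrative assumptions.

```python
class TranscribeModel:
    """Stand-in for a trained speech model: maps an audio clip to text."""
    def predict(self, audio_clip):
        # A real model would run inference here; this returns a dummy transcript.
        return "transcript of " + audio_clip


class InferenceServer:
    """Receives requests carrying the input x, applies the trained model,
    returns the prediction y_hat, and logs (x, y_hat) for monitoring."""
    def __init__(self, model):
        self.model = model
        self.log = []  # (x, y_hat) pairs kept for monitoring and retraining

    def handle_request(self, x):
        y_hat = self.model.predict(x)  # apply the trained model to the input
        self.log.append((x, y_hat))    # log input and prediction for monitoring
        return y_hat


server = InferenceServer(TranscribeModel())
print(server.handle_request("clip_001.wav"))  # -> transcript of clip_001.wav
```

In a real deployment, `handle_request` would sit behind an HTTP endpoint, serving many users would require the scaling and efficiency engineering the lecture mentions, and inputs should only be logged where user privacy and consent allow.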
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=9ftZArU_8BA
6.15 Machine Learning development process | Fairness, bias, and ethics -[Machine Learning|Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content, please subscribe and leave a little blue thumbs-up. Take heart!
Machine learning algorithms today are affecting billions of people. You've heard me mention ethics in other videos before, and I hope that if you're building a machine learning system that affects people, you give some thought to making sure that your system is reasonably fair, reasonably free from bias, and that you're taking an ethical approach to your application. Let's take a look at some issues related to fairness, bias, and ethics. Unfortunately, in the history of machine learning, there have been a few systems, some widely publicized, that turned out to exhibit a completely unacceptable level of bias. For example, there was a hiring tool that was once shown to discriminate against women. The company that built the system stopped using it, but one wishes that this system had never been rolled out in the first place. There was also a well-documented example of face recognition systems that matched dark-skinned individuals to criminal mugshots much more often than lighter-skinned individuals. Clearly, this is not acceptable, and we should get better as a community at just not building and deploying systems with a problem like this in the first place. There have been systems that gave bank loan approvals in a way that was biased and discriminated against subgroups, and we would also really like learning algorithms not to have the toxic effect of reinforcing negative stereotypes. For example, I have a daughter, and if she searches online for certain professions and doesn't see anyone that looks like her, I would hate for that to discourage her from taking on certain professions. In addition to the issues of bias and fair treatment of individuals, there have also been adverse or negative use cases of machine learning algorithms. For example, there was a widely cited and widely viewed video, released with full disclosure and full transparency by the company BuzzFeed, of a deepfake of former US President Barack Obama.
And you can actually find and watch the whole video online if you want. The company that created this video did so with full transparency and full disclosure, but clearly using this technology to generate fake videos without consent and without disclosure would be unethical. We've also seen, unfortunately, social media sometimes spreading toxic or incendiary speech, because optimizing for user engagement has led algorithms to do so. There have been bots used to generate fake content, either for commercial purposes, such as posting fake comments on products, or for political purposes. And there are uses of machine learning to build harmful products, commit fraud, and so on. In parts of the machine learning world, just as in email, there has been a battle between the spammers and the anti-spam community. I am seeing today, for example in the financial industry, a battle between people trying to commit fraud and the people fighting fraud. And unfortunately, machine learning is used by some of the fraudsters and some of the spammers. So for goodness' sake, please don't build a machine learning system that has a negative impact on society. And if you are asked to work on an application that you consider unethical, I urge you to walk away. For what it's worth, there have been multiple times that I have looked at a project that seemed to be financially sound, you know, make money for some company, but I killed the project just on ethical grounds, because even though the financial case was sound, I felt that it would make the world worse off, and I just don't ever want to be involved in a project like that. Ethics is a very complicated and very rich subject that humanity has studied for at least a few thousand years.
When AI became more widespread, I actually went and read multiple books on philosophy and multiple books on ethics, because I was hoping, naively as it turned out, to come up with a checklist of five things we could do, so that once we did those five things, we would be ethical. But I failed, and I don't think anyone has ever managed to come up with a simple checklist of things to do that gives that level of concrete guidance about how to be ethical. So what I hope to share with you instead is not a checklist, because I wasn't able to come up with one, but just some general guidance and some suggestions for how to make sure that your work is less biased, more fair, and more ethical. And I hope that some of this guidance, which is relatively general, will help you with your work as well. So here are some suggestions for making your work more fair, less biased, and more ethical. Before deploying a system that could create harm, I will usually try to assemble a diverse team to brainstorm possible things that might go wrong, with an emphasis on possible harm to vulnerable groups. I have found many times in my life that having a more diverse team, and by diverse I mean diverse on multiple dimensions, ranging from gender to ethnicity to culture to many other traits, actually causes the team collectively to be better at coming up with ideas about things that might go wrong. And it increases the odds that we'll recognize a problem and fix it before rolling out the system and having it cause harm to some particular group. In addition to having a diverse team carry out brainstorming, I've also found it useful to carry out a literature search on any standards or guidelines for your industry or particular application area. For example, in the financial industry, there are starting to be established standards for what it means for a system, say one that decides who to approve loans to, to be reasonably fair and free from bias, and those standards, which are still emerging in different sectors, could inform your work, depending on what you're working on. After identifying possible problems, I have found it useful to then audit the system against these identified dimensions of possible harm prior to deployment. You saw in the last video the full cycle of a machine learning project, and one key step that's often a crucial line of defense against deploying something problematic is this: after you've trained a model, but before you deploy it in production, if the team has brainstormed that it may be biased against certain subgroups, such as certain genders or certain ethnicities, you can audit the system to measure its performance, see if it really is biased against certain genders, ethnicities, or other subgroups, and make sure that any problems are identified and fixed prior to deployment. Finally, I've found it useful to develop a mitigation plan if applicable, and one simple mitigation plan would be a rollback to an earlier system that we knew was reasonably fair. Then, even after deployment, you want to continue to monitor for harm, so that you can trigger the mitigation plan and act quickly in case there is a problem that needs to be addressed. For example, all of the self-driving car teams, prior to rolling out self-driving cars on the road, had developed mitigation plans for what to do in case a car ever got involved in an accident, so that if it did, there was already a mitigation plan they could execute immediately, rather than getting into an accident and only scrambling after the fact to figure out what to do. I've worked on many machine learning systems, and let me tell you, the issues of ethics, fairness, and bias are issues we should take seriously. It's not something to brush off. It's not something to take lightly.
Now of course, there are some projects with more serious ethical implications than others. For example, if I'm building a neural network to decide how long to roast my coffee beans, clearly the ethical implications of that seem significantly less than if, say, you are building a system to decide which bank loans to approve, which, if it's biased, can cause significant harm. But I hope that all of us working in machine learning can collectively keep on getting better, debate these issues, and spot and fix problems before they cause harm, so that we can avoid some of the mistakes the machine learning world has made before, because this stuff matters and the systems we build can affect a lot of people. And so that's it on the process of developing a machine learning system, and congratulations on getting to the end of this week's required videos. I have just two more optional videos this week for you, on addressing skewed datasets, that is, datasets where the ratio of positive to negative examples is very far from 50-50. It turns out that some special techniques are needed to address machine learning applications like that. So I hope to see you in the next, optional, video on how to handle skewed datasets.
[{"start": 0.0, "end": 6.640000000000001, "text": " Machine learning algorithms today are affecting billions of people."}, {"start": 6.640000000000001, "end": 11.94, "text": " You've heard me mention ethics in other videos before, and I hope that if you're building"}, {"start": 11.94, "end": 17.240000000000002, "text": " a machine learning system that affects people, that you give some thought to making sure"}, {"start": 17.240000000000002, "end": 23.76, "text": " that your system is reasonably fair, reasonably free from bias, and that you're taking an"}, {"start": 23.76, "end": 27.240000000000002, "text": " ethical approach to your application."}, {"start": 27.24, "end": 32.32, "text": " Let's take a look at some issues related to fairness, bias, and ethics."}, {"start": 32.32, "end": 37.48, "text": " Unfortunately, in the history of machine learning, there have been a few systems, some widely"}, {"start": 37.48, "end": 44.2, "text": " publicized, that turned out to exhibit a completely unacceptable level of bias."}, {"start": 44.2, "end": 49.36, "text": " For example, there was a hiring tool that was once shown to discriminate against women."}, {"start": 49.36, "end": 54.92, "text": " The company that built the system stopped using it, but one wishes that this system"}, {"start": 54.92, "end": 57.64, "text": " had never been rolled out in the first place."}, {"start": 57.64, "end": 63.96, "text": " Or there was also a well-documented example of face recognition systems that matched dark-skinned"}, {"start": 63.96, "end": 69.52, "text": " individuals to criminal mugshots much more often than lighter-skinned individuals."}, {"start": 69.52, "end": 74.88, "text": " And clearly, this is not acceptable, and we should get better at the community at just"}, {"start": 74.88, "end": 78.8, "text": " not building and deploying systems with a problem like this in the first place."}, {"start": 78.8, "end": 84.88, "text": " There have been systems that gave bank loan approvals 
in a way that was biased and discriminated"}, {"start": 84.88, "end": 91.39999999999999, "text": " against subgroups, and we also really like learning algorithms to not have the toxic effect"}, {"start": 91.39999999999999, "end": 94.28, "text": " of reinforcing negative stereotypes."}, {"start": 94.28, "end": 99.56, "text": " For example, I have a daughter, and if she searches online for certain professions and"}, {"start": 99.56, "end": 104.24, "text": " doesn't see anyone that looks like her, I would hate for that to discourage her from"}, {"start": 104.24, "end": 106.3, "text": " taking on certain professions."}, {"start": 106.3, "end": 113.19999999999999, "text": " In addition to the issues of bias and fair treatment of individuals, there have also"}, {"start": 113.2, "end": 118.96000000000001, "text": " been adverse use cases or negative use cases of machine learning algorithms."}, {"start": 118.96000000000001, "end": 125.8, "text": " For example, there was this widely cited and widely viewed video released with full disclosure"}, {"start": 125.8, "end": 133.68, "text": " and full transparency by the company BuzzFeed of a deep fake of former US President Barack"}, {"start": 133.68, "end": 134.68, "text": " Obama."}, {"start": 134.68, "end": 138.92000000000002, "text": " And you can actually find and watch the whole video online if you want."}, {"start": 138.92, "end": 145.04, "text": " But the company that created this video did so with full transparency and full disclosure."}, {"start": 145.04, "end": 152.35999999999999, "text": " But clearly using this technology to generate fake videos without consent and without disclosure"}, {"start": 152.35999999999999, "end": 153.95999999999998, "text": " would be unethical."}, {"start": 153.95999999999998, "end": 162.67999999999998, "text": " We've also seen, unfortunately, social media sometimes spreading toxic or incendiary speech"}, {"start": 162.67999999999998, "end": 167.39999999999998, "text": " because 
optimizing for user engagements has led to algorithms doing so."}, {"start": 167.4, "end": 174.96, "text": " There have been bots that were used to generate fake content for either commercial purposes,"}, {"start": 174.96, "end": 181.48000000000002, "text": " such as posting fake comments on products, or for political purposes."}, {"start": 181.48000000000002, "end": 186.56, "text": " And there are users of machine learning to build harmful products, commit fraud, and"}, {"start": 186.56, "end": 188.04000000000002, "text": " so on."}, {"start": 188.04000000000002, "end": 193.6, "text": " And in parts of the machine learning world, just as in email, there has been a battle"}, {"start": 193.6, "end": 197.6, "text": " between the spammers and the anti-spam community."}, {"start": 197.6, "end": 205.04, "text": " I am seeing today in, for example, the financial industry, a battle between people trying to"}, {"start": 205.04, "end": 209.32, "text": " commit fraud and the people fighting fraud."}, {"start": 209.32, "end": 215.07999999999998, "text": " And unfortunately, machine learning is used by some of the fraudsters and some of the"}, {"start": 215.07999999999998, "end": 216.07999999999998, "text": " spammers."}, {"start": 216.07999999999998, "end": 220.76, "text": " So for goodness sakes, please don't build a machine learning system that has a negative"}, {"start": 220.76, "end": 222.95999999999998, "text": " impact on society."}, {"start": 222.96, "end": 230.20000000000002, "text": " And if you are asked to work on an application that you consider unethical, I urge you to"}, {"start": 230.20000000000002, "end": 231.44, "text": " walk away."}, {"start": 231.44, "end": 235.36, "text": " For what it's worth, there have been multiple times that I have looked at a project that"}, {"start": 235.36, "end": 240.0, "text": " seemed to be financially sound, you know, make money for some company, but I have killed"}, {"start": 240.0, "end": 242.36, "text": " the project just 
on ethical grounds."}, {"start": 242.36, "end": 247.0, "text": " Because I think that even though the financial case was sound, I felt that it makes the world"}, {"start": 247.0, "end": 248.0, "text": " worse off."}, {"start": 248.0, "end": 252.44, "text": " And I just don't ever want to be involved in a project like that."}, {"start": 252.44, "end": 257.28, "text": " Ethics is a very complicated and very rich subject that humanity has studied for at least"}, {"start": 257.28, "end": 259.36, "text": " a few thousand years."}, {"start": 259.36, "end": 265.92, "text": " When AI became more widespread, I actually went and read up multiple books on philosophy"}, {"start": 265.92, "end": 271.04, "text": " and multiple books on ethics, because I was hoping, naively it turned out, to come up"}, {"start": 271.04, "end": 274.24, "text": " with if only there's a checklist of five things we could do."}, {"start": 274.24, "end": 277.84, "text": " And so once we do these five things, then we can be ethical."}, {"start": 277.84, "end": 282.59999999999997, "text": " But I failed, and I don't think anyone has ever managed to come up with a simple checklist"}, {"start": 282.59999999999997, "end": 288.0, "text": " of things to do to give that level of concrete guidance about how to be ethical."}, {"start": 288.0, "end": 293.0, "text": " So what I hope to share with you instead is not a checklist, because I wasn't able to"}, {"start": 293.0, "end": 298.28, "text": " come up with one, but just some general guidance and some suggestions for how to make sure"}, {"start": 298.28, "end": 301.84, "text": " that work is less biased, more fair, and more ethical."}, {"start": 301.84, "end": 307.52, "text": " And I hope that some of these guidance, which would be relatively general, will help you"}, {"start": 307.52, "end": 309.0, "text": " with your work as well."}, {"start": 309.0, "end": 317.47999999999996, "text": " So here are some suggestions for making your work more fair, less biased, 
and more ethical."}, {"start": 317.47999999999996, "end": 324.56, "text": " Before deploying a system that could create harm, I will usually try to assemble a diverse"}, {"start": 324.56, "end": 331.34, "text": " team to brainstorm possible things that might go wrong with an emphasis on possible harm"}, {"start": 331.34, "end": 333.79999999999995, "text": " to vulnerable groups."}, {"start": 333.8, "end": 339.08, "text": " I found many times in my life that having a more diverse team, and by diverse I mean"}, {"start": 339.08, "end": 344.62, "text": " diversity on multiple dimensions, ranging from gender to ethnicity to culture to many"}, {"start": 344.62, "end": 346.64, "text": " other traits."}, {"start": 346.64, "end": 352.28000000000003, "text": " I found that having more diverse teams actually causes a team collectively to be better at"}, {"start": 352.28000000000003, "end": 355.84000000000003, "text": " coming up with ideas about things that might go wrong."}, {"start": 355.84000000000003, "end": 361.72, "text": " And it increases the odds that we'll recognize a problem and fix it before rolling out the"}, {"start": 361.72, "end": 366.64000000000004, "text": " system and having that cause harm to some particular group."}, {"start": 366.64000000000004, "end": 371.68, "text": " In addition to having a diverse team carry out brainstorming, I've also found it useful"}, {"start": 371.68, "end": 377.52000000000004, "text": " to carry out a literature search on any standards or guidelines for your industry or particular"}, {"start": 377.52000000000004, "end": 379.64000000000004, "text": " application area."}, {"start": 379.64000000000004, "end": 385.48, "text": " For example, in the financial industry, there are starting to be established standards for"}, {"start": 385.48, "end": 390.40000000000003, "text": " what it means to be a system, say one that decides who to approve loans to."}, {"start": 390.4, "end": 394.76, "text": " And it means for a system like that to 
be reasonably fair and free from bias and those"}, {"start": 394.76, "end": 399.59999999999997, "text": " standards that are still emerging in different sectors could inform your work, depending"}, {"start": 399.59999999999997, "end": 401.71999999999997, "text": " on what you're working on."}, {"start": 401.71999999999997, "end": 410.15999999999997, "text": " After identifying possible problems, I found it useful to then audit the system against"}, {"start": 410.15999999999997, "end": 415.88, "text": " these identified dimensions of possible harm prior to deployment."}, {"start": 415.88, "end": 421.2, "text": " You saw in the last video, the full cycle of machine learning project."}, {"start": 421.2, "end": 426.04, "text": " And one key step that's often a crucial line of defense against deploying something problematic"}, {"start": 426.04, "end": 431.6, "text": " is after you've trained a model, but before you deploy it in production, if the team has"}, {"start": 431.6, "end": 435.96, "text": " brainstormed that it may be biased against certain subgroups, such as certain genders"}, {"start": 435.96, "end": 441.8, "text": " or certain ethnicities, you can then audit the system to measure the performance to see"}, {"start": 441.8, "end": 447.68, "text": " if it really is biased against certain genders or ethnicities or other subgroups and to make"}, {"start": 447.68, "end": 453.6, "text": " sure that any problems are identified and fixed prior to deployment."}, {"start": 453.6, "end": 459.92, "text": " Finally, I found it useful to develop a mitigation plan if applicable."}, {"start": 459.92, "end": 464.64, "text": " And one simple mitigation plan would be a rollback to the earlier system that we knew"}, {"start": 464.64, "end": 466.62, "text": " was reasonably fair."}, {"start": 466.62, "end": 471.46000000000004, "text": " And then even after deployment, to continue to monitor harm so that you can then trigger"}, {"start": 471.46, "end": 477.32, "text": " a 
mitigation plan and act quickly in case there is a problem that needs to be addressed."}, {"start": 477.32, "end": 483.56, "text": " For example, all of the self-driving car teams prior to rolling out self-driving cars on"}, {"start": 483.56, "end": 488.91999999999996, "text": " the road had developed mitigation plans for what to do in case the car ever gets involved"}, {"start": 488.91999999999996, "end": 494.47999999999996, "text": " in an accident so that if the car was ever in an accident, there was already a mitigation"}, {"start": 494.47999999999996, "end": 499.32, "text": " plan that they could execute immediately rather than have a car get into an accident and then"}, {"start": 499.32, "end": 502.12, "text": " only scramble after the fact to figure out what to do."}, {"start": 502.12, "end": 507.88, "text": " I've worked on many machine learning systems and let me tell you the issues of ethics,"}, {"start": 507.88, "end": 510.92, "text": " fairness and bias are issues we should take seriously."}, {"start": 510.92, "end": 512.4, "text": " It's not something to brush off."}, {"start": 512.4, "end": 515.04, "text": " It's not something to take lightly."}, {"start": 515.04, "end": 520.76, "text": " Now of course, there are some projects with more serious ethical implications than others."}, {"start": 520.76, "end": 525.64, "text": " For example, if I'm building a neural network to decide how long to roast my coffee beans,"}, {"start": 525.64, "end": 530.92, "text": " clearly the ethical implications of that seem significantly less than if say you are building"}, {"start": 530.92, "end": 536.96, "text": " a system to decide what bank loans to approve, which if it's biased can cause significant"}, {"start": 536.96, "end": 537.96, "text": " harm."}, {"start": 537.96, "end": 544.1999999999999, "text": " But I hope that all of us collectively working in machine learning can keep on getting better,"}, {"start": 544.1999999999999, "end": 549.6, "text": " debate these 
issues, spot problems, fix them before they cause harm so that we collectively"}, {"start": 549.6, "end": 554.36, "text": " can avoid some of the mistakes that the machine learning world had made before because this"}, {"start": 554.36, "end": 559.16, "text": " stuff matters and the systems we build can affect a lot of people."}, {"start": 559.16, "end": 566.12, "text": " And so that's it on the process of developing a machine learning system and congratulations"}, {"start": 566.12, "end": 570.0600000000001, "text": " on getting to the end of this week's required videos."}, {"start": 570.0600000000001, "end": 576.48, "text": " I have just two more optional videos this week for you on addressing skewed data sets."}, {"start": 576.48, "end": 581.72, "text": " And that means data sets where the ratio of positive to negative examples is very far"}, {"start": 581.72, "end": 582.72, "text": " from 50-50."}, {"start": 582.72, "end": 588.08, "text": " And it turns out that some special techniques are needed to address machine learning applications"}, {"start": 588.08, "end": 589.08, "text": " like that."}, {"start": 589.08, "end": 616.08, "text": " So I hope to see you in the next video, optional video, on how to handle skewed data sets."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=WiaW5w3CFyM
6.16 Skewed datasets (optional) | Error metrics for skewed datasets -[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content, please subscribe and leave a little blue thumbs-up. Take heart!
If you're working on a machine learning application where the ratio of positive to negative examples is very skewed, very far from 50-50, then it turns out that the usual error metrics like accuracy don't work that well. Let's start with an example. Say you're training a binary classifier to detect a rare disease in patients, based on lab tests or other data from the patients. So y equals 1 if the disease is present and y equals 0 otherwise. Suppose you find that you've achieved 1% error on the test set, so you have a 99% correct diagnosis rate. This seems like a great outcome, right? But it turns out that if this is a rare disease, so y equals 1 only very rarely, then this may not be as impressive as it sounds. Specifically, if only 0.5% of the patients in your population have the disease, then a program that just prints y equals 0, predicting y equals 0 all the time, this very simple, even non-learning, algorithm will actually have 99.5% accuracy, or 0.5% error. So this really dumb algorithm outperforms your learning algorithm, which had 1% error, much worse than 0.5% error. But a piece of software that just prints y equals 0 is not a very useful diagnostic tool. What this really means is that you can't tell whether 1% error is actually a good result or a bad result. In particular, if you have one algorithm that achieves 99.5% accuracy, another that achieves 99.2% accuracy, and another that achieves 99.6% accuracy, it's difficult to know which of these is actually the best algorithm.
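The arithmetic behind this baseline is easy to verify. Below is a small sketch; the 0.5% disease prevalence comes from the lecture, while the dataset size of 1,000 examples is an arbitrary illustrative choice:

```python
# A dataset where only 0.5% of examples are positive (the rare disease),
# and a "classifier" that predicts y = 0 for every example.
y_true = [1] * 5 + [0] * 995  # 5 of 1000 patients actually have the disease
y_pred = [0] * 1000           # always predict y = 0

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)  # -> 0.995: 99.5% accuracy (0.5% error), yet this program
                 #    never diagnoses a single patient as having the disease
```

This is why a learning algorithm with 1% error can still be more useful than the 0.5%-error baseline: accuracy alone hides what happens on the rare class.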
Because if you have an algorithm that achieves 0.5% error and a different one that achieves 1.2% error, it's difficult to know which of these is the best algorithm, because the one with the lowest error may be a not particularly useful prediction like this that always predicts y equals 0 and never ever diagnose any patient as having this disease. And quite possibly an algorithm that has 1% error, but that at least diagnoses some patients as having the disease could be more useful than just printing y equals 0 all the time. When working on problems with skewed datasets, we usually use a different error metric rather than just classification error to figure out how well your learning algorithm is doing. In particular, a common pair of error metrics are precision and recall, which we'll define on the slide. In this example, y equals 1 will be the rare class such as a rare disease that we may want to detect. And in particular, to evaluate a learning algorithm's performance with one rare class, it'll be useful to construct what's called a confusion matrix, which is a 2x2 matrix or a 2x2 table that looks like this. On the axis on top, I'm going to write the actual class, which could be 1 or 0. And on the vertical axis, I'm going to write the predicted class, which is what did your learning algorithm predict on a given example, 1 or 0. To evaluate your algorithm's performance on the cross-validation set or the test set, say, we would then count up how many examples was the actual class 1 and the predicted class 1. So here you have 100 cross-validation examples, and on 15 of them, the learning algorithm had predicted 1 and the actual label was also 1. And over here, you would count up the number of examples in, say, your cross-validation set where the actual class was 0 and your algorithm predicted 1. So maybe you have five examples there. And here, predicted class 0, actual class 1. 
So you have 10 examples and, let's say, 70 examples with predicted class 0 and actual class 0. In this example, the skew isn't as extreme as what I had on the previous slide because in these 100 examples in your cross-validation set, we have a total of 25 examples where the actual class was 1 and 75 where the actual class was 0, by adding up these numbers vertically. And you notice also that I'm using different colors to indicate these four cells in the table. I'm actually going to give names to these four cells. When the actual class is 1 and the predicted class is 1, we're going to call that a true positive, because you predicted positive and it was true: it's a positive example. In this cell on the lower right, where the actual class is 0 and the predicted class is 0, we'll call that a true negative because you predicted negative and it was true. It really was a negative example. This cell on the upper right is called a false positive because the algorithm predicted positive but it was false. It's not actually positive. So this is called a false positive. And this cell is called the number of false negatives because the algorithm predicted 0 but it was false. It wasn't actually negative. The actual label, the actual class, was 1. Having divided the classifications into these four cells, two common metrics you might compute are then the precision and recall. Here's what they mean. The precision of the learning algorithm computes: of all the patients where we predicted y is equal to 1, what fraction actually has the rare disease? In other words, precision is defined as the number of true positives divided by the number classified as positive. In other words, of all the examples you predicted as positive, what fraction did we actually get right? Another way to write this formula would be true positives divided by true positives plus false positives, because it is by summing this cell and this cell that you end up with the total number that was predicted as positive.
In this example, the numerator true positives would be 15 and divided by 15 plus 5 and so that's 15 over 20 or three quarters or 0.75. So we say that this algorithm has a precision of 75% because of all the things that predicted as positive, of all the patients that it thought has this rare disease, it was right 75% of the time. The second metric that is useful to compute is recall. And recall asks, of all the patients that actually have the rare disease, what fraction do we correctly detect as having it? So recall is defined as the number of true positives divided by the number of actual positives. And alternatively, we can write that as number of true positives divided by the number of actual positives is, well, it's this cell plus this cell. So it's actually the number of true positives plus the number of false negatives because it's by summing up this upper left cell and this lower left cell that you get the number of actual positive examples. And so in our example, this would be 15 divided by 15 plus 10, which is 15 over 25, which is 0.6 or 60%. So this learning algorithm would have 0.75 precision and 0.60 recall. And you notice that this will help you detect if a learning algorithm is just printing y equals 0 all the time, because if it predicts 0 all the time, then the numerator of both of these quantities will be 0. It has no true positives. And the recall metric in particular helps you detect if the learning algorithm is predicting 0 all the time. Because if your learning algorithm just prints y equals 0, then the number of true positives will be 0 because it never predicts positive. And so the recall will be equal to 0 divided by the number of actual positives, which is equal to 0. And in general, a learning algorithm with either 0 precision or 0 recall is not a useful algorithm. But just as a side note, if an algorithm actually predicts 0 all the time, precision actually becomes undefined because it's actually 0 over 0. 
But in practice, if an algorithm doesn't predict even a single positive, we'll just say that precision is also equal to 0. We'll find that computing both precision and recall makes it easier to spot if an algorithm is both reasonably accurate, in that when it says a patient has a disease, there's a good chance the patient has a disease, such as a 0.75 chance in this example, and also making sure that of all the patients that have the disease, it's helping to diagnose a reasonable fraction of them, such as here, finding 60% of them. So when you have a rare class, looking at precision and recall and making sure that both numbers are decently high hopefully helps reassure you that your learning algorithm is actually useful. And the term recall was motivated by this observation: if you have a group of patients or a population of patients, then recall measures, of all the patients that have the disease, how many you would have accurately diagnosed as having it. So when you have skewed classes or a rare class that you want to detect, precision and recall help you tell if your learning algorithm is making good predictions or useful predictions. Now that we have these metrics for telling how well your learning algorithm is doing, in the next video, let's take a look at how to trade off between precision and recall to try to optimize the performance of your learning algorithm.
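The confusion-matrix arithmetic from this video can be sketched as follows, using the counts from the example above (100 cross-validation examples):

```python
# Confusion-matrix counts from the example: 15 true positives,
# 5 false positives, 10 false negatives, 70 true negatives.
tp = 15  # true positives:  predicted 1, actual 1
fp = 5   # false positives: predicted 1, actual 0
fn = 10  # false negatives: predicted 0, actual 1
tn = 70  # true negatives:  predicted 0, actual 0

# precision: of all examples predicted positive, what fraction was right?
precision = tp / (tp + fp)
# recall: of all actual positives, what fraction did we detect?
recall = tp / (tp + fn)

print(precision)  # 0.75
print(recall)     # 0.6
```

Note that for the always-print-zero algorithm, tp would be 0, making recall 0 and precision 0/0; as the transcript says, in practice that undefined precision is just treated as 0.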
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=JhPHwCMpyO8
6.17 Skewed datasets (optional) | Trading off precision and recall -[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms.
In the ideal case, we'd like for learning algorithms to have high precision and high recall. High precision would mean that if it diagnoses a patient with that rare disease, probably the patient does have it, and it's an accurate diagnosis. And high recall means that if there's a patient with that rare disease, probably the algorithm will correctly identify that they do have that disease. But it turns out that in practice, there's often a trade-off between precision and recall. In this video, we'll take a look at that trade-off and how you can pick a good point along that trade-off. So here are the definitions from the last video on precision and recall. I'll just write them here. What we call precision is the number of true positives divided by the total number that was predicted positive. And recall is the number of true positives divided by the total actual number of positives. If you're using logistic regression to make predictions, then the logistic regression model will output numbers between 0 and 1. We would typically threshold the outputs of logistic regression at 0.5 and predict 1 if f of x is greater than or equal to 0.5 and predict 0 if it's less than 0.5. But suppose we want to predict that y is equal to 1, that is, the rare disease is present, only if we're very confident. Our philosophy might be that whenever we predict that a patient has a disease, we may have to send them for possibly invasive and expensive treatment. So if the consequences of the disease aren't that bad, even if it's not treated aggressively, then we may want to predict y equals 1 only if we're very confident. In that case, we may choose to set a higher threshold, where we will predict y is 1 only if f of x is greater than or equal to 0.7. So this is saying we'll predict y equals 1 only if we're at least 70% sure, rather than just 50% sure. And so this number also becomes 0.7.
Notice that these two numbers have to be the same because it's just depending on whether it's greater than or equal to or less than this number that you predict 1 or 0. And by raising this threshold, you predict y equals 1 only if you're pretty confident. And what that means is that precision will increase because whenever you predict 1, you're more likely to be right. So raising the thresholds will result in higher precision, but it also results in lower recall because we're now predicting 1 less often. And so of the total number of patients with the disease, we're going to correctly diagnose fewer of them. So by raising this threshold to 0.7, you end up with higher precision, but lower recall. And in fact, if you want to predict y equals 1 only if you're very, very confident, you can even raise this higher to 0.9. And that results in an even higher precision. And so whenever you predict a patient has a disease, you're probably right. And this will give you a very high precision, but recall will go even further down. On the flip side, suppose we want to avoid missing too many cases of the rare disease. So if what we want is when in doubt, predict y equals 1. This might be the case where if treatment is not too invasive or painful or expensive, but leaving a disease untreated has much worse consequences for the patient. So in that case, you might say, when in doubt, in the interest of safety, let's just predict that they have it and consider them for treatment because untreated cases could be quite bad. If for your application, that is the better way to make decisions, then you would take this threshold and instead lower it, say set it to 0.3. In that case, you predict 1 so long as you think that's maybe a 30% chance or better of the disease being present. And you predict 0 only if you're pretty sure that the disease is absent. 
And as you can imagine, the impact on precision and recall will be the opposite of what you saw up here: lowering this threshold will result in lower precision, because we're now looser, more willing to predict 1 even if we aren't sure, but it results in higher recall, because of all the patients that do have that disease, we're probably going to correctly identify more of them. More generally, we have the flexibility to predict 1 only if f is above some threshold. And by choosing this threshold, we can make different trade-offs between precision and recall. And it turns out that for most learning algorithms, there is a trade-off between precision and recall. Precision and recall both go between 0 and 1. And if you were to set a very high threshold, say a threshold of 0.99, then you end up with very high precision but lower recall. And as you reduce the value of this threshold, you then end up with a curve that trades off precision and recall until eventually, if you have a very low threshold, say a threshold of 0.01, then you end up with very low precision but relatively high recall. And sometimes by plotting this curve, you can then try to pick a threshold, which corresponds to picking a point on this curve, that balances the cost of false positives and false negatives, or that balances the benefits of high precision and high recall. So plotting precision and recall for different values of the threshold allows you to pick a point that you want. Notice that picking the threshold is not something you can really do with cross-validation because it's up to you to specify the best point. For many applications, manually picking the threshold to trade off precision and recall will be what you end up doing.
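The threshold sweep described above can be sketched in a few lines; the scores and labels below are hypothetical, made up purely to illustrate how raising the threshold trades recall for precision:

```python
# Hypothetical model outputs (f of x) and true labels for ten examples.
scores = [0.95, 0.85, 0.8, 0.6, 0.55, 0.4, 0.35, 0.2, 0.1, 0.05]
labels = [1,    1,    0,   1,   0,    1,   0,    0,   0,   0]

def precision_recall(threshold):
    # Predict 1 whenever the model's output clears the threshold.
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    # Treat precision as 0 when nothing is predicted positive (the 0/0 case).
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

# Sweeping the threshold upward: precision rises while recall falls.
for t in (0.3, 0.5, 0.7, 0.9):
    print(t, precision_recall(t))
```

On this toy data, the threshold of 0.3 catches every positive (recall 1.0) at low precision, while 0.9 predicts positive only once, perfectly (precision 1.0) but with low recall, which is the curve the video describes.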
It turns out that if you want to automatically trade off precision and recall rather than have to do so yourself, there is another metric called the F1 score that is sometimes used to automatically combine precision and recall to help you pick the best value or the best trade off between the two. One challenge with precision and recall is you're now evaluating your algorithms using two different metrics. So if you've trained three different algorithms and the precision and recall numbers look like this, it's not that obvious how to pick which algorithm to use. If there was an algorithm that's better on precision and better on recall, then you probably want to go with that one. But in this example, algorithm 2 has the highest precision, but algorithm 3 has the highest recall and algorithm 1 kind of trades off the two in between. So no one algorithm is obviously the best choice. So in order to help you decide which algorithm to pick, it may be useful to find a way to combine precision and recall into a single score. So you can just look at which algorithm has the highest score and maybe go with that one. One way you could combine precision and recall is to take the average. This turns out not to be a good way, so I don't really recommend this, but if you were to take the average, you'd get 0.45, 0.4, and 0.5. But it turns out that computing the average and picking the algorithm with the highest average between precision and recall doesn't work that well because this algorithm has very, very low precision. And in fact, this corresponds maybe to an algorithm that actually does print y equals one and diagnoses all patients as having the disease. That's why its recall is perfect, but the precision is really low. So algorithm 3 is actually not a particularly useful algorithm, even though the average between precision and recall is quite high. So let's not use the average between precision and recall. 
Instead, the most common way of combining precision and recall is to compute something called the F1 score. And the F1 score is a way of combining P and R, precision and recall, but that gives more emphasis to whichever of these values is lower. Because it turns out if an algorithm has very low precision or very low recall, it's probably not that useful. So the F1 score is a way of computing an average of sorts that pays more attention to whichever is lower. And the formula for computing F1 score is this. We're going to compute one over P and one over R and average them and then take the inverse of that. So rather than averaging P and R, precision and recall, we're going to average one over P and one over R and then take one over that. And if you simplify this equation, it can also be computed as follows. But by averaging one over P and one over R, this gives a much greater emphasis to if either P or R turns out to be very small. If you were to compute the F1 score for these three algorithms, you find that the F1 score for algorithm one is 0.444 and for the second algorithm is 0.175. And you notice that 0.175 is much closer to the lower value than the higher value. And for the third algorithm is 0.0392. And so F1 score gives a way to trade off precision and recall. And in this case, it'll tell us that maybe the first algorithm is better than the second or the third algorithms. And by the way, in math, this equation is also called the harmonic mean of P and R. And the harmonic mean is a way of taking an average that emphasizes the smaller values more. But for the purposes of the class, you don't need to worry about that terminology of the harmonic mean. So congratulations on getting to the last video of this week. And thank you also for sticking with me through these two optional videos. In this week, you've learned a lot of practical tips, practical advice for how to build a machine learning system. 
And by applying these ideas, I think you'll be very effective at building machine learning algorithms. Next week, we'll come back to talk about another very powerful machine learning algorithm. In fact, of the advanced techniques that are widely used in many commercial production settings, I think at the top of the list would be neural networks and decision trees. So next week, we'll talk about decision trees, which I think will be another very powerful technique that you can use to build many successful applications as well. So I look forward to seeing you next week.
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=HkIEEOhnwEs
7.1 Decision Trees | Decision Tree Model -[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content, please subscribe and give it a thumbs up. Take heart!
Welcome to the final week of this course on advanced learning algorithms. One of the learning algorithms that's very powerful, widely used in many applications, and also used by many to win machine learning competitions is decision trees and tree ensembles. Despite all the successes of decision trees, they somehow haven't received that much attention in academia, so you may not hear about decision trees nearly as much, but it is a tool well worth having in your toolbox. So in this week, we'll learn about decision trees and you'll see how to get them to work for yourself. Let's dive in. To explain how decision trees work, I'm going to use as a running example this week a cat classification example. You're running a cat adoption center, and given a few features, you want to train a classifier to quickly tell you if an animal is a cat or not. I have here 10 training examples, and associated with each of these 10 examples we're going to have features regarding the animal's ear shape, face shape, and whether it has whiskers, and then the ground truth label that you want to predict: is this animal a cat? So the first example has pointy ears, a round face, whiskers are present, and it is a cat. The second example has floppy ears, the face shape is not round, whiskers are present, and yes, that is a cat, and so on for the rest of the examples. This data set has five cats and five dogs in it. The input features X are these three columns, and the target output that you want to predict, Y, is this final column: is this a cat or not? In this example, the features X take on categorical values. In other words, the features take on just a few discrete values. Ear shapes are either pointy or floppy, the face shape is either round or not round, and whiskers are either present or absent. And this is a binary classification task because the labels are also one or zero. So for now, each of the features X1, X2, and X3 takes on only two possible values.
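As a minimal sketch, the categorical training set described above can be written out as feature dictionaries. Only the first two rows are spelled out in the transcript (the rest are on a slide), so just those two appear here; the full table has 10 rows, 5 cats and 5 dogs:

```python
# Each example: three two-valued categorical features plus a binary label.
# Only the two rows described in the transcript are included here.
training_examples = [
    {"ear_shape": "pointy", "face_shape": "round",     "whiskers": "present", "is_cat": 1},
    {"ear_shape": "floppy", "face_shape": "not round", "whiskers": "present", "is_cat": 1},
]

# Binary classification: every label is 1 (cat) or 0 (not cat).
print(all(ex["is_cat"] in (0, 1) for ex in training_examples))  # -> True
```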
We'll talk about features that can take on more than two possible values, as well as continuous-valued features, later in this week. So what is a decision tree? Here's an example of a model that you might get after training a decision tree learning algorithm on the data set that you just saw. The model that is output by the learning algorithm looks like a tree, and a picture like this is what computer scientists call a tree. If it looks nothing like the biological trees that you see out there, it's okay, don't worry about it. We'll go through an example to make sure that this computer science definition of a tree makes sense to you as well. Every one of these ovals or rectangles is called a node in the tree. If you haven't seen computer scientists' definitions of trees before, it may seem non-intuitive that the root of the tree is at the top and the leaves of the tree are down at the bottom. Maybe one way to think about this is that it's more akin to an indoor hanging plant, which is why the roots are up top and the leaves tend to fall down toward the bottom of the tree. The way this model works is, if you have a new test example, say a cat where the ear shape is pointy, the face shape is round, and whiskers are present, the way this model will look at this example and make a classification decision is we'll start with this example at the topmost node of the tree. This is called the root node of the tree, and we will look at the feature written inside, which is ear shape. Based on the value of the ear shape of this example, we'll either go left or go right. The value of the ear shape of this example is pointy, so we'll go down the left branch of the tree like so and end up at this oval node over here. We then look at the face shape of this example, which turns out to be round, and so we will follow this arrow down over here, and the algorithm will make an inference that it thinks this is a cat.
So you get to this node, and the algorithm makes a prediction that this is a cat. What I've shown on this slide is one specific decision tree model. To introduce a bit more terminology, this topmost node in the tree is called the root node, and all of these nodes, that is, all of these oval shapes but excluding the boxes at the bottom, are called decision nodes. They're called decision nodes because they look at a particular feature and then, based on the value of that feature, cause you to decide whether to go left or right down the tree. Finally, these nodes at the bottom, these rectangular boxes, are called leaf nodes, and they make a prediction. In this slide I've shown just one example of a decision tree; here are a few others. This is a different decision tree for trying to classify cat versus not cat. In this tree, to make a classification decision, you would again start at the topmost root node, and depending on the ear shape of an example, you go either left or right. If the ear shape is pointy, then you look at the whiskers feature, and depending on whether whiskers are present or absent, you go left or right again and classify cat versus not cat. And just for fun, here's a second example of a decision tree, here's a third one, and here's a fourth one. Among these different decision trees, some will do better and some will do worse on the training set or on the cross-validation and test sets. So the job of the decision tree learning algorithm is, out of all possible decision trees, to try to pick one that hopefully does well on the training set and that also ideally generalizes well to new data, such as your cross-validation and test sets. So it seems like there are a lot of different decision trees one could build for a given application. How do you get an algorithm to learn a specific decision tree based on a training set? Let's take a look at that in the next video.
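The inference walk described above (root node, then decision nodes, then a leaf) can be sketched with nested dictionaries. Only the pointy-ear / round-face / cat path is spelled out in the transcript; the other branches below are illustrative assumptions added so the tree is complete:

```python
# A decision node is a dict that names the feature it tests; a leaf node
# is just the predicted label string. The root tests ear shape, matching
# the tree in the transcript; branches marked "assumed" are not from it.
tree = {
    "feature": "ear_shape",
    "pointy": {
        "feature": "face_shape",
        "round": "cat",
        "not round": "not cat",   # assumed branch
    },
    "floppy": {                   # assumed branch
        "feature": "whiskers",
        "present": "cat",
        "absent": "not cat",
    },
}

def classify(node, example):
    # Walk from the root: at each decision node, follow the branch that
    # matches the example's value for that node's feature, until a leaf.
    while isinstance(node, dict):
        node = node[example[node["feature"]]]
    return node  # a leaf node: its value is the prediction

# The test example from the transcript: pointy ears, round face, whiskers.
example = {"ear_shape": "pointy", "face_shape": "round", "whiskers": "present"}
print(classify(tree, example))  # -> cat
```

The learning algorithm's job, as the transcript says, is to choose among the many possible trees of this shape one that does well on the training set and generalizes; this sketch only shows how a fixed tree makes predictions.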
[{"start": 0.0, "end": 6.8, "text": " Welcome to the final week of this course on advanced learning algorithms."}, {"start": 6.8, "end": 11.64, "text": " One of the learning algorithms that's very powerful, widely used in many applications,"}, {"start": 11.64, "end": 18.52, "text": " also used by many to win machine learning competitions is decision trees and tree ensembles."}, {"start": 18.52, "end": 24.2, "text": " Despite all the successes of decision trees, they somehow haven't received that much attention"}, {"start": 24.2, "end": 28.76, "text": " in academia and so you may not hear about decision trees nearly that much, but it is"}, {"start": 28.76, "end": 31.720000000000002, "text": " a tool well worth having in your toolbox."}, {"start": 31.720000000000002, "end": 35.64, "text": " So in this week, we'll learn about decision trees and you'll see how to get them to work"}, {"start": 35.64, "end": 36.64, "text": " for yourself."}, {"start": 36.64, "end": 37.64, "text": " Let's dive in."}, {"start": 37.64, "end": 41.6, "text": " To explain how decision trees work, I'm going to use as a running example this week a cat"}, {"start": 41.6, "end": 43.96, "text": " classification example."}, {"start": 43.96, "end": 50.040000000000006, "text": " You're running a cat adoption center and given a few features, you want to train a classifier"}, {"start": 50.040000000000006, "end": 54.88, "text": " to quickly tell you if an animal is a cat or not."}, {"start": 54.88, "end": 62.56, "text": " I have here 10 training examples and associated with each of these 10 examples, we're going"}, {"start": 62.56, "end": 68.8, "text": " to have features regarding the animal's ear shape, face shape, whether it has whiskers"}, {"start": 68.8, "end": 73.46000000000001, "text": " and then the ground truth label that you want to predict is this animal cat."}, {"start": 73.46000000000001, "end": 79.78, "text": " So the first example has pointy ears, round face, whiskers are present and it is 
a cat."}, {"start": 79.78, "end": 85.36, "text": " The second example has floppy ears, the face shape is not round, whiskers are present and"}, {"start": 85.36, "end": 89.94, "text": " yes that is a cat and so on for the rest of the examples."}, {"start": 89.94, "end": 98.84, "text": " This data set has five cats and five dogs in it and the input features X are these three"}, {"start": 98.84, "end": 105.72, "text": " columns and the target output that you want to predict Y is this final column of is this"}, {"start": 105.72, "end": 107.6, "text": " a cat or not."}, {"start": 107.6, "end": 112.28, "text": " In this example the features X take on categorical values."}, {"start": 112.28, "end": 117.6, "text": " In other words the features take on just a few discrete values."}, {"start": 117.6, "end": 123.67999999999999, "text": " Ear shapes are either pointy or floppy, the face shape is either round or not round and"}, {"start": 123.67999999999999, "end": 130.6, "text": " whiskers are either present or absent and this is a binary classification task because"}, {"start": 130.6, "end": 134.2, "text": " the labels are also one or zero."}, {"start": 134.2, "end": 142.92, "text": " So now each of the features X1, X2 and X3 take on only two possible values."}, {"start": 142.92, "end": 148.76, "text": " We'll talk about features that can take on more than two possible values as well as continuous"}, {"start": 148.76, "end": 152.73999999999998, "text": " value features later in this week."}, {"start": 152.73999999999998, "end": 155.72, "text": " So what is a decision tree?"}, {"start": 155.72, "end": 161.92, "text": " Here's an example of a model that you might get after training a decision tree learning"}, {"start": 161.92, "end": 165.55999999999997, "text": " algorithm on the data set that you just saw."}, {"start": 165.55999999999997, "end": 171.67999999999998, "text": " The model that is output by the learning algorithm looks like a tree and a picture like this"}, 
{"start": 171.67999999999998, "end": 174.64, "text": " is what computer scientists call a tree."}, {"start": 174.64, "end": 180.23999999999998, "text": " If it looks nothing like the biological trees that you see out there to you it's okay, don't"}, {"start": 180.23999999999998, "end": 182.2, "text": " worry about it."}, {"start": 182.2, "end": 186.14, "text": " We'll go through an example to make sure that this computer science definition of a tree"}, {"start": 186.14, "end": 188.5, "text": " makes sense to you as well."}, {"start": 188.5, "end": 195.64, "text": " Maybe one of these ovals or rectangles is called a node in the tree."}, {"start": 195.64, "end": 202.96, "text": " If you haven't seen computer scientists' definitions of trees before, it may seem non-intuitive"}, {"start": 202.96, "end": 209.68, "text": " that the roots of the tree is at the top and the leaves of the tree are down at the bottom."}, {"start": 209.68, "end": 215.4, "text": " Maybe one way to think about this is this is more akin to an indoor hanging plant which"}, {"start": 215.4, "end": 222.16, "text": " is why the roots are up top and then the leaves tend to fall down to the bottom of the tree."}, {"start": 222.16, "end": 227.0, "text": " The way this model works is if you have a new test example."}, {"start": 227.0, "end": 231.76, "text": " She has a cat where the ear shape is pointy, face shape is round and whiskers are present."}, {"start": 231.76, "end": 238.68, "text": " The way this model will look at this example and make a classification decision is we'll"}, {"start": 238.68, "end": 244.28, "text": " start with this example at this topmost node of the tree."}, {"start": 244.28, "end": 252.36, "text": " This is called the root node of the tree and we will look at the feature written inside"}, {"start": 252.36, "end": 253.36, "text": " which is ear shape."}, {"start": 253.36, "end": 261.24, "text": " Based on the value of the ear shape of this example, we'll either go left or go 
right."}, {"start": 261.24, "end": 269.42, "text": " The value of the ear shape of this example is pointy and so we'll go down the left branch"}, {"start": 269.42, "end": 275.92, "text": " of the tree like so and end up at this oval node over here."}, {"start": 275.92, "end": 281.48, "text": " We then look at the face shape of this example which turns out to be round and so we will"}, {"start": 281.48, "end": 289.56, "text": " follow this arrow down over here and the algorithm will make an inference that it thinks this"}, {"start": 289.56, "end": 291.36, "text": " is a cat."}, {"start": 291.36, "end": 296.6, "text": " And so you get to this node and the algorithm will make a prediction that this is the cat."}, {"start": 296.6, "end": 302.36, "text": " What I've shown on this slide is one specific decision tree model."}, {"start": 302.36, "end": 310.6, "text": " To introduce a bit more terminology, this topmost node in the tree is called the root"}, {"start": 310.6, "end": 317.26000000000005, "text": " node and all of these nodes, that is all of these oval shapes but excluding the boxes"}, {"start": 317.26000000000005, "end": 322.36, "text": " at the bottom, all of these are called decision nodes."}, {"start": 322.36, "end": 326.84000000000003, "text": " And the decision nodes because they look at a particular feature and then based on the"}, {"start": 326.84000000000003, "end": 333.12, "text": " value of the feature, causes you to decide whether to go left or right down the tree."}, {"start": 333.12, "end": 340.96000000000004, "text": " Finally, these nodes at the bottom, these rectangular boxes are called leaf nodes and"}, {"start": 340.96000000000004, "end": 343.40000000000003, "text": " they make a prediction."}, {"start": 343.40000000000003, "end": 346.08000000000004, "text": " We'll start at this node here at the top."}, {"start": 346.08000000000004, "end": 350.64, "text": " In this slide I've shown just one example of a decision tree."}, {"start": 350.64, 
"end": 352.24, "text": " Here are a few others."}, {"start": 352.24, "end": 358.26, "text": " This is a different decision tree for trying to classify cat versus not cat."}, {"start": 358.26, "end": 363.48, "text": " In this tree, to make a classification decision, you would again start at this topmost root"}, {"start": 363.48, "end": 369.48, "text": " node and depending on the ear shape of an example, you go either left or right."}, {"start": 369.48, "end": 373.02, "text": " If the ear shape is pointy, then you look at the whiskers feature and depending on whether"}, {"start": 373.02, "end": 378.24, "text": " whiskers are present or absent, you go left or right again and classify cat versus not"}, {"start": 378.24, "end": 379.76, "text": " cat."}, {"start": 379.76, "end": 382.88, "text": " And just for fun, here's a second example of a decision tree."}, {"start": 382.88, "end": 386.92, "text": " Here's a third one and here's a fourth one."}, {"start": 386.92, "end": 391.24, "text": " And among these different decision trees, some will do better and some will do worse"}, {"start": 391.24, "end": 395.5, "text": " on the training sets or on the cross validation and the test sets."}, {"start": 395.5, "end": 400.62, "text": " So the job of the decision tree learning algorithm is out of all possible decision trees to try"}, {"start": 400.62, "end": 407.52, "text": " to pick one that hopefully does well on the training set and that also ideally generalizes"}, {"start": 407.52, "end": 412.56, "text": " well to new data such as to your cross validation and test sets as well."}, {"start": 412.56, "end": 416.79999999999995, "text": " So seems like there are a lot of different decision trees one could build for a given"}, {"start": 416.79999999999995, "end": 418.15999999999997, "text": " application."}, {"start": 418.15999999999997, "end": 423.2, "text": " How do you get an algorithm to learn a specific decision tree based on a training set?"}, {"start": 423.2, "end": 438.64, 
"text": " Let's take a look at that in the next video."}]
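The traversal just described is easy to sketch in code. Below is a minimal Python sketch; the nested-dict tree encoding, the key names, and the classify helper are illustrative assumptions of mine rather than anything from the lecture. It walks the pointy-eared, round-faced example from the root node down to a leaf.

```python
# Illustrative encoding (not from the lecture): a decision node is a dict
# holding the feature it inspects and the value that sends an example down
# the left branch; a leaf node is just the predicted label string.
tree = {
    "feature": "ear_shape",
    "left_value": "pointy",
    "left": {
        "feature": "face_shape",
        "left_value": "round",
        "left": "cat",        # pointy ears, round face -> predict cat
        "right": "not cat",   # pointy ears, not round face -> predict not cat
    },
    "right": "not cat",       # floppy ears -> predict not cat (illustrative)
}

def classify(node, example):
    """Walk from the root node down to a leaf and return its prediction."""
    while isinstance(node, dict):                       # still at a decision node
        if example[node["feature"]] == node["left_value"]:
            node = node["left"]                         # go down the left branch
        else:
            node = node["right"]                        # go down the right branch
    return node                                         # leaf node: the prediction

example = {"ear_shape": "pointy", "face_shape": "round", "whiskers": "present"}
print(classify(tree, example))                          # prints "cat"
```

A leaf being a plain string (rather than a dict) is what terminates the while-loop, mirroring the fact that leaf nodes make a prediction instead of asking another question.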
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=AtGze6G7GV8
7.2 Decision Trees | Learning Process --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
The process of building a decision tree, given a training set, has a few steps. In this video, let's take a look at the overall process of what you need to do to build a decision tree. Given a training set of 10 examples of cats and dogs, like you saw in the last video, the first step of decision tree learning is we have to decide what feature to use at the root node, that is, the first node at the very top of the decision tree. Via an algorithm that we'll talk about in the next few videos, let's say that we decide to pick as the feature in the root node the ear shape feature. What that means is we will decide to look at all of our training examples, all 10 training examples shown here, and split them according to the value of the ear shape feature. In particular, let's pick out the five examples with pointy ears and move them over down to the left, and let's pick the five examples with floppy ears and move them down to the right. The second step is focusing just on the left part, or sometimes called the left branch of the decision tree, to decide what node to put over there. And in particular, what feature do we want to split on, or what feature do we want to use next? Via an algorithm that again we'll talk about later this week, let's say you decide to use the face shape feature there. What we'll do now is take these five examples and split them into two subsets based on their value of the face shape. So we'll take the four examples out of these five with a round face shape and move them down to the left, and the one example with a not round face shape and move it down to the right. And finally we notice that these four examples are all cats, four out of the four of them are cats, and so rather than splitting further, we create a leaf node that makes a prediction that things that get down to that node are cats. And over here we notice that none of the examples, zero of the one examples, are cats, or alternatively 100% of the examples here are dogs.
And so we can create a leaf node here that makes a prediction of not cat. Having done this on the left part, or the left branch, of this decision tree, we now repeat a similar process on the right part, or the right branch, of this decision tree, and focus attention on just these five examples, which contain one cat and four dogs. And we would have to pick some feature over here to use to split these five examples further. If we end up choosing the whiskers feature, we would then split these five examples based on whether whiskers are present or absent, like so. And you notice that one of the one examples on the left is a cat, and zero of the four are cats. So each of these nodes is completely pure, meaning that it's all cats or all not cats, and there's no longer a mix of cats and dogs. And so we can create these leaf nodes, making a cat prediction on the left and a not cat prediction here on the right. So this is the process of building a decision tree. Through this process, there were a couple of key decisions that we had to make at various steps during the algorithm. Let's talk through what those key decisions were, and we'll keep fleshing out the details of how to make these decisions in the next few videos. The first key decision was, how do you choose what feature to use to split on at each node? So at the root node, as well as on the left branch and the right branch of the decision tree, we had to decide: if there were a few examples at that node, comprising a mix of cats and dogs, do you want to split on the ear shape feature, the face shape feature, or the whiskers feature? We'll see in the next video that decision trees will choose what feature to split on in order to try to maximize purity. And by purity, I mean you want to get to subsets which are as close as possible to all cats or all dogs. For example, if we had a feature that said, does this animal have cat DNA?
We don't actually have this feature, but if we did, we could have split on this feature at the root node, which would have resulted in five out of five cats in the left branch and zero out of five cats in the right branch. And both these left and right subsets of the data are completely pure, meaning that there's only one class, either cats only or not cats only, in both of these left and right sub-branches, which is why the cat DNA feature, if we had this feature, would have been a great feature to use. But with the features that we actually have, we had to decide whether to split on ear shape, which results in four out of five examples on the left being cats and one of the five examples on the right being cats; or face shape, which resulted in four out of seven on the left and one of the three on the right; or whiskers, which resulted in three out of four examples being cats on the left and two of the six being cats on the right. And so the decision tree learning algorithm has to choose between ear shape, face shape and whiskers: which of these features results in the greatest purity of the labels on the left and right sub-branches? Because if you can get to a highly pure subset of examples, then you can either predict cat or predict not cat and get it mostly right. So the next video on entropy will talk about how to estimate impurity and how to minimize impurity. So the first decision we have to make when learning a decision tree is how to choose which feature to split on in each node. The second key decision you need to make when building a decision tree is to decide when do you stop splitting. The criterion that we used just now was until a node is either 100% cats or 100% dogs, or not cats, because at that point it seems natural to build a leaf node that just makes a classification prediction.
Alternatively, you might also decide to stop splitting when splitting a node further will result in the tree exceeding a maximum depth, where the maximum depth that you allow the tree to grow to is a parameter that you can decide. In a decision tree, the depth of a node is defined as the number of hops that it takes to get from the root node, that is, the node at the very top, to that particular node. So the root node takes zero hops to get to itself and is at depth zero. The nodes below it are at depth one, and the nodes below those would be at depth two. And so if you had decided that the maximum depth of the decision tree is, say, two, then you would decide not to split any nodes below this level, so that the tree never gets to depth three. One reason you might want to limit the depth of the decision tree is to make sure, first, the tree doesn't get too big and unwieldy, and second, by keeping the tree small, it makes it less prone to overfitting. Another criterion you might use to decide to stop splitting might be if the improvements in the purity score, which you'll see in a later video, are below a certain threshold. So if splitting a node results in only minimal improvements to purity, or, as you'll see later, only minimal decreases in impurity, then you might not bother. Again, this is both to keep the tree smaller and to reduce the risk of overfitting. And finally, if the number of examples at a node is below a certain threshold, then you might also decide to stop splitting. So for example, if at the root node we had split on the face shape feature, then the right branch would have had just three training examples, with one cat and two dogs. And rather than splitting this into even smaller subsets, if you decided not to split further any sets of examples with just three or fewer examples, then you would just create a leaf node. And because these are mainly dogs, two out of three are dogs here, this would be a node that makes a prediction of not cat.
And again, one reason you might decide this is not worth splitting on is to keep the tree smaller and to avoid overfitting. When I look at decision tree learning algorithms myself, sometimes I feel like, boy, there are a lot of different pieces, a lot of different things going on in this algorithm. Part of the reason it might feel like that is that, in the evolution of decision trees, there was one researcher that proposed a basic version of decision trees, and then a different researcher said, oh, we can modify this thing this way, such as here's a new criterion for splitting. Then a different researcher comes up with a different thing, like, oh, maybe we should stop splitting when the tree reaches a certain maximum depth. And over the years, different researchers came up with different refinements to the algorithm. As a result of that, it does work really well. But when you look at all the details of how to implement a decision tree, it feels like a lot of different pieces, such as why there are so many different ways to decide when to stop splitting. So if it feels like a somewhat complicated, messy algorithm to you, it does to me too. But these different pieces do fit together into a very effective learning algorithm. And what you learn in this course are the key, most important ideas for how to make it work well. Then at the end of this week, I'll also share with you some guidance, some suggestions, for how to use open source packages, so that you don't have to have too complicated a procedure for making all these decisions, like how do I decide when to stop splitting, and so that you really get these algorithms to work well for yourself. But I want to reassure you that if this algorithm seems complicated and messy, it frankly does to me too, but it does work well. Now the next key decision that I want to dive more deeply into is how do you decide how to split at a node.
So in the next video, let's take a look at this definition of entropy, which will be a way for us to measure purity, or more precisely, impurity, in a node. Let's go on to the next video.
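The build process just described can be sketched as a short recursive function. The toy training set and the purity measure below (weighted majority-class fraction) are illustrative stand-ins of my own, since the lecture's actual splitting criterion, entropy, is only introduced in the next videos; the stopping criteria mirror the ones discussed: a pure node, a maximum depth, and a minimum number of examples.

```python
from collections import Counter

def purity(labels):
    """Fraction of examples belonging to the majority class (placeholder
    for the entropy-based criterion introduced in the next videos)."""
    return max(Counter(labels).values()) / len(labels)

def best_feature(examples, features):
    """Pick the feature whose split gives the highest weighted purity."""
    def score(f):
        groups = {}
        for x, y in examples:
            groups.setdefault(x[f], []).append(y)
        return sum(len(g) / len(examples) * purity(g) for g in groups.values())
    return max(features, key=score)

def build_tree(examples, features, depth=0, max_depth=2, min_examples=1):
    labels = [y for _, y in examples]
    # Stopping criteria from the video: pure node, maximum depth reached,
    # or too few examples to be worth splitting further.
    if (purity(labels) == 1.0 or depth == max_depth
            or len(examples) <= min_examples or not features):
        return Counter(labels).most_common(1)[0][0]   # leaf: majority label
    f = best_feature(examples, features)
    rest = [g for g in features if g != f]
    branches = {}
    for x, y in examples:
        branches.setdefault(x[f], []).append((x, y))  # split on feature f
    return {f: {v: build_tree(b, rest, depth + 1, max_depth, min_examples)
                for v, b in branches.items()}}

# Tiny illustrative training set (not the lecture's 10 examples):
train = [
    ({"ear": "pointy", "whiskers": "present"}, "cat"),
    ({"ear": "pointy", "whiskers": "absent"},  "not cat"),
    ({"ear": "floppy", "whiskers": "present"}, "cat"),
    ({"ear": "floppy", "whiskers": "absent"},  "not cat"),
]
print(build_tree(train, ["ear", "whiskers"]))
# -> {'whiskers': {'present': 'cat', 'absent': 'not cat'}}
```

Swapping in the entropy-based information gain of the coming videos would only change the score function; the recursive structure and the three stopping criteria stay the same.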
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=eRUK9CEmOcI
7.3 Decision Tree Learning | Measuring purity --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
In this video, we'll look at a way of measuring the purity of a set of examples. If the examples are all cats, a single class, then it's very pure. If it's all not cats, that's also very pure. But if it's somewhere in between, how do you quantify how pure this set of examples is? Let's take a look at the definition of entropy, which is a measure of the impurity of a set of data. Given a set of six examples like this, we have three cats and three dogs. Let's define P1 to be the fraction of examples that are cats, that is, the fraction of examples with label 1. That's what the subscript 1 indicates. And so P1 in this example is equal to 3 out of 6. We're going to measure the impurity of a set of examples using a function called the entropy, which looks like this. The entropy function is conventionally denoted as capital H of this number P1. And the function looks like this curve over here, where the horizontal axis is P1, the fraction of cats in the sample, and the vertical axis is the value of the entropy. So in this example, where P1 is 3 out of 6, or 0.5, the value of the entropy of P1 would be equal to 1. And you notice that this curve is highest when your set of examples is 50-50, so it's most impure, with an impurity of 1, or really an entropy of 1, when your set of examples is 50-50. Whereas in contrast, if your set of examples was either all cats or all not cats, then the entropy is 0. Let's just go through a few more examples to gain further intuition about entropy and how it works. Here's a different set of examples with five cats and one dog. So P1, the fraction of positive examples, the fraction of examples labeled 1, is 5 out of 6, and so P1 is about 0.83. And if you read off the value at about 0.83, we find that the entropy of P1 is about 0.65. Here I'm rounding it off to two decimal points. Here's one more example. This sample of six images has all cats, so P1 is 6 out of 6, because all six are cats.
And the entropy of P1 is this point over here, which is 0. And so we see that as you go from 3 out of 6 to 6 out of 6 cats, the impurity decreases from 1 to 0. Or in other words, the purity increases as you go from a 50-50 mix of cats and dogs to all cats. Let's look at a few more examples. Here's another sample with two cats and four dogs. So P1 here is 2 out of 6, which is one-third. And if you read off the entropy at 0.33, it turns out to be about 0.92. And this is actually quite impure. In particular, this set is more impure than this set, because it's closer to a 50-50 mix, which is why the impurity here is 0.92 as opposed to 0.65. And finally, one last example. If we have a set of all six dogs, then P1 is equal to 0, and the entropy of P1 is just this number down here, which is equal to 0. So this is zero impurity, or this would be a completely pure set of all not cats, or all dogs. And don't worry, we'll show the equation for the entropy of P1 later in this video. Now let's look at the actual equation for the entropy function, H of P1. Recall that P1 is the fraction of examples that are cats. So if you have a sample that is two-thirds cats, then that sample must be one-third not cats. So let me define P0 to be the fraction of examples that are not cats, which is just equal to 1 minus P1. The entropy function is then defined as negative P1 log P1, and by convention, when computing entropy, we take logs to base 2, rather than to base e, and then minus P0 log base 2 of P0. Alternatively, this is also equal to negative P1 log P1 minus (1 minus P1) log of (1 minus P1). And if you were to plot this function on a computer, you find that it will be exactly this function on the left. We take log base 2 just to make the peak of this curve equal to one. If you were to take log base e, the base of natural logarithms, then that just vertically scales this function.
It'll still work, but the numbers become a bit harder to interpret, because the peak of the function is no longer a nice round number like one. One note on computing this function: if P1 or P0 is equal to zero, then an expression like this will look like zero log of zero. And log of zero is technically undefined, it's actually negative infinity. But by convention, for the purposes of computing entropy, we'll take zero log zero to be equal to zero, and that will correctly compute the entropy as zero when P1 is zero or one. If you're thinking that this definition of entropy looks a little bit like the definition of the logistic loss that we learned about in the last course, there is actually a mathematical rationale for why these two formulas look so similar. But you don't have to worry about it, and we won't get into it in this class. Applying this formula for entropy will work just fine when you're building a decision tree. To summarize, the entropy function is a measure of the impurity of a set of data. It starts from zero, goes up to one, and then comes back down to zero as a function of the fraction of positive examples in your sample. There are other functions that look like this, that go from zero up to one and then back down. For example, if you look in open source packages, you may also hear about something called the Gini criterion, which is another function that looks a lot like the entropy function, and that will also work well for building decision trees. But for the sake of simplicity, in these videos I'm going to focus on using the entropy criterion, which will usually work just fine for most applications. Now that we have this definition of entropy, in the next video let's take a look at how you can actually use it to make decisions as to what feature to split on in the nodes of a decision tree.
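As a quick check of the numbers read off the curve in this video, here is a minimal sketch (not from the video itself) of the entropy function in Python, including the zero-log-zero convention:

```python
import math

def entropy(p1):
    """Entropy H(p1) of a binary label distribution, in bits.

    Uses the convention 0 * log2(0) = 0, so p1 = 0 and p1 = 1
    both give an entropy of 0 (a completely pure set).
    """
    p0 = 1.0 - p1
    h = 0.0
    if p1 > 0:
        h -= p1 * math.log2(p1)
    if p0 > 0:
        h -= p0 * math.log2(p0)
    return h

# The values discussed in the video:
print(round(entropy(3 / 6), 2))  # 50-50 mix of cats and dogs -> 1.0
print(round(entropy(5 / 6), 2))  # five cats, one dog -> 0.65
print(round(entropy(2 / 6), 2))  # two cats, four dogs -> 0.92
print(round(entropy(6 / 6), 2))  # all cats -> 0.0
```

Note that using `math.log` (base e) instead of `math.log2` would only scale the curve vertically, so the peak would be about 0.69 rather than 1.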
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=pZ9uGPkqolc
7.4 Decision Tree Learning | Choosing a split : Information Gain --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
When building a decision tree, the way we'll decide what feature to split on at a node will be based on what choice of feature reduces entropy the most. Reducing entropy means reducing impurity, or maximizing purity. In decision tree learning, the reduction of entropy is called information gain. Let's take a look in this video at how to compute information gain, and therefore choose what feature to use to split on at each node in your decision tree. Let's use the example of deciding what feature to use at the root node of the decision tree we're building just now for recognizing cats versus not cats. If we had split using the ear shape feature at the root node, this is what we would have gotten: five examples on the left and five on the right. On the left, we would have four out of five cats, so P1 would be equal to four-fifths, or 0.8. And on the right, one out of five are cats, so P1 is equal to one-fifth, or 0.2. If you apply the entropy formula from the last video to this left subset of data and this right subset of data, we find that the degree of impurity on the left is the entropy of 0.8, which is about 0.72, and on the right, the entropy of 0.2 turns out also to be 0.72. So this would be the entropy at the left and right sub-branches if we were to split on the ear shape feature. One other option would be to split on the face shape feature. If we'd done so, then four of the seven examples on the left would be cats, so P1 is four-sevenths, and on the right, one-third are cats, so P1 on the right is one-third. The entropy of four-sevenths and the entropy of one-third are 0.99 and 0.92. So the degree of impurity in the left and right nodes seems much higher, 0.99 and 0.92, compared to 0.72 and 0.72. Finally, the third possible choice of feature to use at the root node would be the whiskers feature, in which case you split based on whether whiskers are present or absent.
In this case, P1 on the left is three-quarters, P1 on the right is two-sixths, and the entropy values are as follows. So the key question we need to answer is: given these three options of a feature to use at the root node, which one do we think works best? It turns out that rather than looking at these entropy numbers and comparing them directly, it would be useful to take a weighted average of them. And here's what I mean. If there's a node with a lot of examples in it with high entropy, that seems worse than if there was a node with just a few examples in it with high entropy. Because entropy, as a measure of impurity, is worse if you have a very large and impure data set, compared to just a few examples in a branch of the tree that is very impure. So the key decision is: of these three possible choices of features to use at the root node, which one do we want to use? Associated with each of these splits are two numbers: the entropy on the left sub-branch and the entropy on the right sub-branch. In order to pick from these, we'd like to combine these two numbers into a single number, so we can just pick, of these three choices, which one does best. And the way we're going to combine these two numbers is by taking a weighted average, because how important it is to have low entropy in, say, the left or right sub-branch also depends on how many examples went into the left or right sub-branch. If there are a lot of examples in, say, the left sub-branch, then it seems more important to make sure that the left sub-branch's entropy value is low. So in this example, five of the 10 examples went to the left sub-branch, so we can compute the weighted average as 5/10 times the entropy of 0.8, plus, since five of the 10 examples also went to the right sub-branch, 5/10 times the entropy of 0.2. Now for this example in the middle, the left sub-branch had received seven out of 10 examples.
And so we're going to compute 7/10 times the entropy of 0.57, plus, since the right sub-branch had three out of 10 examples, 3/10 times the entropy of 0.33, or one-third. And finally, on the right, we'll compute 4/10 times the entropy of 0.75 plus 6/10 times the entropy of 0.33. And so the way we will choose to split is by computing these three numbers and picking whichever one is lowest, because that gives us the left and right sub-branches with the lowest average weighted entropy. In the way that decision trees are built, we're actually going to make one more change to these formulas, to stick to the convention in decision tree building, but it won't actually change the outcome. Rather than computing this weighted average entropy, we're going to compute the reduction in entropy compared to if we hadn't split at all. So if we go to the root node, remember that at the root node we had started off with all 10 examples, with five cats and five dogs. And so at the root node, we had P1 equals 5/10, or 0.5, and the entropy of the root node, the entropy of 0.5, was actually equal to one. This was maximum impurity, because it was five cats and five dogs. So the formula that we're actually going to use for choosing a split is not this weighted entropy at the left and right sub-branches. Instead, it's going to be the entropy at the root node, which is the entropy of 0.5, minus this formula. And in this example, if you work out the math, it turns out to be 0.28. For the face shape example, we can compute the entropy of the root node, the entropy of 0.5, minus this, which turns out to be 0.03. And for whiskers, compute that, which turns out to be 0.12. These numbers that we just calculated, 0.28, 0.03, and 0.12, are called the information gain. And what it measures is the reduction in entropy that you get in your tree resulting from making a split.
Because the entropy was originally one at the root node, and by making the split, you end up with a lower value of entropy. The difference between those two values is the reduction in entropy, and that's 0.28 in the case of splitting on the ear shape. So why do we bother to compute the reduction in entropy, rather than just the entropy at the left and right sub-branches? It turns out that one of the stopping criteria for deciding when to not bother to split any further is if the reduction in entropy is too small. In that case, you could decide you're just increasing the size of the tree unnecessarily and risking overfitting by splitting, and just decide not to bother if the reduction in entropy is below a threshold. In this particular example, splitting on ear shape results in the biggest reduction in entropy: 0.28 is bigger than 0.03 or 0.12, and so we would choose to split on the ear shape feature at the root node. On the next slide, let's give a more formal definition of information gain. And by the way, one additional piece of notation that we'll introduce on the next slide is these numbers, 5/10 and 5/10: I'm going to call this w left, because that's the fraction of examples that went to the left branch, and I'm going to call this w right, because that's the fraction of examples that went to the right branch, whereas for this middle example, w left would be 7/10 and w right would be 3/10. So let's now write down the general formula for how to compute information gain. Using the example of splitting on the ear shape feature, let me define p1 left to be equal to the fraction of examples in the left subtree that have a positive label, that is, that are cats. So in this example, p1 left would be equal to four-fifths. And also let me define w left to be the fraction of examples, out of all the examples at the root node, that went to the left sub-branch. So in this example, w left would be 5/10.
Similarly, let's define p1 right to be the fraction of all the examples in the right branch that are positive examples; with one out of five of these examples being cats, that would be one-fifth. And similarly, w right is 5/10, the fraction of examples that went to the right sub-branch. And let's also define p1 root to be the fraction of examples that are positive in the root node. So in this case, this would be 5/10, or 0.5. Information gain is then defined as the entropy of p1 root, so the entropy at the root node, minus that weighted entropy calculation that we had on the previous slide: minus w left, which would be 5/10 in the example, times the entropy applied to p1 left, that's the entropy on the left sub-branch, plus w right, the fraction of examples that went to the right branch, times the entropy of p1 right. So with this definition of information gain, you can calculate the information gain associated with choosing any particular feature to split on in a node. And then, out of all the possible features you could choose to split on, you can pick the one that gives you the highest information gain, and that will hopefully result in choosing a feature to split on that increases the purity of your subsets of data in both the left and right sub-branches of your decision tree. Now that you know how to calculate information gain, or the reduction in entropy, you know how to pick a feature to split on at a node. Let's put all the things we've talked about together into the overall algorithm for building a decision tree given a training set. Let's go see that in the next video.
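Putting the general formula together, here is a minimal sketch in Python (not from the video; the function names are my own) that reproduces the three information gain numbers computed above for the ear shape, face shape, and whiskers splits:

```python
import math

def entropy(p1):
    """Binary entropy in bits, with the 0 * log2(0) = 0 convention."""
    if p1 == 0 or p1 == 1:
        return 0.0
    return -p1 * math.log2(p1) - (1 - p1) * math.log2(1 - p1)

def information_gain(p1_root, w_left, p1_left, p1_right):
    """H(p1_root) minus the weighted average entropy of the two sub-branches."""
    w_right = 1 - w_left
    return entropy(p1_root) - (w_left * entropy(p1_left) + w_right * entropy(p1_right))

# Root node: 5 cats out of 10 examples, so p1_root = 0.5 and H(0.5) = 1.
# Ear shape:  5 examples go left (4 cats), 5 go right (1 cat)
print(round(information_gain(0.5, 5 / 10, 4 / 5, 1 / 5), 2))  # -> 0.28
# Face shape: 7 examples go left (4 cats), 3 go right (1 cat)
print(round(information_gain(0.5, 7 / 10, 4 / 7, 1 / 3), 2))  # -> 0.03
# Whiskers:   4 examples go left (3 cats), 6 go right (2 cats)
print(round(information_gain(0.5, 4 / 10, 3 / 4, 2 / 6), 2))  # -> 0.12
```

The largest gain, 0.28, comes from the ear shape feature, matching the choice made at the root node in the video.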
Information gain is then defined as the entropy of p1 root, so what's the entropy at the root"}, {"start": 626.48, "end": 633.36, "text": " node, minus that weighted entropy calculation that we had on the previous slide, minus w"}, {"start": 633.36, "end": 639.12, "text": " left, this would be 5 tens in the example, times the entropy applied to p1 left, that's"}, {"start": 639.12, "end": 645.6800000000001, "text": " entropy on the left sub branch, plus w right, the fraction of examples that went to the"}, {"start": 645.6800000000001, "end": 654.6800000000001, "text": " right branch, times entropy of p1 right, and so with this definition of entropy you can"}, {"start": 654.6800000000001, "end": 661.2, "text": " calculate the information gain associated with choosing any particular feature to split"}, {"start": 661.2, "end": 666.32, "text": " on in the node, and then out of all the possible features you could choose to split on, you"}, {"start": 666.32, "end": 672.88, "text": " can then pick the one that gives you the highest information gain, and that will result in"}, {"start": 672.88, "end": 678.96, "text": " hopefully increasing the purity of your subsets of data that you get on the left and right"}, {"start": 678.96, "end": 684.0, "text": " sub branches of your decision tree, and that will result in choosing a feature to split"}, {"start": 684.0, "end": 691.56, "text": " on that increases the purity of your subsets of data in both the left and right sub branches"}, {"start": 691.56, "end": 693.76, "text": " of your decision tree."}, {"start": 693.76, "end": 698.6, "text": " Now that you know how to calculate information gain or reduction in entropy, you know how"}, {"start": 698.6, "end": 701.84, "text": " to pick a feature to split on at a node."}, {"start": 701.84, "end": 707.24, "text": " Let's put all the things we've talked about together into the overall algorithm for building"}, {"start": 707.24, "end": 709.84, "text": " a decision tree given a training 
set."}, {"start": 709.84, "end": 714.6, "text": " Let's go see that in the next video."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=8Gc2AfeP5yE
7.5 Decision Tree Learning | Putting it together --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
The information gain criterion lets you decide how to choose one feature to split on at one node. Let's take that and use it in multiple places throughout a decision tree in order to figure out how to build a large decision tree with multiple nodes. Here's the overall process of building a decision tree. Start with all training examples at the root node of the tree, calculate the information gain for all possible features, and pick the feature to split on that gives the highest information gain. Having chosen this feature, you then split the dataset into two subsets according to the selected feature, create left and right branches of the tree, and send each training example to either the left or the right branch depending on the value of that feature for that example. This allows you to have made a split at the root node. After that, you keep on repeating the splitting process on the left branch of the tree, on the right branch of the tree, and so on, until a stopping criterion is met. The stopping criterion can be: when a node is 100% a single class, so it has reached entropy of zero; when further splitting a node would cause the tree to exceed the maximum depth that you have set; when the information gain from an additional split is less than a threshold; or when the number of examples in a node is below a threshold. So you keep on repeating the splitting process until the stopping criteria you've chosen, which could be one or more of these, are met. Let's look at an illustration of how this process works. We start with all of the examples at the root node and, based on computing information gain for all three features, decide that ear shape is the best feature to split on. Based on that, we create left and right sub-branches and send the subsets of the data with pointy versus floppy ears to the left and right sub-branches.
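The process described above can be sketched as a small recursive routine. This is a sketch under simplifying assumptions, not the course's code: binary 0/1 features, labels 1 for cat and 0 for not cat, and helper names of my own choosing.

```python
from math import log2

def entropy(p1):
    """Entropy of a node whose fraction of positive (cat) examples is p1."""
    if p1 == 0 or p1 == 1:
        return 0.0
    return -p1 * log2(p1) - (1 - p1) * log2(1 - p1)

def best_split(examples):
    """Return (feature index, information gain) of the best split."""
    n = len(examples)
    p1_root = sum(y for _, y in examples) / n
    best_feature, best_gain = None, 0.0
    for f in range(len(examples[0][0])):
        left = [y for x, y in examples if x[f] == 1]
        right = [y for x, y in examples if x[f] == 0]
        if not left or not right:
            continue  # split sends everything one way: useless
        gain = entropy(p1_root) - (
            len(left) / n * entropy(sum(left) / len(left))
            + len(right) / n * entropy(sum(right) / len(right)))
        if gain > best_gain:
            best_feature, best_gain = f, gain
    return best_feature, best_gain

def build_tree(examples, depth=0, max_depth=5, min_gain=1e-6, min_examples=1):
    p1 = sum(y for _, y in examples) / len(examples)
    feature, gain = best_split(examples)
    # Stopping criteria from the lecture: pure node, maximum depth reached,
    # information gain below a threshold, or too few examples in the node.
    if (p1 in (0.0, 1.0) or depth >= max_depth
            or feature is None or gain < min_gain
            or len(examples) <= min_examples):
        return {"leaf": True, "prediction": p1 >= 0.5}
    left = [(x, y) for x, y in examples if x[feature] == 1]
    right = [(x, y) for x, y in examples if x[feature] == 0]
    return {"leaf": False, "feature": feature,
            "left": build_tree(left, depth + 1, max_depth, min_gain, min_examples),
            "right": build_tree(right, depth + 1, max_depth, min_gain, min_examples)}
```

Each recursive call sees only the subset of examples routed to its branch, which is exactly the "train a smaller decision tree on just these five examples" picture used in the illustration.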
So let me cover up the root node and the right sub-branch and just focus on the left sub-branch, where we have these five examples. Let's say our splitting criterion is to keep splitting until everything in the node belongs to a single class, so either all cats or all dogs. We look at this node and see if it meets the stopping criterion, and it does not, because there is a mix of cats and dogs here. So the next step is to pick a feature to split on. We go through the features one at a time and compute the information gain of each of those features, as if this node were the new root node of a decision tree trained using just the five training examples shown here. So we would compute the information gain for splitting on the whiskers feature and the information gain for splitting on the face shape feature, and it turns out that the information gain for splitting on ear shape will be zero, because all of these examples have the same pointy ear shape. Between whiskers and face shape, face shape turns out to have the highest information gain, so we split on face shape, and that allows us to build left and right sub-branches as follows. Then for the left sub-branch, we check the stopping criterion, find that it is all cats, and so we stop splitting and create a leaf node that makes a prediction of cat. For the right sub-branch, we find that it is all dogs, so we also stop splitting, since the stopping criterion is met, and put a leaf node there that predicts not cat. So having built out this left subtree, we can now turn our attention to building the right subtree.
Let me now again cover up the root node and the entire left subtree. To build out the right subtree, we have these five examples here, and again, the first thing we do is check if the criterion to stop splitting has been met, the criterion being whether or not all the examples are a single class. We've not met it, so we decide to keep splitting in this right sub-branch as well. In fact, the procedure for building the right sub-branch is a lot like training a decision tree learning algorithm from scratch, where the dataset you have comprises just these five training examples. So again, computing the information gain for all of the possible features to split on, you find that the whiskers feature gives the highest information gain. We split these five examples according to whether whiskers are present or absent, check that the criteria to stop splitting are met in the left and right sub-branches here, decide that they are, and end up with leaf nodes that predict cat and not cat. So this is the overall process for building the decision tree. Notice that there's an interesting aspect of what we've done: after we decided what to split on at the root node, the way we built the left subtree was by building a decision tree on a subset of five examples, and the way we built the right subtree was by again building a decision tree on a subset of five examples. In computer science, this is an example of a recursive algorithm, and all that means is that the way you build a decision tree at the root is by building other, smaller decision trees in the left and right sub-branches. So recursion in computer science refers to writing code that calls itself, and the way this comes up in building a decision tree is that you build the overall decision tree by building smaller sub-decision trees and then putting them all together.
So that's why, if you look at software implementations of decision trees, you sometimes see references to a recursive algorithm. But if you don't feel like you fully understood this concept of recursive algorithms, don't worry about it. You'll still be able to fully complete this week's assignments, as well as use libraries to get decision trees to work for yourself. But if you're implementing a decision tree algorithm from scratch, then a recursive algorithm turns out to be one of the steps you'd have to implement. And by the way, you may be wondering how to choose the maximum depth parameter. There are many different possible choices, and some of the open-source libraries have good default choices that you can use. One intuition is that the larger the maximum depth, the bigger the decision tree you're willing to build. This is a bit like fitting a higher-degree polynomial or training a larger neural network: it lets the decision tree learn a more complex model, but it also increases the risk of overfitting if it fits a very complex function to your data. In theory, you could use cross-validation to pick parameters like the maximum depth, where you try out different values of the maximum depth and pick what works best on the cross-validation set, though in practice the open-source libraries have even somewhat better ways to choose this parameter for you. Another criterion you can use to decide when to stop splitting is if the information gain from an additional split is less than a certain threshold. So if any feature you split on achieves only a small reduction in entropy, or a very small information gain, then you might decide not to bother. And finally, you can also decide to stop splitting when the number of examples in the node is below a certain threshold. So that's the process for building a decision tree.
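The cross-validation idea for choosing maximum depth can be sketched as follows. The `train_tree` and `accuracy` arguments are hypothetical stand-ins for whatever training and evaluation routines you're using; only the selection loop is the point here.

```python
def pick_max_depth(train_set, cv_set, candidate_depths, train_tree, accuracy):
    """Try each candidate depth; keep the one scoring best on the CV set."""
    best_depth, best_score = None, float("-inf")
    for depth in candidate_depths:
        model = train_tree(train_set, max_depth=depth)
        score = accuracy(model, cv_set)
        if score > best_score:
            best_depth, best_score = depth, score
    return best_depth
```

The same loop works for any of the other stopping-criterion thresholds (minimum information gain, minimum examples per node) by swapping the parameter being varied.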
Now that you've learned a decision tree, if you want to make a prediction, you follow the procedure you saw in the very first video of this week: take a new example, say a test example, start at the root, and keep following the decisions down until you get to a leaf node, which then makes the prediction. Now that you know the basic decision tree learning algorithm, in the next few videos I'd like to go into some further refinements of this algorithm. So far, we've only used features that take on two possible values, but sometimes you have a feature that takes on categorical or discrete values, maybe more than two. Let's take a look in the next video at how to handle that case.
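The prediction procedure is just a walk from the root to a leaf. This is a sketch: the dictionary layout for tree nodes is my own assumption (internal nodes store a feature index, leaves store the class), not a format from the course.

```python
def predict(node, x):
    """Follow decisions from the root until a leaf makes the prediction."""
    while not node["leaf"]:
        # Go left when the example has the feature (value 1), else right.
        node = node["left"] if x[node["feature"]] == 1 else node["right"]
    return node["prediction"]

# Tiny hand-built tree: split on feature 0 (say, whether ears are pointy).
tree = {"leaf": False, "feature": 0,
        "left": {"leaf": True, "prediction": "cat"},
        "right": {"leaf": True, "prediction": "not cat"}}
print(predict(tree, (1, 0)))  # cat
print(predict(tree, (0, 1)))  # not cat
```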
[{"start": 0.0, "end": 6.88, "text": " The information gain criteria lets you decide how to choose one feature to split at one"}, {"start": 6.88, "end": 7.88, "text": " node."}, {"start": 7.88, "end": 12.72, "text": " Let's take that and use that in multiple places through a decision tree in order to figure"}, {"start": 12.72, "end": 16.44, "text": " out how to build a large decision tree with multiple nodes."}, {"start": 16.44, "end": 20.36, "text": " Here's the overall process of building a decision tree."}, {"start": 20.36, "end": 26.92, "text": " Start with all training examples at the root node of the tree and calculate the information"}, {"start": 26.92, "end": 32.160000000000004, "text": " gain for all possible features and pick the feature to split on that gives the highest"}, {"start": 32.160000000000004, "end": 33.84, "text": " information gain."}, {"start": 33.84, "end": 39.160000000000004, "text": " Having chosen this feature, you would then split the data set into two subsets according"}, {"start": 39.160000000000004, "end": 46.36, "text": " to the selected feature and create left and right branches of the tree and send the training"}, {"start": 46.36, "end": 51.64, "text": " examples to either the left or the right branch depending on the value of that feature for"}, {"start": 51.64, "end": 53.68000000000001, "text": " that example."}, {"start": 53.68, "end": 58.84, "text": " And this allows you to have made a split at the root node."}, {"start": 58.84, "end": 64.2, "text": " After that, you would then keep on repeating the splitting process on the left branch of"}, {"start": 64.2, "end": 70.28, "text": " the tree, on the right branch of the tree and so on and keep on doing that until a stopping"}, {"start": 70.28, "end": 72.12, "text": " criteria is met."}, {"start": 72.12, "end": 79.56, "text": " Where the stopping criteria can be when a node is 100% a single clause, so one has reached"}, {"start": 79.56, "end": 85.48, "text": " entropy of zero or 
when further splitting a node will cause the tree to exceed the maximum"}, {"start": 85.48, "end": 91.36, "text": " depth that you have set or if the information gain from an additional split is less than"}, {"start": 91.36, "end": 98.96000000000001, "text": " a threshold or if the number of examples in a node is below a threshold."}, {"start": 98.96000000000001, "end": 105.92, "text": " And so you would keep on repeating the splitting process until the stopping criteria that you've"}, {"start": 105.92, "end": 110.44, "text": " chosen which could be one or more of these criteria is met."}, {"start": 110.44, "end": 114.92, "text": " Let's look at an illustration of how this process will work."}, {"start": 114.92, "end": 120.76, "text": " We started all of the examples at the root nodes and based on computing information gain"}, {"start": 120.76, "end": 127.72, "text": " for all three features, decide that ear shape is the best feature to split on."}, {"start": 127.72, "end": 133.2, "text": " Based on that, we create left and right sub branches and send the subset of the data with"}, {"start": 133.2, "end": 138.2, "text": " pointy versus floppy ears to the left and right sub branches."}, {"start": 138.2, "end": 143.2, "text": " So let me cover up the root node and the right sub branch and just focus on the left sub"}, {"start": 143.2, "end": 146.83999999999997, "text": " branch where we have these five examples."}, {"start": 146.83999999999997, "end": 151.79999999999998, "text": " Let's see how splitting criteria is to keep splitting until everything in the node belongs"}, {"start": 151.79999999999998, "end": 155.54, "text": " to a single clause, so either all cats or all dogs."}, {"start": 155.54, "end": 160.06, "text": " We will look at this node and see if it meets the splitting criteria and it does not because"}, {"start": 160.06, "end": 163.28, "text": " there is a mix of cats and dogs here."}, {"start": 163.28, "end": 167.6, "text": " So the next step is to 
then pick a feature to split on."}, {"start": 167.6, "end": 172.4, "text": " So we then go through the features one at a time and compute the information gain of"}, {"start": 172.4, "end": 180.08, "text": " each of those features as if this node were the new root node of a decision tree that"}, {"start": 180.08, "end": 186.54, "text": " was trained using just five training examples shown here."}, {"start": 186.54, "end": 191.35999999999999, "text": " So we would compute the information gain for splitting on the whiskers feature, the information"}, {"start": 191.35999999999999, "end": 196.56, "text": " gain on splitting on the face shape feature, and it turns out that the information gain"}, {"start": 196.56, "end": 202.66, "text": " for splitting on ear shape will be zero because all of these have the same pointy ear shape."}, {"start": 202.66, "end": 209.32, "text": " And between whiskers and face shape, face shape turns out to have a highest information"}, {"start": 209.32, "end": 214.76, "text": " gain, so we're going to split on face shape and that allows us to build left and right"}, {"start": 214.76, "end": 217.35999999999999, "text": " sub-branches as follows."}, {"start": 217.35999999999999, "end": 222.62, "text": " And then for the left sub-branch here, we check for the stopping criteria and we find"}, {"start": 222.62, "end": 226.92, "text": " that it is all cats, so we stop splitting and build a prediction node that predicts"}, {"start": 226.92, "end": 232.56, "text": " cat and for the right sub-branch, we again check the stopping criteria and we find that"}, {"start": 232.56, "end": 237.44, "text": " it is all dogs and so we stop splitting and have a prediction node."}, {"start": 237.44, "end": 241.84, "text": " So for the left sub-branch, we check for the criteria for whether or not we should stop"}, {"start": 241.84, "end": 248.0, "text": " splitting and we have all cats here and so the stopping criteria is met and we create"}, {"start": 248.0, "end": 
253.26, "text": " a leaf node that makes a prediction of cat and for the right sub-branch, we find that"}, {"start": 253.26, "end": 259.76, "text": " it is all dogs and so we will also stop splitting since we've met the splitting criteria and"}, {"start": 259.76, "end": 264.28000000000003, "text": " put a leaf node there that predicts not cat."}, {"start": 264.28000000000003, "end": 269.76, "text": " So having built out this left subtree, we can now turn our attention to building the"}, {"start": 269.76, "end": 270.76, "text": " right subtree."}, {"start": 270.76, "end": 276.59999999999997, "text": " Let me now again cover up the root node and the entire left subtree."}, {"start": 276.59999999999997, "end": 281.2, "text": " To build out the right subtree, we have these five examples here and again, the first thing"}, {"start": 281.2, "end": 286.4, "text": " we do is check if the criteria to stop splitting has been met."}, {"start": 286.4, "end": 291.56, "text": " The criteria being whether or not all the examples are single class, we've not met the"}, {"start": 291.56, "end": 298.2, "text": " criteria and so we'll decide to keep splitting in this right sub-branch as well."}, {"start": 298.2, "end": 302.59999999999997, "text": " And in fact, the procedure for building the right sub-branch will be a lot like as if"}, {"start": 302.59999999999997, "end": 307.68, "text": " you were training a decision tree learning algorithm from scratch where the data set"}, {"start": 307.68, "end": 313.08, "text": " you have comprises just these five training examples."}, {"start": 313.08, "end": 319.15999999999997, "text": " And so again, computing information gain for all of the possible features to split on,"}, {"start": 319.15999999999997, "end": 324.08, "text": " you find that the whiskers feature gives the highest information gain."}, {"start": 324.08, "end": 328.56, "text": " So these are the set of five examples according to whether whiskers are present or absent,"}, 
{"start": 328.56, "end": 334.71999999999997, "text": " check if the criteria to stop splitting are met in the left and right sub-branches here"}, {"start": 334.71999999999997, "end": 340.36, "text": " and decide that they are and so you end up with leaf nodes that predict cat and not cat."}, {"start": 340.36, "end": 346.64, "text": " And so this is the overall process for building the decision tree."}, {"start": 346.64, "end": 352.44, "text": " Notice that there's an interesting aspect of what we've done, which is after we decided"}, {"start": 352.44, "end": 360.2, "text": " what to split on at the root node, the way we built the left sub-tree was by building"}, {"start": 360.2, "end": 367.08, "text": " a decision tree on a subset of five examples and the way we built the right sub-tree was"}, {"start": 367.08, "end": 372.36, "text": " by again building a decision tree on a subset of five examples."}, {"start": 372.36, "end": 379.48, "text": " In computer science, this is an example of a recursive algorithm and all that means is"}, {"start": 379.48, "end": 385.0, "text": " the way you build a decision tree at the root is by building other smaller decision trees"}, {"start": 385.0, "end": 389.48, "text": " in the left and the right sub-branches."}, {"start": 389.48, "end": 395.6, "text": " So recursion in computer science refers to writing code that calls itself and the way"}, {"start": 395.6, "end": 401.62, "text": " this comes up in building a decision tree is you build the overall decision tree by"}, {"start": 401.62, "end": 407.0, "text": " building smaller sub-decision trees and then putting them all together."}, {"start": 407.0, "end": 412.52, "text": " So that's why if you look at software implementations of decision trees, you see sometimes references"}, {"start": 412.52, "end": 415.68, "text": " to a recursive algorithm."}, {"start": 415.68, "end": 420.16, "text": " But if you don't feel like you fully understood this concept of recursive algorithms, 
don't"}, {"start": 420.16, "end": 421.16, "text": " worry about it."}, {"start": 421.16, "end": 426.7, "text": " You'll still be able to fully complete this week's assignments as well as use libraries"}, {"start": 426.7, "end": 429.72, "text": " to get decision trees to work for yourself."}, {"start": 429.72, "end": 435.84, "text": " But if you're implementing a decision tree algorithm from scratch, then a recursive algorithm"}, {"start": 435.84, "end": 440.23999999999995, "text": " turns out to be one of the steps you'd have to implement."}, {"start": 440.23999999999995, "end": 444.91999999999996, "text": " And by the way, you may be wondering how to choose the maximum depth parameter."}, {"start": 444.91999999999996, "end": 449.44, "text": " There are many different possible choices, but some of the open source libraries will"}, {"start": 449.44, "end": 453.62, "text": " have good default choices that you can use."}, {"start": 453.62, "end": 459.23999999999995, "text": " One intuition is the larger the maximum depth, the bigger the decision tree you're willing"}, {"start": 459.23999999999995, "end": 460.52, "text": " to build."}, {"start": 460.52, "end": 466.2, "text": " And this is a bit like fitting a higher degree polynomial or training a larger neural network."}, {"start": 466.2, "end": 472.28, "text": " It lets the decision tree learn a more complex model, but it also increases the risk of overfitting"}, {"start": 472.28, "end": 476.08, "text": " if it's fitting a very complex function to your data."}, {"start": 476.08, "end": 481.59999999999997, "text": " In theory, you could use cross-validation to pick parameters like the maximum depth where"}, {"start": 481.59999999999997, "end": 486.56, "text": " you try out different values to the maximum depth and pick what works best on the cross-validation"}, {"start": 486.56, "end": 487.56, "text": " set."}, {"start": 487.56, "end": 492.08, "text": " But in practice, the open source libraries have even somewhat 
better ways to choose this"}, {"start": 492.08, "end": 493.6, "text": " parameter for you."}, {"start": 493.6, "end": 499.52, "text": " Or another criteria that you can use to decide when to stop splitting is if the information"}, {"start": 499.52, "end": 504.64, "text": " gain from an additional split is less than a certain threshold."}, {"start": 504.64, "end": 510.4, "text": " So if any feature you split on achieves only a small reduction in entropy or a very small"}, {"start": 510.4, "end": 514.78, "text": " information gain, then you might also decide to not bother."}, {"start": 514.78, "end": 519.8199999999999, "text": " And finally, you can also decide to stop splitting when the number of examples in the node is"}, {"start": 519.8199999999999, "end": 521.92, "text": " below a certain threshold."}, {"start": 521.92, "end": 525.36, "text": " So that's the process for building a decision tree."}, {"start": 525.36, "end": 529.36, "text": " Now that you've learned the decision tree, if you want to make a prediction, you can"}, {"start": 529.36, "end": 533.56, "text": " then follow the procedure that you saw in the very first video of this week where you"}, {"start": 533.56, "end": 539.3399999999999, "text": " take a new example, say a test example, and start at the root and keep on following the"}, {"start": 539.3399999999999, "end": 544.68, "text": " decisions down until you get to a leave node, which then makes the prediction."}, {"start": 544.68, "end": 548.92, "text": " Now that you know the basic decision tree learning algorithm, in the next few videos,"}, {"start": 548.92, "end": 552.9599999999999, "text": " I'd like to go into some further refinements of this algorithm."}, {"start": 552.9599999999999, "end": 558.38, "text": " So far, we've only used features that take on two possible values, but sometimes you"}, {"start": 558.38, "end": 563.8399999999999, "text": " have a feature that takes on categorical or discrete values, but maybe more than two 
values."}, {"start": 563.84, "end": 575.36, "text": " Let's take a look in the next video at how to handle that case."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=y_VfAJG6H-Q
7.6 Decision Tree Learning | Using one-hot encoding of categorical features --[ML | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
In the examples we've seen so far, each of the features could take on only one of two possible values. The ear shape was either pointy or floppy, the face shape was either round or not round, and whiskers were either present or absent. But what if you have features that can take on more than two discrete values? In this video, we'll look at how you can use one-hot encoding to address features like that. Here's a new training set for our Pet Adoption Center application, where all the data is the same except for the ear shape feature. Rather than the ear shape only being pointy or floppy, it can now also take on an oval shape. So the ear shape feature is still a categorical-valued feature, but it can take on three possible values instead of just two. This means that when you split on this feature, you end up creating three subsets of the data and building three sub-branches for this tree. But in this video, I'd like to describe a different way of addressing features that can take on more than two values, which is to use a one-hot encoding. In particular, rather than using an ear shape feature that can take on any of three possible values, we're instead going to create three new features, where one feature is: does this animal have pointy ears? A second is: does it have floppy ears? And the third is: does it have oval ears? So for the first example, whereas we previously had the ear shape as pointy, we will now instead say that this animal has a value of one for the pointy-ears feature and zero for floppy and oval. For the second example, whereas we previously said it had oval ears, we'll now say that it has a value of zero for pointy ears, because it doesn't have pointy ears; it also doesn't have floppy ears, but it does have oval ears, which is why that value is one; and so on for the rest of the examples in the dataset.
So instead of one feature taking on three possible values, we've now constructed three new features, each of which can take on only one of two possible values, either zero or one. In a little more detail, if a categorical feature can take on k possible values (k was three in our example), then we replace it by creating k binary features that can only take on the values zero or one. Notice that among these three features, if you look at any row here, exactly one of the values is equal to one. That's what gives this method of feature construction the name one-hot encoding: because one of these features will always take on the value one, that's the "hot" feature, and hence the name. With this choice of features, we're now back to the original setting where each feature takes on only one of two possible values, and so the decision tree learning algorithm that we've seen previously applies to this data with no further modification. Just as an aside, even though this week's material has been focused on training decision tree models, the idea of using one-hot encodings to encode categorical features also works for training neural networks. In particular, if you were to take the face shape feature and replace round and not round with one and zero, where round gets mapped to one and not round gets mapped to zero, and similarly for whiskers replace present with one and absent with zero, then notice that we have taken all the categorical features we had (three possible values for ear shape, two for face shape, and two for whiskers) and encoded them as a list of five features: three from the one-hot encoding of ear shape, one from face shape, and one from whiskers. Now this list of five features can also be fed to a neural network or to logistic regression to try to train a cat classifier.
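The k-binary-features construction above is a one-liner in practice. This is a sketch with my own helper name; the feature values follow the lecture's cat example (pointy ears, round face, whiskers present).

```python
def one_hot(value, categories):
    """Replace one k-valued categorical feature with k binary features."""
    return [1 if value == c else 0 for c in categories]

ear_shapes = ["pointy", "floppy", "oval"]
print(one_hot("oval", ear_shapes))  # [0, 0, 1]: exactly one feature is "hot"

# Full five-feature encoding of one animal: three one-hot ear-shape
# features, plus face shape (round = 1) and whiskers (present = 1).
row = one_hot("pointy", ear_shapes) + [1, 1]
print(row)  # [1, 0, 0, 1, 1]
```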
So one-hot encoding is a technique that works not just for decision tree learning; it also lets you encode categorical features using ones and zeros so that they can be fed as inputs to a neural network, which expects numbers as inputs. So that's it: with one-hot encoding, you can get your decision tree to work on features that can take on more than two discrete values, and you can also apply it to neural network, linear regression, or logistic regression training. But how about features that are numbers that can take on any value, not just a small number of discrete values? In the next video, let's look at how you can get the decision tree to handle continuous-valued features that can be any number.
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=oK5pBru6Tfo
7.7 Decision Tree Learning | Continuous valued features --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms.
Let's look at how you can modify a decision tree to work with features that aren't just discrete-valued but continuous-valued, that is, features that can be any number. Let's start with an example. I have modified the cat adoption center dataset to add one more feature, which is the weight of the animal in pounds. On average, cats are a little bit lighter than dogs, although there are some cats that are heavier than some dogs. So the weight of an animal is a useful feature for deciding if it is a cat or not. How do you get a decision tree to use a feature like this? The decision tree learning algorithm will proceed similarly as before, except that rather than considering splitting just on ear shape, face shape, and whiskers, you have to consider splitting on ear shape, face shape, whiskers, or weight. If splitting on the weight feature gives better information gain than the other options, then you will split on the weight feature. But how do you decide how to split on the weight feature? Let's take a look. Here's a plot of the data at the root node. I've plotted the weight of the animal on the horizontal axis, and the vertical axis is cat on top and not cat below, so the vertical axis indicates the label y being 1 or 0. The way we will split on the weight feature is to split the data based on whether or not the weight is less than or equal to some value, let's say 8 or some other number; choosing that value will be the job of the learning algorithm. What we should do when considering splitting on the weight feature is to consider many different values of this threshold and then pick the one that is the best. By the best, I mean the one that results in the best information gain.
So in particular, if you were considering splitting the examples based on whether the weight is less than or equal to 8, then you would be splitting this dataset into two subsets, where the subset on the left has two cats and the subset on the right has three cats and five dogs. So if you were to carry out our usual information gain calculation, you'd be computing the entropy at the root node, H(0.5), minus two-tenths times the entropy of the left split, which has two out of two cats, so H(2/2), minus eight-tenths (the right split has eight out of 10 examples) times the entropy of the eight examples on the right, of which three are cats, so H(3/8). This turns out to be 0.24. So this would be the information gain if you were to split on whether the weight is less than or equal to 8. But we should try other values as well. So what if you were to split on whether or not the weight is less than or equal to 9? That corresponds to this new line over here, and the information gain calculation becomes H(0.5) minus the following: we now have four examples in the left split, all cats, so that's four-tenths times H(4/4), plus six examples on the right, of which one is a cat, so that's six-tenths times H(1/6). This turns out to equal 0.61. So the information gain here looks much better: it's 0.61, which is much higher than 0.24. Or we could try another value, say 13, and the calculation turns out to look like this, which gives 0.40. In the more general case, we'll actually try not just three values, but multiple values along the x-axis. One convention would be to sort all of the examples according to the weight, or according to the value of this feature, and take all the midpoints between consecutive examples in the sorted list as the candidate values for this threshold.
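To make the arithmetic concrete, here is a small Python sketch. The entropy and information gain formulas are the ones from earlier this week; the ten weights themselves are made up, chosen only so that the splits at thresholds 8, 9, and 13 produce the same cat/dog counts as in the lecture:

```python
from math import log2

def entropy(p):
    """Binary entropy H(p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def information_gain(weights, labels, threshold):
    """Gain from splitting on (weight <= threshold); labels are 1 = cat, 0 = dog."""
    left = [y for w, y in zip(weights, labels) if w <= threshold]
    right = [y for w, y in zip(weights, labels) if w > threshold]
    n = len(labels)
    return (entropy(sum(labels) / n)
            - len(left) / n * entropy(sum(left) / len(left))
            - len(right) / n * entropy(sum(right) / len(right)))

# Hypothetical weights: five cats followed by five dogs.
weights = [6.5, 7.0, 8.2, 8.6, 11.0, 9.5, 12.0, 14.0, 16.0, 18.0]
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

print(f"{information_gain(weights, labels, 8):.2f}")   # 0.24
print(f"{information_gain(weights, labels, 9):.2f}")   # 0.61
print(f"{information_gain(weights, labels, 13):.2f}")  # 0.40
```

The three printed gains reproduce the lecture's 0.24, 0.61, and 0.40 for thresholds 8, 9, and 13 respectively.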
This way, if you have 10 training examples, you will test nine different possible values for this threshold and then pick the one that gives you the highest information gain. Finally, if the information gain from splitting on a given value of this threshold is better than the information gain from splitting on any other feature, then you will decide to split that node on that feature. In this example, an information gain of 0.61 turns out to be higher than that of any other feature, and also higher than splitting the weight feature at either of the other two thresholds we tried. So, assuming the algorithm chooses this feature to split on, you would end up splitting the dataset according to whether or not the weight of the animal is less than or equal to nine pounds. You end up with two subsets of the data like this, and you can then recursively build additional decision trees using these two subsets of the data to build out the rest of the tree. So to summarize: to get the decision tree to work on continuous-valued features, at every node, when considering splits, you just consider different values to split on, carry out the usual information gain calculation, and decide to split on that continuous-valued feature if it gives the highest possible information gain. So that's how you get a decision tree to work with continuous-valued features: try different thresholds, do the usual information gain calculation, and split on the continuous-valued feature with the selected threshold if it gives you the best possible information gain of all possible features to split on. And that's it for the required videos on the core decision tree algorithm. After this, there is an optional video that you can watch or not, which generalizes the decision tree learning algorithm to regression trees. So far, we've only talked about using decision trees to make predictions that are classifications, predicting a discrete category such as cat or not cat.
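The midpoint convention is easy to write down as well. This sketch is self-contained (it repeats a compact entropy/gain helper) and again uses made-up weights; `best_threshold` tries the nine midpoints of the ten sorted values and returns the one with the highest information gain:

```python
from math import log2

def entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def gain(weights, labels, t):
    """Information gain from splitting on (weight <= t)."""
    left = [y for w, y in zip(weights, labels) if w <= t]
    right = [y for w, y in zip(weights, labels) if w > t]
    n = len(labels)
    return (entropy(sum(labels) / n)
            - len(left) / n * entropy(sum(left) / len(left))
            - len(right) / n * entropy(sum(right) / len(right)))

def best_threshold(weights, labels):
    """Candidate thresholds are the midpoints between consecutive sorted values,
    so n examples give n - 1 candidates; pick the one with the highest gain."""
    s = sorted(weights)
    midpoints = [(a + b) / 2 for a, b in zip(s, s[1:])]
    return max(midpoints, key=lambda t: gain(weights, labels, t))

# Hypothetical weights: five cats followed by five dogs.
weights = [6.5, 7.0, 8.2, 8.6, 11.0, 9.5, 12.0, 14.0, 16.0, 18.0]
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

t = best_threshold(weights, labels)       # one of the 9 midpoints
print(f"{gain(weights, labels, t):.2f}")  # 0.61
```

Because every midpoint lies strictly between two data values, both sides of each candidate split are guaranteed to be non-empty.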
But what if you have a regression problem where you want to predict a number? In the next video, I'll talk about a generalization of decision trees to handle that.
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=0ha0UEJJydU
7.8 Decision Tree Learning | Regression Trees (optional) --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms.
So far, we've only been talking about decision trees as classification algorithms. In this optional video, we'll generalize decision trees to be regression algorithms so that they can predict a number. Let's take a look. The example I'm going to use for this video will be to use the discrete-valued features that we had previously, that is, these features x, in order to predict the weight of the animal, y. So just to be clear, the weight here, unlike in the previous video, is no longer an input feature. Instead, this is the target output y that we want to predict, rather than trying to predict whether or not an animal is a cat. This is a regression problem because we want to predict a number, y. Let's look at what a regression tree will look like. Here I've already constructed a tree for this regression problem, where the root node splits on ear shape, and then the left and right subtrees both split on face shape. And there's nothing wrong with a decision tree that chooses to split on the same feature in both the left and right sub-branches; it's perfectly fine if the splitting algorithm chooses to do that. If during training you had decided on these splits, then this node down here would have these four animals with weights 7.2, 8.4, 7.6, and 10.2. This node would have this one animal with weight 9.2, and so on for these remaining two nodes. So the last thing we need to fill in for this decision tree is: if there's a test example that comes down to this node, what is the weight that we should predict for an animal with pointy ears and a round face shape? The decision tree is going to make a prediction by taking the average of the weights of the training examples down here, and by averaging these four numbers, it turns out you get 8.35. If on the other hand an animal has pointy ears and a not round face shape, then it will predict 9.2, or 9.20 pounds, because that's the weight of this one animal down here.
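This leaf value is just the mean of the training targets that reach the leaf. As a quick check, using the four weights the lecture lists for the pointy-ear, round-face leaf:

```python
# Weights of the four training animals that land in the pointy-ear, round-face leaf.
leaf_targets = [7.2, 8.4, 7.6, 10.2]

# A regression tree predicts the mean of the training targets in the leaf.
prediction = sum(leaf_targets) / len(leaf_targets)
print(f"{prediction:.2f}")  # 8.35
```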
And similarly, this would be 17.70 and 9.90. So what this model will do is, given a new test example, follow the decision nodes down as usual until it gets to a leaf node, and then predict the value at the leaf node, which I just computed by taking an average of the weights of the animals that during training had gotten down to that same leaf node. So if you were constructing a decision tree from scratch using this dataset in order to predict the weight, the key decision, as you've seen earlier this week, will be: how do you choose which feature to split on? Let me illustrate how to make that decision with an example. At the root node, one thing you could do is split on the ear shape, and if you do that, you end up with left and right branches of the tree, each with five animals, with the following weights. If you were to choose to split on the face shape, you end up with these animals on the left and right with the corresponding weights that are written below. And if you were to choose to split on whiskers being present or absent, you end up with this. So the question is, given these three possible features to split on at the root node, which one should you pick to get the best predictions for the weight of the animal? When building a regression tree, rather than trying to reduce entropy, which was the measure of impurity that we had for a classification problem, we will instead try to reduce the variance of the values y, the weights, in each of these subsets of the data. If you've seen the notion of variance in other contexts, that's great; this is the statistical and mathematical notion of variance that we'll use in a minute. But if you've not seen how to compute the variance of a set of numbers before, don't worry about it. All you need to know for this slide is that variance informally computes how widely a set of numbers varies. So for this set of numbers, 7.2, 9.2, and so on up to 10.2, it turns out the variance is 1.47.
So it doesn't vary that much. Whereas here, 8.8, 15, 11, 18, and 20, these numbers go all the way from 8.8 up to 20, so the variance is much larger; it turns out to have a variance of 21.87. And so the way we'll evaluate the quality of this split is we'll compute, same as before, w left and w right as the fractions of examples that went to the left and right branches. The average variance after the split is going to be five-tenths, which is w left, times 1.47, which is the variance on the left, plus five-tenths times the variance on the right, which is 21.87. So this weighted average variance plays a very similar role to the weighted average entropy that we used when deciding what split to use for a classification problem. We can then repeat this calculation for the other possible choices of features to split on. Here in the tree in the middle, the variance of these numbers turns out to be 27.80, and the variance here is 1.37, with w left equal to seven-tenths and w right equal to three-tenths. And so with these values, you can compute the weighted variance as follows. Finally, for the last example, if you were to split on the whiskers feature, this is the variance on the left and right, these are w left and w right, and so the weighted variance is this. A good way to choose a split would be to just choose the one whose weighted variance is lowest. Similar to when we're computing information gain, I'm going to make just one more modification to this equation. Just as for the classification problem, we didn't just measure the average weighted entropy; we measured the reduction in entropy, and that was information gain. For a regression tree, we'll similarly measure the reduction in variance. It turns out, if you look at all of the examples in the training set, all 10 examples, and you compute the variance of all of them, the variance turns out to be 20.51.
And that's the same value at the root node in all three cases, of course, because it's the same 10 examples at the root node. So what we'll actually compute is the variance at the root node, which is 20.51, minus this expression down here, which turns out to be equal to 8.84. So at the root node, the variance was 20.51, and after splitting on ear shape, the average weighted variance at these two nodes is 8.84 lower. So the reduction in variance is 8.84. Similarly, if you compute the expression for the reduction in variance for this example in the middle, it's 20.51 minus the expression we had before, which turns out to be equal to 0.64, so this is a very small reduction in variance. And for the whiskers feature, you end up with this, which is 6.22. So among all three of these examples, 8.84 gives you the largest reduction in variance. Just as previously we would choose the feature that gives you the largest information gain, for a regression tree you would choose the feature that gives you the largest reduction in variance, which is why you choose ear shape as the feature to split on. Having chosen the ear shape feature to split on, you now have two subsets of five examples each in the left and right sub-branches, and you would then recursively take those five examples and build a new decision tree focusing on just those examples, again evaluating different options of features to split on and picking the one that gives you the biggest variance reduction. Similarly on the right, and you keep on splitting until you meet the criteria for not splitting any further. And so that's it: with this technique, you can get your decision tree to handle not just classification problems, but also regression problems. So far, we've talked about how to train a single decision tree. It turns out that if you train a lot of decision trees (we call this an ensemble of decision trees), you can get a much better result.
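The variance numbers in this example can be checked with a few lines of Python. One detail worth noting: the lecture's figures (1.47, 21.87, 20.51) match the sample variance (dividing by n minus 1), which is what `statistics.variance` computes. The ten weights are the values read off the lecture's tree:

```python
from statistics import variance  # sample variance, matching the lecture's figures

def variance_reduction(left, right):
    """Reduction in variance from splitting a node's targets into left/right subsets."""
    root = left + right
    n = len(root)
    weighted = len(left) / n * variance(left) + len(right) / n * variance(right)
    return variance(root) - weighted

# Target weights after splitting the root on ear shape.
left = [7.2, 9.2, 8.4, 7.6, 10.2]      # pointy ears
right = [8.8, 15.0, 11.0, 18.0, 20.0]  # floppy ears

print(f"{variance(left):.2f}")                   # 1.47
print(f"{variance(right):.2f}")                  # 21.87
print(f"{variance(left + right):.2f}")           # 20.51
print(f"{variance_reduction(left, right):.2f}")  # 8.84
```

Running the same function on the face shape and whiskers splits would reproduce the 0.64 and 6.22 reductions, confirming ear shape as the best root split.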
Let's take a look at why and how to do so in the next video.
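The split-quality arithmetic above can be sketched in a few lines of Python. The exact membership of each branch is inferred from the numbers quoted in the lecture, and `statistics.variance` is the sample variance (dividing by n minus 1), which is what reproduces the lecture's figures:

```python
from statistics import variance  # sample variance: divides by n - 1

def weighted_variance(left, right):
    """Average of the branch variances, weighted by the fraction of examples."""
    n = len(left) + len(right)
    return (len(left) / n) * variance(left) + (len(right) / n) * variance(right)

def variance_reduction(root, left, right):
    """Reduction in variance: the regression analogue of information gain."""
    return variance(root) - weighted_variance(left, right)

# Animal weights (in pounds) on each side of the ear-shape split
left = [7.2, 9.2, 8.4, 7.6, 10.2]      # pointy ears
right = [8.8, 15.0, 11.0, 18.0, 20.0]  # floppy ears
root = left + right                    # all 10 examples at the root

print(round(variance(left), 2))                         # 1.47
print(round(variance(right), 2))                        # 21.87
print(round(variance(root), 2))                         # 20.51
print(round(variance_reduction(root, left, right), 2))  # 8.84
```

Computing the same quantity for the face-shape and whiskers splits gives reductions of 0.64 and 6.22, so the ear-shape split, with a reduction of 8.84, is the one chosen.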
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=yoYgcddMXrg
7.9 Tree ensembles | Using multiple decision trees --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
One of the weaknesses of using a single decision tree is that it can be highly sensitive to small changes in the data. One solution to make the algorithm less sensitive, or more robust, is to build not one decision tree but a lot of decision trees; we call that a tree ensemble. Let's take a look. With the example that we've been using, the best feature to split on at the root node turned out to be the ear shape, resulting in these two subsets of the data, and then building further subtrees on these two subsets. But it turns out that if you were to take just one of the ten examples and change it to a different cat, so that instead of having pointy ears, a round face, and whiskers absent, this new cat has floppy ears, a round face, and whiskers present, then with just a single training example changed, the highest-information-gain feature to split on becomes the whiskers feature instead of the ear shape feature. As a result, the subsets of data you get in the left and right subtrees become totally different, and as you continue to run the decision tree learning algorithm recursively, you build out totally different subtrees on the left and right. The fact that changing just one training example causes the algorithm to come up with a different split at the root, and therefore a totally different tree, makes this algorithm just not that robust. That's why, when you're using decision trees, you often get a much better result, that is, more accurate predictions, if you train not just a single decision tree but a whole bunch of different decision trees. This is what we call a tree ensemble, which just means a collection of multiple trees. We'll see in the next few videos how to construct this ensemble of trees, but if you had this ensemble of three trees, each one of these is maybe a plausible way to classify cat versus not cat.
If you had a new test example that you wanted to classify, what you would do is run all three of these trees on your new example and get them to vote on the final prediction. This test example has pointy ears, a face shape that is not round, and whiskers present. So the first tree would carry out inference like this and predict that it is a cat. The second tree's inference would follow this path through the tree and therefore predict that it's not a cat. And the third tree would follow this path and therefore predict that it is a cat. These three trees have made different predictions, so what we'll do is get them to vote, and the majority vote of the predictions among these three trees is cat. So the final prediction of this ensemble of trees is that this is a cat, which happens to be the correct prediction. The reason we use an ensemble of trees is that by having lots of decision trees and having them vote, your overall algorithm becomes less sensitive to what any single tree may be doing, because each tree gets only one vote out of three, or one vote out of many, many different votes, and that makes your overall algorithm more robust. But how do you come up with all of these plausible, but maybe slightly different, decision trees in order to get them to vote? In the next video, we'll talk about a technique from statistics called sampling with replacement. This will turn out to be a key technique that we'll use in the video after that in order to build this ensemble of trees. So let's go on to the next video to talk about sampling with replacement.
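The majority vote at the end can be sketched with Python's `collections.Counter`; the three votes below mirror the lecture's example (cat, not cat, cat):

```python
from collections import Counter

# One prediction per tree in the ensemble, as in the example above
votes = ["cat", "not cat", "cat"]

# most_common(1) returns the single most frequent label with its count
majority = Counter(votes).most_common(1)[0][0]
print(majority)  # cat
```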
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=dELaA7ZykMs
7.10 Tree ensembles | Sampling with replacement --[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms.
In order to build a tree ensemble, we're going to need a technique called sampling with replacement. Let's take a look at what that means. To illustrate how sampling with replacement works, I'm going to show you a demonstration using four tokens that are colored red, yellow, green, and blue. So I actually have here with me four tokens of colors red, yellow, green, and blue, and I'm going to demonstrate what sampling with replacement with them looks like. Here is an empty black velvet bag. I'm going to take these four tokens and drop them in, and I'm going to sample four times with replacement out of this bag. What that means is I'm going to shake it up, and, without being able to see what I'm picking, pick out one token; it turns out to be green. The term "with replacement" means that before I take out the next token, I put this one back in and shake the bag up again. Then I take out another one, yellow, and replace it (that's the "with replacement" part), then go again, blue, replace it again, and then pick one more, which is blue again. So the sequence of tokens I got was green, yellow, blue, blue. Notice that I got blue twice and didn't get red even a single time. If you were to repeat this sampling-with-replacement procedure multiple times, you might get red, yellow, red, green, or green, green, blue, red, or you might also get red, blue, yellow, green. Notice that the "with replacement" part is critical, because if I were not replacing a token every time I sample, then pulling out four tokens from my bag of four would always just give me the same four tokens. That's why replacing each token after I pull it out is important, to make sure I don't just get the same four tokens every single time. The way that sampling with replacement applies to building an ensemble of trees is as follows.
We are going to construct multiple random training sets that are all slightly different from our original training set. In particular, we're going to take our 10 examples of cats and dogs and put the 10 training examples in a theoretical bag. Please don't actually put a real cat or dog in a bag; that sounds inhumane. But you can take a training example and put it in a theoretical bag if you want. Using this theoretical bag, we're going to create a new random training set of 10 examples, the exact same size as the original data set. The way we'll do so is we'll reach in and pick out one random training example; let's say we get this one. Then we put it back into the bag, again randomly pick out one training example, and say you get that one, and you pick again, and again, and again. Notice that this fifth training example is identical to the second one that we had up there, but that's fine. You keep going and keep going, and we get another repeated example, and so on and so forth, until eventually you end up with 10 training examples, some of which are repeats. You'll notice also that this training set does not contain all 10 of the original training examples, but that's okay; that is part of the sampling with replacement procedure. The process of sampling with replacement lets you construct a new training set that's a little bit similar to, but also pretty different from, your original training set. It turns out that this will be the key building block for building an ensemble of trees. Let's take a look in the next video at how you can do that.
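In Python, the bag demonstration above corresponds to `random.choices`, which samples with replacement; the particular sequence drawn depends on the seed, so the one below is illustrative, not the lecture's green/yellow/blue/blue draw:

```python
import random

tokens = ["red", "yellow", "green", "blue"]
rng = random.Random(42)  # fixed seed so the demo is repeatable

draw = rng.choices(tokens, k=4)  # 4 draws *with* replacement: repeats possible
print(draw)

# Without replacement, drawing 4 from a bag of 4 always yields all 4 tokens:
print(sorted(rng.sample(tokens, k=4)) == sorted(tokens))  # True
```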
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=VfKwWWrSnq4
7.11 Tree ensembles | Random forest algorithm--[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
Now that we have a way to use sampling with replacement to create new training sets that are a bit similar to but also quite different from the original training set, we're ready to build our first tree ensemble algorithm. In particular, in this video, we'll talk about the random forest algorithm, which is one powerful tree ensemble algorithm that works much better than using a single decision tree. Here's how we can generate an ensemble of trees. If you are given a training set of size m, then for b equals 1 to capital B, so we do this capital B times, you can use sampling with replacement to create a new training set of size m. So if you had 10 training examples, you would put the 10 training examples in that virtual bag and sample with replacement 10 times to generate a new training set with also 10 examples. And then you would train a decision tree on this data set. So here's the data set I've generated using sampling with replacement. If you look carefully, you may notice that some of the training examples are repeated, and that's okay. And if you train the decision tree algorithm on this data set, you end up with this decision tree. And having done this once, we would then go and repeat this a second time: use sampling with replacement to generate another training set of m, or 10, training examples. This again looks a bit like the original training set, but it's also a little bit different. You then train the decision tree on this new data set and you end up with a somewhat different decision tree, and so on. And you may do this a total of capital B times. A typical choice of capital B, the number of such trees you build, might be around 100; people recommend any value from, say, 64 to 128. And having built an ensemble of, say, 100 different trees, when you are trying to make a prediction, you would then get these trees to all vote on the correct final prediction. It turns out that setting capital B to be larger never hurts performance. 
But beyond a certain point, you end up with diminishing returns, and it doesn't actually get that much better when B is much larger than, say, 100 or so. And that's why I never use, say, a thousand trees; that just slows down the computation significantly without meaningfully increasing the performance of the overall algorithm. Just to give this particular algorithm a name, this specific instantiation of a tree ensemble is sometimes also called a bagged decision tree. And that refers to putting your training examples in that virtual bag, and that's also why we use the letters lowercase b and uppercase B here, because that stands for bag. There's one modification to this algorithm that will actually make it work even better, and that changes this algorithm, the bagged decision tree, into the random forest algorithm. The key idea is that even with this sampling with replacement procedure, sometimes you end up always using the same split at the root node and very similar splits near the root node. That didn't happen in this particular example, where a small change to the training set resulted in a different split at the root node. But for other training sets, it's not uncommon that for many or even all capital B training sets, you end up with the same choice of feature at the root node and at a few of the nodes near the root node. So there's one modification to the algorithm to further try to randomize the feature choice at each node, which can cause the set of trees you learn to become more different from each other, so that when you have them vote, you end up with an even more accurate prediction. The way this is typically done is that at every node, when choosing a feature to use to split on, if n features are available (in our example, we had three features available), rather than picking from all n features, we will instead pick a random subset of k less than n features and allow the algorithm to choose only from that subset of k features. 
So in other words, you would pick k features as the allowed features, and then out of those k features, choose the one with the highest information gain as the feature to use to split on. When n is large, say n is in the tens or even hundreds, a typical choice for the value of k would be the square root of n. In our example, we had only three features, and this technique tends to be used more for larger problems with a larger number of features. With this further change to the algorithm, you end up with the random forest algorithm, which will typically work much better and is much more robust than just a single decision tree. One way to think about why this is more robust than a single decision tree is that the sampling with replacement procedure already causes the algorithm to explore a lot of small changes to the data, training different decision trees and averaging over all of those changes to the data that the sampling with replacement procedure causes. And so this means that any further little change to the training set is less likely to have a huge impact on the output of the overall random forest algorithm, because it has already explored, and is averaging over, a lot of small changes to the training set. Before wrapping up this video, there's just one more thought I want to share with you, which is: where does a machine learning engineer go camping? In a random forest. All right, go and tell that joke to your friends. I hope you enjoy it. The random forest is an effective algorithm and I hope you will use it in your work. Beyond the random forest, it turns out there's one other algorithm that works even better, which is boosted decision trees. In the next video, let's talk about a boosted decision tree algorithm called XGBoost.
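As a rough illustration of the two sources of randomness described in this lecture (bootstrap sampling with replacement, plus a random subset of roughly square-root-of-n features), here is a toy ensemble that uses one-level "stump" learners in place of full decision trees. All names and the stump learner are illustrative simplifications, not the course's implementation; since a stump has a single node, choosing features per tree here coincides with the per-node choice the lecture describes:

```python
import math
import random
from collections import Counter

def train_stump(data, feature_subset):
    """Tiny stand-in for a decision tree: pick, from the allowed random
    subset of features, the binary feature that best matches the label."""
    def agreement(f):
        return sum(1 for x, y in data if x[f] == y)
    best = max(feature_subset, key=agreement)
    flipped = agreement(best) < len(data) / 2  # feature may be anti-correlated
    return best, flipped

def predict_stump(stump, x):
    f, flipped = stump
    return 1 - x[f] if flipped else x[f]

def random_forest(data, num_trees=25, seed=0):
    rng = random.Random(seed)
    n = len(data[0][0])                 # number of features
    k = max(1, round(math.sqrt(n)))     # k is about sqrt(n)
    forest = []
    for _ in range(num_trees):          # for b = 1 .. B
        # sampling with replacement from the training set:
        boot = [rng.choice(data) for _ in range(len(data))]
        allowed = rng.sample(range(n), k)  # random subset of k features
        forest.append(train_stump(boot, allowed))
    return forest

def forest_predict(forest, x):
    votes = Counter(predict_stump(s, x) for s in forest)
    return votes.most_common(1)[0][0]   # majority vote over the ensemble

# Toy data: binary label; features 0 and 1 copy the label, feature 2 inverts it.
data = [((y, y, 1 - y), y) for y in [0, 1, 1, 0, 1, 0, 1, 1, 0, 0]]
forest = random_forest(data)
```

In practice you would of course use a library implementation (for example, scikit-learn's `RandomForestClassifier`, whose `max_features="sqrt"` option corresponds to the k-equals-square-root-of-n rule above) rather than hand-rolling the ensemble.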
[{"start": 0.0, "end": 7.4, "text": " Now that we have a way to use Soundplay with Replacement to create new training sets that"}, {"start": 7.4, "end": 12.14, "text": " are a bit similar to but also quite different from the original training set, we're ready"}, {"start": 12.14, "end": 16.1, "text": " to build our first tree ensemble algorithm."}, {"start": 16.1, "end": 21.42, "text": " In particular, in this video, we'll talk about the random forest algorithm, which is one"}, {"start": 21.42, "end": 26.580000000000002, "text": " powerful tree ensemble algorithm that works much better than using a single decision tree."}, {"start": 26.580000000000002, "end": 29.96, "text": " Here's how we can generate an ensemble of trees."}, {"start": 29.96, "end": 37.84, "text": " If you are given a training set of size m, then for b equals 1 to capital B, so we do"}, {"start": 37.84, "end": 44.06, "text": " this capital B times, you can use Soundplay with Replacement to create a new training"}, {"start": 44.06, "end": 45.480000000000004, "text": " set of size m."}, {"start": 45.480000000000004, "end": 50.68, "text": " So if you had 10 training examples, you will put the 10 training examples in that virtual"}, {"start": 50.68, "end": 57.16, "text": " bag and Soundplay with Replacement 10 times to generate a new training set with also 10"}, {"start": 57.16, "end": 58.46, "text": " examples."}, {"start": 58.46, "end": 62.24, "text": " And then you would train a decision tree on this data set."}, {"start": 62.24, "end": 66.18, "text": " So here's the data set I've generated using Soundplay with Replacement."}, {"start": 66.18, "end": 70.3, "text": " If you look carefully, you may notice that some of the training examples are repeated"}, {"start": 70.3, "end": 71.84, "text": " and that's okay."}, {"start": 71.84, "end": 76.82, "text": " And if you train the decision tree algorithm on this data set, you end up with this decision"}, {"start": 76.82, "end": 78.18, "text": " tree."}, 
{"start": 78.18, "end": 82.98, "text": " And having done this once, we would then go and repeat this a second time."}, {"start": 82.98, "end": 89.56, "text": " Use Soundplay with Replacement to generate another training set of m or 10 training examples."}, {"start": 89.56, "end": 94.62, "text": " This again looks a bit like the original training set, but it's also a little bit different."}, {"start": 94.62, "end": 99.94, "text": " You then train the decision tree on this new data set and you end up with a somewhat different"}, {"start": 99.94, "end": 102.22, "text": " decision tree and so on."}, {"start": 102.22, "end": 106.96000000000001, "text": " And you may do this a total of capital B times."}, {"start": 106.96000000000001, "end": 112.2, "text": " Typical choice of capital B, the number of such trees you build might be around 100."}, {"start": 112.2, "end": 117.10000000000001, "text": " People recommend any value from say 64 to 128."}, {"start": 117.10000000000001, "end": 123.68, "text": " And having built an ensemble of say 100 different trees, you would then when you are trying"}, {"start": 123.68, "end": 128.66, "text": " to make a prediction, get these trees to all vote on the correct final prediction."}, {"start": 128.66, "end": 134.94, "text": " It turns out that setting capital B to be larger never hurts performance."}, {"start": 134.94, "end": 139.26, "text": " But beyond a certain point, you end up with diminishing returns and it doesn't actually"}, {"start": 139.26, "end": 144.26, "text": " get that much better when B is much larger than say 100 or so."}, {"start": 144.26, "end": 149.82, "text": " And that's why I never use say a thousand trees that just slows down the computation"}, {"start": 149.82, "end": 155.98, "text": " significantly without meaningfully increasing the performance of the overall algorithm."}, {"start": 155.98, "end": 162.1, "text": " Just to give this particular algorithm a name, this specific instantiation of tree ensemble"}, 
{"start": 162.1, "end": 165.38, "text": " is sometimes also called a bagged decision tree."}, {"start": 165.38, "end": 170.29999999999998, "text": " And that refers to putting your training examples in that virtual bag."}, {"start": 170.29999999999998, "end": 176.14, "text": " And that's why also we use the letters lowercase b and uppercase b here because that stands"}, {"start": 176.14, "end": 177.14, "text": " for bag."}, {"start": 177.14, "end": 181.06, "text": " There's one modification to this algorithm that will actually make it work even much"}, {"start": 181.06, "end": 182.06, "text": " better."}, {"start": 182.06, "end": 187.42, "text": " And that changes this algorithm, the bagged decision tree into the random forest algorithm."}, {"start": 187.42, "end": 193.26, "text": " The key idea is that even with this sampling with replacement procedure, sometimes you"}, {"start": 193.26, "end": 199.26, "text": " end up with always using the same split at the root node and very similar splits near"}, {"start": 199.26, "end": 200.48, "text": " the root node."}, {"start": 200.48, "end": 205.1, "text": " That didn't happen in this particular example where a small change to the training set resulted"}, {"start": 205.1, "end": 208.39999999999998, "text": " in a different split at the root node."}, {"start": 208.39999999999998, "end": 216.18, "text": " But for other training sets, it's not uncommon that for many or even all capital B training"}, {"start": 216.18, "end": 220.7, "text": " sets, you end up with the same choice of feature at the root node and at a few of the nodes"}, {"start": 220.7, "end": 222.12, "text": " near the root node."}, {"start": 222.12, "end": 228.02, "text": " So there's one modification to the algorithm to further try to randomize the feature choice"}, {"start": 228.02, "end": 234.18, "text": " at each node that can cause the set of trees you learn to become more different from each"}, {"start": 234.18, "end": 239.38, "text": " other so that 
when you vote them, you end up with an even more accurate prediction."}, {"start": 239.38, "end": 247.0, "text": " The way this is typically done is at every node when choosing a feature to use the slit,"}, {"start": 247.0, "end": 249.22, "text": " if n features are available."}, {"start": 249.22, "end": 253.66, "text": " So in our example, we had three features available."}, {"start": 253.66, "end": 259.4, "text": " Rather than picking from all n features, we will instead pick a random subset of k less"}, {"start": 259.4, "end": 267.34, "text": " than n features and allow the algorithm to choose only from that subset of k features."}, {"start": 267.34, "end": 272.9, "text": " So in other words, you would pick k features as the allowed features and then out of those"}, {"start": 272.9, "end": 278.02, "text": " k features, choose the one with the highest information gain as the choice of feature"}, {"start": 278.02, "end": 279.82, "text": " to use the split."}, {"start": 279.82, "end": 287.26, "text": " When n is large, say n is dozens or tens or even hundreds, a typical choice for the value"}, {"start": 287.26, "end": 291.29999999999995, "text": " of k would be to choose it to be square root of n."}, {"start": 291.29999999999995, "end": 296.7, "text": " In our example, we had only three features and this technique tends to be used more for"}, {"start": 296.7, "end": 300.03999999999996, "text": " larger problems with a larger number of features."}, {"start": 300.03999999999996, "end": 305.09999999999997, "text": " And we'll just further change the algorithm, you end up with the random forest algorithm,"}, {"start": 305.1, "end": 310.58000000000004, "text": " which will work typically much better and becomes much more robust than just a single"}, {"start": 310.58000000000004, "end": 312.02000000000004, "text": " decision tree."}, {"start": 312.02000000000004, "end": 317.78000000000003, "text": " One way to think about why this is more robust than a single decision 
tree is the sampling"}, {"start": 317.78000000000003, "end": 323.54, "text": " with replacement procedure causes the algorithm to explore a lot of small changes in the data"}, {"start": 323.54, "end": 329.22, "text": " already and is training different decision trees and is averaging over all of those changes"}, {"start": 329.22, "end": 333.26000000000005, "text": " to the data that the sampling with replacement procedure causes."}, {"start": 333.26, "end": 338.34, "text": " And so this means that any little change further to the training set makes it less likely to"}, {"start": 338.34, "end": 344.3, "text": " have a huge impact on the overall output of the overall random forest algorithm because"}, {"start": 344.3, "end": 350.21999999999997, "text": " it's already explored and is averaging over a lot of small changes to the training set."}, {"start": 350.21999999999997, "end": 353.78, "text": " Before wrapping up this video, there's just one more thought I want to share with you,"}, {"start": 353.78, "end": 358.78, "text": " which is where does a machine learning engineer go camping?"}, {"start": 358.78, "end": 359.78, "text": " In a random forest."}, {"start": 359.78, "end": 362.86, "text": " All right, go and tell that joke to your friends."}, {"start": 362.86, "end": 364.34000000000003, "text": " I hope you enjoy it."}, {"start": 364.34000000000003, "end": 368.66, "text": " The random forest is an effective algorithm and I hope you will use it in your work."}, {"start": 368.66, "end": 373.86, "text": " Beyond the random forest, it turns out there's one other algorithm that works even better,"}, {"start": 373.86, "end": 376.62, "text": " which is a boosted decision tree."}, {"start": 376.62, "end": 393.42, "text": " In the next video, let's talk about a boosted decision tree algorithm called XGBoost."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=4e4BTYrA0IE
7.12 Tree ensembles | XGBoost--[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
Over the years, machine learning researchers have come up with a lot of different ways to build decision trees and decision tree ensembles. Today, by far the most commonly used implementation of decision tree ensembles is an algorithm called XGBoost. It runs quickly, the open-source implementations are easy to use, and it has also been used very successfully to win many machine learning competitions as well as in many commercial applications. Let's take a look at how XGBoost works. There's a modification to the bagged decision tree algorithm that we saw in the last video that can make it work much better. Here again is the algorithm that we had written down previously. Given a training set of size m, you repeat capital B times: use sampling with replacement to create a new training set of size m, and then train a decision tree on the new data set. So the first time through this loop, we may create a training set like that and train a decision tree like that. But here's where we're going to change the algorithm: every time through this loop other than the first time, that is, the second time, third time, and so on, when sampling, rather than choosing each of the training examples with equal probability, we're going to choose with higher probability examples that the ensemble of trees we've built so far still gets wrong. So the second time through this loop, instead of picking from all m examples with equal probability, with one over m probability each, let's make it more likely that we'll pick examples that the previously trained trees misclassified, or that the previously trained trees do poorly on. In training and education, there's an idea called deliberate practice. 
For example, if you're learning to play the piano and you're trying to master a piece, rather than practicing the entire, say, five-minute piece over and over, which is quite time consuming, if you instead play the piece and then focus your attention on just the parts of the piece that you aren't yet playing well, and practice those smaller parts over and over, that turns out to be a more efficient way for you to learn to play the piano well. And the idea of boosting is similar. We're going to look at the decision trees we've trained so far and look at what we're still not yet doing well on. Then, when building the next decision tree, we're going to focus more attention on the examples that we're not yet doing well on. So rather than looking at all the training examples equally, we focus more attention on the subset of examples that we're not yet doing well on, and get the next decision tree we add to the ensemble to try to do well on them. This is the idea behind boosting, and it turns out to help the learning algorithm learn to do better more quickly. So in detail, we would look at the tree that we have just built and go back to the original training set. Notice that this is the original training set, not one generated through sampling with replacement. We'll go through all 10 examples and look at what this learned decision tree predicts on all 10 examples. So this fourth column shows their predictions, and we put a checkmark or cross next to each example depending on whether the tree's classification was correct or incorrect. What we'll do the second time through this loop is use sampling with replacement to generate another training set of 10 examples, but every time we pick an example from these 10, we'll give a higher chance of picking one of the three examples that we're still misclassifying. 
And so this focuses the second decision tree's attention, via a process like deliberate practice, on the examples that the algorithm is still not yet doing well on. The boosting procedure will do this for a total of capital B times, where on each iteration, you look at what the ensemble of trees one, two, up through b minus one is not yet doing well on. And when you're building tree number b, you will then have a higher probability of picking examples that the ensemble of the previously built trees is still not yet doing well on. The mathematical details of exactly how much to increase the probability of picking this versus that example are quite complex, but you don't have to worry about them in order to use boosted tree implementations. Of the different ways of implementing boosting, the most widely used one today is XGBoost, which stands for extreme gradient boosting, an open-source implementation of boosted trees that is very fast and efficient. XGBoost also has good default choices of splitting criteria and of criteria for when to stop splitting. And one of the innovations in XGBoost is that it also has built-in regularization to prevent overfitting. In machine learning competitions, such as those on the widely used competition site Kaggle, XGBoost is often a highly competitive algorithm. In fact, XGBoost and deep learning algorithms seem to be the two types of algorithms that win a lot of these competitions. Oh, and one technical note: rather than doing sampling with replacement, XGBoost actually assigns different weights to different training examples, so it doesn't actually need to generate a lot of randomly chosen training sets. This makes it even a little bit more efficient than using a sampling with replacement procedure. But the intuition that you saw in the previous slide is still correct in terms of how XGBoost chooses examples to focus on. 
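The sampling intuition described here, picking the still-misclassified examples with higher probability on later rounds, can be sketched as follows. The `boost` factor and all names are illustrative; as noted in the lecture, the actual weighting math in XGBoost is more involved than this:

```python
import random

def reweighted_bootstrap(data, ensemble_predictions, boost=5.0, seed=0):
    """Sample len(data) examples with replacement, but give examples the
    current ensemble still gets wrong `boost` times the pick probability.

    `data` is a list of (features, label) pairs; `ensemble_predictions`
    holds the current ensemble's prediction for each example.
    """
    rng = random.Random(seed)
    weights = [boost if pred != y else 1.0
               for (x, y), pred in zip(data, ensemble_predictions)]
    return rng.choices(data, weights=weights, k=len(data))

# 10 examples; suppose the ensemble so far misclassifies examples 2, 5, and 7.
data = [((i,), i % 2) for i in range(10)]
preds = [y if i not in (2, 5, 7) else 1 - y
         for i, (x, y) in enumerate(data)]
new_training_set = reweighted_bootstrap(data, preds)
```

With a moderate `boost`, the next training set still mixes in correctly classified examples; as `boost` grows, the draw concentrates almost entirely on the examples the ensemble gets wrong, which is the "deliberate practice" effect described above.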
The details of XGBoost are quite complex to implement, which is why many practitioners will use open-source libraries that implement XGBoost. This is all you need to do in order to use XGBoost: you would import the XGBoost library as follows, initialize a model as an XGBoost classifier, fit the model, and then make predictions using this boosted decision trees algorithm. I hope that you find this algorithm useful for many applications that you may build in the future. Alternatively, if you want to use XGBoost for regression rather than for classification, then this line here just becomes XGBRegressor, and the rest of the code works similarly. So that's it for the XGBoost algorithm. We have just one last video for this week and for this course, where we'll wrap up and also talk about when you should use a decision tree versus a neural network. Let's go on to the last and final video of this week.
[{"start": 0.0, "end": 5.86, "text": " Over the years, machine learning researchers have come up with a lot of different ways"}, {"start": 5.86, "end": 9.3, "text": " to build decision trees and decision tree ensembles."}, {"start": 9.3, "end": 15.540000000000001, "text": " Today, by far the most commonly used way or implementation of decision tree ensembles"}, {"start": 15.540000000000001, "end": 18.88, "text": " or decision trees is an algorithm called XGBoost."}, {"start": 18.88, "end": 24.240000000000002, "text": " It runs quickly, the open source implementations are easy to use, has also been used very successfully"}, {"start": 24.240000000000002, "end": 29.2, "text": " to win many machine learning competitions as well as in many commercial applications."}, {"start": 29.2, "end": 32.08, "text": " Let's take a look at how XGBoost works."}, {"start": 32.08, "end": 37.12, "text": " There's a modification to the bagged decision tree algorithm that we saw in the last video"}, {"start": 37.12, "end": 39.8, "text": " that can make it work much better."}, {"start": 39.8, "end": 43.96, "text": " Here again is the algorithm that we had written down previously."}, {"start": 43.96, "end": 49.120000000000005, "text": " Given a training set of size M, you repeat capital B times U sampling replacement to"}, {"start": 49.120000000000005, "end": 54.32, "text": " create a new training set of size M and then train the decision tree on the new data set."}, {"start": 54.32, "end": 60.52, "text": " And so the first time through this loop, we may create a training set like that and train"}, {"start": 60.52, "end": 62.44, "text": " the decision tree like that."}, {"start": 62.44, "end": 66.76, "text": " But here's where we're going to change the algorithm, which is every time through this"}, {"start": 66.76, "end": 73.28, "text": " loop other than the first time, that is the second time, third time and so on, when sampling,"}, {"start": 73.28, "end": 78.76, "text": " rather than 
choosing each of the training examples with equal probability, we're going"}, {"start": 78.76, "end": 85.96000000000001, "text": " to choose examples with higher probability that the ensemble of trees from the trees"}, {"start": 85.96000000000001, "end": 89.64, "text": " we've built so far still gets wrong."}, {"start": 89.64, "end": 96.08000000000001, "text": " But the second time through this loop, instead of picking from all M examples with equal"}, {"start": 96.08000000000001, "end": 101.56, "text": " probability with one over M probability, let's make it more likely that we'll pick examples"}, {"start": 101.56, "end": 107.0, "text": " that the previously trained trees misclassified or that the previously trained trees do poorly"}, {"start": 107.0, "end": 108.96, "text": " on."}, {"start": 108.96, "end": 113.28, "text": " In training and education, there's an idea called deliberate practice."}, {"start": 113.28, "end": 118.16, "text": " For example, if you're learning to play the piano and you're trying to master a piece"}, {"start": 118.16, "end": 124.16, "text": " on the piano, rather than practicing the entire, say, five minute piece over and over, which"}, {"start": 124.16, "end": 129.44, "text": " is quite time consuming, if you instead play the piece and then focus your attention on"}, {"start": 129.44, "end": 133.32, "text": " just the parts of the piece that you aren't yet playing that well and practice those smaller"}, {"start": 133.32, "end": 137.38, "text": " parts over and over, then that turns out to be a more efficient way for you to learn to"}, {"start": 137.38, "end": 139.4, "text": " play the piano well."}, {"start": 139.4, "end": 142.51999999999998, "text": " And so this idea of boosting is similar."}, {"start": 142.51999999999998, "end": 146.23999999999998, "text": " We're going to look at the decision trees we've trained so far and look at what we're"}, {"start": 146.23999999999998, "end": 148.42, "text": " still not yet doing well on."}, 
{"start": 148.42, "end": 152.16, "text": " And then when building the next decision tree, we're going to focus more attention on the"}, {"start": 152.16, "end": 154.35999999999999, "text": " examples that were not yet doing well."}, {"start": 154.35999999999999, "end": 159.88, "text": " So rather than looking at all the training examples, we focus more attention on the subset"}, {"start": 159.88, "end": 164.51999999999998, "text": " of examples that are not yet doing well on and get the new decision tree, the next decision"}, {"start": 164.51999999999998, "end": 168.0, "text": " tree we build the ensemble to try to do well on them."}, {"start": 168.0, "end": 170.24, "text": " And this is the idea behind boosting."}, {"start": 170.24, "end": 175.06, "text": " And it turns out to help the learning algorithm learn to do better more quickly."}, {"start": 175.06, "end": 182.72, "text": " So in detail, we would look at this tree that we have just built and go back to the original"}, {"start": 182.72, "end": 183.72, "text": " training set."}, {"start": 183.72, "end": 188.28, "text": " Notice that this is the original training set, not one generated through sampling or"}, {"start": 188.28, "end": 194.52, "text": " replacement, and we'll go through all 10 examples and look at what this learned decision tree"}, {"start": 194.52, "end": 197.08, "text": " predicts on all 10 examples."}, {"start": 197.08, "end": 205.08, "text": " So this fourth most column are their predictions and put a checkmark or cross next to each"}, {"start": 205.08, "end": 211.8, "text": " example depending on whether the tree's classification was correct or incorrect."}, {"start": 211.8, "end": 218.24, "text": " So what we'll do in the second time through this loop is we will sort of use sampling"}, {"start": 218.24, "end": 222.76000000000002, "text": " with replacement to generate another training set of 10 examples."}, {"start": 222.76000000000002, "end": 228.4, "text": " But every time we pick an 
example from these 10, we'll give a higher chance of picking"}, {"start": 228.4, "end": 233.0, "text": " from one of these three examples that we're still misclassifying."}, {"start": 233.0, "end": 239.38000000000002, "text": " And so this focuses the second decision tree's attention, via a process like deliberate practice,"}, {"start": 239.38, "end": 244.84, "text": " on the examples that the algorithm is still not yet doing that well on."}, {"start": 244.84, "end": 252.51999999999998, "text": " And the boosting procedure will do this for a total of capital B times, where on each"}, {"start": 252.51999999999998, "end": 260.71999999999997, "text": " iteration, you'll look at what the ensemble of trees, for trees one, two, up through B"}, {"start": 260.71999999999997, "end": 264.94, "text": " minus one, are not yet doing that well on."}, {"start": 264.94, "end": 270.48, "text": " And when you're building tree number B, you will then have a higher probability of picking"}, {"start": 270.48, "end": 277.4, "text": " examples that the ensemble of the previously built trees is still not yet doing well on."}, {"start": 277.4, "end": 283.02, "text": " The mathematical details of exactly how much to increase the probability of picking this"}, {"start": 283.02, "end": 285.84, "text": " versus that example are quite complex."}, {"start": 285.84, "end": 291.24, "text": " But you don't have to worry about them in order to use boosted tree implementations."}, {"start": 291.24, "end": 297.96000000000004, "text": " And of the different ways of implementing boosting, the most widely used one today is XGBoost,"}, {"start": 297.96000000000004, "end": 303.64, "text": " which stands for extreme gradient boosting, which is an open source implementation of"}, {"start": 303.64, "end": 307.24, "text": " boosted trees that is very fast and efficient."}, {"start": 307.24, "end": 311.92, "text": " XGBoost also has a good choice of default splitting criteria and criteria for when to"}, {"start": 311.92, "end": 313.48, "text": " stop splitting."}, {"start": 313.48, "end": 319.32, "text": " And one of the innovations in XGBoost is that it also has built-in regularization to prevent"}, {"start": 319.32, "end": 325.24, "text": " overfitting. And in machine learning competitions, such as on the widely used competition"}, {"start": 325.24, "end": 330.44, "text": " site called Kaggle, XGBoost is often a highly competitive algorithm."}, {"start": 330.44, "end": 336.03999999999996, "text": " In fact, XGBoost and deep learning algorithms seem to be the two types of algorithms that"}, {"start": 336.03999999999996, "end": 337.71999999999997, "text": " win a lot of these competitions."}, {"start": 337.71999999999997, "end": 343.96, "text": " Oh, and one technical note: rather than doing sampling with replacement, XGBoost actually"}, {"start": 343.96, "end": 346.88, "text": " assigns different weights to different training samples."}, {"start": 346.88, "end": 351.92, "text": " So it doesn't actually need to generate a lot of randomly chosen training sets."}, {"start": 351.92, "end": 357.04, "text": " And this makes it even a little bit more efficient than using a sampling-with-replacement procedure."}, {"start": 357.04, "end": 362.2, "text": " But the intuition that you saw in the previous slide is still correct in terms of how XGBoost"}, {"start": 362.2, "end": 365.2, "text": " is choosing examples to focus on."}, {"start": 365.2, "end": 371.68, "text": " The details of XGBoost are quite complex to implement, which is why many practitioners"}, {"start": 371.68, "end": 377.56, "text": " will use the open source libraries that implement XGBoost."}, {"start": 377.56, "end": 381.8, "text": " This is all you need to do in order to use XGBoost."}, {"start": 381.8, "end": 390.92, "text": " You would import the XGBoost library as follows and initialize a model as an XGBoost classifier"}, {"start": 390.92, "end": 392.2, "text": " for the model."}, {"start": 392.2, "end": 397.4, "text": " And then finally, this allows you to make predictions using this boosted decision trees"}, {"start": 397.4, "end": 398.84000000000003, "text": " algorithm."}, {"start": 398.84, "end": 402.79999999999995, "text": " I hope that you find this algorithm useful for many applications that you may build in"}, {"start": 402.79999999999995, "end": 403.79999999999995, "text": " the future."}, {"start": 403.79999999999995, "end": 410.71999999999997, "text": " Or alternatively, if you want to use XGBoost for regression rather than for classification,"}, {"start": 410.71999999999997, "end": 417.67999999999995, "text": " then this line here just becomes XGBRegressor, and the rest of the code works similarly."}, {"start": 417.67999999999995, "end": 420.71999999999997, "text": " So that's it for the XGBoost algorithm."}, {"start": 420.71999999999997, "end": 425.58, "text": " We have just one last video for this week and for this course, where we'll wrap up and"}, {"start": 425.58, "end": 431.15999999999997, "text": " also talk about when you should use a decision tree versus a neural network."}, {"start": 431.16, "end": 458.16, "text": " Let's go on to the last and final video of this week."}]
Machine Learning Specialization 2022 -- Andrew Ng, Stanford University.
https://www.youtube.com/watch?v=ckbQPwQ6y98
7.13 Conclusion | When to use decision trees--[Machine Learning | Andrew Ng]
Second Course: Advanced Learning Algorithms. If you liked the content please subscribe and put a little blue thumb. Take heart!
Both decision trees, including tree ensembles, as well as neural networks, are very powerful, very effective learning algorithms. When should you pick one or the other? Let's look at some of the pros and cons of each. Decision trees and tree ensembles will often work well on tabular data, also called structured data. And what that means is, if your data set looks like a giant spreadsheet, then decision trees would be worth considering. So for example, in the housing price prediction application, we had a data set with features corresponding to the size of the house, the number of bedrooms, the number of floors, and the age of the home. And that type of data, stored in a spreadsheet with either categorical or continuous-valued features, works for both classification and regression tasks, where you're trying to predict a discrete category or predict a number. All of these problems are ones that decision trees can do well on. In contrast, I would not recommend using decision trees and tree ensembles on unstructured data. And that's data such as images, video, audio, and text that you're less likely to store in a spreadsheet format. Neural networks, as we'll see in a second, will tend to work better for unstructured data tasks. One huge advantage of decision trees and tree ensembles is that they can be very fast to train. You might remember this diagram from the previous week, in which we talked about the iterative loop of machine learning development. If your model takes many hours to train, then that limits how quickly you can go through this loop and improve the performance of your algorithm. But because decision trees, including tree ensembles, tend to be pretty fast to train, that allows you to go through this loop more quickly and maybe more efficiently improve the performance of your learning algorithm. Finally, small decision trees may be human interpretable. 
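As a toy illustration of that last point, here is what a very small decision tree amounts to once printed out: the entire decision process reads as a few nested if-statements. The features, thresholds, and the cat-classification task below are invented for this sketch, not taken from any trained model.

```python
# A hand-written stand-in for a small, human-interpretable decision tree.
# The features and thresholds are invented for illustration; they are not
# the output of any trained model.
def is_cat(ear_shape: str, whiskers_present: bool, face_roundness: float) -> bool:
    if ear_shape == "pointy":          # root split on a categorical feature
        return whiskers_present        # left subtree: a single leaf test
    return face_roundness > 0.8        # right subtree: a threshold split

# Reading the code above IS reading the tree: every prediction can be
# traced by hand through at most two decisions.
print(is_cat("pointy", True, 0.5))     # follows the left branch
print(is_cat("floppy", True, 0.4))     # follows the right branch
```

An ensemble of a hundred such trees, each with hundreds of nodes, offers no comparably short reading, which is the sense in which interpretability is sometimes overstated.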
If you are training just a single decision tree, and if that decision tree has only, say, a few dozen nodes, you may be able to print out the decision tree to understand exactly how it's making decisions. I think that the interpretability of decision trees is sometimes a bit overstated, because when you build an ensemble of a hundred trees, and if each of those trees has hundreds of nodes, then looking at that ensemble to figure out what it's doing does become difficult and may need some separate visualization techniques. But if you have a small decision tree, you can actually look at it and see, oh, it's classifying whether something is a cat by looking at certain features in certain ways. If you've decided to use a decision tree or tree ensemble, I would probably use XGBoost for most of the applications I would work on. One slight downside of a tree ensemble is that it is a bit more expensive than a single decision tree. And so if you had a very, very constrained computational budget, you might use a single decision tree, but other than that setting, I would almost always use a tree ensemble, and use XGBoost in particular. How about neural networks? In contrast to decision trees and tree ensembles, neural networks work well on all types of data, including tabular or structured data, as well as unstructured data, as well as mixed data that includes both structured and unstructured components. Whereas on tabular, structured data, neural networks and decision trees are often both competitive, on unstructured data such as images, video, audio, and text, a neural network will really be the preferred algorithm, and not a decision tree or tree ensemble. On the downside, though, neural networks may be slower than a decision tree. A large neural network can just take a long time to train. Other benefits of neural networks include that they work with transfer learning. 
And this is really important, because for many applications where you have only a small data set, being able to use transfer learning and carry out pre-training on a much larger data set is critical to getting competitive performance. Finally, there are also some technical reasons why it might be easier to string together multiple neural networks to build a larger machine learning system. And the basic reason is that neural networks compute the output y as a smooth or continuous function of the input x. And so even if you string together a lot of different models, the output of all of these different models is itself differentiable, and so you can train them all at the same time using gradient descent. Whereas with a decision tree, you can only train one tree at a time. So if you're building a system of multiple machine learning models working together, it might be easier to string together and train multiple neural networks than multiple decision trees. The reasons for this are quite technical, and you don't need to worry about them for the purpose of this course. But it relates to the fact that even when you string together multiple neural networks, you can train them all together using gradient descent, whereas for decision trees, you can only train one decision tree at a time. So that's it. You've reached the end of the videos for this course on advanced learning algorithms. Thank you for sticking with me all this way, and congratulations on getting to the end of the videos on advanced learning algorithms. You've now learned how to build and use both neural networks and decision trees, and you've also heard a variety of tips and practical advice on how to get these algorithms to work well for you. But even with all that you've seen on supervised learning, that's just part of what learning algorithms can do. Supervised learning needs labeled datasets, with the labels Y in your training set. 
There's another set of very powerful algorithms, called unsupervised learning algorithms, where you don't even need labels Y for the algorithm to figure out very interesting patterns and to do things with the data that you have. So I look forward to seeing you also in the third and final course of this specialization, which will be on unsupervised learning. Now, before you finish up this course, I hope you also enjoy practicing the ideas of decision trees in the practice quizzes and in the practice labs. I'd like to wish you the best of luck in the practice labs. For those of you that may be Star Wars fans, let me say: may the forest be with you.
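The earlier point that chained neural networks are smooth functions of their inputs, and so can be trained jointly with gradient descent, can be sketched numerically. Reducing each "model" to a single-parameter linear map is an assumption made purely to keep the sketch tiny; the invented target function is y = 2x, so the composed slope w1*w2 should approach 2.

```python
# Two chained one-parameter "models": y_hat = w2 * (w1 * x).
# Because each stage is a smooth function of its input, the squared-loss
# gradient flows through both stages, and both parameters train together.
w1, w2 = 0.5, 0.5
lr = 0.01
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x

for _ in range(2000):
    for x, y in data:
        h = w1 * x            # output of model 1
        y_hat = w2 * h        # model 2 consumes model 1's output
        err = y_hat - y       # derivative of 0.5 * (y_hat - y)**2 w.r.t. y_hat
        grad_w2 = err * h     # chain rule through model 2
        grad_w1 = err * w2 * x  # chain rule continues through model 1
        w2 -= lr * grad_w2
        w1 -= lr * grad_w1

print(round(w1 * w2, 3))  # composed slope approaches 2.0
```

A pair of decision trees composed the same way would offer no such gradient, which is the technical reason the video gives for training one tree at a time.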
[{"start": 0.0, "end": 8.0, "text": " Both decision trees, including tree ensembles, as well as neural networks, are very powerful,"}, {"start": 8.0, "end": 10.200000000000001, "text": " very effective learning algorithms."}, {"start": 10.200000000000001, "end": 12.68, "text": " When should you pick one or the other?"}, {"start": 12.68, "end": 15.64, "text": " Let's look at some of the pros and cons of each."}, {"start": 15.64, "end": 21.0, "text": " Decision trees and tree ensembles will often work well on tabular data, also called structured"}, {"start": 21.0, "end": 22.44, "text": " data."}, {"start": 22.44, "end": 28.080000000000002, "text": " And what that means is if your data set looks like a giant spreadsheet, then decision trees"}, {"start": 28.08, "end": 30.4, "text": " would be worth considering."}, {"start": 30.4, "end": 36.72, "text": " So for example, in the housing price prediction application, we had a data set with features"}, {"start": 36.72, "end": 41.08, "text": " corresponding to the size of the house, the number of bedrooms, the number of floors,"}, {"start": 41.08, "end": 42.8, "text": " and the age of the home."}, {"start": 42.8, "end": 49.08, "text": " And that type of data stored in a spreadsheet with either categorical or continuous value"}, {"start": 49.08, "end": 53.84, "text": " features and both for classification or for regression tasks, where you're trying to predict"}, {"start": 53.84, "end": 57.36, "text": " a discrete category or predict a number."}, {"start": 57.36, "end": 63.04, "text": " All of these problems are ones that decision trees can do well on."}, {"start": 63.04, "end": 69.6, "text": " In contrast, I would not recommend using decision trees and tree ensembles on unstructured data."}, {"start": 69.6, "end": 75.16, "text": " And that's data such as images, video, audio, and text that you're less likely to store"}, {"start": 75.16, "end": 77.8, "text": " in a spreadsheet format."}, {"start": 77.8, "end": 81.16, "text": 
" Neural networks, as we'll see in a second, will tend to work better for unstructured"}, {"start": 81.16, "end": 83.24, "text": " data tasks."}, {"start": 83.24, "end": 88.32, "text": " One huge advantage of decision trees and tree ensembles is that they can be very fast to"}, {"start": 88.32, "end": 90.28, "text": " train."}, {"start": 90.28, "end": 95.96, "text": " You might remember this diagram from the previous week in which we talked about the iterative"}, {"start": 95.96, "end": 98.72, "text": " loop of machine learning development."}, {"start": 98.72, "end": 104.11999999999999, "text": " If your model takes many hours to train, then that limits how quickly you can go through"}, {"start": 104.11999999999999, "end": 107.47999999999999, "text": " this loop and improve the performance of your algorithm."}, {"start": 107.48, "end": 113.36, "text": " But because decision trees, including tree ensembles, tend to be pretty fast to train,"}, {"start": 113.36, "end": 118.24000000000001, "text": " that allows you to go through this loop more quickly and maybe more efficiently improve"}, {"start": 118.24000000000001, "end": 121.0, "text": " the performance of your learning algorithm."}, {"start": 121.0, "end": 125.60000000000001, "text": " Finally, small decision trees may be human interpretable."}, {"start": 125.60000000000001, "end": 131.0, "text": " If you are training just a single decision tree and if that decision tree has only, say,"}, {"start": 131.0, "end": 136.78, "text": " a few dozen nodes, you may be able to print out the decision tree to understand exactly"}, {"start": 136.78, "end": 139.2, "text": " how it's making decisions."}, {"start": 139.2, "end": 144.72, "text": " I think that the interpretability of decision trees is sometimes a bit overstated because"}, {"start": 144.72, "end": 149.72, "text": " when you build an ensemble of a hundred trees and if each of those trees has hundreds of"}, {"start": 149.72, "end": 155.2, "text": " nodes, then 
looking at that ensemble to figure out what it's doing does become difficult"}, {"start": 155.2, "end": 158.24, "text": " and may need some separate visualization techniques."}, {"start": 158.24, "end": 162.92000000000002, "text": " But if you have a small decision tree, you can actually look at it and see, oh, it's"}, {"start": 162.92, "end": 169.35999999999999, "text": " classifying whether something is a cat by looking at certain features in certain ways."}, {"start": 169.35999999999999, "end": 176.35999999999999, "text": " If you've decided to use a decision tree or tree ensemble, I would probably use XGBoost"}, {"start": 176.35999999999999, "end": 179.56, "text": " for most of the applications I would work on."}, {"start": 179.56, "end": 184.51999999999998, "text": " One slight downside of a tree ensemble is that it is a bit more expensive than a single"}, {"start": 184.51999999999998, "end": 186.11999999999998, "text": " decision tree."}, {"start": 186.11999999999998, "end": 191.11999999999998, "text": " And so if you had a very, very constrained computational budget, you might use a single"}, {"start": 191.12, "end": 196.84, "text": " decision tree, but other than that setting, I would almost always use a tree ensemble"}, {"start": 196.84, "end": 198.72, "text": " and use XGBoost in particular."}, {"start": 198.72, "end": 201.12, "text": " How about neural networks?"}, {"start": 201.12, "end": 207.04, "text": " In contrast to decision trees and tree ensembles, it works well on all types of data, including"}, {"start": 207.04, "end": 212.84, "text": " tabular or structured data, as well as unstructured data, as well as mixed data that includes"}, {"start": 212.84, "end": 216.68, "text": " both structured and unstructured components."}, {"start": 216.68, "end": 222.32, "text": " Whereas on tabular structured data, neural networks and decision trees are often both"}, {"start": 222.32, "end": 228.6, "text": " competitive on unstructured data, such as images, 
video, audio, and text, a neural network will"}, {"start": 228.6, "end": 234.60000000000002, "text": " really be the preferred algorithm and not a decision tree or tree ensemble."}, {"start": 234.60000000000002, "end": 240.56, "text": " On the downside, though, neural networks may be slower than a decision tree."}, {"start": 240.56, "end": 245.08, "text": " A large neural network can just take a long time to train."}, {"start": 245.08, "end": 250.04000000000002, "text": " Other benefits of neural networks includes that it works with transfer learning."}, {"start": 250.04000000000002, "end": 254.68, "text": " And this is really important because for many applications where you have only a small data"}, {"start": 254.68, "end": 261.68, "text": " set, being able to use transfer learning and carry out pre-training on a much larger data"}, {"start": 261.68, "end": 266.2, "text": " set that is critical to getting competitive performance."}, {"start": 266.2, "end": 272.52000000000004, "text": " Finally, there are also some technical reasons why it might be easier to string together"}, {"start": 272.52, "end": 277.91999999999996, "text": " multiple neural networks to build a larger machine learning system."}, {"start": 277.91999999999996, "end": 285.24, "text": " And the basic reason is neural networks compute the output y as a smooth or continuous function"}, {"start": 285.24, "end": 286.47999999999996, "text": " of the input x."}, {"start": 286.47999999999996, "end": 291.52, "text": " And so even if you string together a lot of different models, the outputs of all of these"}, {"start": 291.52, "end": 294.2, "text": " different models is self-differentiable."}, {"start": 294.2, "end": 297.79999999999995, "text": " And so you can train them all at the same time using gradient descent."}, {"start": 297.79999999999995, "end": 302.08, "text": " Whereas with a decision tree, you can only train one tree at a time."}, {"start": 302.08, "end": 307.44, "text": " Finally, if you're 
building a system of multiple machine learning models working together,"}, {"start": 307.44, "end": 311.91999999999996, "text": " it might be easier to string together and train multiple neural networks than multiple"}, {"start": 311.91999999999996, "end": 313.76, "text": " decision trees."}, {"start": 313.76, "end": 317.28, "text": " The reasons for this are quite technical, and you don't need to worry about it for the"}, {"start": 317.28, "end": 318.28, "text": " purpose of this course."}, {"start": 318.28, "end": 323.56, "text": " But it relates to that even when you string together multiple neural networks, you can"}, {"start": 323.56, "end": 327.4, "text": " train them all together using gradient descent."}, {"start": 327.4, "end": 332.88, "text": " Whereas for decision trees, you can only train one decision tree at a time."}, {"start": 332.88, "end": 333.88, "text": " So that's it."}, {"start": 333.88, "end": 338.88, "text": " You've reached the end of the videos for this course on advanced learning algorithms."}, {"start": 338.88, "end": 342.47999999999996, "text": " Thank you for sticking with me all this way and congratulations on getting to the end"}, {"start": 342.47999999999996, "end": 345.88, "text": " of the videos on advanced learning algorithms."}, {"start": 345.88, "end": 350.67999999999995, "text": " You've now learned how to build and use both neural networks and decision trees, and also"}, {"start": 350.67999999999995, "end": 356.52, "text": " heard about a variety of tips, practical advice on how to get these algorithms to work well"}, {"start": 356.52, "end": 358.15999999999997, "text": " for you."}, {"start": 358.15999999999997, "end": 363.68, "text": " But even with all that you've seen on supervised learning, that's just part of what learning"}, {"start": 363.68, "end": 365.59999999999997, "text": " algorithms can do."}, {"start": 365.59999999999997, "end": 370.96, "text": " Supervised learnings need label datasets with the labels Y on your 
training set."}, {"start": 370.96, "end": 375.68, "text": " There's another set of very powerful algorithms called unsupervised learning algorithms, where"}, {"start": 375.68, "end": 381.03999999999996, "text": " you don't even need labels Y for the algorithm to figure out very interesting patterns and"}, {"start": 381.03999999999996, "end": 383.44, "text": " to do things with the data that you have."}, {"start": 383.44, "end": 389.12, "text": " So I look forward to seeing you also in the third and final course of this specialization,"}, {"start": 389.12, "end": 392.48, "text": " which will be on unsupervised learning."}, {"start": 392.48, "end": 397.76, "text": " Now before you finish up this course, I hope you also enjoy practicing the ideas of decision"}, {"start": 397.76, "end": 401.68, "text": " trees in their practice quizzes and in their practice labs."}, {"start": 401.68, "end": 405.4, "text": " I'd like to wish you the best of luck in the practice labs."}, {"start": 405.4, "end": 413.79999999999995, "text": " For those of you that may be Star Wars fans, let me say may the forest be with you."}]