|
WEBVTT Kind: captions; Language: en-US |
|
|
|
NOTE |
|
Created on 2024-02-07T20:56:50.6784114Z by ClassTranscribe |
|
|
|
00:01:05.170 --> 00:01:05.740 |
|
All right. |
|
|
|
00:01:05.740 --> 00:01:06.410 |
|
Good morning. |
|
|
|
00:01:07.400 --> 00:01:08.500 |
|
Happy Valentine's Day. |
|
|
|
00:01:12.140 --> 00:01:16.250 |
|
So let's just start with some thinking. |
|
|
|
00:01:16.250 --> 00:01:17.940 |
|
Warm up with some review questions. |
|
|
|
00:01:17.940 --> 00:01:20.285 |
|
So just as a reminder, there are
|
|
|
00:01:20.285 --> 00:01:23.370 |
|
questions posted on the main page for |
|
|
|
00:01:23.370 --> 00:01:24.250 |
|
each of the lectures. |
|
|
|
00:01:24.250 --> 00:01:26.230 |
|
They go up a couple of days after the lecture, usually.
|
|
|
00:01:27.300 --> 00:01:29.060 |
|
So they are good review for the exam or |
|
|
|
00:01:29.060 --> 00:01:30.350 |
|
just to refresh.
|
|
|
00:01:31.620 --> 00:01:34.450 |
|
So, first question. These are true or
|
|
|
00:01:34.450 --> 00:01:34.860 |
|
false. |
|
|
|
00:01:34.860 --> 00:01:36.390 |
|
So there's just two answers. |
|
|
|
00:01:37.710 --> 00:01:39.710 |
|
Unlike SVM, linear and logistic |
|
|
|
00:01:39.710 --> 00:01:42.210 |
|
regression loss always adds a nonzero |
|
|
|
00:01:42.210 --> 00:01:44.280 |
|
penalty over all training data points. |
|
|
|
00:01:45.300 --> 00:01:46.970 |
|
How many people think that's true? |
|
|
|
00:01:50.360 --> 00:01:52.040 |
|
How many people think it's False? |
|
|
|
00:01:54.790 --> 00:01:56.690 |
|
I think the abstains have it. |
|
|
|
00:01:57.990 --> 00:02:01.560 |
|
Alright, so this is true because |
|
|
|
00:02:01.560 --> 00:02:03.260 |
|
with logistic regression you always have a
|
|
|
00:02:03.260 --> 00:02:04.376 |
|
log probability loss. |
|
|
|
00:02:04.376 --> 00:02:06.000 |
|
So that applies to every training |
|
|
|
00:02:06.000 --> 00:02:10.595 |
|
example, whereas SVM by contrast only has
|
|
|
00:02:10.595 --> 00:02:12.340 |
|
loss on points that are within the |
|
|
|
00:02:12.340 --> 00:02:12.590 |
|
margin. |
|
|
|
00:02:12.590 --> 00:02:14.300 |
|
So as long as you're really confident, |
|
|
|
00:02:14.300 --> 00:02:16.270 |
|
SVM doesn't really care about you. |
|
|
|
00:02:16.880 --> 00:02:19.500 |
|
But logistic regression still does. |
|
|
|
00:02:20.430 --> 00:02:23.670 |
|
So, second question. We talked about
|
|
|
00:02:23.670 --> 00:02:25.640 |
|
the Pegasos algorithm for SVM, which is
|
|
|
00:02:25.640 --> 00:02:28.170 |
|
doing like a gradient descent where you |
|
|
|
00:02:28.170 --> 00:02:30.300 |
|
process examples one at a time or in |
|
|
|
00:02:30.300 --> 00:02:31.250 |
|
small batches.
|
|
|
00:02:31.860 --> 00:02:35.580 |
|
And after each example you compute the
|
|
|
00:02:35.580 --> 00:02:36.370 |
|
gradient
|
|
|
00:02:37.140 --> 00:02:39.830 |
|
of the error for those examples with
|
|
|
00:02:39.830 --> 00:02:41.140 |
|
respect to the weights, and then you
|
|
|
00:02:41.140 --> 00:02:44.110 |
|
take a step to reduce your loss. |
|
|
|
00:02:45.320 --> 00:02:47.280 |
|
That's the Pegasos algorithm when it's
|
|
|
00:02:47.280 --> 00:02:48.250 |
|
applied to SVM. |
|
|
|
00:02:50.220 --> 00:02:53.900 |
|
So is it true? |
|
|
|
00:02:53.900 --> 00:02:55.240 |
|
I guess I sort of
|
|
|
00:02:56.480 --> 00:02:59.063 |
|
said part of this, but is it true that
|
|
|
00:02:59.063 --> 00:03:01.170 |
|
this increases the computational |
|
|
|
00:03:01.170 --> 00:03:03.668 |
|
efficiency versus like optimizing over |
|
|
|
00:03:03.668 --> 00:03:04.460 |
|
the full data set? |
|
|
|
00:03:04.460 --> 00:03:06.136 |
|
If you were to take a gradient over the |
|
|
|
00:03:06.136 --> 00:03:07.589 |
|
full data set. True?
|
|
|
00:03:09.720 --> 00:03:10.550 |
|
OK. False?
|
|
|
00:03:12.460 --> 00:03:14.290 |
|
How many people think it's True?
|
|
|
00:03:14.290 --> 00:03:15.260 |
|
Put your hand up. |
|
|
|
00:03:16.100 --> 00:03:18.100 |
|
How many people think it's False? |
|
|
|
00:03:18.100 --> 00:03:18.959 |
|
Put your hand up. |
|
|
|
00:03:18.960 --> 00:03:20.060 |
|
I don't think it's False, but I'm |
|
|
|
00:03:20.060 --> 00:03:21.712 |
|
putting my hand up. |
|
|
|
00:03:21.712 --> 00:03:22.690 |
|
It's true. |
|
|
|
00:03:22.690 --> 00:03:24.016 |
|
So it does. |
|
|
|
00:03:24.016 --> 00:03:26.230 |
|
It does give a big efficiency gain |
|
|
|
00:03:26.230 --> 00:03:27.560 |
|
because you don't have to keep on |
|
|
|
00:03:27.560 --> 00:03:29.205 |
|
computing the gradient over all the |
|
|
|
00:03:29.205 --> 00:03:31.070 |
|
samples, which would be really slow. |
|
|
|
00:03:31.070 --> 00:03:32.750 |
|
You just have to do it over a small |
|
|
|
00:03:32.750 --> 00:03:34.240 |
|
number of samples and still take a |
|
|
|
00:03:34.240 --> 00:03:34.980 |
|
productive step. |
|
|
|
00:03:35.730 --> 00:03:38.510 |
|
And furthermore, it's a much better |
|
|
|
00:03:38.510 --> 00:03:39.550 |
|
optimization algorithm. |
|
|
|
00:03:39.550 --> 00:03:40.980 |
|
It gives you the ability to escape |
|
|
|
00:03:40.980 --> 00:03:44.180 |
|
local minima to find better solutions, |
|
|
|
00:03:44.180 --> 00:03:46.340 |
|
even if locally. |
|
|
|
00:03:47.620 --> 00:03:49.370 |
|
Even if locally, there's nowhere to go |
|
|
|
00:03:49.370 --> 00:03:51.480 |
|
to improve the total score of your data |
|
|
|
00:03:51.480 --> 00:03:51.690 |
|
set. |
|
|
|
00:03:53.270 --> 00:03:53.580 |
|
All right. |
|
|
|
00:03:53.580 --> 00:03:56.460 |
|
And then the third question: Pegasos
|
|
|
00:03:56.460 --> 00:03:58.350 |
|
has a disadvantage that the larger the |
|
|
|
00:03:58.350 --> 00:04:00.130 |
|
training data set, the slower it can be |
|
|
|
00:04:00.130 --> 00:04:02.770 |
|
optimized to reach a particular test |
|
|
|
00:04:02.770 --> 00:04:03.000 |
|
error. |
|
|
|
00:04:03.000 --> 00:04:04.020 |
|
Is that true or false? |
|
|
|
00:04:06.150 --> 00:04:08.710 |
|
So how many people think that is true? |
|
|
|
00:04:11.020 --> 00:04:12.580 |
|
And how many people think that is |
|
|
|
00:04:12.580 --> 00:04:12.980 |
|
False? |
|
|
|
00:04:14.530 --> 00:04:16.290 |
|
So there's more falses, and that's |
|
|
|
00:04:16.290 --> 00:04:16.705 |
|
correct. |
|
|
|
00:04:16.705 --> 00:04:18.145 |
|
Yeah, it's False. |
|
|
|
00:04:18.145 --> 00:04:21.280 |
|
The bigger your training set, it means |
|
|
|
00:04:21.280 --> 00:04:23.265 |
|
that given a certain number of |
|
|
|
00:04:23.265 --> 00:04:24.450 |
|
iterations, you're just going to see |
|
|
|
00:04:24.450 --> 00:04:28.445 |
|
more new examples, and so you will take |
|
|
|
00:04:28.445 --> 00:04:31.679 |
|
more informative steps of your |
|
|
|
00:04:31.680 --> 00:04:32.310 |
|
gradient. |
|
|
|
00:04:32.310 --> 00:04:35.060 |
|
And so, given the same number of
|
|
|
00:04:35.060 --> 00:04:37.415 |
|
iterations, you're going to reach a |
|
|
|
00:04:37.415 --> 00:04:38.990 |
|
given test error faster. |
|
|
|
00:04:40.370 --> 00:04:42.340 |
|
To fully optimize over the data set, |
|
|
|
00:04:42.340 --> 00:04:43.640 |
|
it's going to take more time. |
|
|
|
00:04:43.640 --> 00:04:46.290 |
|
If you want to do say 300 passes |
|
|
|
00:04:46.290 --> 00:04:47.804 |
|
through the data, then if you have more |
|
|
|
00:04:47.804 --> 00:04:49.730 |
|
data then it's going to take longer to |
|
|
|
00:04:49.730 --> 00:04:51.350 |
|
do those 300 passes. |
|
|
|
00:04:51.350 --> 00:04:52.960 |
|
But you're going to reach the same test |
|
|
|
00:04:52.960 --> 00:04:54.680 |
|
error a lot faster if you keep getting |
|
|
|
00:04:54.680 --> 00:04:56.946 |
|
new examples than if you are processing
|
|
|
00:04:56.946 --> 00:04:58.820 |
|
the same examples over and over again. |
|
|
|
00:05:02.240 --> 00:05:05.480 |
|
So the reason, one of the reasons that |
|
|
|
00:05:05.480 --> 00:05:08.360 |
|
I talked about SVM and Pegasos is as an
|
|
|
00:05:08.360 --> 00:05:10.980 |
|
introduction to Perceptrons and MLPs.
|
|
|
00:05:11.790 --> 00:05:14.210 |
|
Because the optimization algorithm used |
|
|
|
00:05:14.210 --> 00:05:16.260 |
|
in Pegasos
|
|
|
00:05:17.060 --> 00:05:18.700 |
|
is the same as what's used for
|
|
|
00:05:18.700 --> 00:05:22.070 |
|
Perceptrons and is extended when we do |
|
|
|
00:05:22.070 --> 00:05:23.380 |
|
Backprop in MLPs.
|
|
|
00:05:24.410 --> 00:05:25.900 |
|
So I'm going to talk about what is the |
|
|
|
00:05:25.900 --> 00:05:26.360 |
|
Perceptron? |
|
|
|
00:05:26.360 --> 00:05:28.050 |
|
What is an MLP? |
|
|
|
00:05:28.050 --> 00:05:29.050 |
|
Multilayer perceptron? |
|
|
|
00:05:29.970 --> 00:05:32.790 |
|
And then how do we optimize it with SGD |
|
|
|
00:05:32.790 --> 00:05:34.685 |
|
and backpropagation, with some examples
|
|
|
00:05:34.685 --> 00:05:36.000 |
|
and demos and stuff.
|
|
|
00:05:37.300 --> 00:05:39.733 |
|
Alright, so Perceptron is actually just |
|
|
|
00:05:39.733 --> 00:05:41.940 |
|
a linear classifier or linear |
|
|
|
00:05:41.940 --> 00:05:42.670 |
|
predictor. |
|
|
|
00:05:42.670 --> 00:05:45.490 |
|
You could say a linear logistic
|
|
|
00:05:45.490 --> 00:05:48.080 |
|
regressor is one form of a Perceptron |
|
|
|
00:05:48.080 --> 00:05:49.797 |
|
and a linear regressor is another form |
|
|
|
00:05:49.797 --> 00:05:50.600 |
|
of a Perceptron. |
|
|
|
00:05:51.830 --> 00:05:54.275 |
|
What makes it a Perceptron is just like |
|
|
|
00:05:54.275 --> 00:05:56.100 |
|
how you think about it or how you draw |
|
|
|
00:05:56.100 --> 00:05:56.595 |
|
it. |
|
|
|
00:05:56.595 --> 00:05:59.290 |
|
So in the representation you typically |
|
|
|
00:05:59.290 --> 00:06:01.740 |
|
see a Perceptron as you have like some |
|
|
|
00:06:01.740 --> 00:06:02.115 |
|
inputs. |
|
|
|
00:06:02.115 --> 00:06:04.420 |
|
So these could be like pixels of an |
|
|
|
00:06:04.420 --> 00:06:07.870 |
|
image, your features, the temperature |
|
|
|
00:06:07.870 --> 00:06:09.640 |
|
data, whatever the features that you're |
|
|
|
00:06:09.640 --> 00:06:10.640 |
|
going to use for prediction. |
|
|
|
00:06:11.420 --> 00:06:13.179 |
|
And then you have some Weights. |
|
|
|
00:06:13.180 --> 00:06:15.900 |
|
Those get multiplied by the inputs and |
|
|
|
00:06:15.900 --> 00:06:16.580 |
|
summed up. |
|
|
|
00:06:16.580 --> 00:06:18.500 |
|
And then you have your output |
|
|
|
00:06:18.500 --> 00:06:19.450 |
|
prediction. |
|
|
|
00:06:19.450 --> 00:06:22.220 |
|
And if you have a
|
|
|
00:06:22.220 --> 00:06:25.180 |
|
classifier, you would say that if |
|
|
|
00:06:25.180 --> 00:06:27.960 |
|
weights dot X, this is a dot
|
|
|
00:06:27.960 --> 00:06:29.650 |
|
product, so it's the same as W
|
|
|
00:06:29.650 --> 00:06:32.170 |
|
transpose X, plus some bias term,
|
|
|
00:06:32.170 --> 00:06:33.890 |
|
if it's greater than zero, then you
|
|
|
00:06:33.890 --> 00:06:35.424 |
|
predict a 1, and if it's less than zero, you
|
|
|
00:06:35.424 --> 00:06:36.189 |
|
predict a -1.
|
|
|
00:06:36.840 --> 00:06:38.316 |
|
So it's basically the same as the |
|
|
|
00:06:38.316 --> 00:06:40.610 |
|
linear SVM, logistic regressor, |
|
|
|
00:06:40.610 --> 00:06:42.140 |
|
or any other kind of linear
|
|
|
00:06:42.140 --> 00:06:42.450 |
|
model. |
|
|
|
00:06:44.690 --> 00:06:46.956 |
|
But you draw it as this network, as |
|
|
|
00:06:46.956 --> 00:06:48.680 |
|
this little tiny network with a bunch |
|
|
|
00:06:48.680 --> 00:06:50.180 |
|
of Weights and input and output. |
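
As a rough sketch of the classifier just described (not the lecture's actual code; the function and variable names are my own):

```python
import numpy as np

def perceptron_predict(w, b, x):
    """Linear classifier: predict +1 if w.x + b > 0, else -1."""
    score = np.dot(w, x) + b      # w transpose x plus the bias term
    return 1 if score > 0 else -1
```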
|
|
|
00:06:53.260 --> 00:06:54.910 |
|
Though whoops. |
|
|
|
00:06:54.910 --> 00:06:56.190 |
|
Skip something? |
|
|
|
00:06:56.190 --> 00:06:56.680 |
|
No. |
|
|
|
00:06:56.680 --> 00:06:59.530 |
|
OK, all right, so how do we optimize |
|
|
|
00:06:59.530 --> 00:07:00.566 |
|
the Perceptron? |
|
|
|
00:07:00.566 --> 00:07:03.010 |
|
So let's say you can have different |
|
|
|
00:07:03.010 --> 00:07:04.539 |
|
error functions on the Perceptron. |
|
|
|
00:07:04.540 --> 00:07:06.166 |
|
But let's say we have a squared error. |
|
|
|
00:07:06.166 --> 00:07:09.010 |
|
So the prediction of the Perceptron is,
|
|
|
00:07:09.010 --> 00:07:10.670 |
|
like I said before, a linear
|
|
|
00:07:10.670 --> 00:07:12.450 |
|
function, a sum of the weights
|
|
|
00:07:12.450 --> 00:07:14.630 |
|
times the inputs plus some bias term. |
|
|
|
00:07:16.260 --> 00:07:18.660 |
|
And you could have a squared
|
|
|
00:07:18.660 --> 00:07:20.820 |
|
error which says that you want the |
|
|
|
00:07:20.820 --> 00:07:22.360 |
|
prediction to be close to the target. |
|
|
|
00:07:24.240 --> 00:07:25.250 |
|
And then? |
|
|
|
00:07:26.430 --> 00:07:31.100 |
|
So the update rule or the optimization |
|
|
|
00:07:31.100 --> 00:07:34.350 |
|
is based on taking a step to update |
|
|
|
00:07:34.350 --> 00:07:36.010 |
|
each of your Weights in order to |
|
|
|
00:07:36.010 --> 00:07:38.520 |
|
decrease the error for particular |
|
|
|
00:07:38.520 --> 00:07:39.200 |
|
examples. |
|
|
|
00:07:40.470 --> 00:07:42.330 |
|
So to do that, we need to take the |
|
|
|
00:07:42.330 --> 00:07:44.830 |
|
partial derivative of the error with |
|
|
|
00:07:44.830 --> 00:07:46.320 |
|
respect to each of the Weights. |
|
|
|
00:07:48.650 --> 00:07:50.137 |
|
And we're going to use the chain rule |
|
|
|
00:07:50.137 --> 00:07:50.835 |
|
to do that. |
|
|
|
00:07:50.835 --> 00:07:52.310 |
|
So I put the chain rule here for |
|
|
|
00:07:52.310 --> 00:07:52.770 |
|
reference. |
|
|
|
00:07:52.770 --> 00:07:53.890 |
|
So the chain rule says. |
|
|
|
00:07:54.630 --> 00:07:57.010 |
|
If you have some function of X, that's |
|
|
|
00:07:57.010 --> 00:07:59.250 |
|
actually a function of a function of X, |
|
|
|
00:07:59.250 --> 00:08:00.280 |
|
so F of G of X. |
|
|
|
00:08:01.110 --> 00:08:04.860 |
|
Then the derivative of that function H |
|
|
|
00:08:04.860 --> 00:08:07.490 |
|
is the derivative of the outer function |
|
|
|
00:08:07.490 --> 00:08:09.567 |
|
with the arguments of the inner |
|
|
|
00:08:09.567 --> 00:08:12.086 |
|
function times the derivative of the |
|
|
|
00:08:12.086 --> 00:08:12.789 |
|
inner function. |
|
|
|
00:08:14.210 --> 00:08:16.660 |
|
If I apply that Chain Rule here, I've |
|
|
|
00:08:16.660 --> 00:08:18.980 |
|
got a square function here, so I take |
|
|
|
00:08:18.980 --> 00:08:20.290 |
|
the derivative of this. |
|
|
|
00:08:21.030 --> 00:08:24.599 |
|
And I get 2 times the inside, so that's |
|
|
|
00:08:24.600 --> 00:08:26.800 |
|
my F prime of G of X and that's over |
|
|
|
00:08:26.800 --> 00:08:29.140 |
|
here, so 2 times (f(x) - y).
|
|
|
00:08:29.990 --> 00:08:32.020 |
|
Times the derivative of the inside |
|
|
|
00:08:32.020 --> 00:08:34.170 |
|
function, which is f(x) - y.
|
|
|
00:08:34.970 --> 00:08:37.240 |
|
And the derivative of this guy with |
|
|
|
00:08:37.240 --> 00:08:40.880 |
|
respect to w_i will just be x_i, because
|
|
|
00:08:40.880 --> 00:08:42.387 |
|
there's only one term.
|
|
|
00:08:42.387 --> 00:08:44.930 |
|
There's a big sum here, but only one of |
|
|
|
00:08:44.930 --> 00:08:48.345 |
|
the terms involves w_i, and that term
|
|
|
00:08:48.345 --> 00:08:51.005 |
|
is w_i x_i, and all the rest are zero.
|
|
|
00:08:51.005 --> 00:08:52.820 |
|
I mean, all the rest don't involve it, |
|
|
|
00:08:52.820 --> 00:08:53.870 |
|
so the derivative is 0. |
|
|
|
00:08:54.860 --> 00:08:55.160 |
|
Right. |
|
|
|
00:08:56.420 --> 00:08:59.840 |
|
So this gives me my update rule: 2 times
|
|
|
00:08:59.840 --> 00:09:02.380 |
|
(f(x) - y) times x_i.
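
Written out as a formula (my notation for what was just described):

$$
E = (f(x) - y)^2,\qquad f(x) = \sum_i w_i x_i + b,
\qquad
\frac{\partial E}{\partial w_i} = 2\,(f(x) - y)\,x_i .
$$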
|
|
|
00:09:03.300 --> 00:09:04.860 |
|
So in other words, if my prediction is |
|
|
|
00:09:04.860 --> 00:09:07.800 |
|
too high and x_i is positive, or wait,
|
|
|
00:09:07.800 --> 00:09:08.720 |
|
let me get to the update. |
|
|
|
00:09:08.720 --> 00:09:10.539 |
|
OK, so first the update is. |
|
|
|
00:09:10.540 --> 00:09:11.940 |
|
I'm going to subtract that off because |
|
|
|
00:09:11.940 --> 00:09:13.150 |
|
I want to reduce the error so it's |
|
|
|
00:09:13.150 --> 00:09:14.350 |
|
gradient descent. |
|
|
|
00:09:14.350 --> 00:09:16.400 |
|
So I want the gradient to go down |
|
|
|
00:09:16.400 --> 00:09:16.730 |
|
South. |
|
|
|
00:09:16.730 --> 00:09:18.826 |
|
I take my weight and I subtract the |
|
|
|
00:09:18.826 --> 00:09:20.260 |
|
gradient with some learning rate or |
|
|
|
00:09:20.260 --> 00:09:20.940 |
|
step size. |
|
|
|
00:09:21.770 --> 00:09:25.220 |
|
So if f(x) - y is positive and x_i is
|
|
|
00:09:25.220 --> 00:09:25.630 |
|
positive. |
|
|
|
00:09:26.560 --> 00:09:27.970 |
|
That means that. |
|
|
|
00:09:28.670 --> 00:09:30.548 |
|
That I'm overshooting and I want my |
|
|
|
00:09:30.548 --> 00:09:32.000 |
|
weight to go down. |
|
|
|
00:09:32.000 --> 00:09:34.929 |
|
If f(x) - y is negative and x_i is
|
|
|
00:09:34.930 --> 00:09:37.809 |
|
positive, then I want my weight to go |
|
|
|
00:09:37.810 --> 00:09:38.870 |
|
in the opposite direction. |
|
|
|
00:09:38.870 --> 00:09:39.200 |
|
Question. |
|
|
|
00:09:44.340 --> 00:09:47.000 |
|
This update or this derivative? |
|
|
|
00:09:51.770 --> 00:09:55.975 |
|
So first we're trying to figure out how |
|
|
|
00:09:55.975 --> 00:09:58.740 |
|
do we minimize this error function? |
|
|
|
00:09:58.740 --> 00:10:01.620 |
|
How do we change our Weights in a way |
|
|
|
00:10:01.620 --> 00:10:03.490 |
|
that will reduce our error a bit for |
|
|
|
00:10:03.490 --> 00:10:05.080 |
|
this for these particular examples? |
|
|
|
00:10:06.420 --> 00:10:10.110 |
|
And if you want to know like how you. |
|
|
|
00:10:10.710 --> 00:10:13.146 |
|
How you change the value, how some |
|
|
|
00:10:13.146 --> 00:10:16.080 |
|
change in your parameters will change |
|
|
|
00:10:16.080 --> 00:10:18.049 |
|
the output of some function. |
|
|
|
00:10:18.050 --> 00:10:19.753 |
|
Then you take the derivative of that |
|
|
|
00:10:19.753 --> 00:10:20.950 |
|
function with respect to your |
|
|
|
00:10:20.950 --> 00:10:21.300 |
|
parameters. |
|
|
|
00:10:21.300 --> 00:10:22.913 |
|
So that gives you like the slope of the |
|
|
|
00:10:22.913 --> 00:10:24.400 |
|
function as you change your parameters. |
|
|
|
00:10:25.490 --> 00:10:28.175 |
|
So that's why we're taking the partial |
|
|
|
00:10:28.175 --> 00:10:29.732 |
|
derivative and then the partial |
|
|
|
00:10:29.732 --> 00:10:30.960 |
|
derivative says like how you would |
|
|
|
00:10:30.960 --> 00:10:32.430 |
|
change your parameters to increase the |
|
|
|
00:10:32.430 --> 00:10:34.920 |
|
function, so we subtract that from the |
|
|
|
00:10:34.920 --> 00:10:36.030 |
|
original value.
|
|
|
00:10:37.190 --> 00:10:37.500 |
|
OK. |
|
|
|
00:10:39.410 --> 00:10:40.010 |
|
Question. |
|
|
|
00:10:53.040 --> 00:10:54.590 |
|
So the question is, what if you had |
|
|
|
00:10:54.590 --> 00:10:56.100 |
|
like additional labels? |
|
|
|
00:10:56.100 --> 00:10:58.290 |
|
Then you would end up with essentially |
|
|
|
00:10:58.290 --> 00:11:02.227 |
|
you have different F of X, so you'd |
|
|
|
00:11:02.227 --> 00:11:04.261 |
|
have like f1 of X, f2 of X, et cetera.
|
|
|
00:11:04.261 --> 00:11:06.780 |
|
You'd have one f of X for each label that
|
|
|
00:11:06.780 --> 00:11:07.600 |
|
you're producing. |
|
|
|
00:11:10.740 --> 00:11:10.985 |
|
Yes. |
|
|
|
00:11:10.985 --> 00:11:12.500 |
|
So you need to update their weights and |
|
|
|
00:11:12.500 --> 00:11:13.120 |
|
they would all. |
|
|
|
00:11:13.120 --> 00:11:14.820 |
|
In the case of a Perceptron, they'd all |
|
|
|
00:11:14.820 --> 00:11:16.360 |
|
be independent because they're all just |
|
|
|
00:11:16.360 --> 00:11:17.630 |
|
like linear models. |
|
|
|
00:11:17.630 --> 00:11:20.060 |
|
So it would end up being essentially |
|
|
|
00:11:20.060 --> 00:11:23.010 |
|
the same as training N Perceptrons to |
|
|
|
00:11:23.010 --> 00:11:24.220 |
|
do N outputs.
|
|
|
00:11:25.810 --> 00:11:28.710 |
|
But yeah, that becomes a little more |
|
|
|
00:11:28.710 --> 00:11:30.430 |
|
complicated in the case of MLPs, where
|
|
|
00:11:30.430 --> 00:11:32.060 |
|
you could share intermediate features,
|
|
|
00:11:32.060 --> 00:11:33.620 |
|
but we'll get to that in a little bit.
|
|
|
00:11:35.340 --> 00:11:36.690 |
|
So here's my update. |
|
|
|
00:11:36.690 --> 00:11:38.500 |
|
So I'm going to improve the weights a |
|
|
|
00:11:38.500 --> 00:11:40.630 |
|
little bit to reduce the error on
|
|
|
00:11:40.630 --> 00:11:42.640 |
|
these particular examples, and I just |
|
|
|
00:11:42.640 --> 00:11:45.540 |
|
put the two into the learning rate. |
|
|
|
00:11:45.540 --> 00:11:47.250 |
|
Sometimes people write this error as
|
|
|
00:11:47.250 --> 00:11:49.200 |
|
1/2 of this just so they don't have to |
|
|
|
00:11:49.200 --> 00:11:50.150 |
|
deal with the two. |
|
|
|
00:11:51.580 --> 00:11:53.130 |
|
Right, so here's the whole optimization |
|
|
|
00:11:53.130 --> 00:11:53.630 |
|
algorithm. |
|
|
|
00:11:54.600 --> 00:11:56.140 |
|
Randomly initialize my Weights. |
|
|
|
00:11:56.140 --> 00:11:57.930 |
|
For example, I say W is drawn from some |
|
|
|
00:11:57.930 --> 00:11:59.660 |
|
Gaussian with a mean of zero and |
|
|
|
00:11:59.660 --> 00:12:00.590 |
|
standard deviation of
|
|
|
00:12:01.380 --> 00:12:05.320 |
|
0.05, so just initialize them
|
|
|
00:12:05.320 --> 00:12:05.850 |
|
small. |
|
|
|
00:12:06.870 --> 00:12:09.400 |
|
And then for each iteration or epoch. |
|
|
|
00:12:09.400 --> 00:12:10.900 |
|
An epoch is like a cycle through the |
|
|
|
00:12:10.900 --> 00:12:11.635 |
|
training data. |
|
|
|
00:12:11.635 --> 00:12:14.050 |
|
I split the data into batches so I |
|
|
|
00:12:14.050 --> 00:12:16.420 |
|
could have a batch size of 1 or 128, |
|
|
|
00:12:16.420 --> 00:12:19.980 |
|
but I just chunk it into different sets |
|
|
|
00:12:19.980 --> 00:12:21.219 |
|
that form a partition of the data.
|
|
|
00:12:22.120 --> 00:12:23.740 |
|
And I set my learning rate. |
|
|
|
00:12:23.740 --> 00:12:25.940 |
|
For example, this is the schedule used |
|
|
|
00:12:25.940 --> 00:12:27.230 |
|
by Pegasos.
|
|
|
00:12:27.230 --> 00:12:29.150 |
|
But sometimes people use a constant |
|
|
|
00:12:29.150 --> 00:12:29.660 |
|
learning rate. |
|
|
|
00:12:31.140 --> 00:12:33.459 |
|
And then for each batch and for each |
|
|
|
00:12:33.460 --> 00:12:34.040 |
|
weight. |
|
|
|
00:12:34.880 --> 00:12:37.630 |
|
I have my update rule so I have this |
|
|
|
00:12:37.630 --> 00:12:39.315 |
|
gradient for each weight.
|
|
|
00:12:39.315 --> 00:12:41.675 |
|
I take the sum over all the examples in |
|
|
|
00:12:41.675 --> 00:12:42.330 |
|
my batch. |
|
|
|
00:12:43.120 --> 00:12:46.480 |
|
And then I get the output, the
|
|
|
00:12:46.480 --> 00:12:48.920 |
|
prediction for that sample, subtract it
|
|
|
00:12:48.920 --> 00:12:50.510 |
|
from the true value according to my |
|
|
|
00:12:50.510 --> 00:12:51.290 |
|
training labels. |
|
|
|
00:12:52.200 --> 00:12:55.510 |
|
I multiply by the input for that weight, which is
|
|
|
00:12:55.510 --> 00:12:58.810 |
|
x_ni, I sum that up, divide by the total
|
|
|
00:12:58.810 --> 00:13:00.260 |
|
number of samples in my batch. |
|
|
|
00:13:00.910 --> 00:13:05.010 |
|
And then I take a step in that negative |
|
|
|
00:13:05.010 --> 00:13:07.070 |
|
direction weighted by the learning |
|
|
|
00:13:07.070 --> 00:13:07.460 |
|
rate. |
|
|
|
00:13:07.460 --> 00:13:09.120 |
|
So if the learning rate's really high,
|
|
|
00:13:09.120 --> 00:13:10.920 |
|
I'm going to take really big steps, but |
|
|
|
00:13:10.920 --> 00:13:12.790 |
|
then you risk like overstepping the |
|
|
|
00:13:12.790 --> 00:13:15.933 |
|
minimum or even kind of bouncing out
|
|
|
00:13:15.933 --> 00:13:18.910 |
|
into like an undefined solution.
|
|
|
00:13:19.540 --> 00:13:21.100 |
|
If your learning rate is really low, then
|
|
|
00:13:21.100 --> 00:13:22.650 |
|
it's just going to take a long time to |
|
|
|
00:13:22.650 --> 00:13:23.090 |
|
converge. |
|
|
|
00:13:24.570 --> 00:13:26.820 |
|
The Perceptron is a linear model, so
|
|
|
00:13:26.820 --> 00:13:28.610 |
|
it's fully optimizable.
|
|
|
00:13:28.610 --> 00:13:30.910 |
|
It's always possible to find the
|
|
|
00:13:30.910 --> 00:13:32.060 |
|
global minimum. |
|
|
|
00:13:32.060 --> 00:13:34.220 |
|
It's a really nicely behaved |
|
|
|
00:13:34.220 --> 00:13:35.400 |
|
optimization problem. |
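
A minimal sketch of that loop in NumPy (my own function and variable names; the learning-rate schedule here is a simple stand-in, not the exact one used in the demo):

```python
import numpy as np

def sgd_perceptron(X, y, epochs=10, batch_size=128, lr0=0.1):
    """Sketch of the SGD loop just described (squared-error perceptron).
    Assumes X is (N, D) with a column of ones appended for the bias, y is (N,)."""
    N, D = X.shape
    w = np.random.randn(D) * 0.05                # small random initialization
    for epoch in range(epochs):
        lr = lr0 / (1.0 + epoch)                 # simple decaying schedule (Pegasos uses 1/(lambda*t))
        order = np.random.permutation(N)         # shuffle the data each epoch
        for start in range(0, N, batch_size):
            idx = order[start:start + batch_size]
            pred = X[idx] @ w                    # linear prediction f(x) = w . x
            grad = X[idx].T @ (pred - y[idx]) / len(idx)   # mean of (f(x) - y) * x over the batch
            w -= lr * grad                       # gradient-descent step
    return w
```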
|
|
|
00:13:41.540 --> 00:13:45.580 |
|
So you can have different losses. If
|
|
|
00:13:45.580 --> 00:13:47.840 |
|
instead of having a squared loss I
|
|
|
00:13:47.840 --> 00:13:49.980 |
|
had a logistic loss, for example.
|
|
|
00:13:50.650 --> 00:13:53.110 |
|
Then it would mean that
|
|
|
00:13:53.110 --> 00:13:55.170 |
|
the Perceptron is still
|
|
|
00:13:55.170 --> 00:13:56.660 |
|
computing this linear function. |
|
|
|
00:13:57.260 --> 00:13:59.220 |
|
But then I say that my error is a |
|
|
|
00:13:59.220 --> 00:14:01.290 |
|
negative log probability of the True |
|
|
|
00:14:01.290 --> 00:14:03.220 |
|
label given the features. |
|
|
|
00:14:04.450 --> 00:14:06.280 |
|
And where the probability is given by |
|
|
|
00:14:06.280 --> 00:14:07.740 |
|
this Sigmoid function. |
|
|
|
00:14:08.640 --> 00:14:12.230 |
|
Which is 1 / (1 + e to the negative
|
|
|
00:14:12.230 --> 00:14:14.060 |
|
y times f(x)), so.
|
|
|
00:14:15.050 --> 00:14:15.730 |
|
So this. |
|
|
|
00:14:15.810 --> 00:14:16.430 |
|
|
|
|
|
00:14:17.800 --> 00:14:21.280 |
|
Now the derivative of this you can. |
|
|
|
00:14:21.280 --> 00:14:23.536 |
|
It's not super complicated to take this |
|
|
|
00:14:23.536 --> 00:14:25.050 |
|
derivative, but it does involve like |
|
|
|
00:14:25.050 --> 00:14:27.510 |
|
several lines, and I decided it's not
|
|
|
00:14:27.510 --> 00:14:29.833 |
|
really worth stepping through the |
|
|
|
00:14:29.833 --> 00:14:30.109 |
|
lines. |
|
|
|
00:14:30.880 --> 00:14:33.974 |
|
The main point is that for any kind of |
|
|
|
00:14:33.974 --> 00:14:35.908 |
|
activation function, you can compute |
|
|
|
00:14:35.908 --> 00:14:37.843 |
|
the derivative of that activation |
|
|
|
00:14:37.843 --> 00:14:40.208 |
|
function, or for any kind of error |
|
|
|
00:14:40.208 --> 00:14:41.920 |
|
function, I mean you can compute the |
|
|
|
00:14:41.920 --> 00:14:43.190 |
|
derivative of that error function. |
|
|
|
00:14:44.000 --> 00:14:45.480 |
|
And then you plug it in there. |
|
|
|
00:14:45.480 --> 00:14:50.453 |
|
So now this becomes y_n times x_ni
|
|
|
00:14:50.453 --> 00:14:53.990 |
|
times 1 minus the probability
|
|
|
00:14:53.990 --> 00:14:56.170 |
|
of the correct label given the data. |
|
|
|
00:14:57.450 --> 00:14:58.705 |
|
So this kind of makes sense. |
|
|
|
00:14:58.705 --> 00:15:02.120 |
|
So if this plus this. |
|
|
|
00:15:02.210 --> 00:15:02.440 |
|
OK. |
|
|
|
00:15:03.080 --> 00:15:04.930 |
|
This ends up being a plus because I'm |
|
|
|
00:15:04.930 --> 00:15:06.760 |
|
decreasing the negative log likelihood, |
|
|
|
00:15:06.760 --> 00:15:08.696 |
|
which is the same as increasing the log |
|
|
|
00:15:08.696 --> 00:15:08.999 |
|
likelihood. |
|
|
|
00:15:09.790 --> 00:15:14.120 |
|
And if x_n is positive and y is
|
|
|
00:15:14.120 --> 00:15:15.750 |
|
positive then I want to increase the |
|
|
|
00:15:15.750 --> 00:15:18.680 |
|
score further and so I want the weight |
|
|
|
00:15:18.680 --> 00:15:19.740 |
|
to go up. |
|
|
|
00:15:20.690 --> 00:15:24.610 |
|
And the step size will be weighted by |
|
|
|
00:15:24.610 --> 00:15:26.050 |
|
how wrong I was. |
|
|
|
00:15:26.050 --> 00:15:29.540 |
|
So 1 minus the probability of y = y_n
|
|
|
00:15:29.540 --> 00:15:30.910 |
|
given x_n is like how wrong I was if
|
|
|
00:15:30.910 --> 00:15:31.370 |
|
this is. |
|
|
|
00:15:32.010 --> 00:15:33.490 |
|
If I was perfectly correct, then this |
|
|
|
00:15:33.490 --> 00:15:34.380 |
|
is going to be zero. |
|
|
|
00:15:34.380 --> 00:15:35.557 |
|
I don't need to take any step. |
|
|
|
00:15:35.557 --> 00:15:38.240 |
|
If I was completely confidently wrong, |
|
|
|
00:15:38.240 --> 00:15:39.500 |
|
then this is going to be one and so |
|
|
|
00:15:39.500 --> 00:15:40.320 |
|
I'll take a bigger step. |
|
|
|
00:15:43.020 --> 00:15:47.330 |
|
Right, so the, so just the. |
|
|
|
00:15:47.410 --> 00:15:50.280 |
|
Step depends on the loss, but with any |
|
|
|
00:15:50.280 --> 00:15:51.950 |
|
loss you can do a similar kind of |
|
|
|
00:15:51.950 --> 00:15:53.670 |
|
strategy as long as you can take a |
|
|
|
00:15:53.670 --> 00:15:54.640 |
|
derivative of the loss. |
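
For the logistic loss just discussed, the pieces fit together like this (my notation):

$$
P(y \mid x) = \frac{1}{1 + e^{-y\,f(x)}},\qquad
E_n = -\log P(y_n \mid x_n),
$$
$$
\frac{\partial E_n}{\partial w_i} = -\,y_n\,x_{ni}\,\bigl(1 - P(y_n \mid x_n)\bigr),
\qquad
w_i \leftarrow w_i + \lambda\, y_n\, x_{ni}\,\bigl(1 - P(y_n \mid x_n)\bigr).
$$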
|
|
|
00:15:58.860 --> 00:15:59.150 |
|
OK. |
|
|
|
00:16:02.620 --> 00:16:06.160 |
|
Alright, so let's see, is a Perceptron |
|
|
|
00:16:06.160 --> 00:16:06.550 |
|
enough? |
|
|
|
00:16:06.550 --> 00:16:10.046 |
|
So which of these functions do you |
|
|
|
00:16:10.046 --> 00:16:14.166 |
|
think can be fit with the Perceptron? |
|
|
|
00:16:14.166 --> 00:16:17.085 |
|
So what about the first function? |
|
|
|
00:16:17.085 --> 00:16:19.036 |
|
Do you think we can fit that with the |
|
|
|
00:16:19.036 --> 00:16:19.349 |
|
Perceptron? |
|
|
|
00:16:21.040 --> 00:16:21.600 |
|
Yeah. |
|
|
|
00:16:21.600 --> 00:16:23.050 |
|
What about the second one? |
|
|
|
00:16:26.950 --> 00:16:29.620 |
|
OK, yeah, some people are confidently |
|
|
|
00:16:29.620 --> 00:16:33.150 |
|
yes, and some people are not so sure we |
|
|
|
00:16:33.150 --> 00:16:34.170 |
|
will see. |
|
|
|
00:16:34.170 --> 00:16:35.320 |
|
What about this function? |
|
|
|
00:16:37.440 --> 00:16:38.599 |
|
Yeah, definitely not. |
|
|
|
00:16:38.600 --> 00:16:40.260 |
|
Well, I'm giving it away, but definitely
|
|
|
00:16:40.260 --> 00:16:41.010 |
|
not that one, right? |
|
|
|
00:16:42.700 --> 00:16:43.920 |
|
All right, so let's see. |
|
|
|
00:16:43.920 --> 00:16:45.030 |
|
So here's the demo.
|
|
|
00:16:46.770 --> 00:16:48.060 |
|
I'm going to switch. |
|
|
|
00:16:48.060 --> 00:16:49.205 |
|
I only have one. |
|
|
|
00:16:49.205 --> 00:16:51.620 |
|
I only have one USB port, so. |
|
|
|
00:16:54.050 --> 00:16:56.210 |
|
I don't want to use my touchpad. |
|
|
|
00:17:06.710 --> 00:17:07.770 |
|
OK. |
|
|
|
00:17:07.770 --> 00:17:08.500 |
|
Don't go there. |
|
|
|
00:17:08.500 --> 00:17:09.120 |
|
OK. |
|
|
|
00:17:31.910 --> 00:17:34.170 |
|
OK, so I've got some functions here. |
|
|
|
00:17:34.170 --> 00:17:35.889 |
|
I've got that linear function that I |
|
|
|
00:17:35.889 --> 00:17:36.163 |
|
showed. |
|
|
|
00:17:36.163 --> 00:17:38.378 |
|
I've got the rounded function that I |
|
|
|
00:17:38.378 --> 00:17:39.790 |
|
showed, continuous, nonlinear. |
|
|
|
00:17:40.600 --> 00:17:44.060 |
|
Giving it away, but anyway. And then I've got a
|
|
|
00:17:44.060 --> 00:17:45.850 |
|
more nonlinear function that had like
|
|
|
00:17:45.850 --> 00:17:47.380 |
|
that little circle on the right side. |
|
|
|
00:17:49.580 --> 00:17:51.750 |
|
I've got display functions that I don't want
|
|
|
00:17:51.750 --> 00:17:54.455 |
|
to talk about because they're just for |
|
|
|
00:17:54.455 --> 00:17:55.636 |
|
display and they're really complicated |
|
|
|
00:17:55.636 --> 00:17:57.560 |
|
and I sort of like found it somewhere |
|
|
|
00:17:57.560 --> 00:17:59.530 |
|
and modified it, but it's definitely |
|
|
|
00:17:59.530 --> 00:17:59.980 |
|
not the point. |
|
|
|
00:18:00.730 --> 00:18:02.650 |
|
Let me just see if this will run. |
|
|
|
00:18:06.740 --> 00:18:08.060 |
|
All right, let's take for granted that |
|
|
|
00:18:08.060 --> 00:18:09.130 |
|
it will and move on. |
|
|
|
00:18:09.250 --> 00:18:09.870 |
|
|
|
|
|
00:18:11.440 --> 00:18:15.200 |
|
So then I've got so this is all display |
|
|
|
00:18:15.200 --> 00:18:15.650 |
|
function. |
|
|
|
00:18:16.260 --> 00:18:18.140 |
|
And then I've got the Perceptron down |
|
|
|
00:18:18.140 --> 00:18:18.610 |
|
here. |
|
|
|
00:18:18.610 --> 00:18:20.415 |
|
So just a second, let me make sure. |
|
|
|
00:18:20.415 --> 00:18:22.430 |
|
OK, that's good, let me run my display |
|
|
|
00:18:22.430 --> 00:18:23.850 |
|
functions, OK. |
|
|
|
00:18:25.280 --> 00:18:27.460 |
|
So the Perceptron. |
|
|
|
00:18:27.460 --> 00:18:29.800 |
|
OK, so I've got my prediction function |
|
|
|
00:18:29.800 --> 00:18:33.746 |
|
that's just basically matmul X and |
|
|
|
00:18:33.746 --> 00:18:33.993 |
|
W.
|
|
|
00:18:33.993 --> 00:18:36.050 |
|
So I just multiply my X by my weights W.
|
|
|
00:18:37.050 --> 00:18:38.870 |
|
I've got an evaluation function to |
|
|
|
00:18:38.870 --> 00:18:40.230 |
|
compute a loss. |
|
|
|
00:18:40.230 --> 00:18:43.160 |
|
It's just 1 over (1 + e to the
|
|
|
00:18:43.160 --> 00:18:45.490 |
|
negative prediction times y).
|
|
|
00:18:46.360 --> 00:18:48.650 |
|
The negative log of that, just so it's
|
|
|
00:18:48.650 --> 00:18:49.520 |
|
the logistic loss. |
|
|
|
00:18:51.210 --> 00:18:53.346 |
|
And I'm also computing an accuracy here |
|
|
|
00:18:53.346 --> 00:18:56.430 |
|
which is just whether Y times the |
|
|
|
00:18:56.430 --> 00:18:59.710 |
|
prediction is greater than zero
|
|
|
00:18:59.710 --> 00:19:00.130 |
|
or not. |
|
|
|
00:19:01.210 --> 00:19:02.160 |
|
The average of that.
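
A minimal sketch of what those two helpers might look like (my own names; the transcript doesn't show the actual demo code):

```python
import numpy as np

def predict(X, w):
    """Linear score for each row of X (bias handled by a column of ones in X)."""
    return X @ w

def evaluate(X, y, w):
    """Logistic loss and accuracy, as described above."""
    pred = predict(X, w)
    prob = 1.0 / (1.0 + np.exp(-pred * y))    # probability of the true label
    loss = np.mean(-np.log(prob))             # logistic (negative log-likelihood) loss
    acc = np.mean(y * pred > 0)               # fraction classified on the correct side
    return loss, acc
```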
|
|
|
00:19:03.080 --> 00:19:07.420 |
|
And then I've got my SGD Perceptron |
|
|
|
00:19:07.420 --> 00:19:07.780 |
|
here. |
|
|
|
00:19:08.540 --> 00:19:11.460 |
|
So I randomly initialized my Weights. |
|
|
|
00:19:11.460 --> 00:19:13.480 |
|
Here I just did a uniform random, which |
|
|
|
00:19:13.480 --> 00:19:14.530 |
|
is OK too. |
|
|
|
00:19:14.670 --> 00:19:15.310 |
|
|
|
|
|
00:19:17.730 --> 00:19:18.900 |
|
And. |
|
|
|
00:19:20.280 --> 00:19:22.040 |
|
So this is a uniform random |
|
|
|
00:19:22.040 --> 00:19:26.040 |
|
initialization from -0.025 to 0.025.
|
|
|
00:19:27.020 --> 00:19:29.670 |
|
I added a one to my features as a way |
|
|
|
00:19:29.670 --> 00:19:30.940 |
|
of dealing with the bias term. |
|
|
|
00:19:32.500 --> 00:19:33.740 |
|
And then I've got some number of |
|
|
|
00:19:33.740 --> 00:19:34.610 |
|
iterations set. |
|
|
|
00:19:36.000 --> 00:19:37.920 |
|
And initializing something to keep |
|
|
|
00:19:37.920 --> 00:19:39.770 |
|
track of the loss and the accuracy.
|
|
|
00:19:40.470 --> 00:19:42.940 |
|
I'm going to just Evaluate to start |
|
|
|
00:19:42.940 --> 00:19:44.620 |
|
with my random Weights, so I'm not |
|
|
|
00:19:44.620 --> 00:19:45.970 |
|
expecting anything to be good, but I |
|
|
|
00:19:45.970 --> 00:19:46.840 |
|
want to start tracking it. |
|
|
|
00:19:48.010 --> 00:19:50.410 |
|
Got a batch size of 100, so I'm going to
|
|
|
00:19:50.410 --> 00:19:52.210 |
|
process 100 examples at a time. |
|
|
|
00:19:53.660 --> 00:19:55.610 |
|
I start iterating through my data. |
|
|
|
00:19:56.470 --> 00:19:58.002 |
|
I set my learning rate. |
|
|
|
00:19:58.002 --> 00:20:00.440 |
|
I did the initial learning rate. |
|
|
|
00:20:00.550 --> 00:20:01.200 |
|
|
|
|
|
00:20:02.210 --> 00:20:06.440 |
|
Divided by this thing, another number |
|
|
|
00:20:06.440 --> 00:20:09.135 |
|
and then divide by the step size. |
|
|
|
00:20:09.135 --> 00:20:11.910 |
|
This was based on the Pegasos learning
|
|
|
00:20:11.910 --> 00:20:12.490 |
|
rate schedule. |
|
|
|
00:20:13.660 --> 00:20:15.210 |
|
Then I do. |
|
|
|
00:20:15.300 --> 00:20:18.366 |
|
I randomly permute my data order, so
|
|
|
00:20:18.366 --> 00:20:20.230 |
|
I randomly permute indices. |
|
|
|
00:20:20.230 --> 00:20:22.300 |
|
So I've shuffled my data so that every |
|
|
|
00:20:22.300 --> 00:20:23.856 |
|
time I pass through I'm going to pass |
|
|
|
00:20:23.856 --> 00:20:24.920 |
|
through it in a different order. |
|
|
|
00:20:26.730 --> 00:20:29.120 |
|
Then I step through in steps of batch
|
|
|
00:20:29.120 --> 00:20:29.360 |
|
size. |
|
|
|
00:20:29.360 --> 00:20:30.928 |
|
I step through my data in steps of |
|
|
|
00:20:30.928 --> 00:20:31.396 |
|
batch size. |
|
|
|
00:20:31.396 --> 00:20:34.790 |
|
I get the indices of size batch size. |
|
|
|
00:20:35.970 --> 00:20:37.840 |
|
I make a prediction for all those |
|
|
|
00:20:37.840 --> 00:20:39.100 |
|
indices and like. |
|
|
|
00:20:39.100 --> 00:20:40.770 |
|
These reshapes make the Code kind of |
|
|
|
00:20:40.770 --> 00:20:42.690 |
|
ugly, but necessary. |
|
|
|
00:20:42.690 --> 00:20:44.080 |
|
At least seem necessary. |
|
|
|
00:20:45.260 --> 00:20:52.000 |
|
So I multiply my prediction by my y and I do the
|
|
|
00:20:52.000 --> 00:20:52.830 |
|
exponent of that. |
|
|
|
00:20:52.830 --> 00:20:54.300 |
|
So this is 1 minus the. |
|
|
|
00:20:54.300 --> 00:20:55.790 |
|
Since I don't have a negative here, |
|
|
|
00:20:55.790 --> 00:20:57.940 |
|
this is 1 minus the probability of the |
|
|
|
00:20:57.940 --> 00:20:58.390 |
|
True label. |
|
|
|
00:20:59.980 --> 00:21:01.830 |
|
This whole pred is 1 minus the |
|
|
|
00:21:01.830 --> 00:21:02.910 |
|
probability of the True label. |
|
|
|
00:21:03.650 --> 00:21:05.009 |
|
And then I have my weight update. |
|
|
|
00:21:05.010 --> 00:21:07.010 |
|
So my weight update is one over the |
|
|
|
00:21:07.010 --> 00:21:09.255 |
|
batch size times the learning rate |
|
|
|
00:21:09.255 --> 00:21:12.260 |
|
times X * Y. |
|
|
|
00:21:12.970 --> 00:21:16.750 |
|
Times this error function evaluation |
|
|
|
00:21:16.750 --> 00:21:17.390 |
|
which is pred. |
|
|
|
00:21:18.690 --> 00:21:19.360 |
|
And. |
|
|
|
00:21:20.000 --> 00:21:21.510 |
|
I'm going to sum it over all the |
|
|
|
00:21:21.510 --> 00:21:22.220 |
|
examples. |
|
|
|
00:21:23.050 --> 00:21:25.920 |
|
So this is my loss update. |
|
|
|
00:21:25.920 --> 00:21:27.550 |
|
And then I also have an L2 |
|
|
|
00:21:27.550 --> 00:21:29.560 |
|
regularization, so I'm penalizing the |
|
|
|
00:21:29.560 --> 00:21:30.550 |
|
square of the weights.
|
|
|
00:21:30.550 --> 00:21:32.027 |
|
When you penalize the square of |
|
|
|
00:21:32.027 --> 00:21:33.420 |
|
the weights, take the derivative of W
|
|
|
00:21:33.420 --> 00:21:35.620 |
|
squared and you get 2W, so you subtract
|
|
|
00:21:35.620 --> 00:21:36.400 |
|
off 2W.
|
|
|
00:21:37.870 --> 00:21:40.460 |
|
And so I take a step in that negative W |
|
|
|
00:21:40.460 --> 00:21:40.950 |
|
direction. |
|
|
|
00:21:43.170 --> 00:21:45.420 |
|
OK, so these are the same as the |
|
|
|
00:21:45.420 --> 00:21:46.610 |
|
equations I showed, they're just |
|
|
|
00:21:46.610 --> 00:21:48.487 |
|
vectorized so that I'm updating all the |
|
|
|
00:21:48.487 --> 00:21:50.510 |
|
weights at the same time and processing |
|
|
|
00:21:50.510 --> 00:21:52.462 |
|
all the data in the batch at the same |
|
|
|
00:21:52.462 --> 00:21:54.362 |
|
time, rather than looping through the |
|
|
|
00:21:54.362 --> 00:21:56.060 |
|
data and looping through the Weights. |
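
One way that vectorized batch update might look (a sketch under my assumptions about variable names, not the instructor's exact code):

```python
import numpy as np

def batch_update(w, Xb, yb, lr, reg=1e-4):
    """One vectorized SGD step with logistic loss and L2 regularization.
    Xb: (B, D) inputs (ones column appended for the bias); yb: (B,) labels in {-1, +1}."""
    pred = 1.0 / (1.0 + np.exp(yb * (Xb @ w)))      # 1 - P(true label) for each example
    grad = (Xb * (yb * pred)[:, None]).sum(axis=0)  # sum over the batch of y_n * x_n * (1 - P)
    w = w + lr * grad / len(yb)                     # increase the log-likelihood
    w = w - lr * 2 * reg * w                        # L2 penalty: derivative of reg * ||w||^2 (reg is an assumed hyperparameter)
    return w
```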
|
|
|
00:21:57.820 --> 00:21:59.920 |
|
Then I do this. |
|
|
|
00:22:00.750 --> 00:22:02.570 |
|
Until I finish get to the end of my |
|
|
|
00:22:02.570 --> 00:22:06.120 |
|
data, I compute my accuracy and my loss |
|
|
|
00:22:06.120 --> 00:22:07.870 |
|
just for plotting purposes. |
|
|
|
00:22:08.720 --> 00:22:10.950 |
|
And then at the end I do an evaluation |
|
|
|
00:22:10.950 --> 00:22:11.990 |
|
and I. |
|
|
|
00:22:12.820 --> 00:22:15.860 |
|
And I print my error and my loss and my |
|
|
|
00:22:15.860 --> 00:22:19.150 |
|
accuracy and plot things. OK, so
|
|
|
00:22:19.150 --> 00:22:19.400 |
|
that's. |
|
|
|
00:22:20.560 --> 00:22:22.680 |
|
That's SGD for Perceptron. |
|
|
|
00:22:23.940 --> 00:22:25.466 |
|
And so now let's look at it for the
|
|
|
00:22:25.466 --> 00:22:26.640 |
|
linear problem.
|
|
|
00:22:27.740 --> 00:22:29.260 |
|
So I'm going to generate some random |
|
|
|
00:22:29.260 --> 00:22:29.770 |
|
data. |
|
|
|
00:22:30.610 --> 00:22:31.190 |
|
|
|
|
|
00:22:32.380 --> 00:22:34.865 |
|
So here's some like randomly generated |
|
|
|
00:22:34.865 --> 00:22:37.190 |
|
data where like half the data is on one |
|
|
|
00:22:37.190 --> 00:22:38.730 |
|
side of this diagonal and half on the |
|
|
|
00:22:38.730 --> 00:22:39.250 |
|
other side. |
|
|
|
00:22:40.900 --> 00:22:43.220 |
|
And I plot it and then I run this |
|
|
|
00:22:43.220 --> 00:22:44.760 |
|
function that I just described. |
|
|
|
00:22:46.290 --> 00:22:48.070 |
|
And so this is what happens to the |
|
|
|
00:22:48.070 --> 00:22:48.800 |
|
loss. |
|
|
|
00:22:48.800 --> 00:22:50.800 |
|
So at first, the learning rate
|
|
|
00:22:50.800 --> 00:22:53.120 |
|
is pretty high, so it's actually kind |
|
|
|
00:22:53.120 --> 00:22:55.140 |
|
of a little wild, but then it settles |
|
|
|
00:22:55.140 --> 00:22:57.430 |
|
down and it quickly descends and |
|
|
|
00:22:57.430 --> 00:22:59.480 |
|
reaches a low
|
|
|
00:22:59.600 --> 00:23:00.000 |
|
loss.
|
|
|
00:23:01.380 --> 00:23:02.890 |
|
And here's what happens to the |
|
|
|
00:23:02.890 --> 00:23:03.930 |
|
accuracy. |
|
|
|
00:23:04.340 --> 00:23:07.910 |
|
The accuracy goes up to pretty close to |
|
|
|
00:23:07.910 --> 00:23:08.140 |
|
1. |
|
|
|
00:23:09.950 --> 00:23:12.680 |
|
And I just ran it for 10 iterations. |
|
|
|
00:23:12.680 --> 00:23:15.810 |
|
So if I decrease my learning rate and |
|
|
|
00:23:15.810 --> 00:23:17.290 |
|
let it run more, I could probably get a |
|
|
|
00:23:17.290 --> 00:23:18.490 |
|
little higher accuracy, but. |
|
|
|
00:23:21.070 --> 00:23:22.720 |
|
So notice the loss is still like far |
|
|
|
00:23:22.720 --> 00:23:25.270 |
|
from zero, even though here's my |
|
|
|
00:23:25.270 --> 00:23:26.560 |
|
decision boundary. |
|
|
|
00:23:27.800 --> 00:23:29.780 |
|
I'm going to have to zoom out. |
|
|
|
00:23:29.780 --> 00:23:31.010 |
|
That's a big boundary. |
|
|
|
00:23:31.900 --> 00:23:33.070 |
|
All right, so here's my decision |
|
|
|
00:23:33.070 --> 00:23:35.450 |
|
boundary, and you can see it's |
|
|
|
00:23:35.450 --> 00:23:37.880 |
|
classifying almost everything perfectly.
|
|
|
00:23:37.880 --> 00:23:40.210 |
|
There's some red dots that ended up on |
|
|
|
00:23:40.210 --> 00:23:41.450 |
|
the wrong side of the line. |
|
|
|
00:23:42.990 --> 00:23:45.330 |
|
And but pretty good. |
|
|
|
00:23:45.940 --> 00:23:47.880 |
|
But the loss is still fairly high |
|
|
|
00:23:47.880 --> 00:23:49.430 |
|
because it's still paying a loss even |
|
|
|
00:23:49.430 --> 00:23:51.370 |
|
for these correctly classified samples |
|
|
|
00:23:51.370 --> 00:23:52.350 |
|
that are near the boundary. |
|
|
|
00:23:52.350 --> 00:23:54.020 |
|
In fact, for all the samples it's paying
|
|
|
00:23:54.020 --> 00:23:55.880 |
|
some loss, but it pretty much fit
|
|
|
00:23:55.880 --> 00:23:56.620 |
|
this function, right?
|
|
|
00:23:58.160 --> 00:23:59.790 |
|
Alright, so now let's look at our non |
|
|
|
00:23:59.790 --> 00:24:00.770 |
|
linear problem. |
|
|
|
00:24:01.520 --> 00:24:04.910 |
|
With the rounded curve. |
|
|
|
00:24:04.910 --> 00:24:06.560 |
|
So I just basically did like a circle |
|
|
|
00:24:06.560 --> 00:24:08.040 |
|
that overlaps with the feature space. |
|
|
|
00:24:11.370 --> 00:24:13.630 |
|
Here's what happened with the loss.
|
|
|
00:24:13.630 --> 00:24:15.960 |
|
Nice descent leveled out. |
|
|
|
00:24:17.390 --> 00:24:21.400 |
|
My accuracy went up, but it's topped |
|
|
|
00:24:21.400 --> 00:24:22.210 |
|
out at 90%. |
|
|
|
00:24:23.150 --> 00:24:25.000 |
|
And that's because I can't fit this |
|
|
|
00:24:25.000 --> 00:24:26.380 |
|
perfectly with the linear function, |
|
|
|
00:24:26.380 --> 00:24:26.620 |
|
right? |
|
|
|
00:24:26.620 --> 00:24:28.518 |
|
The best I can do with the linear |
|
|
|
00:24:28.518 --> 00:24:29.800 |
|
function it means that I'm drawing a |
|
|
|
00:24:29.800 --> 00:24:31.070 |
|
line in this 2D space. |
|
|
|
00:24:31.970 --> 00:24:34.610 |
|
And the True boundary is not a line, |
|
|
|
00:24:34.610 --> 00:24:36.280 |
|
it's a semi circle. |
|
|
|
00:24:36.960 --> 00:24:39.040 |
|
And so I get these points over here |
|
|
|
00:24:39.040 --> 00:24:41.396 |
|
incorrect the red dots and I get. |
|
|
|
00:24:41.396 --> 00:24:41.989 |
|
I get. |
|
|
|
00:24:43.310 --> 00:24:47.429 |
|
It shows a line and then I get these |
|
|
|
00:24:47.430 --> 00:24:49.056 |
|
red dots incorrect, and then I get |
|
|
|
00:24:49.056 --> 00:24:49.960 |
|
these blue dots correct. |
|
|
|
00:24:49.960 --> 00:24:52.240 |
|
So it's the best line it can find, but |
|
|
|
00:24:52.240 --> 00:24:52.981 |
|
it can't fit. |
|
|
|
00:24:52.981 --> 00:24:55.340 |
|
You can't fit a nonlinear shape with |
|
|
|
00:24:55.340 --> 00:24:57.170 |
|
the line, just like you can't put a |
|
|
|
00:24:57.170 --> 00:24:58.810 |
|
square in a round hole. |
|
|
|
00:24:59.720 --> 00:25:00.453 |
|
Right. |
|
|
|
00:25:00.453 --> 00:25:03.710 |
|
So not perfect, but not horrible. |
|
|
|
00:25:04.710 --> 00:25:06.220 |
|
And then if we go to this. |
|
|
|
00:25:06.330 --> 00:25:07.010 |
|
And. |
|
|
|
00:25:07.860 --> 00:25:09.030 |
|
We go to this guy. |
|
|
|
00:25:09.830 --> 00:25:11.400 |
|
And this has this big blob here.
|
|
|
00:25:11.400 --> 00:25:12.770 |
|
So obviously we're not going to fit |
|
|
|
00:25:12.770 --> 00:25:14.100 |
|
this perfectly with a line like. |
|
|
|
00:25:14.100 --> 00:25:15.470 |
|
I could do that, I could do that. |
|
|
|
00:25:16.330 --> 00:25:20.270 |
|
But it does its best, minimizes the |
|
|
|
00:25:20.270 --> 00:25:20.730 |
|
loss. |
|
|
|
00:25:20.730 --> 00:25:23.140 |
|
Now it gets an error of 85% or an |
|
|
|
00:25:23.140 --> 00:25:24.220 |
|
accuracy of 85%. |
|
|
|
00:25:24.220 --> 00:25:25.420 |
|
This is the accuracy that I've been
|
|
|
00:25:25.420 --> 00:25:25.700 |
|
plotting. |
|
|
|
00:25:26.590 --> 00:25:28.580 |
|
So a little bit lower and what it does |
|
|
|
00:25:28.580 --> 00:25:30.340 |
|
is it puts like a straight line, like |
|
|
|
00:25:30.340 --> 00:25:31.330 |
|
straight through. |
|
|
|
00:25:31.330 --> 00:25:32.820 |
|
It just moves it a little bit to the |
|
|
|
00:25:32.820 --> 00:25:34.800 |
|
right so that these guys don't have |
|
|
|
00:25:34.800 --> 00:25:36.010 |
|
quite as high of a loss. |
|
|
|
00:25:36.990 --> 00:25:38.820 |
|
And it's still getting a lot of the |
|
|
|
00:25:38.820 --> 00:25:40.475 |
|
same examples wrong. |
|
|
|
00:25:40.475 --> 00:25:42.940 |
|
And it also gets this like blob over
|
|
|
00:25:42.940 --> 00:25:43.480 |
|
here wrong. |
|
|
|
00:25:44.300 --> 00:25:46.610 |
|
So these I should have said, these |
|
|
|
00:25:46.610 --> 00:25:49.890 |
|
boundaries are showing where the model |
|
|
|
00:25:49.890 --> 00:25:50.563 |
|
predicts 0. |
|
|
|
00:25:50.563 --> 00:25:52.170 |
|
So this is like the decision boundary. |
|
|
|
00:25:53.140 --> 00:25:53.960 |
|
And then the. |
|
|
|
00:25:54.700 --> 00:25:56.718 |
|
Faded background is showing like the |
|
|
|
00:25:56.718 --> 00:25:59.007 |
|
probability of the blue class or the |
|
|
|
00:25:59.007 --> 00:26:00.420 |
|
probability of the red class. |
|
|
|
00:26:00.420 --> 00:26:01.570 |
|
So it's bigger if it's. |
|
|
|
00:26:02.880 --> 00:26:04.200 |
|
Excuse me if it's more probable. |
|
|
|
00:26:04.830 --> 00:26:07.240 |
|
And then the dark dots are the training |
|
|
|
00:26:07.240 --> 00:26:08.980 |
|
examples. Question?
|
|
|
00:26:10.930 --> 00:26:11.280 |
|
Decision. |
|
|
|
00:26:13.920 --> 00:26:15.120 |
|
It's a straight line. |
|
|
|
00:26:15.120 --> 00:26:16.880 |
|
It's just that in order to plot, you |
|
|
|
00:26:16.880 --> 00:26:18.810 |
|
have to discretize the space and take |
|
|
|
00:26:18.810 --> 00:26:20.690 |
|
samples at different positions. |
|
|
|
00:26:20.690 --> 00:26:22.860 |
|
So we need to discretize it and fit a
|
|
|
00:26:22.860 --> 00:26:23.770 |
|
contour to it. |
|
|
|
00:26:23.770 --> 00:26:25.826 |
|
It's like wiggling, but it's really a |
|
|
|
00:26:25.826 --> 00:26:26.299 |
|
straight line. |
|
|
|
00:26:27.170 --> 00:26:27.910 |
|
Yeah, question. |
|
|
|
00:26:33.180 --> 00:26:35.320 |
|
So that's one thing you could do, but I |
|
|
|
00:26:35.320 --> 00:26:37.550 |
|
would say in this case. |
|
|
|
00:26:39.180 --> 00:26:42.890 |
|
Even polar coordinates I think would |
|
|
|
00:26:42.890 --> 00:26:45.260 |
|
probably not be linear still, but it's |
|
|
|
00:26:45.260 --> 00:26:46.580 |
|
true that you could project it into a |
|
|
|
00:26:46.580 --> 00:26:48.520 |
|
higher dimensionality and
|
|
|
00:26:48.520 --> 00:26:50.274 |
|
make it linear, and polar coordinates |
|
|
|
00:26:50.274 --> 00:26:51.280 |
|
could be part of that. |
|
|
|
00:26:52.030 --> 00:26:52.400 |
|
Question. |
|
|
|
00:27:02.770 --> 00:27:05.630 |
|
So the question was, could you layer lines
|
|
|
00:27:05.630 --> 00:27:07.089 |
|
on top of each other and use like a |
|
|
|
00:27:07.090 --> 00:27:08.560 |
|
combination of lines to make a |
|
|
|
00:27:08.560 --> 00:27:10.110 |
|
prediction, like with a decision tree?
|
|
|
00:27:11.280 --> 00:27:12.130 |
|
So definitely. |
|
|
|
00:27:12.130 --> 00:27:13.780 |
|
So you could just literally use a |
|
|
|
00:27:13.780 --> 00:27:14.930 |
|
decision tree. |
|
|
|
00:27:15.910 --> 00:27:16.870 |
|
And
|
|
|
00:27:17.840 --> 00:27:19.870 |
|
a multilayer Perceptron, which is the
|
|
|
00:27:19.870 --> 00:27:21.070 |
|
next thing that we're going to talk |
|
|
|
00:27:21.070 --> 00:27:22.742 |
|
about, is essentially doing just that. |
|
|
|
00:27:22.742 --> 00:27:25.510 |
|
You make a bunch of linear predictions |
|
|
|
00:27:25.510 --> 00:27:27.335 |
|
as your intermediate features and then |
|
|
|
00:27:27.335 --> 00:27:29.480 |
|
you have some nonlinearity like a |
|
|
|
00:27:29.480 --> 00:27:31.755 |
|
threshold and then you make linear |
|
|
|
00:27:31.755 --> 00:27:32.945 |
|
predictions from those. |
|
|
|
00:27:32.945 --> 00:27:34.800 |
|
And so that's how we're going to do it. |
|
|
|
00:27:34.860 --> 00:27:35.430 |
|
Yep. |
|
|
|
00:27:41.910 --> 00:27:42.880 |
|
So. |
|
|
|
00:27:43.640 --> 00:27:44.690 |
|
There's here. |
|
|
|
00:27:44.690 --> 00:27:46.820 |
|
It's not important because it's a |
|
|
|
00:27:46.820 --> 00:27:48.510 |
|
convex function and it's a small |
|
|
|
00:27:48.510 --> 00:27:51.195 |
|
function, but there are two purposes to a
|
|
|
00:27:51.195 --> 00:27:51.380 |
|
batch size.
|
|
|
00:27:51.380 --> 00:27:53.550 |
|
One is that you can process the
|
|
|
00:27:53.550 --> 00:27:55.110 |
|
whole batch in parallel, especially if |
|
|
|
00:27:55.110 --> 00:27:56.660 |
|
you're doing like GPU processing. |
|
|
|
00:27:57.380 --> 00:27:59.440 |
|
The other is that it gives you a more |
|
|
|
00:27:59.440 --> 00:28:00.756 |
|
stable estimate of the gradient. |
|
|
|
00:28:00.756 --> 00:28:02.290 |
|
So we're computing the gradient for |
|
|
|
00:28:02.290 --> 00:28:04.200 |
|
each of the examples, and then we're |
|
|
|
00:28:04.200 --> 00:28:06.190 |
|
summing it and dividing by the number |
|
|
|
00:28:06.190 --> 00:28:06.890 |
|
of samples. |
|
|
|
00:28:06.890 --> 00:28:08.670 |
|
So I'm getting like
|
|
|
00:28:08.670 --> 00:28:10.840 |
|
a better estimate of the mean gradient. |
|
|
|
00:28:11.490 --> 00:28:13.760 |
|
So what I'm really doing is for each |
|
|
|
00:28:13.760 --> 00:28:16.640 |
|
sample I'm getting an expectation of the
|
|
|
00:28:16.640 --> 00:28:19.605 |
|
gradient for all the data, but
|
|
|
00:28:19.605 --> 00:28:21.450 |
|
the expectation is based on just one |
|
|
|
00:28:21.450 --> 00:28:22.143 |
|
small sample. |
|
|
|
00:28:22.143 --> 00:28:24.430 |
|
So the bigger the sample, the better |
|
|
|
00:28:24.430 --> 00:28:25.910 |
|
it approximates the true gradient.
|
|
|
00:28:27.930 --> 00:28:31.620 |
|
In practice, usually bigger batch sizes |
|
|
|
00:28:31.620 --> 00:28:32.440 |
|
are better. |
|
|
|
00:28:32.590 --> 00:28:33.280 |
|
|
|
|
|
00:28:35.380 --> 00:28:37.440 |
|
And so usually people use the biggest |
|
|
|
00:28:37.440 --> 00:28:39.240 |
|
batch size that can fit into their GPU |
|
|
|
00:28:39.240 --> 00:28:41.290 |
|
memory when they're doing like deep |
|
|
|
00:28:41.290 --> 00:28:41.890 |
|
learning. |
|
|
|
00:28:42.090 --> 00:28:45.970 |
|
But you have to, like, tune
|
|
|
00:28:45.970 --> 00:28:47.660 |
|
some parameters, like you might |
|
|
|
00:28:47.660 --> 00:28:48.810 |
|
increase your learning rate, for |
|
|
|
00:28:48.810 --> 00:28:50.460 |
|
example if you have a bigger batch size |
|
|
|
00:28:50.460 --> 00:28:51.840 |
|
because you have a more stable estimate |
|
|
|
00:28:51.840 --> 00:28:53.450 |
|
of your gradient, so you can take |
|
|
|
00:28:53.450 --> 00:28:54.140 |
|
bigger steps. |
|
|
|
00:28:54.140 --> 00:28:55.180 |
|
But there's. |
|
|
|
00:28:55.850 --> 00:28:56.780 |
|
Yeah. |
|
|
|
00:29:00.230 --> 00:29:00.870 |
|
Alright. |
|
|
|
00:29:01.600 --> 00:29:03.225 |
|
So now I'm going to jump out of that. |
|
|
|
00:29:03.225 --> 00:29:05.350 |
|
I'm going to come back to this Demo. |
|
|
|
00:29:06.700 --> 00:29:07.560 |
|
In a bit. |
|
|
|
00:29:13.580 --> 00:29:16.570 |
|
Right, so I can fit linear functions |
|
|
|
00:29:16.570 --> 00:29:19.040 |
|
with this Perceptron, but sometimes I |
|
|
|
00:29:19.040 --> 00:29:20.360 |
|
want a nonlinear function. |
|
|
|
00:29:22.070 --> 00:29:23.830 |
|
So that's where the multilayer |
|
|
|
00:29:23.830 --> 00:29:25.210 |
|
perceptron comes in. |
|
|
|
00:29:25.950 --> 00:29:27.120 |
|
It's just a. |
|
|
|
00:29:28.060 --> 00:29:28.990 |
|
Perceptron with. |
|
|
|
00:29:29.770 --> 00:29:31.420 |
|
More layers of Perceptrons. |
|
|
|
00:29:31.420 --> 00:29:33.760 |
|
So basically this guy. |
|
|
|
00:29:33.760 --> 00:29:36.081 |
|
So you have in a multilayer perceptron, |
|
|
|
00:29:36.081 --> 00:29:39.245 |
|
you have your output or
|
|
|
00:29:39.245 --> 00:29:41.010 |
|
outputs, and then you have what people |
|
|
|
00:29:41.010 --> 00:29:42.880 |
|
call hidden layers, which are like |
|
|
|
00:29:42.880 --> 00:29:44.500 |
|
intermediate outputs. |
|
|
|
00:29:44.690 --> 00:29:45.390 |
|
|
|
|
|
00:29:46.180 --> 00:29:47.955 |
|
That you can think of as being some |
|
|
|
00:29:47.955 --> 00:29:50.290 |
|
kind of latent feature, some kind of |
|
|
|
00:29:50.290 --> 00:29:52.300 |
|
combination of the Input data that may |
|
|
|
00:29:52.300 --> 00:29:54.510 |
|
be useful for making a prediction, but |
|
|
|
00:29:54.510 --> 00:29:56.466 |
|
it's not like explicitly part of your |
|
|
|
00:29:56.466 --> 00:29:58.100 |
|
data vector or part of your label |
|
|
|
00:29:58.100 --> 00:29:58.570 |
|
vector. |
|
|
|
00:30:00.050 --> 00:30:02.518 |
|
So for example, this is somebody else's |
|
|
|
00:30:02.518 --> 00:30:05.340 |
|
Example, but if you wanted to predict |
|
|
|
00:30:05.340 --> 00:30:07.530 |
|
whether somebody's going to survive the |
|
|
|
00:30:07.530 --> 00:30:08.830 |
|
cancer, I think that's what this is |
|
|
|
00:30:08.830 --> 00:30:09.010 |
|
from. |
|
|
|
00:30:09.670 --> 00:30:11.900 |
|
Then you could take the age, the gender |
|
|
|
00:30:11.900 --> 00:30:14.170 |
|
and the stage, and you could just try a
|
|
|
00:30:14.170 --> 00:30:14.930 |
|
linear prediction. |
|
|
|
00:30:14.930 --> 00:30:16.030 |
|
But maybe that doesn't work well |
|
|
|
00:30:16.030 --> 00:30:16.600 |
|
enough. |
|
|
|
00:30:16.600 --> 00:30:18.880 |
|
So you create an MLP, a multilayer |
|
|
|
00:30:18.880 --> 00:30:21.500 |
|
perceptron, and then this is |
|
|
|
00:30:21.500 --> 00:30:23.029 |
|
essentially making a prediction, a |
|
|
|
00:30:23.030 --> 00:30:24.885 |
|
weighted combination of these inputs. |
|
|
|
00:30:24.885 --> 00:30:27.060 |
|
It goes through a Sigmoid, so it gets |
|
|
|
00:30:27.060 --> 00:30:28.595 |
|
mapped from zero to one. |
|
|
|
00:30:28.595 --> 00:30:30.510 |
|
You do the same for this node. |
|
|
|
00:30:31.450 --> 00:30:34.150 |
|
And then you make another
|
|
|
00:30:34.150 --> 00:30:35.770 |
|
prediction based on the outputs of |
|
|
|
00:30:35.770 --> 00:30:37.410 |
|
these two nodes, and that gives you |
|
|
|
00:30:37.410 --> 00:30:38.620 |
|
your final probability. |
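
A toy version of that little network (3 inputs, 2 hidden sigmoid units, 1 output; the numbers and names here are made-up placeholders just to show the structure, not the example's actual weights):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tiny_mlp(x, W1, b1, w2, b2):
    """x: (3,) inputs; W1: (2, 3), b1: (2,); w2: (2,), b2: scalar."""
    h = sigmoid(W1 @ x + b1)       # two hidden units, each a weighted combination of the inputs
    return sigmoid(w2 @ h + b2)    # final probability from the hidden units

# Made-up values, just to run it:
x = np.array([63.0, 1.0, 2.0])     # e.g. age, gender, stage
W1 = np.random.randn(2, 3) * 0.05
b1 = np.zeros(2)
w2 = np.random.randn(2) * 0.05
b2 = 0.0
print(tiny_mlp(x, W1, b1, w2, b2))
```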
|
|
|
00:30:41.610 --> 00:30:44.510 |
|
So this becomes a nonlinear function |
|
|
|
00:30:44.510 --> 00:30:47.597 |
|
because of these like nonlinearities in |
|
|
|
00:30:47.597 --> 00:30:48.880 |
|
that activation.
|
|
|
00:30:48.880 --> 00:30:50.795 |
|
But I know I'm
|
|
|
00:30:50.795 --> 00:30:51.740 |
|
throwing around a lot of terms. |
|
|
|
00:30:51.740 --> 00:30:53.720 |
|
I'm going to get through it all later, |
|
|
|
00:30:53.720 --> 00:30:56.400 |
|
but that's just the basic idea, all |
|
|
|
00:30:56.400 --> 00:30:56.580 |
|
right? |
|
|
|
00:30:56.580 --> 00:30:59.586 |
|
So here's another example for Digits |
|
|
|
00:30:59.586 --> 00:31:02.480 |
|
for MNIST digits, which is part of your |
|
|
|
00:31:02.480 --> 00:31:03.140 |
|
homework too. |
|
|
|
00:31:04.090 --> 00:31:05.180 |
|
So. |
|
|
|
00:31:05.800 --> 00:31:07.590 |
|
So in the digit problem, |
|
|
|
00:31:07.590 --> 00:31:11.830 |
|
we have 28 by 28 digit images, and we |
|
|
|
00:31:11.830 --> 00:31:14.430 |
|
then reshape it to a 784 dimensional |
|
|
|
00:31:14.430 --> 00:31:16.650 |
|
vector so the inputs are the pixel |
|
|
|
00:31:16.650 --> 00:31:17.380 |
|
intensities. |
|
|
|
00:31:19.680 --> 00:31:20.500 |
|
That's here. |
|
|
|
00:31:20.500 --> 00:31:24.650 |
|
So I have X0 which is 784 Values. |
|
|
|
00:31:25.720 --> 00:31:27.650 |
|
I pass it through what's called a fully |
|
|
|
00:31:27.650 --> 00:31:28.550 |
|
connected layer. |
|
|
|
00:31:28.550 --> 00:31:32.520 |
|
It's linear, it's just a matrix |
|
|
|
00:31:32.520 --> 00:31:33.350 |
|
product. |
|
|
|
00:31:34.490 --> 00:31:36.870 |
|
And now the thing here is that |
|
|
|
00:31:36.870 --> 00:31:40.110 |
|
I've got here 256 nodes in this layer, |
|
|
|
00:31:40.110 --> 00:31:42.235 |
|
which means that I'm going to predict |
|
|
|
00:31:42.235 --> 00:31:44.130 |
|
256 different Values. |
|
|
|
00:31:44.790 --> 00:31:47.489 |
|
Each one has its own vector of weights |
|
|
|
00:31:47.490 --> 00:31:50.735 |
|
and that vector of weights has size |
|
|
|
00:31:50.735 --> 00:31:53.260 |
|
784, plus one for the bias. |
|
|
|
00:31:54.530 --> 00:31:57.330 |
|
So I've got 256 values that I'm going |
|
|
|
00:31:57.330 --> 00:31:58.360 |
|
to produce here. |
|
|
|
00:31:58.360 --> 00:32:00.550 |
|
Each of them is based on a linear |
|
|
|
00:32:00.550 --> 00:32:02.210 |
|
combination of these inputs. |
|
|
|
00:32:03.690 --> 00:32:05.830 |
|
And I can store that as a matrix. |
|
|
|
00:32:07.730 --> 00:32:11.070 |
|
The matrix is W10 and it has shape |
|
|
|
00:32:11.750 --> 00:32:15.985 |
|
256 by 784 so when I multiply this 256 |
|
|
|
00:32:15.985 --> 00:32:22.817 |
|
by 784 matrix by X0, then I get a 256 by 1 |
|
|
|
00:32:22.817 --> 00:32:23.940 |
|
vector right? |
|
|
|
00:32:23.940 --> 00:32:26.150 |
|
If this is a 784 by 1 vector? |
|
|
|
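NOTE |
|
A quick shape check of the multiply just described; W10 and X0 follow the lecture's names, the values are random: |
|
import torch

W10 = torch.randn(256, 784)   # 256 output nodes, each with 784 weights
x0 = torch.randn(784, 1)      # flattened 28x28 image as a column vector
b = torch.randn(256, 1)       # one bias per output node

x1 = W10 @ x0 + b             # (256, 784) @ (784, 1) -> (256, 1)
print(x1.shape)               # torch.Size([256, 1])
|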
00:32:29.000 --> 00:32:31.510 |
|
So now I've got 256 Values. |
|
|
|
00:32:31.510 --> 00:32:34.380 |
|
I pass it through a nonlinearity and I |
|
|
|
00:32:34.380 --> 00:32:36.030 |
|
am going to talk about these things |
|
|
|
00:32:36.030 --> 00:32:36.380 |
|
more. |
|
|
|
00:32:36.380 --> 00:32:36.680 |
|
But. |
|
|
|
00:32:37.450 --> 00:32:38.820 |
|
Call it a ReLU. |
|
|
|
00:32:38.820 --> 00:32:40.605 |
|
So this just does the following. |
|
|
|
00:32:40.605 --> 00:32:42.490 |
|
If the Values would have been negative |
|
|
|
00:32:42.490 --> 00:32:44.740 |
|
then they become zero and if they're |
|
|
|
00:32:44.740 --> 00:32:48.100 |
|
positive then they stay the same. |
|
|
|
00:32:48.100 --> 00:32:49.640 |
|
So this. |
|
|
|
00:32:49.640 --> 00:32:52.629 |
|
I passed my X1 through this and it |
|
|
|
00:32:52.630 --> 00:32:55.729 |
|
becomes a Max of X1 and zero and I |
|
|
|
00:32:55.730 --> 00:32:58.060 |
|
still have 256 Values here but now it's |
|
|
|
00:32:58.060 --> 00:32:59.440 |
|
either zero or positive. |
|
|
|
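NOTE |
|
That ReLU step in code, as a small sketch (torch.relu does exactly the max-with-zero described above): |
|
import torch

x1 = torch.tensor([-2.0, -0.5, 0.0, 1.5, 3.0])
x2 = torch.relu(x1)    # same as torch.clamp(x1, min=0)
print(x2)              # tensor([0.0000, 0.0000, 0.0000, 1.5000, 3.0000])
|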
00:33:01.070 --> 00:33:03.470 |
|
And then I pass it through another |
|
|
|
00:33:03.470 --> 00:33:04.910 |
|
fully connected layer. |
|
|
|
00:33:04.910 --> 00:33:06.580 |
|
This is if I'm doing 2 hidden layers. |
|
|
|
00:33:07.490 --> 00:33:10.780 |
|
And this maps it from 256 to 10 because |
|
|
|
00:33:10.780 --> 00:33:12.250 |
|
I want to predict 10 digits. |
|
|
|
00:33:13.370 --> 00:33:16.340 |
|
So I've got a 10 by 256 matrix. |
|
|
|
00:33:16.340 --> 00:33:21.380 |
|
I do a linear mapping into 10 Values. |
|
|
|
00:33:22.120 --> 00:33:25.300 |
|
And then these can be my scores, my |
|
|
|
00:33:25.300 --> 00:33:26.115 |
|
logit scores. |
|
|
|
00:33:26.115 --> 00:33:28.460 |
|
I could have a Sigmoid after this if I |
|
|
|
00:33:28.460 --> 00:33:29.859 |
|
wanted to map it into zero to 1. |
|
|
|
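NOTE |
|
The whole 784 -> 256 -> 10 pipeline just walked through, as a minimal PyTorch sketch; the layer sizes follow the lecture, while the module names and the fake batch are illustrative: |
|
import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Flatten(),          # 28x28 image -> 784-dimensional vector
    nn.Linear(784, 256),   # fully connected layer: a 256 x 784 weight matrix plus biases
    nn.ReLU(),             # negatives become zero, positives pass through
    nn.Linear(256, 10),    # maps 256 values to 10 digit scores (logits)
)

images = torch.randn(32, 1, 28, 28)  # a fake batch of 32 digit images
logits = mlp(images)
print(logits.shape)                  # torch.Size([32, 10])
|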
00:33:31.350 --> 00:33:33.020 |
|
So these are like the main components |
|
|
|
00:33:33.020 --> 00:33:34.405 |
|
and again I'm going to talk through |
|
|
|
00:33:34.405 --> 00:33:36.130 |
|
these a bit more and show you Code and |
|
|
|
00:33:36.130 --> 00:33:36.650 |
|
stuff, but. |
|
|
|
00:33:37.680 --> 00:33:38.750 |
|
I've got my Input. |
|
|
|
00:33:38.750 --> 00:33:41.690 |
|
I've got usually like a series of fully |
|
|
|
00:33:41.690 --> 00:33:43.130 |
|
connected layers, which are just |
|
|
|
00:33:43.130 --> 00:33:45.460 |
|
learnable matrices of |
|
|
|
00:33:45.460 --> 00:33:45.910 |
|
Weights. |
|
|
|
00:33:47.370 --> 00:33:49.480 |
|
Some kind of nonlinear activation. |
|
|
|
00:33:49.480 --> 00:33:51.595 |
|
Because if I were to just stack a bunch |
|
|
|
00:33:51.595 --> 00:33:54.205 |
|
of linear weight multiplications on top |
|
|
|
00:33:54.205 --> 00:33:56.233 |
|
of each other, that just gives me a |
|
|
|
00:33:56.233 --> 00:33:58.110 |
|
linear multiplication, so there's no |
|
|
|
00:33:58.110 --> 00:33:58.836 |
|
point, right? |
|
|
|
00:33:58.836 --> 00:34:00.750 |
|
If I do a bunch of linear operations, |
|
|
|
00:34:00.750 --> 00:34:01.440 |
|
it's still linear. |
|
|
|
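NOTE |
|
A small check of that point: two stacked linear layers with no activation in between collapse into a single linear layer (random matrices, double precision to keep the comparison exact): |
|
import torch

W1 = torch.randn(256, 784, dtype=torch.float64)
W2 = torch.randn(10, 256, dtype=torch.float64)
x = torch.randn(784, dtype=torch.float64)

two_layers = W2 @ (W1 @ x)    # stack of two linear layers, no activation
one_layer = (W2 @ W1) @ x     # one equivalent 10 x 784 linear layer
print(torch.allclose(two_layers, one_layer))  # True
|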
00:34:02.510 --> 00:34:05.289 |
|
And, one second. So I |
|
|
|
00:34:05.290 --> 00:34:06.690 |
|
usually have a couple |
|
|
|
00:34:06.690 --> 00:34:08.192 |
|
of these stacked together, and then I |
|
|
|
00:34:08.192 --> 00:34:08.800 |
|
have my output. |
|
|
|
00:34:08.880 --> 00:34:09.050 |
|
Yeah. |
|
|
|
00:34:14.250 --> 00:34:16.210 |
|
So this is the nonlinear thing, this |
|
|
|
00:34:16.210 --> 00:34:18.520 |
|
ReLU. |
|
|
|
00:34:20.290 --> 00:34:23.845 |
|
So it's nonlinear because, I'll show |
|
|
|
00:34:23.845 --> 00:34:27.040 |
|
you, if it's negative it maps to 0, |
|
|
|
00:34:27.040 --> 00:34:28.678 |
|
so that's pretty nonlinear, and then |
|
|
|
00:34:28.678 --> 00:34:30.490 |
|
you have this bend and then if it's |
|
|
|
00:34:30.490 --> 00:34:32.250 |
|
positive it keeps its value. |
|
|
|
00:34:37.820 --> 00:34:42.019 |
|
So, I mean, these |
|
|
|
00:34:42.020 --> 00:34:44.600 |
|
networks, these multilayer perceptrons, |
|
|
|
00:34:44.600 --> 00:34:46.610 |
|
they're a combination of linear |
|
|
|
00:34:46.610 --> 00:34:48.369 |
|
functions and non linear functions. |
|
|
|
00:34:49.140 --> 00:34:50.930 |
|
In the linear functions, you're taking the |
|
|
|
00:34:50.930 --> 00:34:53.430 |
|
original input, multiplying it by some |
|
|
|
00:34:53.430 --> 00:34:55.590 |
|
Weights, summing it up, and then you |
|
|
|
00:34:55.590 --> 00:34:56.970 |
|
get some new output value. |
|
|
|
00:34:57.550 --> 00:34:59.670 |
|
And then you have usually a nonlinear |
|
|
|
00:34:59.670 --> 00:35:01.440 |
|
function so that you can. |
|
|
|
00:35:02.390 --> 00:35:03.900 |
|
So that when you stack a bunch of the |
|
|
|
00:35:03.900 --> 00:35:05.340 |
|
linear functions together, you end up |
|
|
|
00:35:05.340 --> 00:35:06.830 |
|
with a non linear function. |
|
|
|
00:35:06.830 --> 00:35:08.734 |
|
Similar to decision trees: in decision |
|
|
|
00:35:08.734 --> 00:35:10.633 |
|
trees, you have a very simple linear |
|
|
|
00:35:10.633 --> 00:35:11.089 |
|
function. |
|
|
|
00:35:11.090 --> 00:35:13.412 |
|
Usually you pick a feature and then you |
|
|
|
00:35:13.412 --> 00:35:13.969 |
|
threshold it. |
|
|
|
00:35:13.970 --> 00:35:15.618 |
|
That's a nonlinearity, and then you |
|
|
|
00:35:15.618 --> 00:35:17.185 |
|
pick a new feature and threshold it. |
|
|
|
00:35:17.185 --> 00:35:19.230 |
|
And so you're stacking together simple |
|
|
|
00:35:19.230 --> 00:35:21.780 |
|
linear and nonlinear functions by |
|
|
|
00:35:21.780 --> 00:35:23.260 |
|
choosing a single variable and then |
|
|
|
00:35:23.260 --> 00:35:23.820 |
|
thresholding. |
|
|
|
00:35:25.770 --> 00:35:29.880 |
|
So the simplest activation is a linear |
|
|
|
00:35:29.880 --> 00:35:31.560 |
|
activation, which is basically a |
|
|
|
00:35:31.560 --> 00:35:31.930 |
|
no-op. |
|
|
|
00:35:31.930 --> 00:35:32.820 |
|
It doesn't do anything. |
|
|
|
00:35:34.060 --> 00:35:37.140 |
|
So F of X = X, the derivative of it is |
|
|
|
00:35:37.140 --> 00:35:37.863 |
|
equal to 1. |
|
|
|
00:35:37.863 --> 00:35:39.350 |
|
And I'm going to keep on mentioning the |
|
|
|
00:35:39.350 --> 00:35:41.866 |
|
derivatives because as we'll see, we're |
|
|
|
00:35:41.866 --> 00:35:43.240 |
|
going to be doing like a Back |
|
|
|
00:35:43.240 --> 00:35:44.050 |
|
propagation. |
|
|
|
00:35:44.050 --> 00:35:46.180 |
|
So you flow the gradient back through |
|
|
|
00:35:46.180 --> 00:35:48.330 |
|
the network, and so you need to know |
|
|
|
00:35:48.330 --> 00:35:49.510 |
|
what the gradient is of these |
|
|
|
00:35:49.510 --> 00:35:51.060 |
|
activation functions in order to |
|
|
|
00:35:51.060 --> 00:35:51.550 |
|
compute that. |
|
|
|
00:35:53.050 --> 00:35:55.420 |
|
And the size of this gradient is pretty |
|
|
|
00:35:55.420 --> 00:35:56.080 |
|
important. |
|
|
|
00:35:57.530 --> 00:36:00.220 |
|
So you could use a linear |
|
|
|
00:36:00.220 --> 00:36:01.820 |
|
layer if you for example want to |
|
|
|
00:36:01.820 --> 00:36:02.358 |
|
compress data. |
|
|
|
00:36:02.358 --> 00:36:06.050 |
|
If you want to map it from 784 Input |
|
|
|
00:36:06.050 --> 00:36:09.501 |
|
pixels down to |
|
|
|
00:36:09.501 --> 00:36:10.289 |
|
100 Values. |
|
|
|
00:36:10.290 --> 00:36:13.220 |
|
Maybe you think 784 is like too high of |
|
|
|
00:36:13.220 --> 00:36:14.670 |
|
a dimension or some of those features |
|
|
|
00:36:14.670 --> 00:36:15.105 |
|
are useless. |
|
|
|
00:36:15.105 --> 00:36:17.250 |
|
So as a first step you just want to map |
|
|
|
00:36:17.250 --> 00:36:18.967 |
|
it down and you don't need to, you |
|
|
|
00:36:18.967 --> 00:36:20.720 |
|
don't need to apply any nonlinearity. |
|
|
|
00:36:22.380 --> 00:36:24.030 |
|
But you would never really stack |
|
|
|
00:36:24.030 --> 00:36:26.630 |
|
together multiple linear layers. |
|
|
|
00:36:27.670 --> 00:36:29.980 |
|
Because without nonlinear activation, |
|
|
|
00:36:29.980 --> 00:36:31.420 |
|
that's just equivalent to a |
|
|
|
00:36:31.420 --> 00:36:32.390 |
|
single linear layer. |
|
|
|
00:36:35.120 --> 00:36:37.770 |
|
Alright, so this Sigmoid. |
|
|
|
00:36:38.960 --> 00:36:42.770 |
|
So the Sigmoid, it maps. |
|
|
|
00:36:42.870 --> 00:36:43.490 |
|
|
|
|
|
00:36:44.160 --> 00:36:46.060 |
|
It maps from an infinite range, from |
|
|
|
00:36:46.060 --> 00:36:48.930 |
|
negative Infinity to Infinity, |
|
|
|
00:36:50.690 --> 00:36:52.062 |
|
to zero to 1. |
|
|
|
00:36:52.062 --> 00:36:54.358 |
|
So basically it can turn anything into |
|
|
|
00:36:54.358 --> 00:36:55.114 |
|
a probability. |
|
|
|
00:36:55.114 --> 00:36:56.485 |
|
So it's part of the logistic. |
|
|
|
00:36:56.485 --> 00:36:59.170 |
|
It's a logistic function, so it maps a |
|
|
|
00:36:59.170 --> 00:37:00.890 |
|
continuous score into a probability. |
|
|
|
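NOTE |
|
The sigmoid being described is the logistic function sigmoid(x) = 1 / (1 + exp(-x)); a quick sketch of how it squashes any score into (0, 1): |
|
import torch

scores = torch.tensor([-100.0, -2.0, 0.0, 2.0, 100.0])
probs = torch.sigmoid(scores)   # 1 / (1 + exp(-x))
print(probs)                    # roughly [0.0000, 0.1192, 0.5000, 0.8808, 1.0000]
|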
00:37:04.020 --> 00:37:04.690 |
|
So. |
|
|
|
00:37:06.100 --> 00:37:08.240 |
|
Sigmoid is actually. |
|
|
|
00:37:09.410 --> 00:37:10.820 |
|
The. |
|
|
|
00:37:11.050 --> 00:37:13.680 |
|
It's actually the reason that AI |
|
|
|
00:37:13.680 --> 00:37:15.140 |
|
stalled for 10 years. |
|
|
|
00:37:15.950 --> 00:37:18.560 |
|
So the reason is the gradient. |
|
|
|
00:37:19.610 --> 00:37:20.510 |
|
So. |
|
|
|
00:37:20.800 --> 00:37:21.560 |
|
|
|
|
|
00:37:24.180 --> 00:37:26.295 |
|
All right, so I'm going to try this. |
|
|
|
00:37:26.295 --> 00:37:28.610 |
|
So rather than talking behind the |
|
|
|
00:37:28.610 --> 00:37:31.330 |
|
Sigmoid's back, I am going to dramatize |
|
|
|
00:37:31.330 --> 00:37:32.370 |
|
the Sigmoid. |
|
|
|
00:37:32.620 --> 00:37:35.390 |
|
Right, Sigmoid. |
|
|
|
00:37:36.720 --> 00:37:37.640 |
|
What is it? |
|
|
|
00:37:38.680 --> 00:37:41.410 |
|
You set AI back for 10 years. |
|
|
|
00:37:41.410 --> 00:37:42.470 |
|
How? |
|
|
|
00:37:43.520 --> 00:37:46.120 |
|
One problem is your range. |
|
|
|
00:37:46.120 --> 00:37:48.523 |
|
You only map from |
|
|
|
00:37:48.523 --> 00:37:49.359 |
|
zero to 1. |
|
|
|
00:37:49.360 --> 00:37:50.980 |
|
Sometimes people want a little bit less |
|
|
|
00:37:50.980 --> 00:37:52.130 |
|
or a little bit more. |
|
|
|
00:37:52.130 --> 00:37:53.730 |
|
It's like, OK, I like things neat, but |
|
|
|
00:37:53.730 --> 00:37:54.690 |
|
that doesn't seem too bad. |
|
|
|
00:37:55.350 --> 00:37:58.715 |
|
Sigmoid, it's not just your range, it's |
|
|
|
00:37:58.715 --> 00:37:59.760 |
|
your gradient. |
|
|
|
00:37:59.760 --> 00:38:00.370 |
|
What? |
|
|
|
00:38:00.370 --> 00:38:02.020 |
|
It's your curves? |
|
|
|
00:38:02.020 --> 00:38:02.740 |
|
Your slope. |
|
|
|
00:38:02.740 --> 00:38:03.640 |
|
Have you looked at it? |
|
|
|
00:38:04.230 --> 00:38:05.490 |
|
It's like, So what? |
|
|
|
00:38:05.490 --> 00:38:07.680 |
|
I like my hump. The Black Eyed Peas sang a |
|
|
|
00:38:07.680 --> 00:38:08.395 |
|
song about it. |
|
|
|
00:38:08.395 --> 00:38:08.870 |
|
It's good. |
|
|
|
00:38:12.150 --> 00:38:16.670 |
|
But the problem is that the |
|
|
|
00:38:16.670 --> 00:38:19.000 |
|
hump may look good, but if you get out |
|
|
|
00:38:19.000 --> 00:38:21.110 |
|
into the very high values or the very |
|
|
|
00:38:21.110 --> 00:38:21.940 |
|
low Values. |
|
|
|
00:38:22.570 --> 00:38:25.920 |
|
You end up with this flat slope. |
|
|
|
00:38:25.920 --> 00:38:28.369 |
|
So what happens is that if you get on |
|
|
|
00:38:28.369 --> 00:38:30.307 |
|
top of this hill, you slide down into |
|
|
|
00:38:30.307 --> 00:38:32.400 |
|
the high Values or you slide down into |
|
|
|
00:38:32.400 --> 00:38:34.440 |
|
low values and then you can't move |
|
|
|
00:38:34.440 --> 00:38:34.910 |
|
anymore. |
|
|
|
00:38:34.910 --> 00:38:36.180 |
|
It's like you're sitting on top of an |
|
|
|
00:38:36.180 --> 00:38:37.677 |
|
ice hill, you slide down and you can't |
|
|
|
00:38:37.677 --> 00:38:38.730 |
|
get back up the ice hill. |
|
|
|
00:38:39.600 --> 00:38:41.535 |
|
And then if you negate it, then you |
|
|
|
00:38:41.535 --> 00:38:43.030 |
|
just slide into the center. |
|
|
|
00:38:43.030 --> 00:38:44.286 |
|
Here you still have a second derivative |
|
|
|
00:38:44.286 --> 00:38:46.250 |
|
of 0 and you just sit in there. |
|
|
|
00:38:46.250 --> 00:38:48.220 |
|
So basically you gum up the whole |
|
|
|
00:38:48.220 --> 00:38:48.526 |
|
works. |
|
|
|
00:38:48.526 --> 00:38:50.370 |
|
If you get a bunch of these stacked |
|
|
|
00:38:50.370 --> 00:38:52.070 |
|
together, you end up with a low |
|
|
|
00:38:52.070 --> 00:38:54.140 |
|
gradient somewhere and then you can't |
|
|
|
00:38:54.140 --> 00:38:55.830 |
|
optimize it and nothing happens. |
|
|
|
00:38:56.500 --> 00:39:00.690 |
|
And then Sigmoid said, man, I'm sorry, |
|
|
|
00:39:00.690 --> 00:39:02.150 |
|
man, I thought I was pretty cool. |
|
|
|
00:39:02.150 --> 00:39:04.850 |
|
People have been using me for years. |
|
|
|
00:39:04.850 --> 00:39:05.920 |
|
It's like OK Sigmoid. |
|
|
|
00:39:06.770 --> 00:39:07.500 |
|
It's OK. |
|
|
|
00:39:07.500 --> 00:39:09.410 |
|
You're still a good closer. |
|
|
|
00:39:09.410 --> 00:39:11.710 |
|
You're still good at getting the |
|
|
|
00:39:11.710 --> 00:39:12.860 |
|
probability at the end. |
|
|
|
00:39:12.860 --> 00:39:16.120 |
|
Just please don't go inside the MLP. |
|
|
|
00:39:17.370 --> 00:39:19.280 |
|
So I hope, I hope he wasn't too sad |
|
|
|
00:39:19.280 --> 00:39:20.800 |
|
about that. |
|
|
|
00:39:20.800 --> 00:39:21.770 |
|
But I am pretty mad. |
|
|
|
00:39:21.770 --> 00:39:22.130 |
|
What? |
|
|
|
00:39:27.940 --> 00:39:30.206 |
|
The problem with this Sigmoid function |
|
|
|
00:39:30.206 --> 00:39:33.160 |
|
is that stupid low gradient. |
|
|
|
00:39:33.160 --> 00:39:34.880 |
|
So everywhere out here it gets a really |
|
|
|
00:39:34.880 --> 00:39:35.440 |
|
low gradient. |
|
|
|
00:39:35.440 --> 00:39:36.980 |
|
And the problem is that remember that |
|
|
|
00:39:36.980 --> 00:39:38.570 |
|
we're going to be taking steps to try |
|
|
|
00:39:38.570 --> 00:39:40.090 |
|
to improve our. |
|
|
|
00:39:40.860 --> 00:39:42.590 |
|
Improve our error with respect to the |
|
|
|
00:39:42.590 --> 00:39:42.990 |
|
gradient. |
|
|
|
00:39:43.650 --> 00:39:45.780 |
|
And if the gradient is 0, it |
|
|
|
00:39:45.780 --> 00:39:46.967 |
|
means that your steps are zero. |
|
|
|
00:39:46.967 --> 00:39:49.226 |
|
And this is so close to zero that |
|
|
|
00:39:49.226 --> 00:39:49.729 |
|
basically. |
|
|
|
00:39:49.730 --> 00:39:51.810 |
|
I'll show you a demo later, but |
|
|
|
00:39:51.810 --> 00:39:53.960 |
|
basically this means that you. |
|
|
|
00:39:54.610 --> 00:39:56.521 |
|
You get unlucky, you get out into this. |
|
|
|
00:39:56.521 --> 00:39:57.346 |
|
It's not even unlucky. |
|
|
|
00:39:57.346 --> 00:39:58.600 |
|
It's trying to force you into this |
|
|
|
00:39:58.600 --> 00:39:59.340 |
|
side. |
|
|
|
00:39:59.340 --> 00:40:00.750 |
|
But then when you get out here you |
|
|
|
00:40:00.750 --> 00:40:02.790 |
|
can't move anymore and so you're |
|
|
|
00:40:02.790 --> 00:40:05.380 |
|
gradient based optimization just gets |
|
|
|
00:40:05.380 --> 00:40:06.600 |
|
stuck, yeah. |
|
|
|
00:40:09.560 --> 00:40:13.270 |
|
OK, so the lesson: don't put sigmoids |
|
|
|
00:40:13.270 --> 00:40:14.280 |
|
inside of your MLP. |
|
|
|
00:40:15.820 --> 00:40:18.350 |
|
This guy Relu is pretty cool. |
|
|
|
00:40:18.350 --> 00:40:22.070 |
|
He looks very unassuming and stupid, |
|
|
|
00:40:22.070 --> 00:40:24.340 |
|
but it works. |
|
|
|
00:40:24.340 --> 00:40:26.750 |
|
So you just have a. |
|
|
|
00:40:26.750 --> 00:40:28.170 |
|
Anywhere it's negative, it's zero. |
|
|
|
00:40:28.170 --> 00:40:30.470 |
|
Anywhere it's positive, you pass |
|
|
|
00:40:30.470 --> 00:40:30.850 |
|
through. |
|
|
|
00:40:31.470 --> 00:40:34.830 |
|
And what is so good about this is that for |
|
|
|
00:40:34.830 --> 00:40:37.457 |
|
all of these values, the gradient is 1. 1 |
|
|
|
00:40:37.457 --> 00:40:39.466 |
|
is like a pretty high, pretty high |
|
|
|
00:40:39.466 --> 00:40:39.759 |
|
gradient. |
|
|
|
00:40:40.390 --> 00:40:42.590 |
|
So if you notice with the Sigmoid, the |
|
|
|
00:40:42.590 --> 00:40:45.040 |
|
gradient function is the Sigmoid times |
|
|
|
00:40:45.040 --> 00:40:46.530 |
|
1 minus the Sigmoid. |
|
|
|
00:40:46.530 --> 00:40:50.240 |
|
This is maximized when F of X is equal |
|
|
|
00:40:50.240 --> 00:40:53.035 |
|
to 0.5 and that leads to 0.25. |
|
|
|
00:40:53.035 --> 00:40:55.920 |
|
So it's at most .25 and then it quickly |
|
|
|
00:40:55.920 --> 00:40:57.070 |
|
goes to zero everywhere. |
|
|
|
00:40:58.090 --> 00:40:58.890 |
|
This guy. |
|
|
|
00:40:59.570 --> 00:41:02.050 |
|
Has a gradient of 1 as long as X is |
|
|
|
00:41:02.050 --> 00:41:03.120 |
|
greater than zero. |
|
|
|
00:41:03.120 --> 00:41:04.720 |
|
And that means you've got like a lot |
|
|
|
00:41:04.720 --> 00:41:06.130 |
|
more gradient flowing through your |
|
|
|
00:41:06.130 --> 00:41:06.690 |
|
network. |
|
|
|
00:41:06.690 --> 00:41:09.140 |
|
So you can do better optimization. |
|
|
|
00:41:09.140 --> 00:41:10.813 |
|
And the range is not limited from zero |
|
|
|
00:41:10.813 --> 00:41:12.429 |
|
to 1, the range can go from zero to |
|
|
|
00:41:12.430 --> 00:41:12.900 |
|
Infinity. |
|
|
|
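NOTE |
|
A numeric version of that comparison: the sigmoid's gradient sigmoid(x) * (1 - sigmoid(x)) tops out at 0.25 and vanishes for large positive or negative inputs, while ReLU's gradient is 1 for any positive input. A small autograd sketch (the sample points are arbitrary): |
|
import torch

x = torch.tensor([-10.0, -1.0, 0.0, 1.0, 10.0], requires_grad=True)

torch.sigmoid(x).sum().backward()
print(x.grad)   # ~[0.0000, 0.1966, 0.2500, 0.1966, 0.0000], at most 0.25

x.grad = None
torch.relu(x).sum().backward()
print(x.grad)   # [0., 0., 0., 1., 1.], gradient 1 wherever x > 0
|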
00:41:15.800 --> 00:41:17.670 |
|
So. |
|
|
|
00:41:17.740 --> 00:41:18.460 |
|
|
|
|
|
00:41:19.610 --> 00:41:21.840 |
|
So let's talk about MLP architectures. |
|
|
|
00:41:21.840 --> 00:41:24.545 |
|
So the MLP architecture is that you've |
|
|
|
00:41:24.545 --> 00:41:26.482 |
|
got your inputs, you've got a bunch of |
|
|
|
00:41:26.482 --> 00:41:28.136 |
|
layers, each of those layers has a |
|
|
|
00:41:28.136 --> 00:41:29.530 |
|
bunch of nodes, and then you've got |
|
|
|
00:41:29.530 --> 00:41:30.830 |
|
Activations in between. |
|
|
|
00:41:31.700 --> 00:41:33.340 |
|
So how do you choose the number of |
|
|
|
00:41:33.340 --> 00:41:33.840 |
|
layers? |
|
|
|
00:41:33.840 --> 00:41:35.290 |
|
How do you choose the number of nodes |
|
|
|
00:41:35.290 --> 00:41:35.810 |
|
per layer? |
|
|
|
00:41:35.810 --> 00:41:37.240 |
|
It's a little bit of a black art, but |
|
|
|
00:41:37.240 --> 00:41:37.980 |
|
there is some |
|
|
|
00:41:39.100 --> 00:41:40.580 |
|
reasoning behind it. |
|
|
|
00:41:40.580 --> 00:41:42.640 |
|
So first, if you don't have any hidden |
|
|
|
00:41:42.640 --> 00:41:43.920 |
|
layers, then you're stuck with the |
|
|
|
00:41:43.920 --> 00:41:44.600 |
|
Perceptron. |
|
|
|
00:41:45.200 --> 00:41:47.650 |
|
And you have a linear model, so you can |
|
|
|
00:41:47.650 --> 00:41:48.860 |
|
only fit linear boundaries. |
|
|
|
00:41:50.580 --> 00:41:52.692 |
|
If you only |
|
|
|
00:41:52.692 --> 00:41:54.600 |
|
have 1 hidden layer, you can actually |
|
|
|
00:41:54.600 --> 00:41:56.500 |
|
fit any Boolean function, which means |
|
|
|
00:41:56.500 --> 00:41:57.789 |
|
anything where you have. |
|
|
|
00:41:58.420 --> 00:42:01.650 |
|
a bunch of 0/1 inputs and the output is |
|
|
|
00:42:01.650 --> 00:42:03.160 |
|
0/1, like classification. |
|
|
|
00:42:03.870 --> 00:42:05.780 |
|
You can fit any of those functions, but |
|
|
|
00:42:05.780 --> 00:42:07.995 |
|
the catch is that the |
|
|
|
00:42:07.995 --> 00:42:09.480 |
|
number of nodes required grows |
|
|
|
00:42:09.480 --> 00:42:10.972 |
|
exponentially in the number of inputs |
|
|
|
00:42:10.972 --> 00:42:13.150 |
|
in the worst case, because essentially |
|
|
|
00:42:13.150 --> 00:42:15.340 |
|
you're just enumerating all the different |
|
|
|
00:42:15.340 --> 00:42:16.760 |
|
combinations of inputs that are |
|
|
|
00:42:16.760 --> 00:42:18.220 |
|
possible with your internal layer. |
|
|
|
00:42:20.730 --> 00:42:26.070 |
|
So further, if you have a single |
|
|
|
00:42:26.070 --> 00:42:29.090 |
|
Sigmoid layer, that's internal. |
|
|
|
00:42:30.080 --> 00:42:32.490 |
|
Then, if it's big enough, you can |
|
|
|
00:42:32.490 --> 00:42:34.840 |
|
model every single bounded continuous |
|
|
|
00:42:34.840 --> 00:42:36.610 |
|
function, which means that if you have |
|
|
|
00:42:36.610 --> 00:42:39.333 |
|
a bunch of inputs and your output is a |
|
|
|
00:42:39.333 --> 00:42:40.610 |
|
single continuous value. |
|
|
|
00:42:41.620 --> 00:42:43.340 |
|
Or even really Multiple continuous |
|
|
|
00:42:43.340 --> 00:42:43.930 |
|
values. |
|
|
|
00:42:43.930 --> 00:42:47.100 |
|
You can approximate it to arbitrary |
|
|
|
00:42:47.100 --> 00:42:48.010 |
|
accuracy. |
|
|
|
00:42:49.240 --> 00:42:50.610 |
|
With this single Sigmoid. |
|
|
|
00:42:51.540 --> 00:42:53.610 |
|
So one layer MLP can fit like almost |
|
|
|
00:42:53.610 --> 00:42:54.170 |
|
everything. |
|
|
|
00:42:55.750 --> 00:42:58.620 |
|
And if you have a two layer MLP. |
|
|
|
00:42:59.950 --> 00:43:01.450 |
|
With Sigmoid activation. |
|
|
|
00:43:02.350 --> 00:43:03.260 |
|
Then. |
|
|
|
00:43:04.150 --> 00:43:06.900 |
|
Then you can approximate any function |
|
|
|
00:43:06.900 --> 00:43:09.700 |
|
with arbitrary accuracy, right? |
|
|
|
00:43:09.700 --> 00:43:12.510 |
|
So this all sounds pretty good. |
|
|
|
00:43:15.040 --> 00:43:16.330 |
|
So here's a question. |
|
|
|
00:43:16.330 --> 00:43:18.170 |
|
Does it ever make sense to have more |
|
|
|
00:43:18.170 --> 00:43:20.160 |
|
than two internal layers given this? |
|
|
|
00:43:23.460 --> 00:43:24.430 |
|
Why? |
|
|
|
00:43:26.170 --> 00:43:29.124 |
|
I just said that you can fit any |
|
|
|
00:43:29.124 --> 00:43:30.570 |
|
function, or I didn't say you could fit |
|
|
|
00:43:30.570 --> 00:43:30.680 |
|
it. |
|
|
|
00:43:30.680 --> 00:43:33.470 |
|
I said you can approximate any function |
|
|
|
00:43:33.470 --> 00:43:35.460 |
|
to arbitrary accuracy, which means |
|
|
|
00:43:35.460 --> 00:43:36.350 |
|
infinite precision. |
|
|
|
00:43:37.320 --> 00:43:38.110 |
|
With two layers. |
|
|
|
00:43:45.400 --> 00:43:48.000 |
|
So one thing you could say is maybe |
|
|
|
00:43:48.000 --> 00:43:50.010 |
|
it's possible to do it with two |
|
|
|
00:43:50.010 --> 00:43:51.140 |
|
layers, but you can't find the right |
|
|
|
00:43:51.140 --> 00:43:52.810 |
|
two layers, so maybe add some more and |
|
|
|
00:43:52.810 --> 00:43:53.500 |
|
then maybe you'll get there. |
|
|
|
00:43:54.380 --> 00:43:55.500 |
|
OK, that's reasonable. |
|
|
|
00:43:55.500 --> 00:43:56.260 |
|
Any other answer? |
|
|
|
00:43:56.900 --> 00:43:57.180 |
|
Yeah. |
|
|
|
00:44:04.590 --> 00:44:07.290 |
|
Right, so that's an issue that these |
|
|
|
00:44:07.290 --> 00:44:09.600 |
|
may require like a huge number of |
|
|
|
00:44:09.600 --> 00:44:10.100 |
|
nodes. |
|
|
|
00:44:10.910 --> 00:44:14.300 |
|
And so the reason to go deeper is |
|
|
|
00:44:14.300 --> 00:44:15.470 |
|
compositionality. |
|
|
|
00:44:16.980 --> 00:44:20.660 |
|
So you may be able to enumerate, for |
|
|
|
00:44:20.660 --> 00:44:22.460 |
|
example, all the different Boolean |
|
|
|
00:44:22.460 --> 00:44:24.220 |
|
things with a single layer. |
|
|
|
00:44:24.220 --> 00:44:26.760 |
|
But if you had a stack of layers, then |
|
|
|
00:44:26.760 --> 00:44:29.462 |
|
you can model: one of them |
|
|
|
00:44:29.462 --> 00:44:29.915 |
|
sort of |
|
|
|
00:44:29.915 --> 00:44:32.570 |
|
partitions the data, so it |
|
|
|
00:44:32.570 --> 00:44:34.194 |
|
creates like a bunch of models, a bunch |
|
|
|
00:44:34.194 --> 00:44:35.675 |
|
of functions, and then the next one |
|
|
|
00:44:35.675 --> 00:44:37.103 |
|
models functions of functions and |
|
|
|
00:44:37.103 --> 00:44:37.959 |
|
functions of functions. |
|
|
|
00:44:39.370 --> 00:44:42.790 |
|
Functions of functions of functions, and so on. |
|
|
|
00:44:42.790 --> 00:44:45.780 |
|
So compositionality is a more efficient |
|
|
|
00:44:45.780 --> 00:44:46.570 |
|
representation. |
|
|
|
00:44:46.570 --> 00:44:48.540 |
|
You can model things, and then |
|
|
|
00:44:48.540 --> 00:44:49.905 |
|
model combinations of those things. |
|
|
|
00:44:49.905 --> 00:44:51.600 |
|
And you can do it with fewer nodes. |
|
|
|
00:44:52.890 --> 00:44:54.060 |
|
Fewer nodes, OK. |
|
|
|
00:44:54.830 --> 00:44:57.230 |
|
So it can make sense to have more |
|
|
|
00:44:57.230 --> 00:44:58.590 |
|
than two internal layers. |
|
|
|
00:45:00.830 --> 00:45:02.745 |
|
And you can see the seductiveness of |
|
|
|
00:45:02.745 --> 00:45:03.840 |
|
the Sigmoid here like. |
|
|
|
00:45:04.650 --> 00:45:06.340 |
|
You can model anything with Sigmoid |
|
|
|
00:45:06.340 --> 00:45:08.800 |
|
activation, so why not use it right? |
|
|
|
00:45:08.800 --> 00:45:10.230 |
|
So that's why people used it. |
|
|
|
00:45:11.270 --> 00:45:12.960 |
|
That's why it got stuck. |
|
|
|
00:45:13.980 --> 00:45:14.880 |
|
So. |
|
|
|
00:45:14.950 --> 00:45:15.730 |
|
And. |
|
|
|
00:45:16.840 --> 00:45:18.770 |
|
You also have the |
|
|
|
00:45:18.770 --> 00:45:20.440 |
|
other parameters, the number of nodes |
|
|
|
00:45:20.440 --> 00:45:21.950 |
|
per hidden layer, called the width. |
|
|
|
00:45:21.950 --> 00:45:23.800 |
|
If you have more nodes, it means more |
|
|
|
00:45:23.800 --> 00:45:25.790 |
|
representational power and more |
|
|
|
00:45:25.790 --> 00:45:26.230 |
|
parameters. |
|
|
|
00:45:27.690 --> 00:45:30.460 |
|
So that's just a design decision |
|
|
|
00:45:31.600 --> 00:45:33.420 |
|
that you could fit with cross |
|
|
|
00:45:33.420 --> 00:45:34.820 |
|
validation or something. |
|
|
|
00:45:35.650 --> 00:45:37.610 |
|
And then each layer has an activation |
|
|
|
00:45:37.610 --> 00:45:39.145 |
|
function, and I talked about those |
|
|
|
00:45:39.145 --> 00:45:40.080 |
|
activation functions. |
|
|
|
00:45:41.820 --> 00:45:43.440 |
|
Here's one early example. |
|
|
|
00:45:43.440 --> 00:45:45.082 |
|
It's not an early example, but here's |
|
|
|
00:45:45.082 --> 00:45:47.700 |
|
an example of an application of MLPs. |
|
|
|
00:45:48.860 --> 00:45:51.020 |
|
To backgammon, this is a famous case. |
|
|
|
00:45:51.790 --> 00:45:53.480 |
|
From 1992. |
|
|
|
00:45:54.400 --> 00:45:56.490 |
|
For the first version: |
|
|
|
00:45:57.620 --> 00:46:00.419 |
|
So first, backgammon: one side is |
|
|
|
00:46:00.419 --> 00:46:02.120 |
|
trying to move their pieces around the |
|
|
|
00:46:02.120 --> 00:46:03.307 |
|
board one way to their home. |
|
|
|
00:46:03.307 --> 00:46:04.903 |
|
The other side is trying to move the |
|
|
|
00:46:04.903 --> 00:46:06.300 |
|
pieces around the other way to their |
|
|
|
00:46:06.300 --> 00:46:06.499 |
|
home. |
|
|
|
00:46:07.660 --> 00:46:09.060 |
|
And it's a dice based game. |
|
|
|
00:46:10.130 --> 00:46:13.110 |
|
So one way that you can represent the |
|
|
|
00:46:13.110 --> 00:46:14.900 |
|
game is just the number of pieces that |
|
|
|
00:46:14.900 --> 00:46:16.876 |
|
are on each of these spaces. |
|
|
|
00:46:16.876 --> 00:46:19.250 |
|
So the earliest version they just |
|
|
|
00:46:19.250 --> 00:46:20.510 |
|
directly represented that. |
|
|
|
00:46:21.460 --> 00:46:23.670 |
|
Then they have an MLP with one internal |
|
|
|
00:46:23.670 --> 00:46:27.420 |
|
layer that just had 40 hidden units. |
|
|
|
00:46:28.570 --> 00:46:32.270 |
|
And they played 200,000 |
|
|
|
00:46:32.270 --> 00:46:34.440 |
|
games against other programs using |
|
|
|
00:46:34.440 --> 00:46:35.390 |
|
reinforcement learning. |
|
|
|
00:46:35.390 --> 00:46:37.345 |
|
So the main idea of reinforcement |
|
|
|
00:46:37.345 --> 00:46:40.160 |
|
learning is that you have some problem |
|
|
|
00:46:40.160 --> 00:46:41.640 |
|
you want to solve, like winning the |
|
|
|
00:46:41.640 --> 00:46:41.950 |
|
game. |
|
|
|
00:46:42.740 --> 00:46:44.250 |
|
And you want to take actions that bring |
|
|
|
00:46:44.250 --> 00:46:45.990 |
|
you closer to that goal, but you don't |
|
|
|
00:46:45.990 --> 00:46:48.439 |
|
know if you won right away and so you |
|
|
|
00:46:48.440 --> 00:46:50.883 |
|
need to use like a discounted reward. |
|
|
|
00:46:50.883 --> 00:46:53.200 |
|
So usually I have like 2 reward |
|
|
|
00:46:53.200 --> 00:46:53.710 |
|
functions. |
|
|
|
00:46:53.710 --> 00:46:55.736 |
|
One is like how good is my game state? |
|
|
|
00:46:55.736 --> 00:46:57.490 |
|
So moving your pieces closer to home |
|
|
|
00:46:57.490 --> 00:46:58.985 |
|
might improve your score of the game |
|
|
|
00:46:58.985 --> 00:46:59.260 |
|
state. |
|
|
|
00:46:59.960 --> 00:47:01.760 |
|
And the other is like, did you win? |
|
|
|
00:47:01.760 --> 00:47:05.010 |
|
And then you're going to score your |
|
|
|
00:47:05.010 --> 00:47:09.060 |
|
decisions based on the reward of taking |
|
|
|
00:47:09.060 --> 00:47:10.590 |
|
the step, as well as like future |
|
|
|
00:47:10.590 --> 00:47:11.690 |
|
rewards that you receive. |
|
|
|
00:47:11.690 --> 00:47:13.888 |
|
So at the end of the game you win, then |
|
|
|
00:47:13.888 --> 00:47:15.526 |
|
that kind of flows back and tells you |
|
|
|
00:47:15.526 --> 00:47:16.956 |
|
that the steps that you took during the |
|
|
|
00:47:16.956 --> 00:47:17.560 |
|
game were good. |
|
|
|
00:47:19.060 --> 00:47:20.400 |
|
There are whole classes on |
|
|
|
00:47:20.400 --> 00:47:22.320 |
|
reinforcement learning, but that's the |
|
|
|
00:47:22.320 --> 00:47:23.770 |
|
basic idea, and that's how they |
|
|
|
00:47:24.700 --> 00:47:26.620 |
|
solved this problem. |
|
|
|
00:47:28.420 --> 00:47:30.810 |
|
So even the first version, which had |
|
|
|
00:47:30.810 --> 00:47:33.090 |
|
just very simple |
|
|
|
00:47:33.090 --> 00:47:35.360 |
|
inputs, was able to perform |
|
|
|
00:47:35.360 --> 00:47:37.205 |
|
competitively with world experts. |
|
|
|
00:47:37.205 --> 00:47:39.250 |
|
And so that was an impressive |
|
|
|
00:47:39.250 --> 00:47:43.100 |
|
demonstration that AI and machine |
|
|
|
00:47:43.100 --> 00:47:45.300 |
|
learning can do amazing things. |
|
|
|
00:47:45.300 --> 00:47:47.510 |
|
You can beat world experts in this game |
|
|
|
00:47:47.510 --> 00:47:49.540 |
|
without any expertise just by setting |
|
|
|
00:47:49.540 --> 00:47:50.760 |
|
up a machine learning problem and |
|
|
|
00:47:50.760 --> 00:47:52.880 |
|
having a computer play other computers. |
|
|
|
00:47:54.150 --> 00:47:56.050 |
|
Then they put some experts in it, |
|
|
|
00:47:56.050 --> 00:47:57.620 |
|
designed better features, made the |
|
|
|
00:47:57.620 --> 00:47:59.710 |
|
network a little bit bigger, trained, |
|
|
|
00:47:59.710 --> 00:48:02.255 |
|
played more games, virtual games, and |
|
|
|
00:48:02.255 --> 00:48:04.660 |
|
then they were able to again compete |
|
|
|
00:48:04.660 --> 00:48:07.210 |
|
well with Grand Masters and experts. |
|
|
|
00:48:08.280 --> 00:48:10.110 |
|
So this is kind of like the forerunner |
|
|
|
00:48:10.110 --> 00:48:12.560 |
|
of Deep Blue, or is it Deep Blue, |
|
|
|
00:48:12.560 --> 00:48:13.100 |
|
Watson? |
|
|
|
00:48:13.770 --> 00:48:15.635 |
|
Maybe Watson, I don't know, |
|
|
|
00:48:15.635 --> 00:48:17.060 |
|
the computer that played chess. |
|
|
|
00:48:17.910 --> 00:48:19.210 |
|
As well as AlphaGo. |
|
|
|
00:48:22.370 --> 00:48:26.070 |
|
So now I want to talk about how we |
|
|
|
00:48:26.070 --> 00:48:27.080 |
|
optimize this thing. |
|
|
|
00:48:27.770 --> 00:48:31.930 |
|
And the details are not too important, |
|
|
|
00:48:31.930 --> 00:48:33.650 |
|
but the concept is really, really |
|
|
|
00:48:33.650 --> 00:48:33.940 |
|
important. |
|
|
|
00:48:35.150 --> 00:48:38.190 |
|
So we're going to optimize these MLP's |
|
|
|
00:48:38.190 --> 00:48:40.150 |
|
using Back propagation. |
|
|
|
00:48:40.580 --> 00:48:45.070 |
|
Back propagation is just an extension |
|
|
|
00:48:45.070 --> 00:48:46.430 |
|
of |
|
|
|
00:48:46.570 --> 00:48:49.540 |
|
SGD, of stochastic gradient descent. |
|
|
|
00:48:50.460 --> 00:48:51.850 |
|
Where we just apply the chain rule a |
|
|
|
00:48:51.850 --> 00:48:53.180 |
|
little bit more. |
|
|
|
00:48:53.180 --> 00:48:54.442 |
|
So let's take this simple network. |
|
|
|
00:48:54.442 --> 00:48:56.199 |
|
I've got two inputs, I've got 2 |
|
|
|
00:48:56.200 --> 00:48:57.210 |
|
intermediate nodes. |
|
|
|
00:48:58.230 --> 00:48:59.956 |
|
Going to represent their outputs as F3 |
|
|
|
00:48:59.956 --> 00:49:00.840 |
|
and F4. |
|
|
|
00:49:00.840 --> 00:49:03.290 |
|
I've got an output node which I'll call |
|
|
|
00:49:03.290 --> 00:49:04.590 |
|
G5 and the Weights. |
|
|
|
00:49:04.590 --> 00:49:06.910 |
|
I'm using a weight with two |
|
|
|
00:49:06.910 --> 00:49:08.559 |
|
indices which are like where it's |
|
|
|
00:49:08.560 --> 00:49:09.750 |
|
coming from and where it's going. |
|
|
|
00:49:09.750 --> 00:49:11.920 |
|
So F3 to G5 is W 35. |
|
|
|
00:49:13.420 --> 00:49:18.210 |
|
And so the output here is some function |
|
|
|
00:49:18.210 --> 00:49:20.550 |
|
of the two intermediate nodes and each |
|
|
|
00:49:20.550 --> 00:49:21.230 |
|
of those. |
|
|
|
00:49:21.230 --> 00:49:24.140 |
|
So in particular, W35 times F3 |
|
|
|
00:49:24.140 --> 00:49:25.880 |
|
plus W45 times F4. |
|
|
|
00:49:26.650 --> 00:49:28.420 |
|
And each of those intermediate nodes is |
|
|
|
00:49:28.420 --> 00:49:29.955 |
|
a linear combination of the inputs. |
|
|
|
00:49:29.955 --> 00:49:31.660 |
|
And to keep things simple, I'll just |
|
|
|
00:49:31.660 --> 00:49:33.070 |
|
say linear activation for now. |
|
|
|
00:49:33.990 --> 00:49:35.904 |
|
My error function is the squared |
|
|
|
00:49:35.904 --> 00:49:37.970 |
|
function, the squared difference of the |
|
|
|
00:49:37.970 --> 00:49:39.400 |
|
prediction and the output. |
|
|
|
00:49:41.880 --> 00:49:44.320 |
|
So just as before, if I want to |
|
|
|
00:49:44.320 --> 00:49:46.210 |
|
optimize this, I compute the partial |
|
|
|
00:49:46.210 --> 00:49:47.985 |
|
derivative of my error with respect to |
|
|
|
00:49:47.985 --> 00:49:49.420 |
|
the Weights for a given sample. |
|
|
|
00:49:50.600 --> 00:49:53.400 |
|
And I apply the chain rule again. |
|
|
|
00:49:54.620 --> 00:49:57.130 |
|
And I get for this weight. |
|
|
|
00:49:57.130 --> 00:50:02.400 |
|
Here it becomes again two times this |
|
|
|
00:50:02.400 --> 00:50:04.320 |
|
derivative of the error function, the |
|
|
|
00:50:04.320 --> 00:50:05.440 |
|
prediction minus Y. |
|
|
|
00:50:06.200 --> 00:50:10.530 |
|
Times the derivative of the inside of |
|
|
|
00:50:10.530 --> 00:50:12.550 |
|
this function with respect to my weight |
|
|
|
00:50:12.550 --> 00:50:13.440 |
|
W 35. |
|
|
|
00:50:13.440 --> 00:50:15.180 |
|
I'm optimizing it for W 35. |
|
|
|
00:50:16.390 --> 00:50:16.800 |
|
Right. |
|
|
|
00:50:16.800 --> 00:50:18.770 |
|
So the derivative of this guy with |
|
|
|
00:50:18.770 --> 00:50:21.220 |
|
respect to the weight, if I look up at |
|
|
|
00:50:21.220 --> 00:50:21.540 |
|
this. |
|
|
|
00:50:24.490 --> 00:50:26.880 |
|
If I look at this function up here. |
|
|
|
00:50:27.590 --> 00:50:31.278 |
|
The derivative of W35 times F3 plus |
|
|
|
00:50:31.278 --> 00:50:33.670 |
|
W45 times F4, with respect to W35, is just F3. |
|
|
|
00:50:34.560 --> 00:50:36.530 |
|
Right, so that gives me that my |
|
|
|
00:50:36.530 --> 00:50:41.050 |
|
derivative is 2 * (g5(x) - y) * |
|
|
|
00:50:41.050 --> 00:50:41.750 |
|
f3(x). |
|
|
|
00:50:42.480 --> 00:50:44.670 |
|
And then I take a step in that negative |
|
|
|
00:50:44.670 --> 00:50:46.695 |
|
direction in order to do my update for |
|
|
|
00:50:46.695 --> 00:50:47.190 |
|
this weight. |
|
|
|
00:50:47.190 --> 00:50:48.760 |
|
So this is very similar to the |
|
|
|
00:50:48.760 --> 00:50:50.574 |
|
Perceptron, except instead of the Input |
|
|
|
00:50:50.574 --> 00:50:53.243 |
|
I have the previous node as part of |
|
|
|
00:50:53.243 --> 00:50:54.106 |
|
this function here. |
|
|
|
00:50:54.106 --> 00:50:55.610 |
|
So I have my error gradient and then |
|
|
|
00:50:55.610 --> 00:50:56.490 |
|
the input to the node |
|
|
|
00:50:57.100 --> 00:50:59.240 |
|
being multiplied together, and that |
|
|
|
00:50:59.240 --> 00:51:01.830 |
|
gives me my direction of the gradient. |
|
|
|
00:51:03.860 --> 00:51:06.250 |
|
For the internal nodes, I have to just |
|
|
|
00:51:06.250 --> 00:51:08.160 |
|
apply the chain rule recursively. |
|
|
|
00:51:08.870 --> 00:51:11.985 |
|
So if I want to solve for the update |
|
|
|
00:51:11.985 --> 00:51:13.380 |
|
for W13. |
|
|
|
00:51:14.360 --> 00:51:17.626 |
|
Then I take the derivative. |
|
|
|
00:51:17.626 --> 00:51:19.920 |
|
I again get this |
|
|
|
00:51:19.920 --> 00:51:21.410 |
|
derivative of the error. |
|
|
|
00:51:21.410 --> 00:51:23.236 |
|
Then I take the derivative of the |
|
|
|
00:51:23.236 --> 00:51:25.930 |
|
output with respect to W13, which |
|
|
|
00:51:25.930 --> 00:51:27.250 |
|
brings me into this. |
|
|
|
00:51:27.250 --> 00:51:29.720 |
|
And then if I follow that through, I |
|
|
|
00:51:29.720 --> 00:51:32.450 |
|
end up with the derivative of the |
|
|
|
00:51:32.450 --> 00:51:36.610 |
|
output with respect to W13 is W35 |
|
|
|
00:51:36.610 --> 00:51:37.830 |
|
times X1. |
|
|
|
00:51:38.720 --> 00:51:40.540 |
|
And so I get this as my update. |
|
|
|
00:51:42.270 --> 00:51:44.100 |
|
And so I end up with these three parts |
|
|
|
00:51:44.100 --> 00:51:45.180 |
|
to the update. |
|
|
|
00:51:45.180 --> 00:51:47.450 |
|
It's a product of three terms. One |
|
|
|
00:51:47.450 --> 00:51:49.111 |
|
term is the |
|
|
|
00:51:49.111 --> 00:51:50.310 |
|
error gradient in my |
|
|
|
00:51:50.310 --> 00:51:50.870 |
|
prediction. |
|
|
|
00:51:51.570 --> 00:51:54.030 |
|
The other is like the input into the |
|
|
|
00:51:54.030 --> 00:51:56.890 |
|
function and the third is the |
|
|
|
00:51:56.890 --> 00:51:59.400 |
|
contribution, how this weight contributed |
|
|
|
00:51:59.400 --> 00:52:02.510 |
|
to the output, which is through W35, |
|
|
|
00:52:02.510 --> 00:52:06.300 |
|
because W13 helped create F3, which then |
|
|
|
00:52:06.300 --> 00:52:08.499 |
|
contributes to the output through W35. |
|
|
|
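NOTE |
|
A check of the hand derivation above using autograd, with the lecture's tiny linear network (f3 and f4 are linear combinations of x1 and x2, g5 = w35*f3 + w45*f4, squared error). The specific numbers are made up: |
|
import torch

x1, x2, y = 1.5, -0.5, 1.0
w13, w23, w14, w24 = [torch.tensor(v, requires_grad=True) for v in (0.2, -0.3, 0.5, 0.1)]
w35, w45 = [torch.tensor(v, requires_grad=True) for v in (0.4, -0.2)]

f3 = w13 * x1 + w23 * x2           # linear activation, as in the example
f4 = w14 * x1 + w24 * x2
g5 = w35 * f3 + w45 * f4
error = (g5 - y) ** 2
error.backward()

# Autograd agrees with the hand-derived formulas.
print(torch.isclose(w35.grad, (2 * (g5 - y) * f3).detach()))        # dE/dw35 = 2(g5 - y) * f3
print(torch.isclose(w13.grad, (2 * (g5 - y) * w35 * x1).detach()))  # dE/dw13 = 2(g5 - y) * w35 * x1
|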
00:52:12.700 --> 00:52:15.080 |
|
And this was for a linear activation. |
|
|
|
00:52:15.080 --> 00:52:18.759 |
|
But if I had a ReLU, all I would do is |
|
|
|
00:52:18.760 --> 00:52:19.920 |
|
modify this. |
|
|
|
00:52:19.920 --> 00:52:22.677 |
|
Like if I have a Relu after F3 and F4 |
|
|
|
00:52:22.677 --> 00:52:27.970 |
|
that means that F3 is going to be W13 |
|
|
|
00:52:27.970 --> 00:52:30.380 |
|
times max(X1, 0). |
|
|
|
00:52:31.790 --> 00:52:33.690 |
|
And. |
|
|
|
00:52:34.310 --> 00:52:35.620 |
|
Did I put that ReLU in the right |
|
|
|
00:52:35.620 --> 00:52:36.140 |
|
place? |
|
|
|
00:52:36.140 --> 00:52:38.190 |
|
All right, let me go with it. |
|
|
|
00:52:38.190 --> 00:52:38.980 |
|
So. |
|
|
|
00:52:39.080 --> 00:52:42.890 |
|
So this is if I put a ReLU |
|
|
|
00:52:42.890 --> 00:52:44.360 |
|
right here, which is a weird place. |
|
|
|
00:52:44.360 --> 00:52:47.353 |
|
But let's just say I did. OK: W13 |
|
|
|
00:52:47.353 --> 00:52:50.737 |
|
times max(X1, 0) plus W23 times max(X2, 0). |
|
|
|
00:52:50.737 --> 00:52:54.060 |
|
And then I just plug that into my. |
|
|
|
00:52:54.060 --> 00:52:56.938 |
|
So then, |
|
|
|
00:52:56.938 --> 00:52:59.880 |
|
the X1 just becomes max(X1, 0). |
|
|
|
00:53:01.190 --> 00:53:02.990 |
|
I think I put the ReLU here where I |
|
|
|
00:53:02.990 --> 00:53:04.780 |
|
meant to put it there, but the main |
|
|
|
00:53:04.780 --> 00:53:06.658 |
|
point is that you then take the |
|
|
|
00:53:06.658 --> 00:53:09.170 |
|
derivative, then you just follow the |
|
|
|
00:53:09.170 --> 00:53:10.870 |
|
chain rule, take the derivative of the |
|
|
|
00:53:10.870 --> 00:53:11.840 |
|
activation as well. |
|
|
|
00:53:16.130 --> 00:53:20.219 |
|
So the general concept of |
|
|
|
00:53:20.700 --> 00:53:23.480 |
|
Backprop is that each weight's gradient |
|
|
|
00:53:23.480 --> 00:53:26.000 |
|
based on a single training sample is a |
|
|
|
00:53:26.000 --> 00:53:27.540 |
|
product of the gradient of the loss |
|
|
|
00:53:27.540 --> 00:53:28.070 |
|
function. |
|
|
|
00:53:29.060 --> 00:53:30.981 |
|
And the gradient of the prediction with |
|
|
|
00:53:30.981 --> 00:53:33.507 |
|
respect to the activation and the |
|
|
|
00:53:33.507 --> 00:53:36.033 |
|
gradient of the activation with respect |
|
|
|
00:53:36.033 --> 00:53:38.800 |
|
to the weight. |
|
|
|
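NOTE |
|
Written out, the statement above is just the chain rule. For a weight w feeding an activation a that feeds the prediction yhat (a restatement, not from the slide): |
|
dE/dw = (dE/dyhat) * (dyhat/da) * (da/dw)
|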
00:53:39.460 --> 00:53:41.570 |
|
So for W13, it's like a gradient that |
|
|
|
00:53:41.570 --> 00:53:42.886 |
|
says: |
|
|
|
00:53:42.886 --> 00:53:45.670 |
|
What was fed into me? |
|
|
|
00:53:45.670 --> 00:53:47.990 |
|
How does the value that I'm |
|
|
|
00:53:47.990 --> 00:53:49.859 |
|
producing depend on what's fed into me? |
|
|
|
00:53:50.500 --> 00:53:51.630 |
|
What did I produce? |
|
|
|
00:53:51.630 --> 00:53:53.920 |
|
How did I influence this F3 |
|
|
|
00:53:53.920 --> 00:53:57.360 |
|
function and how did that impact my |
|
|
|
00:53:57.360 --> 00:53:58.880 |
|
error gradient? |
|
|
|
00:54:00.980 --> 00:54:03.380 |
|
And because you're applying chain rule |
|
|
|
00:54:03.380 --> 00:54:05.495 |
|
recursively, you can save computation |
|
|
|
00:54:05.495 --> 00:54:08.920 |
|
and each weight ends up being like a |
|
|
|
00:54:08.920 --> 00:54:11.940 |
|
kind of a product of the gradient that |
|
|
|
00:54:11.940 --> 00:54:13.890 |
|
was accumulated in the Weights after it |
|
|
|
00:54:13.890 --> 00:54:16.100 |
|
with the Input that came before it. |
|
|
|
00:54:19.910 --> 00:54:21.450 |
|
OK, so it's a little bit late for this, |
|
|
|
00:54:21.450 --> 00:54:22.440 |
|
but let's do it anyway. |
|
|
|
00:54:22.440 --> 00:54:25.330 |
|
So take a 2 minute break and you can |
|
|
|
00:54:25.330 --> 00:54:26.610 |
|
think about this question. |
|
|
|
00:54:26.610 --> 00:54:29.127 |
|
So if I want to fill in the blanks, if |
|
|
|
00:54:29.127 --> 00:54:31.320 |
|
I want to get the gradient with respect |
|
|
|
00:54:31.320 --> 00:54:34.290 |
|
to W 8 and the gradient with respect to |
|
|
|
00:54:34.290 --> 00:54:34.950 |
|
W 2. |
|
|
|
00:54:36.150 --> 00:54:37.960 |
|
How do I fill in those blanks? |
|
|
|
00:54:38.920 --> 00:54:40.880 |
|
And let's say that it's all linear |
|
|
|
00:54:40.880 --> 00:54:41.430 |
|
layers. |
|
|
|
00:54:44.280 --> 00:54:45.690 |
|
I'll set a timer for 2 minutes. |
|
|
|
00:55:12.550 --> 00:55:14.880 |
|
The input size and the output size? |
|
|
|
00:55:17.260 --> 00:55:17.900 |
|
Yeah. |
|
|
|
00:55:18.070 --> 00:55:20.310 |
|
Input size are we talking like the row |
|
|
|
00:55:20.310 --> 00:55:20.590 |
|
of the? |
|
|
|
00:55:22.870 --> 00:55:23.650 |
|
Yes. |
|
|
|
00:55:23.650 --> 00:55:25.669 |
|
So it'll be like 784, right? |
|
|
|
00:55:25.670 --> 00:55:28.670 |
|
And then output size is the column, right, |
|
|
|
00:55:28.670 --> 00:55:30.350 |
|
because we're talking about the number |
|
|
|
00:55:30.350 --> 00:55:30.520 |
|
of. |
|
|
|
00:55:32.050 --> 00:55:35.820 |
|
Output size is the number of values |
|
|
|
00:55:35.820 --> 00:55:36.795 |
|
that you're trying to predict. |
|
|
|
00:55:36.795 --> 00:55:39.130 |
|
So for digits it would be 10, |
|
|
|
00:55:39.130 --> 00:55:41.550 |
|
because that's the testing, that's the |
|
|
|
00:55:41.550 --> 00:55:42.980 |
|
label side, the number of labels that |
|
|
|
00:55:42.980 --> 00:55:43.400 |
|
you have. |
|
|
|
00:55:43.400 --> 00:55:45.645 |
|
So it's a row of the test, right? |
|
|
|
00:55:45.645 --> 00:55:46.580 |
|
That's would be my. |
|
|
|
00:55:49.290 --> 00:55:51.140 |
|
But it's not the number of examples, |
|
|
|
00:55:51.140 --> 00:55:52.560 |
|
it's a number of different labels that |
|
|
|
00:55:52.560 --> 00:55:53.310 |
|
you can have. |
|
|
|
00:55:53.310 --> 00:55:53.866 |
|
So doesn't. |
|
|
|
00:55:53.866 --> 00:55:56.080 |
|
It's not really a row or a column, |
|
|
|
00:55:56.080 --> 00:55:56.320 |
|
right? |
|
|
|
00:55:57.500 --> 00:55:59.335 |
|
Because for your data vectors, you |
|
|
|
00:55:59.335 --> 00:56:01.040 |
|
have like X which is number of examples |
|
|
|
00:56:01.040 --> 00:56:02.742 |
|
by number of features, and then you |
|
|
|
00:56:02.742 --> 00:56:04.328 |
|
have Y which is number of examples by |
|
|
|
00:56:04.328 --> 00:56:04.549 |
|
1. |
|
|
|
00:56:05.440 --> 00:56:07.580 |
|
But for the output, you have one output per |
|
|
|
00:56:07.580 --> 00:56:10.410 |
|
label, so per class. |
|
|
|
00:56:11.650 --> 00:56:12.460 |
|
There, it's 10. |
|
|
|
00:56:13.080 --> 00:56:14.450 |
|
Because you have 10 digits. |
|
|
|
00:56:14.450 --> 00:56:16.520 |
|
Alright, thank you. |
|
|
|
00:56:16.520 --> 00:56:17.250 |
|
You're welcome. |
|
|
|
00:56:29.150 --> 00:56:31.060 |
|
Alright, so let's try it. |
|
|
|
00:56:31.160 --> 00:56:31.770 |
|
|
|
|
|
00:56:34.070 --> 00:56:37.195 |
|
So this is probably a hard question. |
|
|
|
00:56:37.195 --> 00:56:40.160 |
|
You've just been exposed to Backprop, |
|
|
|
00:56:40.160 --> 00:56:42.330 |
|
but I think solving it out loud will |
|
|
|
00:56:42.330 --> 00:56:43.680 |
|
probably make this a little more clear. |
|
|
|
00:56:43.680 --> 00:56:44.000 |
|
So. |
|
|
|
00:56:44.730 --> 00:56:46.330 |
|
Gradient with respect to W. |
|
|
|
00:56:46.330 --> 00:56:48.840 |
|
Does anyone have an idea what these |
|
|
|
00:56:48.840 --> 00:56:49.810 |
|
blanks would be? |
|
|
|
00:57:07.100 --> 00:57:08.815 |
|
It might be, I'm not sure. |
|
|
|
00:57:08.815 --> 00:57:11.147 |
|
I'm not sure I can do it without the. |
|
|
|
00:57:11.147 --> 00:57:13.710 |
|
I was not putting it in terms of |
|
|
|
00:57:13.710 --> 00:57:16.000 |
|
derivatives though, just in terms of |
|
|
|
00:57:16.000 --> 00:57:17.860 |
|
purely in terms of the X's, |
|
|
|
00:57:17.860 --> 00:57:19.730 |
|
the H's, the G's, and the W's. |
|
|
|
00:57:21.200 --> 00:57:24.970 |
|
Alright, so part of it is that I would |
|
|
|
00:57:24.970 --> 00:57:27.019 |
|
have how this, |
|
|
|
00:57:27.020 --> 00:57:29.880 |
|
how this weight contributes |
|
|
|
00:57:29.880 --> 00:57:30.560 |
|
to the output. |
|
|
|
00:57:31.490 --> 00:57:34.052 |
|
So it's going to flow through these |
|
|
|
00:57:34.052 --> 00:57:37.698 |
|
guys, right, goes through W7 and W3 |
|
|
|
00:57:37.698 --> 00:57:40.513 |
|
through G1 and it also flows through |
|
|
|
00:57:40.513 --> 00:57:41.219 |
|
these guys. |
|
|
|
00:57:42.790 --> 00:57:45.280 |
|
Right, so part of it will end up being |
|
|
|
00:57:45.280 --> 00:57:48.427 |
|
W0 times W9 for this path, and part of it |
|
|
|
00:57:48.427 --> 00:57:52.570 |
|
will be W3 times W7 for this path. |
|
|
|
00:57:54.720 --> 00:57:57.500 |
|
So this is 1 portion like how it's |
|
|
|
00:57:57.500 --> 00:57:58.755 |
|
influenced the output. |
|
|
|
00:57:58.755 --> 00:58:00.610 |
|
This is my output gradient which I |
|
|
|
00:58:00.610 --> 00:58:01.290 |
|
started with. |
|
|
|
00:58:02.070 --> 00:58:04.486 |
|
My error gradient and then it's going |
|
|
|
00:58:04.486 --> 00:58:07.100 |
|
to be multiplied by the input, which is |
|
|
|
00:58:07.100 --> 00:58:09.220 |
|
like how the magnitude, how this |
|
|
|
00:58:09.220 --> 00:58:11.610 |
|
Weights, whether it made a positive or |
|
|
|
00:58:11.610 --> 00:58:13.050 |
|
negative number for example, and how |
|
|
|
00:58:13.050 --> 00:58:15.900 |
|
big, and that's going to depend on X2. |
|
|
|
00:58:16.940 --> 00:58:17.250 |
|
Right. |
|
|
|
00:58:17.250 --> 00:58:18.230 |
|
So it's just the. |
|
|
|
00:58:19.270 --> 00:58:21.670 |
|
The sum of the paths to the output |
|
|
|
00:58:21.670 --> 00:58:23.250 |
|
times the Input, yeah. |
|
|
|
00:58:27.530 --> 00:58:29.240 |
|
I'm computing the gradient with respect |
|
|
|
00:58:29.240 --> 00:58:30.430 |
|
to weight here. |
|
|
|
00:58:31.610 --> 00:58:32.030 |
|
Yeah. |
|
|
|
00:58:33.290 --> 00:58:33.480 |
|
Yeah. |
|
|
|
00:58:42.070 --> 00:58:45.172 |
|
Yeah, it's not like it's a. |
|
|
|
00:58:45.172 --> 00:58:47.240 |
|
It's written as an undirected graph, |
|
|
|
00:58:47.240 --> 00:58:49.390 |
|
but it's not like a. |
|
|
|
00:58:49.390 --> 00:58:50.830 |
|
It doesn't imply a flow direction |
|
|
|
00:58:50.830 --> 00:58:51.490 |
|
necessarily. |
|
|
|
00:58:52.440 --> 00:58:54.790 |
|
But we do think of it |
|
|
|
00:58:54.790 --> 00:58:55.080 |
|
that way. |
|
|
|
00:58:55.080 --> 00:58:56.880 |
|
So we think of forward as you're making |
|
|
|
00:58:56.880 --> 00:58:59.480 |
|
a prediction, and Backprop is |
|
|
|
00:58:59.480 --> 00:59:01.480 |
|
propagating the error gradients back to |
|
|
|
00:59:01.480 --> 00:59:02.450 |
|
the Weights to update them. |
|
|
|
00:59:04.440 --> 00:59:05.240 |
|
And then? |
|
|
|
00:59:10.810 --> 00:59:13.260 |
|
Yeah, it doesn't imply causality or |
|
|
|
00:59:13.260 --> 00:59:15.070 |
|
anything like that like a like a |
|
|
|
00:59:15.070 --> 00:59:16.020 |
|
Bayesian network might. |
|
|
|
00:59:19.570 --> 00:59:23.061 |
|
All right, and then with W 2, it'll be |
|
|
|
00:59:23.061 --> 00:59:25.910 |
|
that the connection to the output is through |
|
|
|
00:59:25.910 --> 00:59:29.845 |
|
W 3, and the connection to the Input is |
|
|
|
00:59:29.845 --> 00:59:32.400 |
|
H1, so it will be the output of H1 |
|
|
|
00:59:32.400 --> 00:59:34.240 |
|
times W 3. |
|
|
|
00:59:39.320 --> 00:59:41.500 |
|
So that's for linear and then if you |
|
|
|
00:59:41.500 --> 00:59:43.430 |
|
have nonlinear, it will be |
|
|
|
00:59:43.510 --> 00:59:44.940 |
|
longer. |
|
|
|
00:59:44.940 --> 00:59:46.710 |
|
You'll have more. |
|
|
|
00:59:46.710 --> 00:59:48.320 |
|
You'll have those nonlinearities like |
|
|
|
00:59:48.320 --> 00:59:49.270 |
|
come into play, yeah? |
|
|
|
00:59:58.440 --> 01:00:00.325 |
|
So it would end up being. |
|
|
|
01:00:00.325 --> 01:00:03.420 |
|
So it's kind of like so I did it for a |
|
|
|
01:00:03.420 --> 01:00:04.600 |
|
smaller function here. |
|
|
|
01:00:05.290 --> 01:00:07.833 |
|
But if you do the chain |
|
|
|
01:00:07.833 --> 01:00:10.240 |
|
rule through, if you take the partial |
|
|
|
01:00:10.240 --> 01:00:11.790 |
|
derivative and then you follow the |
|
|
|
01:00:11.790 --> 01:00:13.430 |
|
chain rule or you expand your |
|
|
|
01:00:13.430 --> 01:00:14.080 |
|
functions. |
|
|
|
01:00:14.800 --> 01:00:15.930 |
|
Then you will. |
|
|
|
01:00:16.740 --> 01:00:17.640 |
|
You'll get there. |
|
|
|
01:00:18.930 --> 01:00:19.490 |
|
So. |
|
|
|
01:00:21.900 --> 01:00:23.730 |
|
Mathematically, you would get there |
|
|
|
01:00:23.730 --> 01:00:25.890 |
|
this way, and then in practice the way |
|
|
|
01:00:25.890 --> 01:00:28.305 |
|
that it's implemented usually is that |
|
|
|
01:00:28.305 --> 01:00:30.720 |
|
you accumulate: you can compute |
|
|
|
01:00:30.720 --> 01:00:32.800 |
|
the contribution of the error. |
|
|
|
01:00:33.680 --> 01:00:36.200 |
|
For each node, how this node's output |
|
|
|
01:00:36.200 --> 01:00:38.370 |
|
affected the error. |
|
|
|
01:00:38.990 --> 01:00:41.489 |
|
And then you propagate and then you |
|
|
|
01:00:41.490 --> 01:00:41.710 |
|
can |
|
|
|
01:00:42.380 --> 01:00:45.790 |
|
say that this weight's gradient is |
|
|
|
01:00:45.790 --> 01:00:47.830 |
|
its error contribution on this node |
|
|
|
01:00:47.830 --> 01:00:49.990 |
|
times the Input. |
|
|
|
01:00:50.660 --> 01:00:52.460 |
|
And then you keep on like propagating |
|
|
|
01:00:52.460 --> 01:00:54.110 |
|
that error contribution backwards |
|
|
|
01:00:54.110 --> 01:00:56.220 |
|
recursively and then updating the |
|
|
|
01:00:56.220 --> 01:00:57.650 |
|
previous layers' weights. |
|
|
|
01:00:59.610 --> 01:01:01.980 |
|
So one thing I would like to clarify is |
|
|
|
01:01:01.980 --> 01:01:04.190 |
|
that I'm not going to ask any questions |
|
|
|
01:01:04.190 --> 01:01:07.016 |
|
about how you compute the. |
|
|
|
01:01:07.016 --> 01:01:10.700 |
|
I'm not going to ask you guys in an |
|
|
|
01:01:10.700 --> 01:01:13.130 |
|
exam like to compute the gradient or to |
|
|
|
01:01:13.130 --> 01:01:13.980 |
|
perform Backprop. |
|
|
|
01:01:14.840 --> 01:01:16.850 |
|
But I think it's important to |
|
|
|
01:01:16.850 --> 01:01:20.279 |
|
understand this concept that the |
|
|
|
01:01:20.280 --> 01:01:22.740 |
|
gradients are flowing back through the |
|
|
|
01:01:22.740 --> 01:01:25.220 |
|
Weights and one consequence of that. |
|
|
|
01:01:25.980 --> 01:01:27.640 |
|
Is that you can imagine if this layer |
|
|
|
01:01:27.640 --> 01:01:29.930 |
|
were really deep, if any of these |
|
|
|
01:01:29.930 --> 01:01:32.025 |
|
weights are zero, or if some fraction |
|
|
|
01:01:32.025 --> 01:01:34.160 |
|
of them are zero, then it's pretty easy |
|
|
|
01:01:34.160 --> 01:01:35.830 |
|
for this gradient to become zero, which |
|
|
|
01:01:35.830 --> 01:01:36.950 |
|
means you don't get any update. |
|
|
|
01:01:37.610 --> 01:01:39.240 |
|
And if you have sigmoids, a lot of |
|
|
|
01:01:39.240 --> 01:01:40.680 |
|
these gradient terms are near zero, which means |
|
|
|
01:01:40.680 --> 01:01:42.260 |
|
that there's no updates flowing into |
|
|
|
01:01:42.260 --> 01:01:43.870 |
|
the early layers, which is what |
|
|
|
01:01:43.870 --> 01:01:44.720 |
|
cripples the learning. |
|
|
|
01:01:45.520 --> 01:01:47.590 |
|
And even with ReLUs, if this gets |
|
|
|
01:01:47.590 --> 01:01:50.020 |
|
really deep, then you end up with a lot |
|
|
|
01:01:50.020 --> 01:01:52.790 |
|
of zeros and you end up also having the |
|
|
|
01:01:52.790 --> 01:01:53.490 |
|
same problem. |
|
|
|
01:01:54.880 --> 01:01:57.240 |
|
So that's foreshadowing some |
|
|
|
01:01:57.240 --> 01:01:59.760 |
|
of the difficulties that neural |
|
|
|
01:01:59.760 --> 01:02:00.500 |
|
networks will have. |
|
|
|
01:02:02.140 --> 01:02:06.255 |
|
If I want to do optimization by SGD, |
|
|
|
01:02:06.255 --> 01:02:09.250 |
|
then this is the basic |
|
|
|
01:02:09.250 --> 01:02:09.525 |
|
algorithm. |
|
|
|
01:02:09.525 --> 01:02:11.090 |
|
I split the data into batches. |
|
|
|
01:02:11.090 --> 01:02:12.110 |
|
I set some learning rate. |
|
|
|
01:02:13.170 --> 01:02:15.715 |
|
I go for each epoch. |
|
|
|
01:02:15.715 --> 01:02:17.380 |
|
That is, for each pass through the |
|
|
|
01:02:17.380 --> 01:02:19.630 |
|
data, for each batch I compute the |
|
|
|
01:02:19.630 --> 01:02:20.120 |
|
output. |
|
|
|
01:02:22.250 --> 01:02:23.010 |
|
My predictions. |
|
|
|
01:02:23.010 --> 01:02:25.790 |
|
In other words, I Evaluate the loss, I |
|
|
|
01:02:25.790 --> 01:02:27.160 |
|
compute the gradients with Back |
|
|
|
01:02:27.160 --> 01:02:29.170 |
|
propagation, and then I update the |
|
|
|
01:02:29.170 --> 01:02:29.660 |
|
weights. |
|
|
|
01:02:29.660 --> 01:02:30.780 |
|
Those are the four steps. |
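NOTE
A self-contained sketch of those four steps, using a plain linear model with squared error so it runs without torch; the synthetic data and hyperparameter values here are made up for illustration. The torch version of the same loop is shown below.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 2))
y = X @ np.array([2.0, -1.0]) + 0.5              # synthetic targets

w, b = np.zeros(2), 0.0
learning_rate, batch_size = 0.1, 32

for epoch in range(20):                          # each pass through the data
    order = rng.permutation(len(X))              # shuffle between epochs
    for start in range(0, len(X), batch_size):   # step through the batches
        idx = order[start:start + batch_size]
        xb, yb = X[idx], y[idx]
        preds = xb @ w + b                       # 1. compute the outputs
        err = preds - yb                         # 2. evaluate the (squared) error
        grad_w = xb.T @ err / len(xb)            # 3. gradient w.r.t. the weights
        grad_b = err.mean()                      #    (up to a constant factor of 2)
        w -= learning_rate * grad_w              # 4. take a step to reduce the loss
        b -= learning_rate * grad_b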
|
|
|
01:02:31.910 --> 01:02:33.520 |
|
And I'll show you how we do that |
|
|
|
01:02:33.520 --> 01:02:36.700 |
|
in Code with torch in a minute, but |
|
|
|
01:02:36.700 --> 01:02:37.110 |
|
first. |
|
|
|
01:02:38.480 --> 01:02:40.550 |
|
Why go from Perceptrons to MLPs? |
|
|
|
01:02:40.550 --> 01:02:43.310 |
|
So the big benefit is that we get a lot |
|
|
|
01:02:43.310 --> 01:02:44.430 |
|
more expressivity. |
|
|
|
01:02:44.430 --> 01:02:46.370 |
|
We can model potentially any function |
|
|
|
01:02:46.370 --> 01:02:49.187 |
|
with MLPS, while with Perceptrons we |
|
|
|
01:02:49.187 --> 01:02:50.910 |
|
can only model linear functions. |
|
|
|
01:02:50.910 --> 01:02:52.357 |
|
So that's a big benefit. |
|
|
|
01:02:52.357 --> 01:02:53.970 |
|
And of course we could |
|
|
|
01:02:53.970 --> 01:02:55.670 |
|
manually project things into higher |
|
|
|
01:02:55.670 --> 01:02:57.540 |
|
dimensions and use polar coordinates or |
|
|
|
01:02:57.540 --> 01:02:58.790 |
|
squares or whatever. |
|
|
|
01:02:58.790 --> 01:03:01.500 |
|
But the nice thing is that with MLPs |
|
|
|
01:03:01.500 --> 01:03:03.650 |
|
you can optimize the features and your |
|
|
|
01:03:03.650 --> 01:03:05.150 |
|
prediction at the same time to work |
|
|
|
01:03:05.150 --> 01:03:06.050 |
|
well together. |
|
|
|
01:03:06.050 --> 01:03:08.310 |
|
And so it takes the |
|
|
|
01:03:08.380 --> 01:03:10.719 |
|
Expert out of the loop a bit if you can |
|
|
|
01:03:10.720 --> 01:03:12.320 |
|
do this really well, so you can just |
|
|
|
01:03:12.320 --> 01:03:14.240 |
|
take your data and learn really good |
|
|
|
01:03:14.240 --> 01:03:15.620 |
|
features and learn a really good |
|
|
|
01:03:15.620 --> 01:03:16.230 |
|
prediction. |
|
|
|
01:03:16.900 --> 01:03:19.570 |
|
Jointly, so that you can get a good |
|
|
|
01:03:19.570 --> 01:03:22.220 |
|
predictor based on simple features. |
|
|
|
01:03:24.090 --> 01:03:26.050 |
|
The problems are that the optimization |
|
|
|
01:03:26.050 --> 01:03:29.640 |
|
is no longer convex, you can get stuck |
|
|
|
01:03:29.640 --> 01:03:30.480 |
|
in local minima. |
|
|
|
01:03:30.480 --> 01:03:31.930 |
|
You're no longer guaranteed to reach a |
|
|
|
01:03:31.930 --> 01:03:33.440 |
|
globally optimal solution. |
|
|
|
01:03:33.440 --> 01:03:34.560 |
|
I'm going to talk more about |
|
|
|
01:03:34.560 --> 01:03:37.210 |
|
optimization and this issue next class. |
|
|
|
01:03:37.840 --> 01:03:39.730 |
|
You also have a larger model, which |
|
|
|
01:03:39.730 --> 01:03:41.265 |
|
means more training and inference time |
|
|
|
01:03:41.265 --> 01:03:43.299 |
|
and also more data is required to get a |
|
|
|
01:03:43.300 --> 01:03:43.828 |
|
good fit. |
|
|
|
01:03:43.828 --> 01:03:45.870 |
|
In other words, |
|
|
|
01:03:45.870 --> 01:03:47.580 |
|
the MLP has lower bias and higher |
|
|
|
01:03:47.580 --> 01:03:48.270 |
|
variance. |
|
|
|
01:03:48.270 --> 01:03:50.840 |
|
And also you get additional error due |
|
|
|
01:03:50.840 --> 01:03:53.330 |
|
to the challenge of optimization. |
|
|
|
01:03:53.330 --> 01:03:56.594 |
|
So even though the theory says |
|
|
|
01:03:56.594 --> 01:03:59.316 |
|
the MLP can represent |
|
|
|
01:03:59.316 --> 01:04:01.396 |
|
any |
|
|
|
01:04:01.396 --> 01:04:01.623 |
|
function. |
|
|
|
01:04:01.623 --> 01:04:02.960 |
|
It doesn't mean you can find it. |
|
|
|
01:04:03.840 --> 01:04:05.400 |
|
So it's not enough that it has |
|
|
|
01:04:05.400 --> 01:04:07.010 |
|
essentially 0 bias. |
|
|
|
01:04:07.010 --> 01:04:09.400 |
|
If you have a really huge network, you |
|
|
|
01:04:09.400 --> 01:04:10.770 |
|
may still not be able to fit your |
|
|
|
01:04:10.770 --> 01:04:12.826 |
|
training data because of the deficiency |
|
|
|
01:04:12.826 --> 01:04:14.230 |
|
of your optimization. |
|
|
|
01:04:16.960 --> 01:04:20.370 |
|
Alright, so now let's see. |
|
|
|
01:04:20.370 --> 01:04:21.640 |
|
I don't need to open a new one. |
|
|
|
01:04:23.140 --> 01:04:24.310 |
|
Go back to the old one. |
|
|
|
01:04:24.310 --> 01:04:25.990 |
|
OK, so now let's see how this works. |
|
|
|
01:04:25.990 --> 01:04:28.370 |
|
So now I've got torch. Torch is our |
|
|
|
01:04:28.370 --> 01:04:29.580 |
|
framework for deep learning. |
|
|
|
01:04:30.710 --> 01:04:33.429 |
|
And I'm specifying a model. |
|
|
|
01:04:33.430 --> 01:04:35.609 |
|
So in torch you specify a model and |
|
|
|
01:04:35.610 --> 01:04:38.030 |
|
I've got two models specified here. |
|
|
|
01:04:38.850 --> 01:04:41.170 |
|
One has a linear layer that goes from |
|
|
|
01:04:41.170 --> 01:04:43.850 |
|
Input size to hidden size. |
|
|
|
01:04:43.850 --> 01:04:45.230 |
|
In this example it's just from 2 |
|
|
|
01:04:45.230 --> 01:04:46.850 |
|
because I'm doing a 2-dimensional problem |
|
|
|
01:04:46.850 --> 01:04:49.270 |
|
for visualization and a hidden size |
|
|
|
01:04:49.270 --> 01:04:51.180 |
|
which I'll set when I call the model. |
|
|
|
01:04:52.460 --> 01:04:55.225 |
|
Then I've got a ReLU, so the max of |
|
|
|
01:04:55.225 --> 01:04:57.620 |
|
the input and zero. |
|
|
|
01:04:57.620 --> 01:05:00.470 |
|
Then I have a linear function. |
|
|
|
01:05:00.660 --> 01:05:03.500 |
|
That then maps into my output size, |
|
|
|
01:05:03.500 --> 01:05:04.940 |
|
which in this case is 1 because I'm |
|
|
|
01:05:04.940 --> 01:05:06.550 |
|
just doing binary |
|
|
|
01:05:06.550 --> 01:05:09.000 |
|
classification, and then I put a |
|
|
|
01:05:09.000 --> 01:05:10.789 |
|
Sigmoid here to map it from zero to 1. |
|
|
|
01:05:12.820 --> 01:05:14.800 |
|
And then I also defined a 2-layer |
|
|
|
01:05:14.800 --> 01:05:17.459 |
|
network where I pass in a two-part |
|
|
|
01:05:17.460 --> 01:05:20.400 |
|
hidden size, and I have a linear layer, |
|
|
|
01:05:20.400 --> 01:05:22.290 |
|
ReLU, linear layer, or fully |
|
|
|
01:05:22.290 --> 01:05:24.310 |
|
connected layer, ReLU. |
|
|
|
01:05:25.490 --> 01:05:28.180 |
|
Then my output layer and then Sigmoid. |
|
|
|
01:05:28.880 --> 01:05:30.800 |
|
So this defines my network structure |
|
|
|
01:05:30.800 --> 01:05:31.110 |
|
here. |
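
NOTE
A reconstruction, along the lines of the narration, of the two models being shown on screen; the class names, default sizes, and exact layer ordering here are assumptions rather than the notebook's actual code.

import torch.nn as nn

class MLP1(nn.Module):
    def __init__(self, input_size=2, hidden_size=50, output_size=1):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(input_size, hidden_size),   # input -> hidden
            nn.ReLU(),                            # max(input, 0)
            nn.Linear(hidden_size, output_size),  # hidden -> output
            nn.Sigmoid(),                         # squash the output to (0, 1)
        )

    def forward(self, x):
        return self.layers(x)                     # Sequential steps through the layers

class MLP2(nn.Module):
    def __init__(self, input_size=2, hidden_sizes=(50, 50), output_size=1):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(input_size, hidden_sizes[0]),
            nn.ReLU(),
            nn.Linear(hidden_sizes[0], hidden_sizes[1]),
            nn.ReLU(),
            nn.Linear(hidden_sizes[1], output_size),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.layers(x)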
|
|
|
01:05:32.610 --> 01:05:36.820 |
|
And then my forward, because I'm using |
|
|
|
01:05:36.820 --> 01:05:39.540 |
|
this Sequential, if I just call |
|
|
|
01:05:39.540 --> 01:05:42.140 |
|
self.layers, it just steps |
|
|
|
01:05:42.140 --> 01:05:43.239 |
|
through the layers. |
|
|
|
01:05:43.240 --> 01:05:46.040 |
|
So it means that it goes from the input |
|
|
|
01:05:46.040 --> 01:05:47.464 |
|
to this layer, to this layer, to this |
|
|
|
01:05:47.464 --> 01:05:49.060 |
|
layer to this layer, blah blah blah, all |
|
|
|
01:05:49.060 --> 01:05:49.985 |
|
the way through. |
|
|
|
01:05:49.985 --> 01:05:52.230 |
|
You can also just define these layers |
|
|
|
01:05:52.230 --> 01:05:53.904 |
|
separately outside of sequential and |
|
|
|
01:05:53.904 --> 01:05:56.640 |
|
then your forward will be like X |
|
|
|
01:05:56.640 --> 01:05:57.715 |
|
equals. |
|
|
|
01:05:57.715 --> 01:06:00.500 |
|
You name the layers and then |
|
|
|
01:06:00.500 --> 01:06:02.645 |
|
you call them one by one in |
|
|
|
01:06:02.645 --> 01:06:03.190 |
|
your forward |
|
|
|
01:06:03.750 --> 01:06:04.320 |
|
step here. |
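
NOTE
The alternative just mentioned, sketched with assumed layer names: define the layers individually instead of wrapping them in Sequential, then call them one by one in forward.

import torch.nn as nn

class MLP1Explicit(nn.Module):
    def __init__(self, input_size=2, hidden_size=50, output_size=1):
        super().__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.act = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, output_size)
        self.out = nn.Sigmoid()

    def forward(self, x):
        x = self.fc1(x)      # x = ... one call per named layer
        x = self.act(x)
        x = self.fc2(x)
        return self.out(x)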
|
|
|
01:06:06.460 --> 01:06:08.290 |
|
Here's the training code. |
|
|
|
01:06:08.980 --> 01:06:10.410 |
|
I've got my. |
|
|
|
01:06:11.000 --> 01:06:14.280 |
|
I've got my X train and my Y train and |
|
|
|
01:06:14.280 --> 01:06:14.980 |
|
some model. |
|
|
|
01:06:16.010 --> 01:06:19.009 |
|
And I need to make them into torch |
|
|
|
01:06:19.010 --> 01:06:21.350 |
|
tensors, like a data structure that |
|
|
|
01:06:21.350 --> 01:06:22.270 |
|
torch can use. |
|
|
|
01:06:22.270 --> 01:06:24.946 |
|
So I call this torch tensor X and torch |
|
|
|
01:06:24.946 --> 01:06:27.930 |
|
tensor reshaping Y into a column vector |
|
|
|
01:06:27.930 --> 01:06:31.000 |
|
in case it was an (N,)-shaped vector. |
|
|
|
01:06:33.020 --> 01:06:34.110 |
|
And. |
|
|
|
01:06:34.180 --> 01:06:37.027 |
|
And then that creates a train set |
|
|
|
01:06:37.027 --> 01:06:38.847 |
|
and then I call my data loader. |
|
|
|
01:06:38.847 --> 01:06:40.380 |
|
So the data loader is just something |
|
|
|
01:06:40.380 --> 01:06:42.440 |
|
that deals with all that shuffling and |
|
|
|
01:06:42.440 --> 01:06:43.890 |
|
loading and all of that stuff for you. |
|
|
|
01:06:43.890 --> 01:06:47.105 |
|
You can give it your source of |
|
|
|
01:06:47.105 --> 01:06:48.360 |
|
data, or you can give it the data |
|
|
|
01:06:48.360 --> 01:06:50.795 |
|
directly and it will handle the |
|
|
|
01:06:50.795 --> 01:06:52.210 |
|
shuffling and stepping. |
|
|
|
01:06:52.210 --> 01:06:54.530 |
|
So I give it a batch size, told it to |
|
|
|
01:06:54.530 --> 01:06:57.026 |
|
Shuffle, told it how many CPU threads |
|
|
|
01:06:57.026 --> 01:06:57.760 |
|
it can use. |
|
|
|
01:06:58.670 --> 01:06:59.780 |
|
And I gave it the data. |
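
NOTE
A sketch of the data setup being described; the toy 2D data, batch size, and worker count below are stand-in values, not the notebook's.

import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

# Synthetic stand-in for the 2D problem: points inside the unit circle are class 1
rng = np.random.default_rng(0)
x_train = rng.normal(size=(500, 2)).astype(np.float32)
y_train = (x_train[:, 0] ** 2 + x_train[:, 1] ** 2 < 1.0).astype(np.float32)

x_tensor = torch.tensor(x_train)
y_tensor = torch.tensor(y_train).reshape(-1, 1)   # column vector, in case y was (N,)

train_set = TensorDataset(x_tensor, y_tensor)
train_loader = DataLoader(train_set,
                          batch_size=32,   # minibatch size (assumed value)
                          shuffle=True,    # reshuffle each epoch
                          num_workers=2)   # CPU threads for loading (assumed value)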
|
|
|
01:07:01.670 --> 01:07:04.240 |
|
I set my loss to binary cross entropy |
|
|
|
01:07:04.240 --> 01:07:07.515 |
|
loss, which is just the log probability |
|
|
|
01:07:07.515 --> 01:07:09.810 |
|
loss in the case of a binary |
|
|
|
01:07:09.810 --> 01:07:10.500 |
|
classifier. |
|
|
|
01:07:11.950 --> 01:07:15.920 |
|
And I'm using an Adam optimizer because |
|
|
|
01:07:15.920 --> 01:07:18.680 |
|
it's a little more friendly than SGD. |
|
|
|
01:07:18.680 --> 01:07:20.310 |
|
I'll talk about Adam in the next class. |
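
NOTE
Loss and optimizer setup along the lines described; the model here is a stand-in one-hidden-layer network and the learning rate is an assumed value. BCELoss is the binary cross-entropy, -[y*log(p) + (1-y)*log(1-p)].

import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(nn.Linear(2, 50), nn.ReLU(),
                      nn.Linear(50, 1), nn.Sigmoid())  # stand-in MLP
criterion = nn.BCELoss()                               # log-probability loss for a binary classifier
optimizer = optim.Adam(model.parameters(), lr=1e-3)    # lr is an assumed value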
|
|
|
01:07:23.140 --> 01:07:25.320 |
|
Then I'm doing epochs, so I'm stepping |
|
|
|
01:07:25.320 --> 01:07:27.282 |
|
through my data or cycling through my |
|
|
|
01:07:27.282 --> 01:07:28.580 |
|
data for a number of epochs. |
|
|
|
01:07:30.020 --> 01:07:32.370 |
|
Then I'm stepping through my batches, |
|
|
|
01:07:32.370 --> 01:07:34.260 |
|
enumerating my train loader. |
|
|
|
01:07:34.260 --> 01:07:35.830 |
|
It gets each batch. |
|
|
|
01:07:37.120 --> 01:07:39.160 |
|
I split the data into the targets and |
|
|
|
01:07:39.160 --> 01:07:40.190 |
|
the inputs. |
|
|
|
01:07:40.190 --> 01:07:42.350 |
|
I zero out my gradients. |
|
|
|
01:07:43.260 --> 01:07:44.790 |
|
I. |
|
|
|
01:07:45.780 --> 01:07:48.280 |
|
Make my prediction, which is just MLP of the |
|
|
|
01:07:48.280 --> 01:07:48.860 |
|
inputs. |
|
|
|
01:07:50.710 --> 01:07:53.040 |
|
Then I call my loss function. |
|
|
|
01:07:53.040 --> 01:07:55.030 |
|
So I compute the loss based on my |
|
|
|
01:07:55.030 --> 01:07:56.410 |
|
outputs and my targets. |
|
|
|
01:07:57.610 --> 01:07:59.930 |
|
Then I do Back propagation: loss dot |
|
|
|
01:07:59.930 --> 01:08:01.450 |
|
backward. |
|
|
|
01:08:02.740 --> 01:08:04.550 |
|
And then I tell the optimizer to step |
|
|
|
01:08:04.550 --> 01:08:06.954 |
|
so it updates the Weights |
|
|
|
01:08:06.954 --> 01:08:08.440 |
|
based on that loss. |
|
|
|
01:08:10.180 --> 01:08:11.090 |
|
And that's it. |
|
|
|
01:08:12.100 --> 01:08:13.600 |
|
And then keep looping through the |
|
|
|
01:08:13.600 --> 01:08:14.620 |
|
batches. |
|
|
|
01:08:14.620 --> 01:08:16.800 |
|
So code-wise it's pretty simple. |
|
|
|
01:08:16.800 --> 01:08:18.360 |
|
Computing partial derivatives of |
|
|
|
01:08:18.360 --> 01:08:20.730 |
|
complex functions is not so simple, but |
|
|
|
01:08:20.730 --> 01:08:22.250 |
|
implementing it in torch is simple. |
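
NOTE
The training loop just walked through, as a sketch; it assumes the model, criterion, optimizer, and train_loader from the sketches above, and the epoch count is arbitrary.

num_epochs = 50
for epoch in range(num_epochs):                           # cycle through the data
    for i, (inputs, targets) in enumerate(train_loader):  # each batch from the loader
        optimizer.zero_grad()                             # zero out the gradients
        outputs = model(inputs)                           # make predictions
        loss = criterion(outputs, targets)                # compute the loss
        loss.backward()                                   # back propagation
        optimizer.step()                                  # update the weights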
|
|
|
01:08:23.540 --> 01:08:26.270 |
|
And then I'm just like doing some |
|
|
|
01:08:26.270 --> 01:08:28.180 |
|
record keeping to compute accuracy and |
|
|
|
01:08:28.180 --> 01:08:30.720 |
|
losses and then record it and plot it, |
|
|
|
01:08:30.720 --> 01:08:31.200 |
|
all right. |
|
|
|
01:08:31.200 --> 01:08:33.340 |
|
So let's go back to those same |
|
|
|
01:08:33.340 --> 01:08:33.820 |
|
problems. |
|
|
|
01:08:39.330 --> 01:08:41.710 |
|
So I've got some. |
|
|
|
01:08:43.730 --> 01:08:45.480 |
|
And then here I just have like a |
|
|
|
01:08:45.480 --> 01:08:47.040 |
|
prediction function that I'm using for |
|
|
|
01:08:47.040 --> 01:08:47.540 |
|
display. |
|
|
|
01:08:50.720 --> 01:08:52.800 |
|
Alright, so here's my loss. |
|
|
|
01:08:52.800 --> 01:08:54.570 |
|
A nice descent. |
|
|
|
01:08:55.890 --> 01:08:59.330 |
|
And my accuracy goes up to close to 1. |
|
|
|
01:09:00.670 --> 01:09:03.660 |
|
And this is now on this like curved |
|
|
|
01:09:03.660 --> 01:09:04.440 |
|
problem, right? |
|
|
|
01:09:04.440 --> 01:09:07.020 |
|
So this is one that the Perceptron |
|
|
|
01:09:07.020 --> 01:09:09.480 |
|
couldn't fit exactly, but here I get |
|
|
|
01:09:09.480 --> 01:09:11.410 |
|
like a pretty good fit OK. |
|
|
|
01:09:12.240 --> 01:09:14.000 |
|
Still not perfect, but if I add more |
|
|
|
01:09:14.000 --> 01:09:16.190 |
|
nodes or optimize further, I can |
|
|
|
01:09:16.190 --> 01:09:17.310 |
|
probably fit these guys too. |
|
|
|
01:09:19.890 --> 01:09:20.830 |
|
So that's cool. |
|
|
|
01:09:22.390 --> 01:09:24.660 |
|
Just for my own sanity checks, I tried |
|
|
|
01:09:24.660 --> 01:09:29.265 |
|
using the MLP in SKLEARN and it gives |
|
|
|
01:09:29.265 --> 01:09:30.210 |
|
the same result. |
|
|
|
01:09:31.580 --> 01:09:33.170 |
|
It shows a similar result. |
|
|
|
01:09:33.170 --> 01:09:35.779 |
|
Anyway, so here I set max iters, I set |
|
|
|
01:09:35.780 --> 01:09:36.930 |
|
some network size. |
|
|
|
01:09:37.720 --> 01:09:40.270 |
|
Here it's using sklearn and it did |
|
|
|
01:09:40.270 --> 01:09:42.890 |
|
more optimization, so a better fit. |
|
|
|
01:09:43.840 --> 01:09:44.910 |
|
But basically the same thing. |
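
NOTE
The sklearn sanity check described here, as a sketch; the hidden layer size and max_iter are assumed values, and x_train / y_train are the toy arrays from the data sketch above.

from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0)
clf.fit(x_train, y_train)            # same 2D toy data as above
print(clf.score(x_train, y_train))   # training accuracy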
|
|
|
01:09:46.780 --> 01:09:51.090 |
|
And then here let's try the other |
|
|
|
01:09:51.090 --> 01:09:51.770 |
|
one. |
|
|
|
01:09:51.770 --> 01:09:54.800 |
|
Let's do a one layer network with 100 |
|
|
|
01:09:54.800 --> 01:09:56.140 |
|
hidden nodes. |
|
|
|
01:10:02.710 --> 01:10:03.760 |
|
It's optimizing. |
|
|
|
01:10:03.760 --> 01:10:04.710 |
|
It'll take a little bit. |
|
|
|
01:10:06.550 --> 01:10:10.550 |
|
I'm just using the CPU. So it did decrease |
|
|
|
01:10:10.550 --> 01:10:11.300 |
|
the loss. |
|
|
|
01:10:11.300 --> 01:10:13.790 |
|
It got a pretty low error, but not |
|
|
|
01:10:13.790 --> 01:10:14.280 |
|
perfect. |
|
|
|
01:10:14.280 --> 01:10:15.990 |
|
It didn't like fit that little circle |
|
|
|
01:10:15.990 --> 01:10:16.470 |
|
in there. |
|
|
|
01:10:17.460 --> 01:10:18.580 |
|
It kind of went around. |
|
|
|
01:10:18.580 --> 01:10:20.150 |
|
It decided that |
|
|
|
01:10:21.030 --> 01:10:22.950 |
|
these guys are not important enough and |
|
|
|
01:10:22.950 --> 01:10:24.880 |
|
just fit everything in there. |
|
|
|
01:10:25.880 --> 01:10:27.160 |
|
If I run it with different random |
|
|
|
01:10:27.160 --> 01:10:29.145 |
|
seeds, sometimes it will fit that even |
|
|
|
01:10:29.145 --> 01:10:31.980 |
|
with one layer, but let's go with two |
|
|
|
01:10:31.980 --> 01:10:32.370 |
|
layers. |
|
|
|
01:10:32.370 --> 01:10:33.779 |
|
So now I'm going to train the two layer |
|
|
|
01:10:33.780 --> 01:10:35.550 |
|
network, each with a hidden size of |
|
|
|
01:10:35.550 --> 01:10:36.040 |
|
50. |
|
|
|
01:10:37.970 --> 01:10:39.090 |
|
And try it again. |
|
|
|
01:10:44.940 --> 01:10:45.870 |
|
Executing. |
|
|
|
01:10:48.470 --> 01:10:50.430 |
|
All right, so now here's my loss |
|
|
|
01:10:50.430 --> 01:10:51.120 |
|
function. |
|
|
|
01:10:52.940 --> 01:10:54.490 |
|
Still going down, if I trained it |
|
|
|
01:10:54.490 --> 01:10:55.790 |
|
further I'd probably decrease the loss |
|
|
|
01:10:55.790 --> 01:10:56.245 |
|
further. |
|
|
|
01:10:56.245 --> 01:10:58.710 |
|
My accuracy is super |
|
|
|
01:10:58.710 --> 01:10:59.440 |
|
close to 1. |
|
|
|
01:11:00.180 --> 01:11:02.580 |
|
And you can see that it fit both these |
|
|
|
01:11:02.580 --> 01:11:03.940 |
|
guys pretty well, right? |
|
|
|
01:11:03.940 --> 01:11:05.696 |
|
So with more layers it's |
|
|
|
01:11:05.696 --> 01:11:07.270 |
|
easier to get a more expressive |
|
|
|
01:11:07.270 --> 01:11:07.710 |
|
function. |
|
|
|
01:11:08.350 --> 01:11:10.020 |
|
That could deal with these different |
|
|
|
01:11:10.020 --> 01:11:11.200 |
|
parts of the feature space. |
|
|
|
01:11:18.170 --> 01:11:20.270 |
|
I also want to show you this demo. |
|
|
|
01:11:20.270 --> 01:11:22.710 |
|
This is like so cool I think. |
|
|
|
01:11:24.820 --> 01:11:26.190 |
|
I realized that this was here before. |
|
|
|
01:11:26.190 --> 01:11:27.820 |
|
I might not have even made another |
|
|
|
01:11:27.820 --> 01:11:28.340 |
|
demo, but. |
|
|
|
01:11:29.010 --> 01:11:32.723 |
|
So here you get to choose your |
|
|
|
01:11:32.723 --> 01:11:33.126 |
|
problem. |
|
|
|
01:11:33.126 --> 01:11:34.910 |
|
So like let's say this problem. |
|
|
|
01:11:35.590 --> 01:11:39.340 |
|
And you choose your number of layers, |
|
|
|
01:11:39.340 --> 01:11:41.400 |
|
and you choose the number of neurons, |
|
|
|
01:11:41.400 --> 01:11:43.072 |
|
and you choose your learning rate, and |
|
|
|
01:11:43.072 --> 01:11:44.320 |
|
you choose your activation function. |
|
|
|
01:11:44.940 --> 01:11:45.850 |
|
And then you hit play. |
|
|
|
01:11:46.430 --> 01:11:50.490 |
|
And then it optimizes and it shows you |
|
|
|
01:11:50.490 --> 01:11:52.100 |
|
like the function that's fitting. |
|
|
|
01:11:53.140 --> 01:11:54.600 |
|
And this. |
|
|
|
01:11:56.140 --> 01:11:57.690 |
|
Alright, I think, is it making |
|
|
|
01:11:57.690 --> 01:11:58.065 |
|
progress? |
|
|
|
01:11:58.065 --> 01:12:00.210 |
|
It's getting there, it's maybe doing |
|
|
|
01:12:00.210 --> 01:12:00.890 |
|
things. |
|
|
|
01:12:00.890 --> 01:12:03.380 |
|
So here I'm doing a Sigmoid with just |
|
|
|
01:12:03.380 --> 01:12:04.760 |
|
these few neurons. |
|
|
|
01:12:05.500 --> 01:12:09.279 |
|
And these two inputs so just X1 and X2. |
|
|
|
01:12:09.970 --> 01:12:12.170 |
|
And then it's trying to predict two |
|
|
|
01:12:12.170 --> 01:12:12.580 |
|
values. |
|
|
|
01:12:12.580 --> 01:12:14.100 |
|
So it took a long time. |
|
|
|
01:12:14.100 --> 01:12:16.520 |
|
It started in a big plateau, but it |
|
|
|
01:12:16.520 --> 01:12:18.103 |
|
eventually got there alright. |
|
|
|
01:12:18.103 --> 01:12:20.710 |
|
So this Sigmoid did OK this time. |
|
|
|
01:12:22.330 --> 01:12:25.270 |
|
And let's give it a harder problem. |
|
|
|
01:12:25.270 --> 01:12:27.980 |
|
So this guy is really tough. |
|
|
|
01:12:29.770 --> 01:12:31.770 |
|
Now it's not going to be able to do it, |
|
|
|
01:12:31.770 --> 01:12:32.740 |
|
I don't think. |
|
|
|
01:12:32.740 --> 01:12:33.720 |
|
Maybe. |
|
|
|
01:12:35.540 --> 01:12:37.860 |
|
It's going somewhere. |
|
|
|
01:12:37.860 --> 01:12:39.450 |
|
I don't think it has enough expressive |
|
|
|
01:12:39.450 --> 01:12:41.630 |
|
power to fit this weird spiral. |
|
|
|
01:12:42.800 --> 01:12:44.320 |
|
But I'll let it run for a little bit to |
|
|
|
01:12:44.320 --> 01:12:44.890 |
|
see. |
|
|
|
01:12:44.890 --> 01:12:46.180 |
|
So it's going to try to do some |
|
|
|
01:12:46.180 --> 01:12:46.880 |
|
approximation. |
|
|
|
01:12:46.880 --> 01:12:48.243 |
|
The loss over here is going. |
|
|
|
01:12:48.243 --> 01:12:49.623 |
|
It's gone down a little bit. |
|
|
|
01:12:49.623 --> 01:12:50.815 |
|
So it did something. |
|
|
|
01:12:50.815 --> 01:12:53.090 |
|
It got better than chance, but still |
|
|
|
01:12:53.090 --> 01:12:53.710 |
|
not great. |
|
|
|
01:12:53.710 --> 01:12:54.410 |
|
Let me stop it. |
|
|
|
01:12:55.120 --> 01:12:58.015 |
|
So let's add some more layers. |
|
|
|
01:12:58.015 --> 01:13:00.530 |
|
Let's make it super powerful. |
|
|
|
01:13:08.430 --> 01:13:10.200 |
|
And then let's run it. |
|
|
|
01:13:13.620 --> 01:13:15.380 |
|
And it's like not doing anything. |
|
|
|
01:13:16.770 --> 01:13:18.540 |
|
Yeah, it can't do anything. |
|
|
|
01:13:20.290 --> 01:13:22.140 |
|
Because it's got all these sigmoids, |
|
|
|
01:13:22.140 --> 01:13:25.240 |
|
and it's pretty slow too. |
|
|
|
01:13:25.240 --> 01:13:26.500 |
|
I mean it looks like it's doing |
|
|
|
01:13:26.500 --> 01:13:29.385 |
|
something, but it's in like the 6th digit or |
|
|
|
01:13:29.385 --> 01:13:29.650 |
|
something. |
|
|
|
01:13:30.550 --> 01:13:33.180 |
|
Alright, so now let's try Relu. |
|
|
|
01:13:37.430 --> 01:13:38.340 |
|
Is it gonna work? |
|
|
|
01:13:42.800 --> 01:13:43.650 |
|
Go in. |
|
|
|
01:13:48.120 --> 01:13:49.100 |
|
It is really slow. |
|
|
|
01:13:51.890 --> 01:13:52.910 |
|
It's getting there. |
|
|
|
01:14:06.670 --> 01:14:08.420 |
|
And you can see what's so cool is like |
|
|
|
01:14:08.420 --> 01:14:08.972 |
|
each of these. |
|
|
|
01:14:08.972 --> 01:14:10.298 |
|
You can see what they're predicting, |
|
|
|
01:14:10.298 --> 01:14:12.080 |
|
what each of these nodes is |
|
|
|
01:14:12.080 --> 01:14:12.860 |
|
representing. |
|
|
|
01:14:14.230 --> 01:14:16.340 |
|
It's slow because it's calculating all |
|
|
|
01:14:16.340 --> 01:14:17.830 |
|
this stuff and you can see the gradient |
|
|
|
01:14:17.830 --> 01:14:18.880 |
|
visualization. |
|
|
|
01:14:18.880 --> 01:14:20.920 |
|
See all these gradients flowing through |
|
|
|
01:14:20.920 --> 01:14:21.170 |
|
here. |
|
|
|
01:14:22.710 --> 01:14:25.450 |
|
Though it did it like it's pretty good, |
|
|
|
01:14:25.450 --> 01:14:26.970 |
|
the loss is really low. |
|
|
|
01:14:30.590 --> 01:14:32.530 |
|
I mean it's getting all that data |
|
|
|
01:14:32.530 --> 01:14:32.900 |
|
correct. |
|
|
|
01:14:32.900 --> 01:14:34.460 |
|
It might be overfitting a little bit or |
|
|
|
01:14:34.460 --> 01:14:35.740 |
|
not fitting perfectly but. |
|
|
|
01:14:36.870 --> 01:14:39.140 |
|
Still optimizing and then if I want to |
|
|
|
01:14:39.140 --> 01:14:40.710 |
|
lower my learning rate. |
|
|
|
01:14:41.460 --> 01:14:41.890 |
|
Let's. |
|
|
|
01:14:44.580 --> 01:14:46.060 |
|
Then it will like optimize a little |
|
|
|
01:14:46.060 --> 01:14:46.480 |
|
more. |
|
|
|
01:14:46.480 --> 01:14:48.610 |
|
OK, so basically though it did it and |
|
|
|
01:14:48.610 --> 01:14:49.720 |
|
notice that there's like a lot of |
|
|
|
01:14:49.720 --> 01:14:50.640 |
|
strong gradients here. |
|
|
|
01:14:51.410 --> 01:14:54.430 |
|
Whereas with this Sigmoid, the problem |
|
|
|
01:14:55.430 --> 01:14:58.060 |
|
is that it's all like low gradients. |
|
|
|
01:14:58.060 --> 01:15:00.120 |
|
There's no strong lines here because |
|
|
|
01:15:00.120 --> 01:15:01.760 |
|
this Sigmoid has those low Values. |
|
|
|
01:15:03.620 --> 01:15:04.130 |
|
All right. |
|
|
|
01:15:04.820 --> 01:15:06.770 |
|
So you guys can play with that more. |
|
|
|
01:15:06.770 --> 01:15:07.660 |
|
It's pretty fun. |
|
|
|
01:15:08.820 --> 01:15:12.570 |
|
And then, well, I am out of time. |
|
|
|
01:15:12.570 --> 01:15:14.500 |
|
So I'm going to talk about these things |
|
|
|
01:15:14.500 --> 01:15:15.655 |
|
at the start of the next class. |
|
|
|
01:15:15.655 --> 01:15:17.180 |
|
You have everything that you need now |
|
|
|
01:15:17.180 --> 01:15:18.120 |
|
to do homework 2. |
|
|
|
01:15:19.250 --> 01:15:22.880 |
|
And so next class I'm going to talk |
|
|
|
01:15:22.880 --> 01:15:24.800 |
|
about deep learning. |
|
|
|
01:15:25.980 --> 01:15:28.270 |
|
And then I've got a review after that. |
|
|
|
01:15:28.270 --> 01:15:29.430 |
|
Thank you. |
|
|
|
01:15:31.290 --> 01:15:32.530 |
|
I got some questions. |
|
|
|
|