|
WEBVTT Kind: captions; Language: en-US |
|
|
|
NOTE |
|
Created on 2024-02-07T20:52:49.1946189Z by ClassTranscribe |
|
|
|
00:01:20.650 --> 00:01:21.930 |
|
Alright, good morning everybody. |
|
|
|
00:01:25.660 --> 00:01:27.860 |
|
So I just wanted to start with a little |
|
|
|
00:01:27.860 --> 00:01:28.850 |
|
Review. |
|
|
|
00:01:28.940 --> 00:01:29.540 |
|
|
|
|
|
00:01:30.320 --> 00:01:32.885 |
|
So first question, and don't yell out |
|
|
|
00:01:32.885 --> 00:01:34.190 |
|
the answer I'll give you. |
|
|
|
00:01:34.190 --> 00:01:35.960 |
|
I want to give everyone a
|
|
|
00:01:35.960 --> 00:01:37.150 |
|
little bit to think about it. |
|
|
|
00:01:37.150 --> 00:01:39.420 |
|
Which of these tend to be decreased as |
|
|
|
00:01:39.420 --> 00:01:40.790 |
|
the number of training examples |
|
|
|
00:01:40.790 --> 00:01:41.287 |
|
increase? |
|
|
|
00:01:41.287 --> 00:01:43.350 |
|
The Training Error, test error, or
|
|
|
00:01:43.350 --> 00:01:45.380 |
|
Generalization Error? It could be more than one.
|
|
|
00:01:46.750 --> 00:01:47.700 |
|
I'll give you. |
|
|
|
00:01:47.850 --> 00:01:49.840 |
|
A little bit to think about it. |
|
|
|
00:02:02.100 --> 00:02:04.545 |
|
Alright, so well, would you expect the |
|
|
|
00:02:04.545 --> 00:02:06.540 |
|
Training Error to decrease as the |
|
|
|
00:02:06.540 --> 00:02:08.530 |
|
number of training examples increases? |
|
|
|
00:02:09.920 --> 00:02:11.190 |
|
Raise your hand if so. |
|
|
|
00:02:13.140 --> 00:02:14.430 |
|
And raise your hand if not. |
|
|
|
00:02:16.110 --> 00:02:20.580 |
|
So we have a lot of abstains, but
|
|
|
00:02:20.580 --> 00:02:21.440 |
|
I won't count them.
|
|
|
00:02:21.440 --> 00:02:25.980 |
|
So yeah, actually the Training Error |
|
|
|
00:02:25.980 --> 00:02:28.125 |
|
will increase as the number of
|
|
|
00:02:28.125 --> 00:02:30.230 |
|
training examples increases because the |
|
|
|
00:02:30.230 --> 00:02:31.330 |
|
model gets harder to fit. |
|
|
|
00:02:32.030 --> 00:02:33.390 |
|
So assuming the Training Error is |
|
|
|
00:02:33.390 --> 00:02:35.170 |
|
nonzero, then it will increase or the |
|
|
|
00:02:35.170 --> 00:02:36.895 |
|
loss that you're fitting is going to |
|
|
|
00:02:36.895 --> 00:02:38.620 |
|
increase because as you get more |
|
|
|
00:02:38.620 --> 00:02:40.110 |
|
Training examples then. |
|
|
|
00:02:42.710 --> 00:02:44.780 |
|
Then, given a single model, you're |
|
|
|
00:02:44.780 --> 00:02:46.380 |
|
Error is going to go up all right. |
|
|
|
00:02:46.380 --> 00:02:47.310 |
|
What about test Error? |
|
|
|
00:02:47.310 --> 00:02:49.580 |
|
Would you expect that to increase or |
|
|
|
00:02:49.580 --> 00:02:51.100 |
|
decrease or stay the same? |
|
|
|
00:02:51.100 --> 00:02:53.820 |
|
I guess first just do you expect it to |
|
|
|
00:02:53.820 --> 00:02:54.180 |
|
decrease? |
|
|
|
00:02:55.920 --> 00:02:57.160 |
|
Raise your hand for decreased. |
|
|
|
00:02:57.940 --> 00:02:59.030 |
|
All right, raise your hand for |
|
|
|
00:02:59.030 --> 00:02:59.520 |
|
increase. |
|
|
|
00:03:00.820 --> 00:03:02.370 |
|
Everyone expects the test error to
|
|
|
00:03:02.370 --> 00:03:02.870 |
|
decrease. |
|
|
|
00:03:03.910 --> 00:03:05.930 |
|
And Generalization Error, do you expect |
|
|
|
00:03:05.930 --> 00:03:08.056 |
|
that to increase or I mean sorry, do |
|
|
|
00:03:08.056 --> 00:03:09.060 |
|
you expect it to decrease? |
|
|
|
00:03:10.110 --> 00:03:12.220 |
|
Raise your hand if Generalization Error |
|
|
|
00:03:12.220 --> 00:03:12.990 |
|
should decrease. |
|
|
|
00:03:14.860 --> 00:03:16.380 |
|
And raise your hand if it should |
|
|
|
00:03:16.380 --> 00:03:16.770 |
|
increase. |
|
|
|
00:03:18.520 --> 00:03:19.508 |
|
Right, so you expect. |
|
|
|
00:03:19.508 --> 00:03:21.960 |
|
So the Generalization Error should also |
|
|
|
00:03:21.960 --> 00:03:22.650 |
|
decrease. |
|
|
|
00:03:22.650 --> 00:03:25.172 |
|
And remember that the Generalization |
|
|
|
00:03:25.172 --> 00:03:26.710 |
|
error is the. |
|
|
|
00:03:27.920 --> 00:03:31.000 |
|
Test Error minus the Training error, so |
|
|
|
00:03:31.000 --> 00:03:32.930 |
|
the typical curve you see. |
|
|
|
00:03:35.010 --> 00:03:37.080 |
|
The typical curve you would see if this |
|
|
|
00:03:37.080 --> 00:03:39.720 |
|
is the number of training examples.
|
|
|
00:03:41.550 --> 00:03:43.080 |
|
And this is the Error. |
|
|
|
00:03:43.880 --> 00:03:45.220 |
|
Then the Training Error
|
|
|
00:03:45.220 --> 00:03:47.600 |
|
will go like something like that.
|
|
|
00:03:47.600 --> 00:03:49.020 |
|
So this is the training error.
|
|
|
00:03:49.670 --> 00:03:52.230 |
|
And the test error will go something |
|
|
|
00:03:52.230 --> 00:03:52.960 |
|
like this. |
|
|
|
00:03:55.050 --> 00:03:58.389 |
|
And the generalization error is
|
|
|
00:03:58.390 --> 00:04:01.010 |
|
the gap between training and test error.
|
|
|
00:04:01.010 --> 00:04:02.540 |
|
So actually. |
|
|
|
00:04:02.610 --> 00:04:05.345 |
|
The generalization error will decrease |
|
|
|
00:04:05.345 --> 00:04:08.600 |
|
the fastest because that gap is closing |
|
|
|
00:04:08.600 --> 00:04:10.419 |
|
faster than the test error is going |
|
|
|
00:04:10.420 --> 00:04:10.920 |
|
down. |
|
|
|
00:04:10.920 --> 00:04:12.920 |
|
That has to be the case because the |
|
|
|
00:04:12.920 --> 00:04:13.790 |
|
Training Error is going up. |
|
|
|
00:04:14.750 --> 00:04:17.230 |
|
And then the test error decreases the
|
|
|
00:04:17.230 --> 00:04:18.840 |
|
second fastest, and the Training error |
|
|
|
00:04:18.840 --> 00:04:20.205 |
|
is actually going to increase, so the |
|
|
|
00:04:20.205 --> 00:04:21.350 |
|
Training loss will increase. |
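
NOTE
A minimal sketch of the learning curves just described, assuming
scikit-learn; the synthetic dataset and the logistic-regression model
here are illustrative assumptions, not from the lecture.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
    for n in [20, 50, 100, 200, 500, 1000]:
        clf = LogisticRegression(max_iter=1000).fit(Xtr[:n], ytr[:n])
        train_err = 1 - clf.score(Xtr[:n], ytr[:n])  # tends to rise with n
        test_err = 1 - clf.score(Xte, yte)           # tends to fall with n
        print(n, train_err, test_err, test_err - train_err)  # last column: generalization error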
|
|
|
00:04:22.560 --> 00:04:26.330 |
|
Alright, second question and these are |
|
|
|
00:04:26.330 --> 00:04:28.655 |
|
just Review questions that I took from |
|
|
|
00:04:28.655 --> 00:04:31.480 |
|
the thing that I linked but wanted to |
|
|
|
00:04:31.480 --> 00:04:32.290 |
|
do them here. |
|
|
|
00:04:32.290 --> 00:04:35.780 |
|
So Classify the point with the plus using
|
|
|
00:04:35.780 --> 00:04:37.710 |
|
one nearest neighbor and three Nearest |
|
|
|
00:04:37.710 --> 00:04:39.475 |
|
neighbor where you've got 2 features on |
|
|
|
00:04:39.475 --> 00:04:40.270 |
|
the axis there. |
|
|
|
00:04:42.110 --> 00:04:45.370 |
|
Alright, for 1-Nearest neighbor.
|
|
|
00:04:45.370 --> 00:04:47.220 |
|
How many people think it's an X? |
|
|
|
00:04:48.790 --> 00:04:50.540 |
|
OK, how many people think it's an O? |
|
|
|
00:04:51.700 --> 00:04:52.700 |
|
Everyone said X.
|
|
|
00:04:52.700 --> 00:04:53.840 |
|
That's correct. |
|
|
|
00:04:53.840 --> 00:04:55.100 |
|
For three Nearest neighbor. |
|
|
|
00:04:55.100 --> 00:04:56.750 |
|
How many people think it's an X? |
|
|
|
00:04:58.010 --> 00:04:59.300 |
|
How many people think it's an O?
|
|
|
00:05:00.460 --> 00:05:01.040 |
|
Right. |
|
|
|
00:05:01.040 --> 00:05:02.130 |
|
Yeah, you guys got that. |
|
|
|
00:05:02.130 --> 00:05:04.100 |
|
So for 3-Nearest neighbor, it's an O.
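
NOTE
A small sketch of the 1-NN vs 3-NN votes; the points and labels below are
made up (the slide's actual points are not in the transcript).
    import numpy as np
    pts = np.array([[1.0, 1.0], [1.2, 0.9], [3.0, 3.0], [3.1, 2.8], [0.2, 0.1]])
    labels = np.array([0, 0, 0, 1, 1])      # 0 = 'O', 1 = 'X'
    query = np.array([0.9, 0.8])            # the point marked with the plus
    order = np.argsort(np.linalg.norm(pts - query, axis=1))
    print("1-NN:", labels[order[0]])                              # single nearest neighbor
    print("3-NN:", int(np.bincount(labels[order[:3]]).argmax()))  # majority of 3 nearest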
|
|
|
00:05:05.650 --> 00:05:06.700 |
|
Alright, now, let's see.
|
|
|
00:05:08.910 --> 00:05:10.670 |
|
Also, I have a couple of probability |
|
|
|
00:05:10.670 --> 00:05:11.780 |
|
questions. |
|
|
|
00:05:13.330 --> 00:05:15.340 |
|
Alright, so first, just what assumption |
|
|
|
00:05:15.340 --> 00:05:16.890 |
|
does the Naive Bayes model make if
|
|
|
00:05:16.890 --> 00:05:19.860 |
|
there are two features X1 and X2? |
|
|
|
00:05:19.860 --> 00:05:21.710 |
|
Give you a second to think about it, |
|
|
|
00:05:21.710 --> 00:05:22.030 |
|
there's. |
|
|
|
00:05:22.730 --> 00:05:23.980 |
|
Really two options there. |
|
|
|
00:05:23.980 --> 00:05:26.180 |
|
It's either A
|
|
|
00:05:26.180 --> 00:05:27.880 |
|
or B, neither, or both.
|
|
|
00:05:29.280 --> 00:05:30.140 |
|
I'll give you a moment. |
|
|
|
00:05:49.590 --> 00:05:52.900 |
|
Alright, so how many say that A is an |
|
|
|
00:05:52.900 --> 00:05:54.960 |
|
assumption that Naive Bayes makes? |
|
|
|
00:05:57.940 --> 00:05:58.180 |
|
Right. |
|
|
|
00:05:58.180 --> 00:05:59.910 |
|
How many people say that B is an |
|
|
|
00:05:59.910 --> 00:06:01.430 |
|
assumption that Naive Bayes makes? |
|
|
|
00:06:03.950 --> 00:06:06.480 |
|
How many say that neither of those are |
|
|
|
00:06:06.480 --> 00:06:06.860 |
|
true? |
|
|
|
00:06:09.740 --> 00:06:12.120 |
|
And how many say that both of those are |
|
|
|
00:06:12.120 --> 00:06:13.582 |
|
true, that they're the same thing and |
|
|
|
00:06:13.582 --> 00:06:14.130 |
|
they're both true? |
|
|
|
00:06:16.390 --> 00:06:18.810 |
|
So I think there are maybe at least one |
|
|
|
00:06:18.810 --> 00:06:19.780 |
|
vote for each of them. |
|
|
|
00:06:19.780 --> 00:06:23.410 |
|
But so the answer is B that Naive Bayes |
|
|
|
00:06:23.410 --> 00:06:25.070 |
|
assumes that the features are |
|
|
|
00:06:25.070 --> 00:06:27.675 |
|
independent of each other given the |
|
|
|
00:06:27.675 --> 00:06:30.626 |
|
label.
|
|
|
00:06:30.626 --> 00:06:32.676 |
|
And I'll consistently use X for |
|
|
|
00:06:32.676 --> 00:06:33.826 |
|
features and Y for label. |
|
|
|
00:06:33.826 --> 00:06:34.089 |
|
So. |
|
|
|
00:06:34.810 --> 00:06:36.920 |
|
Hopefully that part is clear. |
|
|
|
00:06:36.920 --> 00:06:39.270 |
|
So A is not true because it's not |
|
|
|
00:06:39.270 --> 00:06:42.180 |
|
assuming that. In fact, A is just
|
|
|
00:06:42.180 --> 00:06:44.390 |
|
never true. Or...
|
|
|
00:06:45.200 --> 00:06:46.420 |
|
Is that ever true? |
|
|
|
00:06:46.420 --> 00:06:48.230 |
|
I guess it could be true if Y is always |
|
|
|
00:06:48.230 --> 00:06:50.180 |
|
one or under certain weird |
|
|
|
00:06:50.180 --> 00:06:52.930 |
|
circumstances, but A is like a bad
|
|
|
00:06:52.930 --> 00:06:54.370 |
|
probability statement. |
|
|
|
00:06:55.080 --> 00:06:58.555 |
|
And then B assumes that X1 and X2 are |
|
|
|
00:06:58.555 --> 00:07:00.370 |
|
independent given Y because remember |
|
|
|
00:07:00.370 --> 00:07:01.979 |
|
that if A and B are independent,
|
|
|
00:07:02.930 --> 00:07:04.496 |
|
then the probability of A and B equals
|
|
|
00:07:04.496 --> 00:07:06.150 |
|
the probability of A times the probability of B.
|
|
|
00:07:06.850 --> 00:07:08.580 |
|
And similarly, even if it's |
|
|
|
00:07:08.580 --> 00:07:10.440 |
|
conditional, if X1 and X2 are |
|
|
|
00:07:10.440 --> 00:07:12.700 |
|
independent given Y, then the probability of X1 and
|
|
|
00:07:12.700 --> 00:07:14.832 |
|
X2 given Y is equal to probability of |
|
|
|
00:07:14.832 --> 00:07:16.642 |
|
X1 given Y times probability of X2 |
|
|
|
00:07:16.642 --> 00:07:17.150 |
|
given Y. |
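
NOTE
A quick numeric check of the conditional-independence factorization just
stated, P(X1, X2 | Y) = P(X1 | Y) P(X2 | Y); the table values are made up.
    import numpy as np
    p_x1_y = np.array([[0.7, 0.3], [0.2, 0.8]])  # rows: y = 0, 1; cols: x1 = 0, 1
    p_x2_y = np.array([[0.6, 0.4], [0.5, 0.5]])
    p_y = np.array([0.5, 0.5])
    joint = np.einsum('ya,yb,y->yab', p_x1_y, p_x2_y, p_y)  # P(y, x1, x2) under Naive Bayes
    cond = joint / p_y[:, None, None]                       # P(x1, x2 | y)
    assert np.allclose(cond, p_x1_y[:, :, None] * p_x2_y[:, None, :])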
|
|
|
00:07:18.450 --> 00:07:21.660 |
|
And they're not equivalent,
|
|
|
00:07:21.660 --> 00:07:24.010 |
|
they're different expressions. |
|
|
|
00:07:24.010 --> 00:07:26.090 |
|
OK, so now this one is probably
|
|
|
00:07:26.090 --> 00:07:27.190 |
|
the most
|
|
|
00:07:28.600 --> 00:07:30.040 |
|
complicated to work through, I guess.
|
|
|
00:07:30.900 --> 00:07:33.060 |
|
So let's say X1 and X2 are binary |
|
|
|
00:07:33.060 --> 00:07:35.780 |
|
features and Y is a binary label. |
|
|
|
00:07:36.410 --> 00:07:37.180 |
|
And. |
|
|
|
00:07:38.100 --> 00:07:40.830 |
|
And then I've set the probabilities
|
|
|
00:07:40.830 --> 00:07:44.794 |
|
so we know the probability of X1 = 1 given y = 0, and X
|
|
|
00:07:44.794 --> 00:07:46.712 |
|
2 = 1 given y = 0. |
|
|
|
00:07:46.712 --> 00:07:48.130 |
|
So I didn't fill out the whole |
|
|
|
00:07:48.130 --> 00:07:49.820 |
|
probability table, but I gave enough |
|
|
|
00:07:49.820 --> 00:07:51.710 |
|
maybe to do the first part. |
|
|
|
00:07:52.920 --> 00:07:55.190 |
|
So if we make the Naive Bayes assumption.
|
|
|
00:07:55.800 --> 00:07:57.760 |
|
So that's the assumption under B there. |
|
|
|
00:07:58.810 --> 00:08:01.860 |
|
What is probability of y = 1? |
|
|
|
00:08:02.900 --> 00:08:06.920 |
|
Given X1 = 1 and X2 = 1.
|
|
|
00:08:08.240 --> 00:08:09.890 |
|
I'll give you a little bit of time to |
|
|
|
00:08:09.890 --> 00:08:12.086 |
|
start thinking about it, but I won't |
|
|
|
00:08:12.086 --> 00:08:12.504 |
|
ask. |
|
|
|
00:08:12.504 --> 00:08:14.650 |
|
I won't ask anyone to call out the
|
|
|
00:08:14.650 --> 00:08:15.275 |
|
answer. |
|
|
|
00:08:15.275 --> 00:08:17.220 |
|
I'll just start working through it. |
|
|
|
00:08:19.050 --> 00:08:21.810 |
|
So think about how you would solve it. |
|
|
|
00:08:22.230 --> 00:08:22.840 |
|
|
|
|
|
00:08:24.940 --> 00:08:26.030 |
|
What things you have to multiply |
|
|
|
00:08:26.030 --> 00:08:26.860 |
|
together, et cetera. |
|
|
|
00:08:35.800 --> 00:08:36.110 |
|
Nice. |
|
|
|
00:08:45.670 --> 00:08:48.020 |
|
Alright, so I'll start working it out. |
|
|
|
00:08:48.020 --> 00:08:51.390 |
|
So the probability of Y = 1 given X1 and X2.
|
|
|
00:08:52.190 --> 00:08:53.580 |
|
So let's see. |
|
|
|
00:08:53.580 --> 00:08:56.940 |
|
So probability of y = 1. |
|
|
|
00:08:57.750 --> 00:09:02.530 |
|
Given X1 = 1 and X2 = 1.
|
|
|
00:09:05.000 --> 00:09:10.400 |
|
That's the joint probability of y = 1, X1



00:09:11.390 --> 00:09:12.160

= 1,



00:09:13.210 --> 00:09:14.550

and



00:09:15.350 --> 00:09:17.180

X2 = 1.
|
|
|
00:09:18.690 --> 00:09:19.830 |
|
Divided by. |
|
|
|
00:09:20.740 --> 00:09:21.870 |
|
Probability. |
|
|
|
00:09:22.300 --> 00:09:22.650 |
|
|
|
|
|
00:09:24.810 --> 00:09:26.830 |
|
I'll just do sum over K to save myself |
|
|
|
00:09:26.830 --> 00:09:27.490 |
|
some writing.
|
|
|
00:09:27.490 --> 00:09:28.920 |
|
I don't like writing by hand much. |
|
|
|
00:09:30.120 --> 00:09:33.220 |
|
So, summing over k taking values 0 to 1, the
|
|
|
00:09:33.220 --> 00:09:34.430 |
|
probability of Y
|
|
|
00:09:35.310 --> 00:09:41.530 |
|
= k and X1 = 1 and X2 = 1.
|
|
|
00:09:42.810 --> 00:09:45.100 |
|
So the reason for this, whoops, the |
|
|
|
00:09:45.100 --> 00:09:46.740 |
|
reason for that is that. |
|
|
|
00:09:46.810 --> 00:09:47.420 |
|
|
|
|
|
00:09:49.330 --> 00:09:51.910 |
|
I'm marginalizing out the Y so that is |
|
|
|
00:09:51.910 --> 00:09:53.430 |
|
just equal to probability. |
|
|
|
00:09:53.430 --> 00:09:54.916 |
|
On the denominator I have probability |
|
|
|
00:09:54.916 --> 00:09:58.889 |
|
of X1 = 1 and X2 = 1.
|
|
|
00:10:05.270 --> 00:10:08.570 |
|
And then this guy is going to be. |
|
|
|
00:10:09.770 --> 00:10:10.690 |
|
I can get there. |
|
|
|
00:10:11.450 --> 00:10:15.750 |
|
By probability of Y given X1 and X2. |
|
|
|
00:10:16.690 --> 00:10:18.440 |
|
Equals probability. |
|
|
|
00:10:19.300 --> 00:10:20.960 |
|
Sorry, I meant to flip that. |
|
|
|
00:10:23.670 --> 00:10:26.130 |
|
Probability of X1 and X2. |
|
|
|
00:10:28.920 --> 00:10:31.812 |
|
Given Y is equal to probability of X1 |
|
|
|
00:10:31.812 --> 00:10:35.919 |
|
given Y times the probability of X2 given Y.
|
|
|
00:10:35.920 --> 00:10:37.490 |
|
That's the Naive Bayes assumption part. |
|
|
|
00:10:38.520 --> 00:10:39.570 |
|
So the numerator. |
|
|
|
00:10:40.540 --> 00:10:41.630 |
|
Is. |
|
|
|
00:10:41.740 --> 00:10:43.250 |
|
Let's see. |
|
|
|
00:10:43.250 --> 00:10:46.790 |
|
So the numerator will be 1/4 * 1/2. |
|
|
|
00:10:47.900 --> 00:10:50.590 |
|
And then the probability of Y is .5.
|
|
|
00:10:50.590 --> 00:10:53.240 |
|
So on the numerator of this expression |
|
|
|
00:10:53.240 --> 00:10:56.730 |
|
here I have 1/4 * 1/2 * .5.
|
|
|
00:10:58.030 --> 00:11:01.966 |
|
And on the denominator I have 1/4 * 1/2 |
|
|
|
00:11:01.966 --> 00:11:03.230 |
|
* .5.
|
|
|
00:11:04.370 --> 00:11:05.140 |
|
Plus. |
|
|
|
00:11:07.090 --> 00:11:11.180 |
|
2/3 * 1/3 * .5, right? |
|
|
|
00:11:11.180 --> 00:11:13.775 |
|
This is the probability of X1 = 1 given y
|
|
|
00:11:13.775 --> 00:11:16.730 |
|
= 0, times that, times that.
|
|
|
00:11:16.730 --> 00:11:18.678 |
|
And then it's times .5 because the
|
|
|
00:11:18.678 --> 00:11:20.561 |
|
probability of y = 1 is .5. |
|
|
|
00:11:20.561 --> 00:11:23.249 |
|
Then the probability of y = 0 is 1 - .5,
|
|
|
00:11:23.250 --> 00:11:24.370 |
|
which is also .5.
|
|
|
00:11:25.650 --> 00:11:27.210 |
|
That's how I solve that first part. |
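
NOTE
The arithmetic just worked out, as a sketch; the numbers are the ones read
off the slide's table during the lecture.
    p_x1 = {1: 1/4, 0: 2/3}   # P(x1 = 1 | y)
    p_x2 = {1: 1/2, 0: 1/3}   # P(x2 = 1 | y)
    p_y = {1: 0.5, 0: 0.5}
    num = p_x1[1] * p_x2[1] * p_y[1]                       # Naive Bayes numerator
    den = sum(p_x1[k] * p_x2[k] * p_y[k] for k in (0, 1))  # marginalize y out
    print(num / den)   # P(y = 1 | x1 = 1, x2 = 1), about 0.36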
|
|
|
00:11:29.150 --> 00:11:31.520 |
|
And then under the Naive Bayes assumption,
|
|
|
00:11:31.520 --> 00:11:34.340 |
|
is it possible to calculate this given |
|
|
|
00:11:34.340 --> 00:11:35.990 |
|
the information I provided in those |
|
|
|
00:11:35.990 --> 00:11:36.630 |
|
equations? |
|
|
|
00:11:43.020 --> 00:11:45.180 |
|
So it's not. |
|
|
|
00:11:45.180 --> 00:11:47.010 |
|
At first glance it might look like
|
|
|
00:11:47.010 --> 00:11:49.480 |
|
it is, but it's not because I don't |
|
|
|
00:11:49.480 --> 00:11:52.106 |
|
know what the probability of X = 0 |
|
|
|
00:11:52.106 --> 00:11:53.091 |
|
given Y is. |
|
|
|
00:11:53.091 --> 00:11:54.810 |
|
I didn't give any information about |
|
|
|
00:11:54.810 --> 00:11:54.965 |
|
that. |
|
|
|
00:11:54.965 --> 00:11:57.436 |
|
I only said what the probability of X = 1
|
|
|
00:11:57.436 --> 00:11:59.916 |
|
given Y is, and I can't figure out the |
|
|
|
00:11:59.916 --> 00:12:02.683 |
|
probability of X = 0 given Y from the
|
|
|
00:12:02.683 --> 00:12:04.149 |
|
probability of X = 1 given Y.
|
|
|
00:12:05.960 --> 00:12:07.340 |
|
Or at least.
|
|
|
00:12:08.090 --> 00:12:09.870 |
|
I haven't thought through it in great |
|
|
|
00:12:09.870 --> 00:12:11.245 |
|
detail, but I don't think I can figure |
|
|
|
00:12:11.245 --> 00:12:11.500 |
|
it out. |
|
|
|
00:12:13.090 --> 00:12:13.490 |
|
Alright. |
|
|
|
00:12:13.490 --> 00:12:16.180 |
|
So then without the Naive Bayes
|
|
|
00:12:16.180 --> 00:12:18.580 |
|
assumption, yeah, under the Naive Bayes,
|
|
|
00:12:18.580 --> 00:12:18.970 |
|
sorry. |
|
|
|
00:12:19.710 --> 00:12:21.020 |
|
I made a, I was...
|
|
|
00:12:21.180 --> 00:12:21.400 |
|
OK. |
|
|
|
00:12:22.240 --> 00:12:24.030 |
|
Under the Naive Bayes assumption.
|
|
|
00:12:24.030 --> 00:12:26.560 |
|
Is it possible to figure that out? |
|
|
|
00:12:26.560 --> 00:12:27.250 |
|
Let me think. |
|
|
|
00:12:28.140 --> 00:12:29.570 |
|
Probability of X1. |
|
|
|
00:12:38.260 --> 00:12:39.630 |
|
Yeah, sorry about that. |
|
|
|
00:12:39.630 --> 00:12:42.530 |
|
I was I switched these in my head. |
|
|
|
00:12:42.530 --> 00:12:44.620 |
|
So under the Naive Bayes assumption.
|
|
|
00:12:44.620 --> 00:12:47.310 |
|
Actually I can figure this out because. |
|
|
|
00:12:47.400 --> 00:12:47.990 |
|
|
|
|
|
00:12:49.270 --> 00:12:52.470 |
|
Because, since X is binary,



00:12:52.470 --> 00:12:56.020

if the probability of X1 = 1 given



00:12:56.020 --> 00:12:56.779

y = 0



00:12:57.440 --> 00:13:00.891

is 2/3, then the probability of X1 = 0



00:13:00.891 --> 00:13:04.403

given y = 0 is 1/3, and the probability of



00:13:04.403 --> 00:13:08.306

X2 = 0 given y = 0 is 2/3,



00:13:08.306 --> 00:13:12.599

and the probability of X1 = 0 given y = 1



00:13:12.599 --> 00:13:13.379

is 3/4.
|
|
|
00:13:13.380 --> 00:13:17.979 |
|
So I know the probability of X = 0 given y
|
|
|
00:13:18.940 --> 00:13:22.590 |
|
= 0 or y = 1, so I can solve this
|
|
|
00:13:22.590 --> 00:13:22.800 |
|
one. |
|
|
|
00:13:23.520 --> 00:13:25.720 |
|
And then I kind of gave it away, but |
|
|
|
00:13:25.720 --> 00:13:28.440 |
|
without the Naive Bayes assumption, is it
|
|
|
00:13:28.440 --> 00:13:30.860 |
|
possible to calculate the probability |
|
|
|
00:13:30.860 --> 00:13:34.179 |
|
of y = 1 given X1 = 1 and X2 = 1?
|
|
|
00:13:37.370 --> 00:13:39.600 |
|
No, I mean, I already said it, but.
|
|
|
00:13:40.670 --> 00:13:41.520 |
|
But no, it's not. |
|
|
|
00:13:41.520 --> 00:13:43.090 |
|
And the reason is because I don't have |
|
|
|
00:13:43.090 --> 00:13:44.520 |
|
any of the joint probabilities here. |
|
|
|
00:13:44.520 --> 00:13:45.740 |
|
For that I would need to know |
|
|
|
00:13:45.740 --> 00:13:48.785 |
|
something, the probability of X1 and X2 |
|
|
|
00:13:48.785 --> 00:13:51.440 |
|
and Y, the full probability table.
|
|
|
00:13:51.440 --> 00:13:53.205 |
|
Or I would need to be given the |
|
|
|
00:13:53.205 --> 00:13:55.359 |
|
probability of Y given X1 and X2. |
|
|
|
00:14:04.200 --> 00:14:06.700 |
|
Alright, so that was just a little |
|
|
|
00:14:06.700 --> 00:14:07.795 |
|
Review and warm up. |
|
|
|
00:14:07.795 --> 00:14:10.410 |
|
So today I'm going to mainly talk about |
|
|
|
00:14:10.410 --> 00:14:13.418 |
|
Linear models and in particular I'll |
|
|
|
00:14:13.418 --> 00:14:15.938 |
|
talk about Linear Logistic Regression
|
|
|
00:14:15.938 --> 00:14:17.522 |
|
and Linear Regression. |
|
|
|
00:14:17.522 --> 00:14:19.240 |
|
And then as part of that I'll talk |
|
|
|
00:14:19.240 --> 00:14:20.290 |
|
about this concept called |
|
|
|
00:14:20.290 --> 00:14:21.180 |
|
regularization. |
|
|
|
00:14:24.880 --> 00:14:27.179 |
|
Right, So what is the Linear model? |
|
|
|
00:14:27.179 --> 00:14:31.925 |
|
A Linear model is, a model is
|
|
|
00:14:31.925 --> 00:14:36.949 |
|
linear in X if it is a times X plus
|
|
|
00:14:36.950 --> 00:14:38.360 |
|
maybe some constant value. |
|
|
|
00:14:39.030 --> 00:14:41.940 |
|
So I can write that as W transpose X + |
|
|
|
00:14:41.940 --> 00:14:44.850 |
|
B and remember using your linear |
|
|
|
00:14:44.850 --> 00:14:46.510 |
|
algebra that that's the same as the sum |
|
|
|
00:14:46.510 --> 00:14:50.113 |
|
over i of Wi times Xi, plus B.
|
|
|
00:14:50.113 --> 00:14:53.920 |
|
So for any values of X and B, these Wi
|
|
|
00:14:53.920 --> 00:14:55.260 |
|
and B are scalars. |
|
|
|
00:14:55.260 --> 00:14:57.835 |
|
Xi would be a scalar, so X is a vector,
|
|
|
00:14:57.835 --> 00:14:58.370 |
|
W is a vector.
|
|
|
00:14:59.290 --> 00:15:02.680 |
|
So this is a Linear model no matter how |
|
|
|
00:15:02.680 --> 00:15:03.990 |
|
I choose those coefficients. |
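
NOTE
The two equivalent ways of writing the linear model, as a quick check
(the weights and features here are made-up numbers).
    import numpy as np
    w, b = np.array([0.5, -1.0, 2.0]), 0.1
    x = np.array([1.0, 0.0, 3.0])
    score = w @ x + b                                     # W transpose X + B
    score2 = sum(w[i] * x[i] for i in range(len(x))) + b  # sum over i of Wi Xi, plus B
    assert np.isclose(score, score2)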
|
|
|
00:15:05.750 --> 00:15:07.730 |
|
And there's two main kinds of Linear |
|
|
|
00:15:07.730 --> 00:15:08.330 |
|
models. |
|
|
|
00:15:08.330 --> 00:15:10.603 |
|
There's a Linear classifier and a |
|
|
|
00:15:10.603 --> 00:15:11.570 |
|
Linear regressor. |
|
|
|
00:15:12.370 --> 00:15:15.210 |
|
So in a Linear classifier. |
|
|
|
00:15:16.180 --> 00:15:19.450 |
|
This W transpose X + B is giving you a |
|
|
|
00:15:19.450 --> 00:15:21.490 |
|
score for how likely. |
|
|
|
00:15:22.190 --> 00:15:27.400 |
|
A feature vector is to belong to one |
|
|
|
00:15:27.400 --> 00:15:28.790 |
|
class or the other class. |
|
|
|
00:15:30.020 --> 00:15:31.300 |
|
So that's shown down here. |
|
|
|
00:15:31.300 --> 00:15:33.854 |
|
We have like some O's and some |
|
|
|
00:15:33.854 --> 00:15:34.320 |
|
triangles. |
|
|
|
00:15:34.320 --> 00:15:36.240 |
|
I've got a Linear model here. |
|
|
|
00:15:36.240 --> 00:15:40.555 |
|
This is the W transpose X + B and
|
|
|
00:15:40.555 --> 00:15:44.270 |
|
that gives me a score that says
|
|
|
00:15:44.270 --> 00:15:46.220 |
|
that class is equal to 1. |
|
|
|
00:15:46.220 --> 00:15:48.170 |
|
Maybe I'm saying the triangles are ones |
|
|
|
00:15:48.170 --> 00:15:49.370 |
|
are y = 1. |
|
|
|
00:15:50.220 --> 00:15:54.692 |
|
So this line will project all of
|
|
|
00:15:54.692 --> 00:15:57.242 |
|
these different points onto the line. |
|
|
|
00:15:57.242 --> 00:15:59.847 |
|
The W transpose X + B projects all
|
|
|
00:15:59.847 --> 00:16:01.640 |
|
of these points onto this line. |
|
|
|
00:16:02.650 --> 00:16:05.146 |
|
And then we tend to look at when you |
|
|
|
00:16:05.146 --> 00:16:07.140 |
|
see like diagrams of Linear
|
|
|
00:16:07.140 --> 00:16:07.990 |
|
Classifiers. |
|
|
|
00:16:07.990 --> 00:16:09.480 |
|
Often what people are showing is the |
|
|
|
00:16:09.480 --> 00:16:10.150 |
|
boundary. |
|
|
|
00:16:10.890 --> 00:16:13.550 |
|
Which is where W transpose X + b is |
|
|
|
00:16:13.550 --> 00:16:14.400 |
|
equal to 0. |
|
|
|
00:16:16.460 --> 00:16:18.580 |
|
So all the points that project on one |
|
|
|
00:16:18.580 --> 00:16:20.699 |
|
side of the boundary will be one class |
|
|
|
00:16:20.700 --> 00:16:22.264 |
|
and all the ones that project on the |
|
|
|
00:16:22.264 --> 00:16:24.132 |
|
other side of the boundary are the other
|
|
|
00:16:24.132 --> 00:16:24.366 |
|
class. |
|
|
|
00:16:24.366 --> 00:16:26.670 |
|
Or in other words, if W transpose X + B |
|
|
|
00:16:26.670 --> 00:16:27.830 |
|
is greater than 0. |
|
|
|
00:16:28.470 --> 00:16:29.270 |
|
It's one class. |
|
|
|
00:16:29.270 --> 00:16:30.675 |
|
If it's less than zero, it's the other |
|
|
|
00:16:30.675 --> 00:16:30.960 |
|
class. |
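
NOTE
The decision rule just described, in one line; w, b, and x here are
placeholder values.
    import numpy as np
    w, b = np.array([0.5, -1.0]), 0.1
    x = np.array([2.0, 0.5])
    label = 1 if w @ x + b > 0 else 0  # the boundary is where W transpose X + B = 0
    print(label)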
|
|
|
00:16:32.640 --> 00:16:34.020 |
|
A Linear regressor. |
|
|
|
00:16:34.020 --> 00:16:36.720 |
|
You're directly fitting the data |
|
|
|
00:16:36.720 --> 00:16:40.790 |
|
points, and you're solving for a line |
|
|
|
00:16:40.790 --> 00:16:42.430 |
|
that passes through. |
|
|
|
00:16:43.630 --> 00:16:45.130 |
|
The target and features. |
|
|
|
00:16:46.030 --> 00:16:48.495 |
|
So that you're more directly so that |
|
|
|
00:16:48.495 --> 00:16:50.880 |
|
you're able to predict the target |
|
|
|
00:16:50.880 --> 00:16:54.310 |
|
value, the Y given your features and so |
|
|
|
00:16:54.310 --> 00:16:57.740 |
|
in 2D I can plot that as a 2D line, but |
|
|
|
00:16:57.740 --> 00:16:59.740 |
|
it can be N-D, it could be a high
|
|
|
00:16:59.740 --> 00:17:00.600 |
|
dimensional line. |
|
|
|
00:17:01.300 --> 00:17:04.890 |
|
And you have y = W transpose X + B. |
|
|
|
00:17:06.290 --> 00:17:09.150 |
|
So in Classification, typically it's |
|
|
|
00:17:09.150 --> 00:17:11.550 |
|
not Y equals W transpose X + B, it's |
|
|
|
00:17:11.550 --> 00:17:15.210 |
|
some kind of score for how it's a score |
|
|
|
00:17:15.210 --> 00:17:17.340 |
|
for Y, and in Regression you're |
|
|
|
00:17:17.340 --> 00:17:20.290 |
|
directly fitting Y with that line. |
|
|
|
00:17:21.820 --> 00:17:22.200 |
|
Question. |
|
|
|
00:17:27.440 --> 00:17:33.335 |
|
In almost all situations. So at
|
|
|
00:17:33.335 --> 00:17:35.740 |
|
the end of the day, like for example if |
|
|
|
00:17:35.740 --> 00:17:36.890 |
|
you're doing deep learning. |
|
|
|
00:17:37.520 --> 00:17:40.510 |
|
All of the different layers, most
|
|
|
00:17:40.510 --> 00:17:42.503 |
|
of the layers of
|
|
|
00:17:42.503 --> 00:17:43.960 |
|
the network, you can think of as |
|
|
|
00:17:43.960 --> 00:17:46.260 |
|
learning a feature representation and |
|
|
|
00:17:46.260 --> 00:17:47.510 |
|
at the end of it you have a Linear |
|
|
|
00:17:47.510 --> 00:17:50.190 |
|
classifier that maps from the features |
|
|
|
00:17:50.190 --> 00:17:51.420 |
|
into the target label. |
|
|
|
00:18:06.090 --> 00:18:06.560 |
|
|
|
|
|
00:18:14.220 --> 00:18:16.592 |
|
So the so the question is if you were |
|
|
|
00:18:16.592 --> 00:18:18.170 |
|
if you were trying to predict whether |
|
|
|
00:18:18.170 --> 00:18:20.400 |
|
or not somebody is caught based on a |
|
|
|
00:18:20.400 --> 00:18:21.150 |
|
bunch of features. |
|
|
|
00:18:22.260 --> 00:18:23.979 |
|
You could use the Linear classifier for |
|
|
|
00:18:23.980 --> 00:18:24.250 |
|
that. |
|
|
|
00:18:24.250 --> 00:18:26.840 |
|
So a Linear classifier is always a |
|
|
|
00:18:26.840 --> 00:18:28.843 |
|
binary classifier, but you can also use |
|
|
|
00:18:28.843 --> 00:18:30.530 |
|
it in Multiclass cases. |
|
|
|
00:18:30.530 --> 00:18:33.777 |
|
So for example if you want to Classify |
|
|
|
00:18:33.777 --> 00:18:35.860 |
|
if you have a picture of some animal |
|
|
|
00:18:35.860 --> 00:18:37.366 |
|
and you want to Classify what kind of |
|
|
|
00:18:37.366 --> 00:18:37.900 |
|
animal it is. |
|
|
|
00:18:38.890 --> 00:18:40.634 |
|
And you have a bunch of features. |
|
|
|
00:18:40.634 --> 00:18:43.470 |
|
Features could be like image Pixels, or |
|
|
|
00:18:43.470 --> 00:18:45.950 |
|
it could be more complicated features |
|
|
|
00:18:45.950 --> 00:18:48.420 |
|
than you would have a Linear model for |
|
|
|
00:18:48.420 --> 00:18:50.860 |
|
each of the possible kinds of animals, |
|
|
|
00:18:50.860 --> 00:18:54.040 |
|
and you would score each of the classes |
|
|
|
00:18:54.040 --> 00:18:55.665 |
|
according to that model, and then you |
|
|
|
00:18:55.665 --> 00:18:56.930 |
|
would choose the one with the highest |
|
|
|
00:18:56.930 --> 00:18:57.280 |
|
score. |
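
NOTE
A sketch of the multiclass scheme just described: one linear model per
class, and the highest-scoring class wins; the weights here are random
placeholders.
    import numpy as np
    rng = np.random.default_rng(0)
    n_classes, n_features = 4, 10
    W = rng.normal(size=(n_classes, n_features))  # one weight vector per class
    b = rng.normal(size=n_classes)                # one bias per class
    x = rng.normal(size=n_features)               # feature vector (e.g. pixels)
    print(int(np.argmax(W @ x + b)))              # predicted class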
|
|
|
00:18:58.790 --> 00:19:00.930 |
|
So some examples of Linear
|
|
|
00:19:00.930 --> 00:19:03.720 |
|
models are support vector...
|
|
|
00:19:03.720 --> 00:19:06.570 |
|
The two main examples I would
|
|
|
00:19:06.570 --> 00:19:08.936 |
|
say are support vector machines and |
|
|
|
00:19:08.936 --> 00:19:10.009 |
|
Logistic Regression. |
|
|
|
00:19:10.009 --> 00:19:11.619 |
|
Linear Logistic Regression. |
|
|
|
00:19:12.630 --> 00:19:14.750 |
|
Naive Bayes is also a Linear model, |
|
|
|
00:19:14.750 --> 00:19:15.220 |
|
but. |
|
|
|
00:19:16.230 --> 00:19:18.080 |
|
And many other kinds of Classifiers. |
|
|
|
00:19:18.080 --> 00:19:19.670 |
|
If you like, do the math, you can show |
|
|
|
00:19:19.670 --> 00:19:21.150 |
|
that it's also a Linear model at the |
|
|
|
00:19:21.150 --> 00:19:23.565 |
|
end of the day, but it's less often thought of
|
|
|
00:19:23.565 --> 00:19:24.510 |
|
that way. |
|
|
|
00:19:31.680 --> 00:19:35.210 |
|
KNN is not a Linear model.
|
|
|
00:19:35.210 --> 00:19:37.390 |
|
It has a non linear decision boundary. |
|
|
|
00:19:38.070 --> 00:19:40.948 |
|
And boosted decision trees you can |
|
|
|
00:19:40.948 --> 00:19:42.677 |
|
think of it as. |
|
|
|
00:19:42.677 --> 00:19:45.180 |
|
So first like I will talk about trees |
|
|
|
00:19:45.180 --> 00:19:47.750 |
|
and boosted decision trees next week.
|
|
|
00:19:47.750 --> 00:19:50.020 |
|
So I'm not going to fill in the details |
|
|
|
00:19:50.020 --> 00:19:51.140 |
|
for those who don't know what they are. |
|
|
|
00:19:51.140 --> 00:19:53.109 |
|
But basically you can think of it as |
|
|
|
00:19:53.110 --> 00:19:55.010 |
|
that the tree is creating a |
|
|
|
00:19:55.010 --> 00:19:56.280 |
|
partitioning of the features. |
|
|
|
00:19:57.470 --> 00:19:59.115 |
|
Given that partitioning, you then have |
|
|
|
00:19:59.115 --> 00:20:02.160 |
|
a Linear model on top of it, so you can |
|
|
|
00:20:02.160 --> 00:20:03.570 |
|
think of it as an encoding of the |
|
|
|
00:20:03.570 --> 00:20:04.850 |
|
features plus a Linear model. |
|
|
|
00:20:06.030 --> 00:20:06.300 |
|
Yeah. |
|
|
|
00:20:24.510 --> 00:20:26.290 |
|
How many like different models you need |
|
|
|
00:20:26.290 --> 00:20:26.610 |
|
or. |
|
|
|
00:20:26.610 --> 00:20:28.350 |
|
So it's the. |
|
|
|
00:20:28.350 --> 00:20:29.020 |
|
It depends. |
|
|
|
00:20:29.020 --> 00:20:30.800 |
|
It's kind of given by the problem |
|
|
|
00:20:30.800 --> 00:20:32.440 |
|
setup, so if you're told. |
|
|
|
00:20:33.930 --> 00:20:35.890 |
|
If you for example. |
|
|
|
00:20:36.970 --> 00:20:37.750 |
|
|
|
|
|
00:20:38.900 --> 00:20:39.590 |
|
|
|
|
|
00:20:40.930 --> 00:20:42.670 |
|
OK, I'll just choose an image example |
|
|
|
00:20:42.670 --> 00:20:44.010 |
|
because this popped into my head most
|
|
|
00:20:44.010 --> 00:20:44.680 |
|
easily. |
|
|
|
00:20:44.680 --> 00:20:45.970 |
|
So if you're trying to Classify |
|
|
|
00:20:45.970 --> 00:20:47.720 |
|
something between male or female, |
|
|
|
00:20:47.720 --> 00:20:49.670 |
|
Classify an image between is it a male |
|
|
|
00:20:49.670 --> 00:20:50.210 |
|
or female? |
|
|
|
00:20:50.210 --> 00:20:51.678 |
|
Then you know you have two classes so |
|
|
|
00:20:51.678 --> 00:20:54.164 |
|
you might need to fit two models, or need
|
|
|
00:20:54.164 --> 00:20:54.820 |
|
to fit. |
|
|
|
00:20:55.640 --> 00:20:57.200 |
|
And in the two-class model you only have
|
|
|
00:20:57.200 --> 00:20:58.670 |
|
to fit one model because either it's |
|
|
|
00:20:58.670 --> 00:20:59.270 |
|
one or the other. |
|
|
|
00:20:59.980 --> 00:21:03.445 |
|
If you have, if you're trying to |
|
|
|
00:21:03.445 --> 00:21:05.153 |
|
Classify, let's say you're trying to |
|
|
|
00:21:05.153 --> 00:21:06.845 |
|
Classify a face into different age |
|
|
|
00:21:06.845 --> 00:21:07.160 |
|
groups. |
|
|
|
00:21:07.160 --> 00:21:09.460 |
|
Is it somebody that's under 10, between |
|
|
|
00:21:09.460 --> 00:21:11.832 |
|
10 and 20, 20 and 30, and so on, then you
|
|
|
00:21:11.832 --> 00:21:13.660 |
|
would need like one model for each of |
|
|
|
00:21:13.660 --> 00:21:14.930 |
|
those age groups. |
|
|
|
00:21:14.930 --> 00:21:17.566 |
|
So usually, as a problem setup, you say
|
|
|
00:21:17.566 --> 00:21:19.880 |
|
I have these like features available to |
|
|
|
00:21:19.880 --> 00:21:22.194 |
|
make my Prediction, and I have these |
|
|
|
00:21:22.194 --> 00:21:23.890 |
|
things that I want to Predict. |
|
|
|
00:21:23.890 --> 00:21:28.460 |
|
And if the things are like a set of
|
|
|
00:21:28.460 --> 00:21:30.560 |
|
categories, then you would need one |
|
|
|
00:21:30.560 --> 00:21:32.060 |
|
Linear model per category. |
|
|
|
00:21:33.040 --> 00:21:35.390 |
|
And if the thing that you're trying to |
|
|
|
00:21:35.390 --> 00:21:38.990 |
|
Predict is a set of continuous values, |
|
|
|
00:21:38.990 --> 00:21:40.940 |
|
then you would need one Linear model |
|
|
|
00:21:40.940 --> 00:21:42.030 |
|
per continuous value. |
|
|
|
00:21:42.710 --> 00:21:45.160 |
|
If you're using like Linear models. |
|
|
|
00:21:45.860 --> 00:21:46.670 |
|
Does that make sense? |
|
|
|
00:21:47.400 --> 00:21:49.980 |
|
And then you mentioned like. |
|
|
|
00:21:50.790 --> 00:21:52.850 |
|
You mentioned hidden hidden layers or |
|
|
|
00:21:52.850 --> 00:21:54.230 |
|
something, but that would be part of |
|
|
|
00:21:54.230 --> 00:21:56.650 |
|
neural networks and that would be like. |
|
|
|
00:21:57.450 --> 00:22:00.790 |
|
A design choice for the network that we |
|
|
|
00:22:00.790 --> 00:22:02.320 |
|
can talk about when we get to network. |
|
|
|
00:22:05.500 --> 00:22:05.810 |
|
OK. |
|
|
|
00:22:09.100 --> 00:22:12.500 |
|
A Linear classifier, you would say that |
|
|
|
00:22:12.500 --> 00:22:16.150 |
|
the label is 1 if W transpose X + B is |
|
|
|
00:22:16.150 --> 00:22:16.950 |
|
greater than 0. |
|
|
|
00:22:17.960 --> 00:22:19.410 |
|
And then there's this important concept |
|
|
|
00:22:19.410 --> 00:22:21.040 |
|
called linearly separable. |
|
|
|
00:22:21.040 --> 00:22:22.200 |
|
So that just means that you can |
|
|
|
00:22:22.200 --> 00:22:24.425 |
|
separate the points, the features of |
|
|
|
00:22:24.425 --> 00:22:25.530 |
|
the two classes. |
|
|
|
00:22:26.450 --> 00:22:27.750 |
|
Cleanly so. |
|
|
|
00:22:30.220 --> 00:22:33.780 |
|
So for example, which of these is |
|
|
|
00:22:33.780 --> 00:22:36.000 |
|
linearly separable, the left or the |
|
|
|
00:22:36.000 --> 00:22:36.460 |
|
right? |
|
|
|
00:22:38.250 --> 00:22:41.130 |
|
Right the left is linearly separable |
|
|
|
00:22:41.130 --> 00:22:42.950 |
|
because I can put a line between them |
|
|
|
00:22:42.950 --> 00:22:44.680 |
|
and all the triangles will be on one |
|
|
|
00:22:44.680 --> 00:22:46.290 |
|
side and the circles will be on the |
|
|
|
00:22:46.290 --> 00:22:46.520 |
|
other. |
|
|
|
00:22:47.210 --> 00:22:48.660 |
|
But the right side is not linearly |
|
|
|
00:22:48.660 --> 00:22:49.190 |
|
separable. |
|
|
|
00:22:49.190 --> 00:22:51.910 |
|
I can't put any line to separate those |
|
|
|
00:22:51.910 --> 00:22:53.780 |
|
from the triangles. |
|
|
|
00:22:55.970 --> 00:22:58.595 |
|
So it's important to note that. |
|
|
|
00:22:58.595 --> 00:23:01.150 |
|
So sometimes, like the fact that I have |
|
|
|
00:23:01.150 --> 00:23:03.230 |
|
to draw everything in 2D on slides can |
|
|
|
00:23:03.230 --> 00:23:04.240 |
|
be a little misleading. |
|
|
|
00:23:04.860 --> 00:23:07.220 |
|
It may make you think that Linear |
|
|
|
00:23:07.220 --> 00:23:09.200 |
|
Classifiers are not very powerful. |
|
|
|
00:23:10.080 --> 00:23:11.930 |
|
Because in two dimensions they're not |
|
|
|
00:23:11.930 --> 00:23:13.860 |
|
very powerful, I can create lots of |
|
|
|
00:23:13.860 --> 00:23:16.070 |
|
combinations of points where I just |
|
|
|
00:23:16.070 --> 00:23:17.580 |
|
can't get very good Classification |
|
|
|
00:23:17.580 --> 00:23:18.170 |
|
accuracy. |
|
|
|
00:23:19.410 --> 00:23:21.410 |
|
But as you get into higher dimensions, |
|
|
|
00:23:21.410 --> 00:23:23.340 |
|
the Linear Classifiers become more and |
|
|
|
00:23:23.340 --> 00:23:24.140 |
|
more powerful. |
|
|
|
00:23:25.210 --> 00:23:28.434 |
|
And in fact, if you have D dimensions, |
|
|
|
00:23:28.434 --> 00:23:31.460 |
|
if you have D features, that's what I |
|
|
|
00:23:31.460 --> 00:23:32.419 |
|
mean by D dimensions. |
|
|
|
00:23:33.050 --> 00:23:35.850 |
|
Then you can separate D + 1 points with |
|
|
|
00:23:35.850 --> 00:23:37.700 |
|
any arbitrary labeling. |
|
|
|
00:23:37.700 --> 00:23:40.370 |
|
So as an example, if I have one |
|
|
|
00:23:40.370 --> 00:23:42.300 |
|
dimension, I only have one feature |
|
|
|
00:23:42.300 --> 00:23:42.880 |
|
value. |
|
|
|
00:23:43.610 --> 00:23:45.350 |
|
I can separate two points whether I |
|
|
|
00:23:45.350 --> 00:23:47.740 |
|
label this as X and this is O or |
|
|
|
00:23:47.740 --> 00:23:49.400 |
|
reverse I can separate them. |
|
|
|
00:23:50.930 --> 00:23:52.430 |
|
But I can't separate these three |
|
|
|
00:23:52.430 --> 00:23:53.100 |
|
points. |
|
|
|
00:23:53.100 --> 00:23:55.730 |
|
So if it were like X, X, O I could separate
|
|
|
00:23:55.730 --> 00:23:58.519 |
|
it, but when it's O, X, O I can't separate
|
|
|
00:23:58.520 --> 00:24:02.050 |
|
that with a one-dimensional
|
|
|
00:24:02.050 --> 00:24:02.880 |
|
linear separator. |
|
|
|
00:24:04.770 --> 00:24:07.090 |
|
In 2 dimensions, I can separate these |
|
|
|
00:24:07.090 --> 00:24:08.730 |
|
three points no matter how I label |
|
|
|
00:24:08.730 --> 00:24:11.570 |
|
them, whatever the labeling of X's and O's, no matter
|
|
|
00:24:11.570 --> 00:24:13.365 |
|
how I do it, I can put a line between |
|
|
|
00:24:13.365 --> 00:24:13.590 |
|
them. |
|
|
|
00:24:14.240 --> 00:24:16.220 |
|
But I can't separate four points. |
|
|
|
00:24:16.220 --> 00:24:18.030 |
|
So that's a concept called shattering |
|
|
|
00:24:18.030 --> 00:24:20.926 |
|
and an idea in Generalization theory
|
|
|
00:24:20.926 --> 00:24:22.017 |
|
called the VC dimension. |
|
|
|
00:24:22.017 --> 00:24:24.120 |
|
The more points you can shatter, like |
|
|
|
00:24:24.120 --> 00:24:26.175 |
|
the more powerful your classifier, but |
|
|
|
00:24:26.175 --> 00:24:27.630 |
|
more importantly. |
|
|
|
00:24:28.430 --> 00:24:30.910 |
|
If you think about it, if you have 1000
|
|
|
00:24:30.910 --> 00:24:31.590 |
|
features. |
|
|
|
00:24:32.320 --> 00:24:34.630 |
|
That means that if you have 1000 data |
|
|
|
00:24:34.630 --> 00:24:38.130 |
|
points, random feature points, and you |
|
|
|
00:24:38.130 --> 00:24:40.386 |
|
label them arbitrarily, there's two to



00:24:40.386 --> 00:24:41.274

the one...



00:24:41.274 --> 00:24:43.939

there's two to the 1000 different



00:24:43.940 --> 00:24:45.960

labelings that you could assign,



00:24:45.960 --> 00:24:47.478

different label sets that you could assign



00:24:47.478 --> 00:24:49.965

to those 1000 points, because each one



00:24:49.965 --> 00:24:50.440

could be...



00:24:50.440 --> 00:24:51.650

every point can be positive or



00:24:51.650 --> 00:24:51.940

negative.
|
|
|
00:24:53.320 --> 00:24:55.150 |
|
For all of those two to the 1000 |
|
|
|
00:24:55.150 --> 00:24:57.110 |
|
different labelings, you can linearly |
|
|
|
00:24:57.110 --> 00:24:59.110 |
|
separate it perfectly with 1000 |
|
|
|
00:24:59.110 --> 00:24:59.560 |
|
features. |
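
NOTE
A small empirical check of the claim, at a size where all labelings can be
enumerated: d + 1 points in d dimensions, every labeling linearly separable.
This is a sketch assuming scikit-learn; a large C approximates a hard margin.
    import itertools
    import numpy as np
    from sklearn.svm import LinearSVC
    d = 4
    rng = np.random.default_rng(0)
    X = rng.normal(size=(d + 1, d))          # d + 1 random points in d dimensions
    ok = True
    for labeling in itertools.product([0, 1], repeat=d + 1):
        y = np.array(labeling)
        if y.min() == y.max():
            continue                         # skip the two all-one-class labelings
        clf = LinearSVC(C=1e6, max_iter=200000).fit(X, y)
        ok = ok and clf.score(X, y) == 1.0   # perfectly separated?
    print("every labeling separable:", ok)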
|
|
|
00:25:00.500 --> 00:25:02.490 |
|
So that's pretty crazy. |
|
|
|
00:25:02.490 --> 00:25:04.080 |
|
So this Linear classifier. |
|
|
|
00:25:04.720 --> 00:25:07.480 |
|
Can deal with these two to the 1000 |
|
|
|
00:25:07.480 --> 00:25:09.370 |
|
different cases perfectly. |
|
|
|
00:25:11.010 --> 00:25:12.420 |
|
So as you get into very high |
|
|
|
00:25:12.420 --> 00:25:14.315 |
|
dimensions, Linear classifier gets very |
|
|
|
00:25:14.315 --> 00:25:15.100 |
|
very powerful. |
|
|
|
00:25:22.530 --> 00:25:23.060 |
|
|
|
|
|
00:25:23.940 --> 00:25:26.850 |
|
So the question is, more dimensions |
|
|
|
00:25:26.850 --> 00:25:28.110 |
|
mean more storage? |
|
|
|
00:25:28.110 --> 00:25:30.970 |
|
Yes, but it's only Linear, so. |
|
|
|
00:25:31.040 --> 00:25:33.710 |
|
So that's not usually too much of a |
|
|
|
00:25:33.710 --> 00:25:34.290 |
|
concern. |
|
|
|
00:25:37.990 --> 00:25:38.230 |
|
Yes. |
|
|
|
00:26:14.610 --> 00:26:16.100 |
|
So the question is like how do you |
|
|
|
00:26:16.100 --> 00:26:18.160 |
|
visualize 1000 features? |
|
|
|
00:26:18.830 --> 00:26:20.260 |
|
And. |
|
|
|
00:26:20.400 --> 00:26:23.870 |
|
And so I will talk about essentially |
|
|
|
00:26:23.870 --> 00:26:25.180 |
|
you have to map it down into 2 |
|
|
|
00:26:25.180 --> 00:26:26.750 |
|
dimensions or one dimension in |
|
|
|
00:26:26.750 --> 00:26:29.390 |
|
different ways and I'll talk about that |
|
|
|
00:26:29.390 --> 00:26:30.945 |
|
later in this semester. |
|
|
|
00:26:30.945 --> 00:26:33.890 |
|
So the simplest methods are
|
|
|
00:26:33.890 --> 00:26:36.720 |
|
Linear projections, principal
|
|
|
00:26:36.720 --> 00:26:38.490 |
|
component analysis, where you'd project |
|
|
|
00:26:38.490 --> 00:26:40.230 |
|
it down onto the dominant directions.
|
|
|
00:26:41.180 --> 00:26:43.220 |
|
There's also like nonlinear local |
|
|
|
00:26:43.220 --> 00:26:46.640 |
|
embeddings that will create a better |
|
|
|
00:26:46.640 --> 00:26:48.100 |
|
mapping out of all the features. |
|
|
|
00:26:49.700 --> 00:26:51.880 |
|
You can also do things like analyze |
|
|
|
00:26:51.880 --> 00:26:53.490 |
|
each feature by itself to see how |
|
|
|
00:26:53.490 --> 00:26:54.380 |
|
predictive it is. |
|
|
|
00:26:55.260 --> 00:26:56.750 |
|
And. |
|
|
|
00:26:56.860 --> 00:26:57.750 |
|
But like. |
|
|
|
00:26:58.850 --> 00:27:00.807 |
|
Ultimately you kind of need to do a |
|
|
|
00:27:00.807 --> 00:27:01.010 |
|
test. |
|
|
|
00:27:01.010 --> 00:27:03.120 |
|
So what you would do is you do some
|
|
|
00:27:03.120 --> 00:27:04.936 |
|
kind of validation test where you would |
|
|
|
00:27:04.936 --> 00:27:08.640 |
|
train a Linear model on say
|
|
|
00:27:08.640 --> 00:27:10.600 |
|
like 80% of the data and test it on the |
|
|
|
00:27:10.600 --> 00:27:12.860 |
|
other 20% to see if you're able to |
|
|
|
00:27:12.860 --> 00:27:15.200 |
|
predict the remaining 20% or if you |
|
|
|
00:27:15.200 --> 00:27:16.439 |
|
want to just see if it's linearly |
|
|
|
00:27:16.439 --> 00:27:16.646 |
|
separable. |
|
|
|
00:27:16.646 --> 00:27:18.678 |
|
Then if you train it on all the data, |
|
|
|
00:27:18.678 --> 00:27:20.633 |
|
if you get perfect Training Error then |
|
|
|
00:27:20.633 --> 00:27:21.471 |
|
it's linearly separable. |
|
|
|
00:27:21.471 --> 00:27:23.317 |
|
And if you don't get perfect Training |
|
|
|
00:27:23.317 --> 00:27:25.180 |
|
Error then it's then it's not. |
|
|
|
00:27:25.180 --> 00:27:27.830 |
|
Unless you applied a
|
|
|
00:27:27.830 --> 00:27:29.070 |
|
very strong regularization. |
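
NOTE
The separability test just described, as a sketch with scikit-learn; a very
large C means very weak regularization, approximating "no regularization".
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    clf = LogisticRegression(C=1e6, max_iter=10000).fit(X, y)  # train on all the data
    print("linearly separable:", clf.score(X, y) == 1.0)       # perfect training accuracy?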
|
|
|
00:27:30.640 --> 00:27:31.060 |
|
You're welcome. |
|
|
|
00:27:31.930 --> 00:27:33.380 |
|
Yeah, but you can't really visualize |
|
|
|
00:27:33.380 --> 00:27:34.310 |
|
more than two dimensions. |
|
|
|
00:27:34.310 --> 00:27:36.870 |
|
That's always a challenge, and it leads |
|
|
|
00:27:36.870 --> 00:27:38.820 |
|
sometimes to bad intuitions. |
|
|
|
00:27:40.520 --> 00:27:41.370 |
|
So. |
|
|
|
00:27:42.610 --> 00:27:44.100 |
|
The thing is though that there is still |
|
|
|
00:27:44.100 --> 00:27:45.970 |
|
like there might be many different ways |
|
|
|
00:27:45.970 --> 00:27:48.560 |
|
that I can separate the points, so all |
|
|
|
00:27:48.560 --> 00:27:50.500 |
|
of these will achieve 0 training error. |
|
|
|
00:27:50.500 --> 00:27:53.000 |
|
So the different Classifiers, the |
|
|
|
00:27:53.000 --> 00:27:54.860 |
|
different Linear Classifiers just have |
|
|
|
00:27:54.860 --> 00:27:56.680 |
|
different ways of choosing the line |
|
|
|
00:27:56.680 --> 00:27:58.600 |
|
essentially that make different |
|
|
|
00:27:58.600 --> 00:27:59.200 |
|
assumptions. |
|
|
|
00:28:00.850 --> 00:28:02.360 |
|
The. |
|
|
|
00:28:02.420 --> 00:28:04.450 |
|
Common principles are that you want to |
|
|
|
00:28:04.450 --> 00:28:06.670 |
|
get everything correct if you can, so |
|
|
|
00:28:06.670 --> 00:28:08.295 |
|
it's kind of obvious like ideally you |
|
|
|
00:28:08.295 --> 00:28:10.190 |
|
want to separate the positive from |
|
|
|
00:28:10.190 --> 00:28:11.700 |
|
negative examples with your Linear |
|
|
|
00:28:11.700 --> 00:28:12.210 |
|
classifier. |
|
|
|
00:28:13.030 --> 00:28:14.860 |
|
Or you want the scores to predict the |
|
|
|
00:28:14.860 --> 00:28:15.460 |
|
correct label? |
|
|
|
00:28:17.150 --> 00:28:18.820 |
|
But you also want to have some high |
|
|
|
00:28:18.820 --> 00:28:22.160 |
|
margin, so I would generally prefer |
|
|
|
00:28:22.160 --> 00:28:25.110 |
|
this separating boundary than this one. |
|
|
|
00:28:26.090 --> 00:28:28.465 |
|
Because this one, like everything, has |
|
|
|
00:28:28.465 --> 00:28:30.340 |
|
like at least this distance away from |
|
|
|
00:28:30.340 --> 00:28:32.860 |
|
the line, whereas with this boundary some
|
|
|
00:28:32.860 --> 00:28:34.415 |
|
of the points come pretty close to the |
|
|
|
00:28:34.415 --> 00:28:34.630 |
|
line. |
|
|
|
00:28:35.230 --> 00:28:37.420 |
|
And there's theory that shows that the |
|
|
|
00:28:37.420 --> 00:28:40.340 |
|
bigger your margin for the same like |
|
|
|
00:28:40.340 --> 00:28:41.320 |
|
weight size. |
|
|
|
00:28:41.950 --> 00:28:44.590 |
|
The more likely your classifier is to
|
|
|
00:28:44.590 --> 00:28:45.360 |
|
generalize. |
|
|
|
00:28:45.360 --> 00:28:46.820 |
|
It kind of makes sense if you think of |
|
|
|
00:28:46.820 --> 00:28:48.055 |
|
this as a random sample. |
|
|
|
00:28:48.055 --> 00:28:50.346 |
|
If I were to Generate like more |
|
|
|
00:28:50.346 --> 00:28:52.400 |
|
triangles from the sample, you could |
|
|
|
00:28:52.400 --> 00:28:54.118 |
|
imagine that maybe one of the triangles |
|
|
|
00:28:54.118 --> 00:28:55.595 |
|
would fall on the wrong side of the |
|
|
|
00:28:55.595 --> 00:28:56.690 |
|
line and then this would make a |
|
|
|
00:28:56.690 --> 00:28:58.800 |
|
Classification Error, while that seems |
|
|
|
00:28:58.800 --> 00:29:00.270 |
|
less likely given this line. |
|
|
|
00:29:05.420 --> 00:29:07.760 |
|
So that brings us to Linear Logistic |
|
|
|
00:29:07.760 --> 00:29:08.390 |
|
Regression. |
|
|
|
00:29:09.230 --> 00:29:12.440 |
|
And in Linear Logistic Regression, we |
|
|
|
00:29:12.440 --> 00:29:14.390 |
|
want to maximize the probability of the |
|
|
|
00:29:14.390 --> 00:29:15.560 |
|
labels given the data. |
|
|
|
00:29:17.530 --> 00:29:19.747 |
|
And the probability of the label equals |
|
|
|
00:29:19.747 --> 00:29:21.950 |
|
one given the data is given by this |
|
|
|
00:29:21.950 --> 00:29:24.210 |
|
expression here: 1 / (1 + e to the
|
|
|
00:29:24.210 --> 00:29:25.710 |
|
negative of my Linear model).
|
|
|
00:29:26.730 --> 00:29:29.620 |
|
This function, 1 / (1 + e to the
|
|
|
00:29:29.620 --> 00:29:32.023 |
|
negative whatever), is a Logistic
|
|
|
00:29:32.023 --> 00:29:34.056 |
|
function, that's called the Logistic |
|
|
|
00:29:34.056 --> 00:29:34.449 |
|
function. |
|
|
|
00:29:34.450 --> 00:29:37.132 |
|
So that's why this is Linear
|
|
|
00:29:37.132 --> 00:29:39.270 |
|
Logistic Regression because I've got a |
|
|
|
00:29:39.270 --> 00:29:41.020 |
|
Linear model inside my Logistic |
|
|
|
00:29:41.020 --> 00:29:41.500 |
|
function. |
|
|
|
00:29:42.170 --> 00:29:44.060 |
|
So I'm regressing the Logistic function |
|
|
|
00:29:44.060 --> 00:29:44.900 |
|
with a Linear model. |
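
NOTE
The model just defined, P(y = 1 | x) = 1 / (1 + exp(-(W transpose X + B))),
as a sketch; the numbers in the example call are made up.
    import numpy as np
    def p_y1_given_x(w, b, x):
        z = w @ x + b                    # the linear model (the logit)
        return 1.0 / (1.0 + np.exp(-z))  # the logistic (sigmoid) function
    print(p_y1_given_x(np.array([1.0, -2.0]), 0.5, np.array([0.3, 0.1])))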
|
|
|
00:29:46.860 --> 00:29:48.240 |
|
This is called a logit.
|
|
|
00:29:48.240 --> 00:29:51.270 |
|
So this statement up here the second |
|
|
|
00:29:51.270 --> 00:29:53.410 |
|
line implies that my Linear model. |
|
|
|
00:29:54.200 --> 00:29:56.225 |
|
is fitting what's
|
|
|
00:29:56.225 --> 00:29:59.210 |
|
called the log odds ratio.
|
|
|
00:29:59.210 --> 00:30:01.469 |
|
So it's the log of the odds ratio.
|
|
|
00:30:02.210 --> 00:30:04.673 |
|
It's the log of the probability of y = |
|
|
|
00:30:04.673 --> 00:30:06.962 |
|
1 given X over the probability of y = 0 |
|
|
|
00:30:06.962 --> 00:30:07.450 |
|
given X. |
|
|
|
00:30:08.360 --> 00:30:10.390 |
|
So if this is greater than zero, it |
|
|
|
00:30:10.390 --> 00:30:13.373 |
|
means that probability of y = 1 given X |
|
|
|
00:30:13.373 --> 00:30:16.216 |
|
is more likely than probability of y = |
|
|
|
00:30:16.216 --> 00:30:18.480 |
|
0 given X, and if it's less than zero |
|
|
|
00:30:18.480 --> 00:30:19.590 |
|
then the reverse is true. |
|
|
|
00:30:20.780 --> 00:30:24.042 |
|
This ratio is always over two alternatives, so
|
|
|
00:30:24.042 --> 00:30:24.807 |
|
it's one. |
|
|
|
00:30:24.807 --> 00:30:26.350 |
|
It's either going to be one class or |
|
|
|
00:30:26.350 --> 00:30:27.980 |
|
the other class, and this is the ratio |
|
|
|
00:30:27.980 --> 00:30:29.060 |
|
of those probabilities. |
|
|
|
00:30:34.620 --> 00:30:37.640 |
|
So if we think about Linear Logistic |
|
|
|
00:30:37.640 --> 00:30:39.900 |
|
Regression versus Naive Bayes. |
|
|
|
00:30:41.460 --> 00:30:43.350 |
|
They actually both have this Linear |
|
|
|
00:30:43.350 --> 00:30:45.620 |
|
model, or at least Naive Bayes does for
|
|
|
00:30:45.620 --> 00:30:47.420 |
|
many different probability functions. |
|
|
|
00:30:48.070 --> 00:30:49.810 |
|
For all the probability functions and |
|
|
|
00:30:49.810 --> 00:30:52.710 |
|
exponential family, which includes |
|
|
|
00:30:52.710 --> 00:30:55.000 |
|
Bernoulli, multinomial, Gaussian, |
|
|
|
00:30:55.000 --> 00:30:57.790 |
|
Laplacian, and many others, they're the |
|
|
|
00:30:57.790 --> 00:31:00.010 |
|
favorite probability family of
|
|
|
00:31:00.010 --> 00:31:00.970 |
|
statisticians. |
|
|
|
00:31:02.600 --> 00:31:04.970 |
|
The Naive Bayes predictor is also |
|
|
|
00:31:04.970 --> 00:31:07.610 |
|
Linear in X, but the difference is that |
|
|
|
00:31:07.610 --> 00:31:09.580 |
|
in Logistic Regression you're free to |
|
|
|
00:31:09.580 --> 00:31:11.460 |
|
independently tune these weights in |
|
|
|
00:31:11.460 --> 00:31:14.580 |
|
order to achieve your overall label |
|
|
|
00:31:14.580 --> 00:31:15.250 |
|
likelihood. |
|
|
|
00:31:16.110 --> 00:31:17.835 |
|
While in Naive Bayes you're restricted |
|
|
|
00:31:17.835 --> 00:31:19.650 |
|
to solve for each coefficient |
|
|
|
00:31:19.650 --> 00:31:22.260 |
|
independently in order to maximize the |
|
|
|
00:31:22.260 --> 00:31:24.580 |
|
probability of each feature given the |
|
|
|
00:31:24.580 --> 00:31:24.940 |
|
label. |
|
|
|
00:31:25.980 --> 00:31:27.620 |
|
So for that reason, I would say |
|
|
|
00:31:27.620 --> 00:31:29.430 |
|
Logistic Regression model is typically |
|
|
|
00:31:29.430 --> 00:31:31.060 |
|
more expressive than Naive Bayes.
|
|
|
00:31:31.870 --> 00:31:33.736 |
|
It's possible for your data to be |
|
|
|
00:31:33.736 --> 00:31:35.610 |
|
linearly separable, but Naive Bayes |
|
|
|
00:31:35.610 --> 00:31:37.980 |
|
does not achieve 0 training error while |
|
|
|
00:31:37.980 --> 00:31:39.080 |
|
for Logistic Regression.
|
|
|
00:31:39.080 --> 00:31:40.637 |
|
You could always achieve 0 training |
|
|
|
00:31:40.637 --> 00:31:42.335 |
|
error if your data is linearly |
|
|
|
00:31:42.335 --> 00:31:42.830 |
|
separable. |
|
|
|
00:31:45.160 --> 00:31:47.470 |
|
And then finally, it's important to |
|
|
|
00:31:47.470 --> 00:31:48.930 |
|
note that Logistic Regression is |
|
|
|
00:31:48.930 --> 00:31:50.810 |
|
directly fitting this discriminative |
|
|
|
00:31:50.810 --> 00:31:52.500 |
|
function, so it's mapping from the |
|
|
|
00:31:52.500 --> 00:31:54.826 |
|
features to a label and solving for |
|
|
|
00:31:54.826 --> 00:31:55.339 |
|
that mapping. |
|
|
|
00:31:56.050 --> 00:31:58.364 |
|
While Naive Bayes is trying to model the
|
|
|
00:31:58.364 --> 00:32:00.773 |
|
probability of the features given the |
|
|
|
00:32:00.773 --> 00:32:02.405 |
|
label, so Logistic Regression doesn't
|
|
|
00:32:02.405 --> 00:32:02.840 |
|
model that. |
|
|
|
00:32:02.840 --> 00:32:04.541 |
|
It just cares about the probability of |
|
|
|
00:32:04.541 --> 00:32:06.383 |
|
the label given the data, not the |
|
|
|
00:32:06.383 --> 00:32:07.486 |
|
probability of the data given the |
|
|
|
00:32:07.486 --> 00:32:07.670 |
|
label. |
|
|
|
00:32:09.020 --> 00:32:10.190 |
|
The probability of the features.
|
|
|
00:32:12.600 --> 00:32:13.050 |
|
Question. |
|
|
|
00:32:14.990 --> 00:32:18.900 |
|
So Logistic Regression, sometimes |
|
|
|
00:32:18.900 --> 00:32:20.520 |
|
people will say it's a discriminative |
|
|
|
00:32:20.520 --> 00:32:22.529 |
|
function because you're trying to |
|
|
|
00:32:22.530 --> 00:32:23.980 |
|
discriminate between the different |
|
|
|
00:32:23.980 --> 00:32:25.330 |
|
things you're trying to Predict, |
|
|
|
00:32:25.330 --> 00:32:28.130 |
|
meaning that you're trying to fit the |
|
|
|
00:32:28.130 --> 00:32:29.560 |
|
probability of the thing that you're |
|
|
|
00:32:29.560 --> 00:32:30.190 |
|
trying to Predict. |
|
|
|
00:32:30.860 --> 00:32:33.170 |
|
Given the features or given the data. |
|
|
|
00:32:34.120 --> 00:32:36.870 |
|
Whereas sometimes people say
|
|
|
00:32:36.940 --> 00:32:40.000 |
|
that, like, the Naive Bayes model is a
|
|
|
00:32:40.000 --> 00:32:42.490 |
|
generative model and they mean that |
|
|
|
00:32:42.490 --> 00:32:45.270 |
|
you're trying to fit the probability of |
|
|
|
00:32:45.270 --> 00:32:47.706 |
|
the data or the features given the |
|
|
|
00:32:47.706 --> 00:32:48.100 |
|
label. |
|
|
|
00:32:48.100 --> 00:32:49.719 |
|
So with Naive Bayes you end up with a |
|
|
|
00:32:49.720 --> 00:32:52.008 |
|
joint distribution of all the data and |
|
|
|
00:32:52.008 --> 00:32:52.384 |
|
features. |
|
|
|
00:32:52.384 --> 00:32:54.500 |
|
With Logistic Regression you would just |
|
|
|
00:32:54.500 --> 00:32:56.222 |
|
have the probability of the label given |
|
|
|
00:32:56.222 --> 00:32:56.730 |
|
the features. |
|
|
|
00:33:02.750 --> 00:33:03.200 |
|
So. |
|
|
|
00:33:03.960 --> 00:33:06.140 |
|
With Linear Logistic Regression, the |
|
|
|
00:33:06.140 --> 00:33:07.510 |
|
further you are from the line, the
|
|
|
00:33:07.510 --> 00:33:08.700 |
|
higher the confidence. |
|
|
|
00:33:08.700 --> 00:33:10.875 |
|
So if you're like way over here, then |
|
|
|
00:33:10.875 --> 00:33:11.990 |
|
you're really confident you're a |
|
|
|
00:33:11.990 --> 00:33:12.360 |
|
triangle. |
|
|
|
00:33:12.360 --> 00:33:14.086 |
|
If you're just like right over here, |
|
|
|
00:33:14.086 --> 00:33:15.076 |
|
then you're not very confident. |
|
|
|
00:33:15.076 --> 00:33:16.595 |
|
And if you're right on the line, then |
|
|
|
00:33:16.595 --> 00:33:18.165 |
|
you have equal confidence in triangle |
|
|
|
00:33:18.165 --> 00:33:18.820 |
|
and circle. |
|
|
|
00:33:21.820 --> 00:33:23.626 |
|
So the Logistic Regression algorithm |
|
|
|
00:33:23.626 --> 00:33:25.300 |
|
there's always, as always, there's a |
|
|
|
00:33:25.300 --> 00:33:26.710 |
|
Training and a Prediction phase. |
|
|
|
00:33:27.790 --> 00:33:30.690 |
|
So in Training, you're trying to find |
|
|
|
00:33:30.690 --> 00:33:31.810 |
|
the weights. |
|
|
|
00:33:32.420 --> 00:33:35.450 |
|
That minimize this expression here |
|
|
|
00:33:35.450 --> 00:33:36.635 |
|
which has two parts. |
|
|
|
00:33:36.635 --> 00:33:39.750 |
|
The first part is a negative sum of log |
|
|
|
00:33:39.750 --> 00:33:42.030 |
|
probability of Y given X and the |
|
|
|
00:33:42.030 --> 00:33:42.400 |
|
weights. |
|
|
|
00:33:43.370 --> 00:33:46.160 |
|
So breaking this down. So, the reason
|
|
|
00:33:46.160 --> 00:33:47.022 |
|
for the negative.
|
|
|
00:33:47.022 --> 00:33:49.177 |
|
So this is the negative. |
|
|
|
00:33:49.177 --> 00:33:52.400 |
|
This is the same as. |
|
|
|
00:33:52.470 --> 00:33:57.010 |
|
Maximizing the total probability of the |
|
|
|
00:33:57.010 --> 00:33:58.100 |
|
labels given the data. |
|
|
|
00:34:00.030 --> 00:34:01.670 |
|
The reason for the negative is just so |
|
|
|
00:34:01.670 --> 00:34:03.960 |
|
I can write argmin instead of argmax, |
|
|
|
00:34:03.960 --> 00:34:05.830 |
|
because generally we tend to minimize |
|
|
|
00:34:05.830 --> 00:34:07.320 |
|
things in machine learning, not |
|
|
|
00:34:07.320 --> 00:34:07.960 |
|
maximize them. |
|
|
|
00:34:08.680 --> 00:34:13.630 |
|
But the log is making it so that I turn |
|
|
|
00:34:13.630 --> 00:34:14.220 |
|
my product into a sum. |
|
|
|
00:34:14.220 --> 00:34:15.820 |
|
Normally if I want to model a joint |
|
|
|
00:34:15.820 --> 00:34:18.210 |
|
distribution, I have to take a product |
|
|
|
00:34:18.210 --> 00:34:19.630 |
|
over all the different |
|
|
|
00:34:20.340 --> 00:34:21.760 |
|
likelihood |
|
|
|
00:34:21.760 --> 00:34:22.150 |
|
terms. |
|
|
|
00:34:23.020 --> 00:34:24.570 |
|
But when I take the log of the product, |
|
|
|
00:34:24.570 --> 00:34:25.940 |
|
it becomes the sum of the logs. |
|
|
|
00:34:26.840 --> 00:34:29.360 |
|
And now another thing is that I'm |
|
|
|
00:34:29.360 --> 00:34:31.940 |
|
assuming here that each |
|
|
|
00:34:31.940 --> 00:34:34.419 |
|
label only depends on its own features. |
|
|
|
00:34:34.420 --> 00:34:36.764 |
|
So if I have 1000 data points, then |
|
|
|
00:34:36.764 --> 00:34:38.938 |
|
each of the thousand labels only |
|
|
|
00:34:38.938 --> 00:34:40.483 |
|
depends on the features for its own |
|
|
|
00:34:40.483 --> 00:34:41.677 |
|
data point, it doesn't depend on all |
|
|
|
00:34:41.677 --> 00:34:42.160 |
|
the others. |
|
|
|
00:34:43.610 --> 00:34:45.700 |
|
And then I'm assuming that they all |
|
|
|
00:34:45.700 --> 00:34:47.110 |
|
come from the same distribution. |
|
|
|
00:34:47.110 --> 00:34:50.470 |
|
So I'm assuming IID independent and |
|
|
|
00:34:50.470 --> 00:34:52.520 |
|
identically distributed data, which is |
|
|
|
00:34:52.520 --> 00:34:55.120 |
|
almost always an unspoken |
|
|
|
00:34:55.120 --> 00:34:56.390 |
|
assumption in machine learning. |
|
|
|
00:34:58.540 --> 00:35:00.360 |
|
Alright, so the first term is saying I |
|
|
|
00:35:00.360 --> 00:35:02.370 |
|
want to maximize the likelihood of my |
|
|
|
00:35:02.370 --> 00:35:04.040 |
|
labels given the features over the |
|
|
|
00:35:04.040 --> 00:35:04.610 |
|
Training set. |
|
|
|
00:35:05.220 --> 00:35:06.460 |
|
So that's reasonable. |
|
|
|
00:35:07.200 --> 00:35:08.880 |
|
And then the second term is a |
|
|
|
00:35:08.880 --> 00:35:11.000 |
|
regularization term that says I prefer |
|
|
|
00:35:11.000 --> 00:35:12.246 |
|
some models over others. |
|
|
|
00:35:12.246 --> 00:35:14.280 |
|
I prefer models that have smaller |
|
|
|
00:35:14.280 --> 00:35:16.280 |
|
weights, and I'll get into that a |
|
|
|
00:35:16.280 --> 00:35:17.660 |
|
little bit more in a later slide. |
|
|
|
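NOTE |
|
Editor's aside: a minimal numpy sketch of the Training objective just |
described — the negative sum of log probabilities of the labels plus an L2 |
penalty on the weights — for binary labels y in {0, 1}. The names here |
(sigmoid, objective, lam) are illustrative, not from the slides. |
|
```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objective(w, X, y, lam):
    """Negative log likelihood of binary labels plus an L2 penalty.

    X: (n, d) features, y: (n,) labels in {0, 1}, w: (d,) weights.
    """
    p = sigmoid(X @ w)                 # P(y=1 | x, w) for every example
    nll = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return nll + lam * np.sum(w ** 2)  # second term prefers small weights
```
|
|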
00:35:20.460 --> 00:35:22.170 |
|
So the Prediction is straightforward — |
|
|
|
00:35:22.170 --> 00:35:23.910 |
|
it's just I kind of already went |
|
|
|
00:35:23.910 --> 00:35:24.680 |
|
through it. |
|
|
|
00:35:24.680 --> 00:35:26.360 |
|
Once you have the weights, all you have |
|
|
|
00:35:26.360 --> 00:35:28.160 |
|
to do is multiply your weights by your |
|
|
|
00:35:28.160 --> 00:35:30.330 |
|
features, and that gives you the score. |
|
|
|
00:35:30.330 --> 00:35:31.180 |
|
Question? |
|
|
|
00:35:38.860 --> 00:35:40.590 |
|
Yeah, so I should explain the notation. |
|
|
|
00:35:40.590 --> 00:35:42.090 |
|
There's different ways of denoting |
|
|
|
00:35:42.090 --> 00:35:42.960 |
|
this, so. |
|
|
|
00:35:44.230 --> 00:35:48.050 |
|
Usually when somebody puts a bar, they |
|
|
|
00:35:48.050 --> 00:35:50.680 |
|
mean that it's given some features, |
|
|
|
00:35:50.680 --> 00:35:52.440 |
|
given some data points or whatever. |
|
|
|
00:35:53.130 --> 00:35:55.156 |
|
And then when somebody puts like a |
|
|
|
00:35:55.156 --> 00:35:56.330 |
|
semicolon, or at least when I do it. |
|
|
|
00:35:56.330 --> 00:35:58.450 |
|
But I see this a lot, if somebody puts |
|
|
|
00:35:58.450 --> 00:36:00.660 |
|
like a semicolon here, then they're |
|
|
|
00:36:00.660 --> 00:36:02.580 |
|
saying that these are the parameters. |
|
|
|
00:36:02.580 --> 00:36:04.070 |
|
So what we're saying is that this |
|
|
|
00:36:04.070 --> 00:36:05.380 |
|
probability function |
|
|
|
00:36:06.360 --> 00:36:08.830 |
|
is, like, parameterized by W. |
|
|
|
00:36:09.640 --> 00:36:13.536 |
|
And the input to that function is X and |
|
|
|
00:36:13.536 --> 00:36:15.030 |
|
the output of the function. |
|
|
|
00:36:15.810 --> 00:36:18.590 |
|
Is that probability of Y? |
|
|
|
00:36:23.080 --> 00:36:24.688 |
|
The other way that you can write it |
|
|
|
00:36:24.688 --> 00:36:26.676 |
|
that you see sometimes, and I first had |
|
|
|
00:36:26.676 --> 00:36:28.443 |
|
it this way and then I switched it, is |
|
|
|
00:36:28.443 --> 00:36:30.890 |
|
you might write like a subscript, so it |
|
|
|
00:36:30.890 --> 00:36:33.635 |
|
might be P underscore W. |
|
|
|
00:36:33.635 --> 00:36:35.590 |
|
And part of the reason why you put this |
|
|
|
00:36:35.590 --> 00:36:37.480 |
|
in here is just because otherwise it's |
|
|
|
00:36:37.480 --> 00:36:39.776 |
|
not obvious that this term depends on |
|
|
|
00:36:39.776 --> 00:36:40.480 |
|
W at all. |
|
|
|
00:36:40.480 --> 00:36:43.405 |
|
And if you were like if you looked at |
|
|
|
00:36:43.405 --> 00:36:45.170 |
|
it quickly and you were like trying to |
|
|
|
00:36:45.170 --> 00:36:46.440 |
|
solve, you just be like, I don't care |
|
|
|
00:36:46.440 --> 00:36:47.620 |
|
about that term, I'm just doing |
|
|
|
00:36:47.620 --> 00:36:48.380 |
|
regularization. |
|
|
|
00:36:49.600 --> 00:36:50.260 |
|
Question. |
|
|
|
00:36:57.930 --> 00:37:00.370 |
|
So I forgot to say this out loud. |
|
|
|
00:37:04.110 --> 00:37:06.070 |
|
So, to simplify the notation: |
|
|
|
00:37:06.070 --> 00:37:08.980 |
|
I may omit the b, which can be done |
|
|
|
00:37:08.980 --> 00:37:10.971 |
|
by putting a 1 at the end of the feature |
|
|
|
00:37:10.971 --> 00:37:11.225 |
|
vector. |
|
|
|
00:37:11.225 --> 00:37:12.702 |
|
So basically you can always take your |
|
|
|
00:37:12.702 --> 00:37:14.326 |
|
feature vector and add a one to the end |
|
|
|
00:37:14.326 --> 00:37:16.763 |
|
of all your features and then the B |
|
|
|
00:37:16.763 --> 00:37:19.230 |
|
just becomes one of the W's and so I'm |
|
|
|
00:37:19.230 --> 00:37:20.830 |
|
going to leave out the b a lot of times |
|
|
|
00:37:20.830 --> 00:37:21.950 |
|
because otherwise it just kind of |
|
|
|
00:37:21.950 --> 00:37:23.060 |
|
clutters up the equations. |
|
|
|
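NOTE |
|
Editor's aside: a one-liner illustrating the trick just mentioned — |
appending a 1 to every feature vector so the bias b becomes just another |
weight. The toy values are hypothetical. |
|
```python
import numpy as np

X = np.array([[2.0, 3.0],
              [1.0, 5.0]])                     # (n, d) features
X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # append a 1 to each row
# Now the last weight plays the role of b:  X1 @ w == X @ w[:-1] + w[-1]
```
|
|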
00:37:27.540 --> 00:37:28.080 |
|
Thanks for |
|
|
|
00:37:28.970 --> 00:37:30.430 |
|
pointing that out, though. |
|
|
|
00:37:32.040 --> 00:37:34.090 |
|
Alright, so as I said before, one |
|
|
|
00:37:34.090 --> 00:37:34.390 |
|
second. |
|
|
|
00:37:34.390 --> 00:37:36.430 |
|
As I said before, this is the |
|
|
|
00:37:36.430 --> 00:37:38.370 |
|
probability function that Logistic |
|
|
|
00:37:38.370 --> 00:37:39.390 |
|
Regression assumes. |
|
|
|
00:37:39.390 --> 00:37:41.691 |
|
If I multiply the top and the bottom by |
|
|
|
00:37:41.691 --> 00:37:44.115 |
|
e to the w transpose x, then it's |
|
|
|
00:37:44.115 --> 00:37:46.478 |
|
this, because e to the w transpose |
|
|
|
00:37:46.478 --> 00:37:47.630 |
|
x times that is 1. |
|
|
|
00:37:48.540 --> 00:37:50.370 |
|
And then this generalizes. |
|
|
|
00:37:50.370 --> 00:37:53.020 |
|
If I have multiple classes, then I |
|
|
|
00:37:53.020 --> 00:37:54.740 |
|
would have a different weight vector |
|
|
|
00:37:54.740 --> 00:37:55.640 |
|
for each class. |
|
|
|
00:37:55.640 --> 00:37:57.435 |
|
So this is summing over all the classes |
|
|
|
00:37:57.435 --> 00:37:59.545 |
|
and the final probability is given by |
|
|
|
00:37:59.545 --> 00:38:02.120 |
|
this expression, so it's e to the |
|
|
|
00:38:02.120 --> 00:38:02.980 |
|
Linear model. |
|
|
|
00:38:04.170 --> 00:38:06.028 |
|
Divided by the sum of e to all the |
|
|
|
00:38:06.028 --> 00:38:06.830 |
|
other Linear models. |
|
|
|
00:38:06.830 --> 00:38:08.780 |
|
So it's basically your score for one |
|
|
|
00:38:08.780 --> 00:38:10.646 |
|
model, divided by the sum of |
|
|
|
00:38:10.646 --> 00:38:12.513 |
|
the scores for all the |
|
|
|
00:38:12.513 --> 00:38:12.979 |
|
models. |
|
|
|
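NOTE |
|
Editor's aside: a sketch of the multiclass probability just described, with |
one weight vector per class. The max-subtraction line is a standard |
numerical-stability trick, not something from the slide. |
|
```python
import numpy as np

def softmax_probs(W, x):
    """P(y = k | x) = exp(w_k^T x) / sum_j exp(w_j^T x).

    W: (K, d), one weight vector per class; x: (d,) features.
    """
    scores = W @ x
    scores = scores - scores.max()  # stability trick; ratios are unchanged
    e = np.exp(scores)
    return e / e.sum()
```
|
|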
00:38:14.140 --> 00:38:15.060 |
|
Was there a question? |
|
|
|
00:38:15.060 --> 00:38:16.859 |
|
I thought somebody had a question, |
|
|
|
00:38:16.860 --> 00:38:17.010 |
|
yeah. |
|
|
|
00:38:25.670 --> 00:38:26.490 |
|
Yeah, good question. |
|
|
|
00:38:26.490 --> 00:38:28.010 |
|
It's just the log of the probability. |
|
|
|
00:38:28.820 --> 00:38:31.700 |
|
And the sum over N is just the |
|
|
|
00:38:31.700 --> 00:38:33.690 |
|
probability term, it's not summing |
|
|
|
00:38:33.690 --> 00:38:36.080 |
|
over, it's not the regularization times |
|
|
|
00:38:36.080 --> 00:38:36.370 |
|
N. |
|
|
|
00:38:39.350 --> 00:38:39.700 |
|
Question. |
|
|
|
00:38:46.280 --> 00:38:50.170 |
|
If you're doing back prop, it depends |
|
|
|
00:38:50.170 --> 00:38:51.770 |
|
on your activation functions, so. |
|
|
|
00:38:52.600 --> 00:38:55.500 |
|
We will get into neural networks, but |
|
|
|
00:38:55.500 --> 00:38:59.120 |
|
say at the end |
|
|
|
00:38:59.120 --> 00:39:01.250 |
|
you have a Linear Logistic regressor. |
|
|
|
00:39:01.880 --> 00:39:03.580 |
|
Then you would basically calculate the |
|
|
|
00:39:03.580 --> 00:39:06.170 |
|
error due to your predictions in the |
|
|
|
00:39:06.170 --> 00:39:08.170 |
|
last layer and then you would like |
|
|
|
00:39:08.170 --> 00:39:10.234 |
|
accumulate those into the previous |
|
|
|
00:39:10.234 --> 00:39:11.684 |
|
features, and so on back through |
|
|
|
00:39:11.684 --> 00:39:12.409 |
|
the earlier layers. |
|
|
|
00:39:13.980 --> 00:39:15.900 |
|
But sometimes people use like ReLU or |
|
|
|
00:39:15.900 --> 00:39:17.580 |
|
other activation functions, so then it |
|
|
|
00:39:17.580 --> 00:39:18.100 |
|
would be different. |
|
|
|
00:39:22.890 --> 00:39:24.900 |
|
So how do we train this thing? |
|
|
|
00:39:24.900 --> 00:39:26.210 |
|
How do we optimize W? |
|
|
|
00:39:27.330 --> 00:39:28.880 |
|
First, I want to explain the |
|
|
|
00:39:28.880 --> 00:39:29.790 |
|
regularization term. |
|
|
|
00:39:30.510 --> 00:39:31.710 |
|
There's two main kinds of |
|
|
|
00:39:31.710 --> 00:39:32.610 |
|
regularization. |
|
|
|
00:39:32.610 --> 00:39:35.740 |
|
There's L2 regularization and L1 |
|
|
|
00:39:35.740 --> 00:39:36.420 |
|
regularization. |
|
|
|
00:39:37.080 --> 00:39:39.280 |
|
So L2 regularization is that you're |
|
|
|
00:39:39.280 --> 00:39:41.756 |
|
minimizing the sum of the square values |
|
|
|
00:39:41.756 --> 00:39:42.680 |
|
of the weights. |
|
|
|
00:39:43.330 --> 00:39:45.908 |
|
I can write that as an L2 norm squared. |
|
|
|
00:39:45.908 --> 00:39:48.985 |
|
That double bar thing means, like, |
|
|
|
00:39:48.985 --> 00:39:52.635 |
|
norm and the two under it means it's an |
|
|
|
00:39:52.635 --> 00:39:55.132 |
|
L2 and the two above it means it's |
|
|
|
00:39:55.132 --> 00:39:55.340 |
|
squared. |
|
|
|
00:39:56.380 --> 00:39:58.500 |
|
Or I can do L1 |
|
|
|
00:39:58.500 --> 00:40:00.210 |
|
regularization, which is a sum of the |
|
|
|
00:40:00.210 --> 00:40:01.660 |
|
absolute values of the weights. |
|
|
|
00:40:02.920 --> 00:40:03.570 |
|
And. |
|
|
|
00:40:05.220 --> 00:40:07.540 |
|
And I can write that as the norm like |
|
|
|
00:40:07.540 --> 00:40:08.210 |
|
subscript 1. |
|
|
|
00:40:09.350 --> 00:40:11.700 |
|
And then those are weighted by some |
|
|
|
00:40:11.700 --> 00:40:13.670 |
|
Lambda which is a parameter that has to |
|
|
|
00:40:13.670 --> 00:40:15.910 |
|
be set by the algorithm designer. |
|
|
|
00:40:17.180 --> 00:40:20.100 |
|
Or based on some data like validation |
|
|
|
00:40:20.100 --> 00:40:20.710 |
|
optimization. |
|
|
|
00:40:21.820 --> 00:40:23.910 |
|
So these may look really similar |
|
|
|
00:40:23.910 --> 00:40:25.650 |
|
— squared versus absolute value. |
|
|
|
00:40:25.650 --> 00:40:28.140 |
|
What's the difference? As W goes higher, |
|
|
|
00:40:28.140 --> 00:40:30.580 |
|
it means that you get a bigger penalty |
|
|
|
00:40:30.580 --> 00:40:31.180 |
|
in either case. |
|
|
|
00:40:31.890 --> 00:40:33.420 |
|
But they behave actually like quite |
|
|
|
00:40:33.420 --> 00:40:33.960 |
|
differently. |
|
|
|
00:40:34.830 --> 00:40:37.710 |
|
So if you look at this plot of L2 |
|
|
|
00:40:37.710 --> 00:40:39.990 |
|
versus L1, when the weight is 0, |
|
|
|
00:40:39.990 --> 00:40:40.822 |
|
there's no penalty. |
|
|
|
00:40:40.822 --> 00:40:43.090 |
|
When the weight is 1, the penalties are |
|
|
|
00:40:43.090 --> 00:40:43.700 |
|
equal. |
|
|
|
00:40:43.700 --> 00:40:45.760 |
|
When the weight is less than one, then |
|
|
|
00:40:45.760 --> 00:40:48.207 |
|
the L2 penalty is smaller than the L1 |
|
|
|
00:40:48.207 --> 00:40:48.490 |
|
penalty. |
|
|
|
00:40:48.490 --> 00:40:50.080 |
|
It has this like little basin where |
|
|
|
00:40:50.080 --> 00:40:51.820 |
|
basically the penalty is almost 0. |
|
|
|
00:40:52.760 --> 00:40:54.880 |
|
But when the weight gets far from |
|
|
|
00:40:54.880 --> 00:40:56.960 |
|
one, the L2 penalty shoots up. |
|
|
|
00:40:57.870 --> 00:41:00.820 |
|
So L2 regularization hates really |
|
|
|
00:41:00.820 --> 00:41:03.060 |
|
large weights, but it's perfectly |
|
|
|
00:41:03.060 --> 00:41:05.030 |
|
fine with like lots of tiny little |
|
|
|
00:41:05.030 --> 00:41:05.360 |
|
weights. |
|
|
|
00:41:06.560 --> 00:41:08.490 |
|
L1 regularization doesn't like any |
|
|
|
00:41:08.490 --> 00:41:10.600 |
|
weights, but it kind of doesn't like |
|
|
|
00:41:10.600 --> 00:41:11.760 |
|
them all roughly equally. |
|
|
|
00:41:11.760 --> 00:41:14.170 |
|
So it doesn't like weights of three, |
|
|
|
00:41:14.170 --> 00:41:16.699 |
|
but it doesn't |
|
|
|
00:41:16.700 --> 00:41:18.250 |
|
dislike them as much as L2 does. |
|
|
|
00:41:19.130 --> 00:41:21.410 |
|
It also doesn't like even a weight of 1: |
|
|
|
00:41:21.410 --> 00:41:23.150 |
|
it's going to try just as hard to push |
|
|
|
00:41:23.150 --> 00:41:24.722 |
|
that down as it does to push a weight |
|
|
|
00:41:24.722 --> 00:41:25.200 |
|
of three. |
|
|
|
00:41:27.020 --> 00:41:28.990 |
|
So when |
|
|
|
00:41:28.990 --> 00:41:30.870 |
|
you think about optimization, you |
|
|
|
00:41:30.870 --> 00:41:32.099 |
|
always want to think about the |
|
|
|
00:41:32.100 --> 00:41:35.010 |
|
derivative as well as the |
|
|
|
00:41:35.390 --> 00:41:37.510 |
|
pure function, because you're |
|
|
|
00:41:37.510 --> 00:41:38.830 |
|
always Minimizing, you're always |
|
|
|
00:41:38.830 --> 00:41:40.310 |
|
setting a derivative equal to 0, and |
|
|
|
00:41:40.310 --> 00:41:42.100 |
|
the derivative is what is like guiding |
|
|
|
00:41:42.100 --> 00:41:45.400 |
|
your function optimization towards some |
|
|
|
00:41:45.400 --> 00:41:46.270 |
|
optimal value. |
|
|
|
00:41:47.590 --> 00:41:49.040 |
|
So if you're doing — |
|
|
|
00:41:49.150 --> 00:41:49.800 |
|
|
|
|
|
00:41:51.230 --> 00:41:52.550 |
|
if you're doing L2 |
|
|
|
00:41:54.530 --> 00:41:56.360 |
|
minimization, |
|
|
|
00:41:57.120 --> 00:41:59.965 |
|
and I plot the derivative, then the |
|
|
|
00:41:59.965 --> 00:42:01.890 |
|
derivative is just going to be Linear, |
|
|
|
00:42:01.890 --> 00:42:02.780 |
|
right? |
|
|
|
00:42:02.780 --> 00:42:03.950 |
|
It's going to be |
|
|
|
00:42:04.820 --> 00:42:06.510 |
|
2 times — |
|
|
|
00:42:06.590 --> 00:42:07.140 |
|
|
|
|
|
00:42:07.990 --> 00:42:10.420 |
|
it's going to be lambda times 2 w_i, and |
|
|
|
00:42:10.420 --> 00:42:12.110 |
|
sometimes people put a 1/2 in front of |
|
|
|
00:42:12.110 --> 00:42:13.800 |
|
Lambda just so that the two and the 1/2 |
|
|
|
00:42:13.800 --> 00:42:14.850 |
|
cancel out. |
|
|
|
00:42:16.560 --> 00:42:17.850 |
|
I don't feel like it's necessary. |
|
|
|
00:42:17.850 --> 00:42:21.350 |
|
If you do L1, then the derivatives |
|
|
|
00:42:21.350 --> 00:42:26.830 |
|
are −1 if it's greater than zero, and |
|
|
|
00:42:26.830 --> 00:42:29.310 |
|
positive 1 if it's less than 0. |
|
|
|
00:42:30.270 --> 00:42:33.200 |
|
So basically, if it's L1 minimization, |
|
|
|
00:42:33.200 --> 00:42:35.570 |
|
the regularization is forcing |
|
|
|
00:42:35.570 --> 00:42:38.080 |
|
things in towards zero with equal |
|
|
|
00:42:38.080 --> 00:42:39.600 |
|
pressure no matter where it is. |
|
|
|
00:42:40.240 --> 00:42:42.815 |
|
Whereas with L2 minimization, if you |
|
|
|
00:42:42.815 --> 00:42:44.503 |
|
have a high value then it's like |
|
|
|
00:42:44.503 --> 00:42:46.830 |
|
forcing it down, like really hard, and |
|
|
|
00:42:46.830 --> 00:42:48.839 |
|
if you have a low value then it's |
|
|
|
00:42:48.840 --> 00:42:50.190 |
|
not forcing it very hard at all. |
|
|
|
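NOTE |
|
Editor's aside: a small numpy illustration of the contrast just drawn — the |
L2 penalty's gradient scales with the weight, while the L1 penalty's |
gradient is a constant plus-or-minus lambda. Values are illustrative. |
|
```python
import numpy as np

w = np.array([-3.0, -0.1, 0.5, 3.0])
lam = 0.5

grad_l2 = lam * 2 * w        # d/dw of lam * w^2: scales with the weight,
                             # so big weights get pushed down really hard
grad_l1 = lam * np.sign(w)   # d/dw of lam * |w|: constant +/- lam pressure
                             # toward zero, no matter how small the weight
```
|
|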
00:42:50.900 --> 00:42:52.500 |
|
And the regularization is always |
|
|
|
00:42:52.500 --> 00:42:53.960 |
|
struggling against the other term. |
|
|
|
00:42:53.960 --> 00:42:55.640 |
|
These are like counterbalancing terms. |
|
|
|
00:42:56.510 --> 00:42:58.000 |
|
So the regularization is trying to say |
|
|
|
00:42:58.000 --> 00:42:58.790 |
|
your weights are small. |
|
|
|
00:42:59.580 --> 00:43:02.400 |
|
But the log likelihood term is |
|
|
|
00:43:02.400 --> 00:43:04.750 |
|
trying to do whatever it can to solve |
|
|
|
00:43:04.750 --> 00:43:07.710 |
|
that likelihood Prediction and so |
|
|
|
00:43:07.710 --> 00:43:10.410 |
|
sometimes they are at |
|
|
|
00:43:10.410 --> 00:43:11.080 |
|
odds with each other. |
|
|
|
00:43:12.530 --> 00:43:14.700 |
|
Alright, so based on that, can anyone |
|
|
|
00:43:14.700 --> 00:43:18.540 |
|
explain why it is that L1 tends to |
|
|
|
00:43:18.540 --> 00:43:20.140 |
|
lead to sparse weights, meaning that |
|
|
|
00:43:20.140 --> 00:43:21.890 |
|
you get a lot of 0 values for your |
|
|
|
00:43:21.890 --> 00:43:22.250 |
|
weights? |
|
|
|
00:43:25.980 --> 00:43:26.140 |
|
Yeah. |
|
|
|
00:43:47.140 --> 00:43:48.630 |
|
Yeah, that's right. |
|
|
|
00:43:48.630 --> 00:43:49.556 |
|
So: |
|
|
|
00:43:49.556 --> 00:43:52.030 |
|
the answer was that L1 prefers |
|
|
|
00:43:52.030 --> 00:43:53.984 |
|
like a small number of features that |
|
|
|
00:43:53.984 --> 00:43:56.300 |
|
have a lot of weight that have a lot of |
|
|
|
00:43:56.300 --> 00:43:57.970 |
|
representational value or predictive |
|
|
|
00:43:57.970 --> 00:43:58.370 |
|
value. |
|
|
|
00:43:59.140 --> 00:44:01.370 |
|
Whereas L2 really wants everything |
|
|
|
00:44:01.370 --> 00:44:02.700 |
|
to have a little bit of predictive |
|
|
|
00:44:02.700 --> 00:44:03.140 |
|
value. |
|
|
|
00:44:03.770 --> 00:44:05.970 |
|
And you can see that by looking at the |
|
|
|
00:44:05.970 --> 00:44:07.740 |
|
derivatives or just by thinking about |
|
|
|
00:44:07.740 --> 00:44:08.500 |
|
this function. |
|
|
|
00:44:09.140 --> 00:44:12.380 |
|
That L1 just continually forces |
|
|
|
00:44:12.380 --> 00:44:14.335 |
|
everything down until it hits exactly |
|
|
|
00:44:14.335 --> 00:44:16.970 |
|
0, and while there's not necessarily a |
|
|
|
00:44:16.970 --> 00:44:19.380 |
|
big penalty for some weight, so if you |
|
|
|
00:44:19.380 --> 00:44:20.730 |
|
have a few features that are really |
|
|
|
00:44:20.730 --> 00:44:22.558 |
|
predictive, it's going to allow those |
|
|
|
00:44:22.558 --> 00:44:24.040 |
|
features to have a lot of weight, |
|
|
|
00:44:24.040 --> 00:44:26.314 |
|
while if the other features are not |
|
|
|
00:44:26.314 --> 00:44:27.579 |
|
predictive, given those few features, |
|
|
|
00:44:27.579 --> 00:44:29.450 |
|
it's going to force them down to 0. |
|
|
|
00:44:30.760 --> 00:44:33.132 |
|
With L2, if you |
|
|
|
00:44:33.132 --> 00:44:34.440 |
|
have some features that are really |
|
|
|
00:44:34.440 --> 00:44:35.870 |
|
predictive and others that are less |
|
|
|
00:44:35.870 --> 00:44:38.040 |
|
predictive, it's still going to want |
|
|
|
00:44:38.040 --> 00:44:40.260 |
|
those very predictive features to have |
|
|
|
00:44:40.260 --> 00:44:41.790 |
|
like a bit smaller weight. |
|
|
|
00:44:42.440 --> 00:44:44.520 |
|
And it's going to like try to make that |
|
|
|
00:44:44.520 --> 00:44:46.530 |
|
up by having the other features |
|
|
|
00:44:46.530 --> 00:44:47.810 |
|
have just like a little bit of weight |
|
|
|
00:44:47.810 --> 00:44:48.430 |
|
as well. |
|
|
|
00:44:54.130 --> 00:44:56.360 |
|
So in consequence, we can use L1 |
|
|
|
00:44:56.360 --> 00:44:58.340 |
|
regularization to select the best |
|
|
|
00:44:58.340 --> 00:45:01.260 |
|
features, if we have a bunch |
|
|
|
00:45:01.260 --> 00:45:01.880 |
|
of features. |
|
|
|
00:45:02.750 --> 00:45:04.610 |
|
And we want to instead have a model |
|
|
|
00:45:04.610 --> 00:45:05.890 |
|
that's based on a smaller number of |
|
|
|
00:45:05.890 --> 00:45:07.080 |
|
features. |
|
|
|
00:45:07.080 --> 00:45:09.950 |
|
You can solve for L1 Logistic |
|
|
|
00:45:09.950 --> 00:45:11.790 |
|
Regression or L1 Linear Regression. |
|
|
|
00:45:12.400 --> 00:45:14.160 |
|
And then choose the features that are |
|
|
|
00:45:14.160 --> 00:45:17.000 |
|
non zero or greater than some epsilon |
|
|
|
00:45:17.000 --> 00:45:20.470 |
|
and then just use those for your model. |
|
|
|
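NOTE |
|
Editor's aside: one plausible way to do this feature selection with |
scikit-learn (note that its C is the inverse of the lambda used in |
lecture); the epsilon threshold and the toy data are illustrative. |
|
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=30,
                           n_informative=5, random_state=0)

# L1-penalized fit; scikit-learn's C plays the role of 1 / lambda.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)

keep = np.abs(clf.coef_).ravel() > 1e-6   # features with non-negligible weight
X_small = X[:, keep]
print("kept", keep.sum(), "of", X.shape[1], "features")
```
|
|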
00:45:22.810 --> 00:45:24.840 |
|
OK, I will answer this question for you |
|
|
|
00:45:24.840 --> 00:45:26.430 |
|
to save a little bit of time. |
|
|
|
00:45:27.540 --> 00:45:29.500 |
|
When is regularization absolutely |
|
|
|
00:45:29.500 --> 00:45:30.110 |
|
essential? |
|
|
|
00:45:30.110 --> 00:45:31.450 |
|
It's if your data is linearly |
|
|
|
00:45:31.450 --> 00:45:31.970 |
|
separable. |
|
|
|
00:45:33.390 --> 00:45:35.190 |
|
Because if your data is linearly |
|
|
|
00:45:35.190 --> 00:45:37.445 |
|
separable, then you could just boost |
|
|
|
00:45:37.445 --> 00:45:38.820 |
|
your weights to |
|
|
|
00:45:38.820 --> 00:45:41.083 |
|
Infinity and keep on separating it more |
|
|
|
00:45:41.083 --> 00:45:41.789 |
|
and more and more. |
|
|
|
00:45:42.530 --> 00:45:45.360 |
|
So if you have, like, |
|
|
|
00:45:46.270 --> 00:45:49.600 |
|
two feature points here and |
|
|
|
00:45:49.600 --> 00:45:50.020 |
|
here. |
|
|
|
00:45:50.970 --> 00:45:54.160 |
|
Then you create this line. |
|
|
|
00:45:55.260 --> 00:45:56.030 |
|
WX. |
|
|
|
00:45:56.690 --> 00:45:59.088 |
|
If it's just one-dimensional and like |
|
|
|
00:45:59.088 --> 00:46:02.220 |
|
if W is equal to 1, then maybe I have a |
|
|
|
00:46:02.220 --> 00:46:04.900 |
|
score of 1 or −1 for each of these. |
|
|
|
00:46:04.900 --> 00:46:08.215 |
|
But if W equals like 10,000, now my |
|
|
|
00:46:08.215 --> 00:46:09.985 |
|
score is 10,000 and −10,000. |
|
|
|
00:46:09.985 --> 00:46:11.355 |
|
So that's like even better, they're |
|
|
|
00:46:11.355 --> 00:46:13.494 |
|
even further from zero and so there's |
|
|
|
00:46:13.494 --> 00:46:15.130 |
|
no end to it. |
|
|
|
00:46:15.130 --> 00:46:17.090 |
|
Your W would just go totally out of |
|
|
|
00:46:17.090 --> 00:46:19.420 |
|
control and you would get an error |
|
|
|
00:46:19.420 --> 00:46:21.500 |
|
probably saying that your |
|
|
|
00:46:21.500 --> 00:46:22.830 |
|
optimization didn't converge. |
|
|
|
00:46:23.730 --> 00:46:26.020 |
|
So you pretty much always want some |
|
|
|
00:46:26.020 --> 00:46:27.610 |
|
kind of regularization weight, even if |
|
|
|
00:46:27.610 --> 00:46:31.940 |
|
it's really small, to avoid this case |
|
|
|
00:46:31.940 --> 00:46:34.760 |
|
where you don't have a unique solution |
|
|
|
00:46:34.760 --> 00:46:35.990 |
|
to the optimization problem. |
|
|
|
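NOTE |
|
Editor's aside: a two-point demonstration of the blow-up just described — |
with separable data and no regularization, the loss keeps falling as the |
weight grows, so there is no finite optimum. A minimal sketch. |
|
```python
import numpy as np

x = np.array([-1.0, 1.0])     # two separable 1-D feature points
y = np.array([-1.0, 1.0])     # labels written as -1 / +1

def loss(w):
    # logistic loss sum_i log(1 + exp(-y_i * w * x_i)), computed stably
    return np.sum(np.logaddexp(0.0, -y * w * x))

for w in [1.0, 10.0, 100.0, 1000.0]:
    print(w, loss(w))          # keeps shrinking toward 0: no finite optimum
```
|
|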
00:46:39.580 --> 00:46:41.240 |
|
There's a lot of different ways to |
|
|
|
00:46:41.240 --> 00:46:43.890 |
|
optimize this and it's not that simple. |
|
|
|
00:46:43.890 --> 00:46:47.440 |
|
So you can do various like gradient |
|
|
|
00:46:47.440 --> 00:46:50.650 |
|
descents or things based on 2nd order |
|
|
|
00:46:50.650 --> 00:46:54.868 |
|
terms, or lasso Regression for L1 or |
|
|
|
00:46:54.868 --> 00:46:57.110 |
|
lasso optimization. |
|
|
|
00:46:57.110 --> 00:46:59.319 |
|
So there's a lot of different |
|
|
|
00:46:59.320 --> 00:46:59.850 |
|
optimizers. |
|
|
|
00:46:59.850 --> 00:47:01.540 |
|
I linked to this paper by Tom Minka |
|
|
|
00:47:01.540 --> 00:47:03.490 |
|
that like explains like several |
|
|
|
00:47:03.490 --> 00:47:05.290 |
|
different choices and their tradeoffs. |
|
|
|
00:47:06.390 --> 00:47:07.760 |
|
At the end of the day, you're going to |
|
|
|
00:47:07.760 --> 00:47:10.399 |
|
use a library, and so it's not really |
|
|
|
00:47:10.400 --> 00:47:12.177 |
|
worth coding this yourself, because it's a |
|
|
|
00:47:12.177 --> 00:47:13.703 |
|
well-explored problem and you're not |
|
|
|
00:47:13.703 --> 00:47:15.040 |
|
going to make something better than |
|
|
|
00:47:15.040 --> 00:47:15.840 |
|
somebody else did. |
|
|
|
00:47:17.110 --> 00:47:19.000 |
|
So you want to use the library. |
|
|
|
00:47:19.000 --> 00:47:20.810 |
|
It's worth |
|
|
|
00:47:20.810 --> 00:47:21.830 |
|
understanding the different |
|
|
|
00:47:21.830 --> 00:47:25.540 |
|
optimization options a little bit, but |
|
|
|
00:47:25.540 --> 00:47:26.800 |
|
I'm not going to talk about it. |
|
|
|
00:47:30.030 --> 00:47:30.390 |
|
All right. |
|
|
|
00:47:31.040 --> 00:47:31.550 |
|
So. |
|
|
|
00:47:33.150 --> 00:47:35.760 |
|
Here I did an example where I visualize |
|
|
|
00:47:35.760 --> 00:47:38.006 |
|
the weights that are learned using L2 |
|
|
|
00:47:38.006 --> 00:47:39.850 |
|
regularization and L1 regularization |
|
|
|
00:47:39.850 --> 00:47:41.050 |
|
for some digits. |
|
|
|
00:47:41.050 --> 00:47:42.820 |
|
So these are the average Pixels of |
|
|
|
00:47:42.820 --> 00:47:43.940 |
|
digits zero to 4. |
|
|
|
00:47:44.810 --> 00:47:47.308 |
|
These are the L2 weights, and you can |
|
|
|
00:47:47.308 --> 00:47:49.340 |
|
see like you can sort of see the |
|
|
|
00:47:49.340 --> 00:47:51.125 |
|
numbers in it a little bit like you can |
|
|
|
00:47:51.125 --> 00:47:52.820 |
|
sort of see the three in these weights |
|
|
|
00:47:52.820 --> 00:47:53.020 |
|
here. |
|
|
|
00:47:53.730 --> 00:47:56.437 |
|
And the zero, it wants these weights to |
|
|
|
00:47:56.437 --> 00:47:58.428 |
|
be white, and it wants these weights to |
|
|
|
00:47:58.428 --> 00:47:59.030 |
|
be dark. |
|
|
|
00:47:59.690 --> 00:48:01.320 |
|
I mean these features to be dark, |
|
|
|
00:48:01.320 --> 00:48:03.262 |
|
meaning that if you have a lit pixel |
|
|
|
00:48:03.262 --> 00:48:05.099 |
|
here, it's less likely to be a 0. |
|
|
|
00:48:05.099 --> 00:48:07.100 |
|
If you have a lit pixel here, it's more |
|
|
|
00:48:07.100 --> 00:48:08.390 |
|
likely to be a 0. |
|
|
|
00:48:10.300 --> 00:48:13.390 |
|
But for the L1, it's a lot sparser, |
|
|
|
00:48:13.390 --> 00:48:15.590 |
|
so if it's like that blank Gray color, |
|
|
|
00:48:15.590 --> 00:48:17.060 |
|
it means that the weights are zero. |
|
|
|
00:48:18.220 --> 00:48:19.402 |
|
And if it's brighter or darker? |
|
|
|
00:48:19.402 --> 00:48:20.670 |
|
If it's brighter, it means that the |
|
|
|
00:48:20.670 --> 00:48:21.550 |
|
weight is positive. |
|
|
|
00:48:22.260 --> 00:48:26.480 |
|
If it's darker than this uniform Gray, |
|
|
|
00:48:26.480 --> 00:48:27.960 |
|
it means the weight is negative. |
|
|
|
00:48:27.960 --> 00:48:30.430 |
|
So you can see that for L1, it's |
|
|
|
00:48:30.430 --> 00:48:32.952 |
|
going to have like some subset of the |
|
|
|
00:48:32.952 --> 00:48:35.123 |
|
features that are going to get all the |
|
|
|
00:48:35.123 --> 00:48:36.900 |
|
weight, and most of the weights are |
|
|
|
00:48:36.900 --> 00:48:38.069 |
|
very close to 0. |
|
|
|
00:48:40.120 --> 00:48:42.000 |
|
So for L1, it's only going to look at |
|
|
|
00:48:42.000 --> 00:48:44.026 |
|
this small |
|
|
|
00:48:44.026 --> 00:48:45.990 |
|
number of pixels, and if any of these |
|
|
|
00:48:45.990 --> 00:48:46.640 |
|
guys are |
|
|
|
00:48:47.400 --> 00:48:49.010 |
|
lit, |
|
|
|
00:48:49.070 --> 00:48:51.130 |
|
then it's going to get a big |
|
|
|
00:48:51.130 --> 00:48:52.500 |
|
penalty to being a 0. |
|
|
|
00:48:53.150 --> 00:48:55.560 |
|
If any of these guys are lit, it gets a big |
|
|
|
00:48:55.560 --> 00:48:56.939 |
|
boost to being a 0. |
|
|
|
00:48:59.420 --> 00:48:59.780 |
|
Question. |
|
|
|
00:49:36.370 --> 00:49:38.230 |
|
OK, let me explain a little bit more |
|
|
|
00:49:38.230 --> 00:49:38.730 |
|
how I get this. |
|
|
|
00:49:39.410 --> 00:49:42.470 |
|
First — so this up here is just |
|
|
|
00:49:42.470 --> 00:49:45.510 |
|
simply averaging all the images in a |
|
|
|
00:49:45.510 --> 00:49:46.370 |
|
particular class. |
|
|
|
00:49:47.210 --> 00:49:49.550 |
|
And then I train 2 Logistic Regression |
|
|
|
00:49:49.550 --> 00:49:50.240 |
|
models. |
|
|
|
00:49:50.240 --> 00:49:52.780 |
|
One is trained using the same data that |
|
|
|
00:49:52.780 --> 00:49:55.096 |
|
was used to Average, to maximize |
|
|
|
00:49:55.096 --> 00:49:57.480 |
|
the probability |
|
|
|
00:49:57.480 --> 00:49:59.670 |
|
of the labels given the data but under |
|
|
|
00:49:59.670 --> 00:50:02.290 |
|
the L2 regularization penalty. |
|
|
|
00:50:03.040 --> 00:50:05.090 |
|
And the other was trained to maximize |
|
|
|
00:50:05.090 --> 00:50:06.320 |
|
the probability of the labels given |
|
|
|
00:50:06.320 --> 00:50:08.450 |
|
the data under the L1 regularization |
|
|
|
00:50:08.450 --> 00:50:08.920 |
|
penalty. |
|
|
|
00:50:10.410 --> 00:50:12.355 |
|
Now, once you have these |
|
|
|
00:50:12.355 --> 00:50:12.630 |
|
weights — |
|
|
|
00:50:12.630 --> 00:50:14.512 |
|
So these weights are the W's. |
|
|
|
00:50:14.512 --> 00:50:16.750 |
|
These are the coefficients that were |
|
|
|
00:50:16.750 --> 00:50:19.220 |
|
learned as part of your Linear |
|
|
|
00:50:19.220 --> 00:50:19.560 |
|
model. |
|
|
|
00:50:20.460 --> 00:50:22.310 |
|
In order to apply these weights to do |
|
|
|
00:50:22.310 --> 00:50:23.320 |
|
Classification. |
|
|
|
00:50:24.010 --> 00:50:26.000 |
|
You would multiply each of these |
|
|
|
00:50:26.000 --> 00:50:27.760 |
|
weights with the corresponding pixel. |
|
|
|
00:50:28.490 --> 00:50:31.280 |
|
So given a new test sample, you would |
|
|
|
00:50:31.280 --> 00:50:34.510 |
|
take the sum over all the pixels of the |
|
|
|
00:50:34.510 --> 00:50:36.900 |
|
pixel value times this weight. |
|
|
|
00:50:37.720 --> 00:50:40.257 |
|
So if the weight here is bright, it means |
|
|
|
00:50:40.257 --> 00:50:41.755 |
|
that if the pixel value is bright, then |
|
|
|
00:50:41.755 --> 00:50:43.170 |
|
the score is going to go up. |
|
|
|
00:50:43.170 --> 00:50:45.805 |
|
And if the weight here is dark, that |
|
|
|
00:50:45.805 --> 00:50:46.910 |
|
means it's negative. |
|
|
|
00:50:46.910 --> 00:50:50.190 |
|
Then if the pixel value is on, |
|
|
|
00:50:50.190 --> 00:50:52.169 |
|
then the score is |
|
|
|
00:50:52.169 --> 00:50:53.130 |
|
going to go down. |
|
|
|
00:50:53.130 --> 00:50:55.330 |
|
So that's how to interpret |
|
|
|
00:50:56.370 --> 00:50:57.930 |
|
the weights. |
|
|
|
00:50:57.930 --> 00:50:59.570 |
|
Normally it's just a vector, but I've |
|
|
|
00:50:59.570 --> 00:51:01.340 |
|
reshaped it into the size of the image |
|
|
|
00:51:01.340 --> 00:51:03.290 |
|
so you could see how it corresponds to |
|
|
|
00:51:03.290 --> 00:51:04.160 |
|
the Pixels. |
|
|
|
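NOTE |
|
Editor's aside: a sketch of how weight images like these could be produced, |
here with scikit-learn's small 8x8 digits rather than whatever dataset the |
slides used; clf.coef_ holds one weight vector per class, reshaped back to |
image shape. An illustrative sketch, not the lecturer's exact code. |
|
```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)       # 8x8 digit images, flattened

clf = LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000)
clf.fit(X, y)

# clf.coef_ is (10, 64): one weight vector per digit. Reshape each row back
# to image shape; bright pixels raise that digit's score, dark ones lower it.
fig, axes = plt.subplots(1, 5, figsize=(10, 2))
for k, ax in enumerate(axes):
    ax.imshow(clf.coef_[k].reshape(8, 8), cmap="gray")
    ax.set_title(str(k))
    ax.axis("off")
plt.show()
```
|
|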
00:51:07.190 --> 00:51:08.740 |
|
We're Minimizing 2 things. |
|
|
|
00:51:08.740 --> 00:51:10.540 |
|
One is that we're minimizing the |
|
|
|
00:51:10.540 --> 00:51:11.900 |
|
negative log likelihood of the labels |
|
|
|
00:51:11.900 --> 00:51:12.700 |
|
given the data. |
|
|
|
00:51:12.700 --> 00:51:16.170 |
|
So in other words, we're maximizing the |
|
|
|
00:51:16.170 --> 00:51:17.020 |
|
label likelihood. |
|
|
|
00:51:17.930 --> 00:51:19.740 |
|
And the other is that we're minimizing |
|
|
|
00:51:19.740 --> 00:51:21.237 |
|
the sum of the weights or the sum of |
|
|
|
00:51:21.237 --> 00:51:21.920 |
|
the squared weights. |
|
|
|
00:51:43.810 --> 00:51:44.290 |
|
Right. |
|
|
|
00:51:44.290 --> 00:51:44.580 |
|
Yeah. |
|
|
|
00:51:44.580 --> 00:51:45.385 |
|
So, at Prediction time — |
|
|
|
00:51:45.385 --> 00:51:47.530 |
|
So at Training time you have that |
|
|
|
00:51:47.530 --> 00:51:48.388 |
|
regularization term. |
|
|
|
00:51:48.388 --> 00:51:49.700 |
|
At Prediction time you don't. |
|
|
|
00:51:49.700 --> 00:51:52.630 |
|
So at Prediction time, it's just the |
|
|
|
00:51:52.630 --> 00:51:55.510 |
|
score for zero is the sum of all these |
|
|
|
00:51:55.510 --> 00:51:57.340 |
|
coefficients times the corresponding |
|
|
|
00:51:57.340 --> 00:51:58.100 |
|
pixel values. |
|
|
|
00:51:58.760 --> 00:52:00.940 |
|
And the score for one is the sum of all |
|
|
|
00:52:00.940 --> 00:52:02.960 |
|
these coefficient values times the |
|
|
|
00:52:02.960 --> 00:52:04.947 |
|
corresponding pixel values, and so on |
|
|
|
00:52:04.947 --> 00:52:05.830 |
|
for all the digits. |
|
|
|
00:52:06.570 --> 00:52:08.210 |
|
And then at the end you choose. |
|
|
|
00:52:08.210 --> 00:52:09.752 |
|
If you're just assigning a label, you |
|
|
|
00:52:09.752 --> 00:52:11.240 |
|
choose the label with the highest |
|
|
|
00:52:11.240 --> 00:52:11.510 |
|
score. |
|
|
|
00:52:12.230 --> 00:52:12.410 |
|
Yeah. |
|
|
|
00:52:13.580 --> 00:52:14.400 |
|
That did that help? |
|
|
|
00:52:15.100 --> 00:52:15.360 |
|
OK. |
|
|
|
00:52:17.880 --> 00:52:18.570 |
|
Alright. |
|
|
|
00:52:24.020 --> 00:52:25.080 |
|
So. |
|
|
|
00:52:26.630 --> 00:52:28.980 |
|
Alright, so then there's a question of |
|
|
|
00:52:28.980 --> 00:52:29.990 |
|
how do we choose the Lambda? |
|
|
|
00:52:31.260 --> 00:52:34.685 |
|
So Lambda is what's often called a |
|
|
|
00:52:34.685 --> 00:52:35.098 |
|
hyperparameter. |
|
|
|
00:52:35.098 --> 00:52:37.574 |
|
A hyperparameter is a parameter |
|
|
|
00:52:37.574 --> 00:52:40.366 |
|
that the algorithm designer sets that |
|
|
|
00:52:40.366 --> 00:52:42.520 |
|
is not optimized directly by the |
|
|
|
00:52:42.520 --> 00:52:43.120 |
|
Training data. |
|
|
|
00:52:43.120 --> 00:52:45.530 |
|
So the weights are like Parameters of |
|
|
|
00:52:45.530 --> 00:52:46.780 |
|
the Linear model. |
|
|
|
00:52:46.780 --> 00:52:48.660 |
|
But the Lambda is a hyperparameter |
|
|
|
00:52:48.660 --> 00:52:50.030 |
|
because it's a parameter of your |
|
|
|
00:52:50.030 --> 00:52:51.714 |
|
objective function, not a parameter of |
|
|
|
00:52:51.714 --> 00:52:52.219 |
|
your model. |
|
|
|
00:52:56.490 --> 00:52:59.610 |
|
So when you're selecting values for |
|
|
|
00:52:59.610 --> 00:53:02.660 |
|
your hyperparameters, you can do it |
|
|
|
00:53:02.660 --> 00:53:05.260 |
|
based on intuition, but more commonly |
|
|
|
00:53:05.260 --> 00:53:07.780 |
|
you would do some kind of validation. |
|
|
|
00:53:08.970 --> 00:53:11.210 |
|
So for example, you might say that |
|
|
|
00:53:11.210 --> 00:53:14.000 |
|
Lambda is in this range, one of these |
|
|
|
00:53:14.000 --> 00:53:16.125 |
|
values: 1/8, 1/4, 1/2, 1. |
|
|
|
00:53:16.125 --> 00:53:18.350 |
|
It's usually not super sensitive, so |
|
|
|
00:53:18.350 --> 00:53:21.440 |
|
there's no point going into like really |
|
|
|
00:53:21.440 --> 00:53:22.840 |
|
tiny differences. |
|
|
|
00:53:22.840 --> 00:53:24.919 |
|
And it also tends to be like |
|
|
|
00:53:24.920 --> 00:53:27.010 |
|
exponential in its range. |
|
|
|
00:53:27.010 --> 00:53:28.910 |
|
So for example, you don't want to |
|
|
|
00:53:28.910 --> 00:53:32.650 |
|
search from 1/8 to 8 in steps of 1/8 |
|
|
|
00:53:32.650 --> 00:53:34.016 |
|
because that will be like a ton of |
|
|
|
00:53:34.016 --> 00:53:36.080 |
|
values to check and like a difference |
|
|
|
00:53:36.080 --> 00:53:39.090 |
|
between 7 and 7/8 and 8 is like |
|
|
|
00:53:39.090 --> 00:53:39.610 |
|
nothing. |
|
|
|
00:53:39.680 --> 00:53:40.790 |
|
It won't make any difference. |
|
|
|
00:53:41.830 --> 00:53:43.450 |
|
So usually you want to keep doubling it |
|
|
|
00:53:43.450 --> 00:53:45.770 |
|
or multiplying it by a factor of 10 for |
|
|
|
00:53:45.770 --> 00:53:46.400 |
|
every step. |
|
|
|
00:53:47.690 --> 00:53:49.540 |
|
You train the model using a given |
|
|
|
00:53:49.540 --> 00:53:51.489 |
|
Lambda from the training set, and you |
|
|
|
00:53:51.490 --> 00:53:52.857 |
|
measure and record the performance from |
|
|
|
00:53:52.857 --> 00:53:55.320 |
|
the validation set, and then you choose |
|
|
|
00:53:55.320 --> 00:53:57.053 |
|
the Lambda and the model that gave you |
|
|
|
00:53:57.053 --> 00:53:58.090 |
|
the best performance. |
|
|
|
00:53:58.090 --> 00:53:59.540 |
|
So it's pretty straightforward. |
|
|
|
00:54:00.500 --> 00:54:03.290 |
|
And you can optionally then retrain on |
|
|
|
00:54:03.290 --> 00:54:05.330 |
|
the training and the validation set so |
|
|
|
00:54:05.330 --> 00:54:07.150 |
|
that you didn't like only use your |
|
|
|
00:54:07.150 --> 00:54:09.510 |
|
validation parameters for selecting |
|
|
|
00:54:09.510 --> 00:54:11.992 |
|
that Lambda, and then test on the test |
|
|
|
00:54:11.992 --> 00:54:12.299 |
|
set. |
|
|
|
00:54:12.300 --> 00:54:13.653 |
|
But I'll note that you don't have to do |
|
|
|
00:54:13.653 --> 00:54:14.866 |
|
that for the homework; in |
|
|
|
00:54:14.866 --> 00:54:16.350 |
|
the homework you should generally just |
|
|
|
00:54:17.480 --> 00:54:20.280 |
|
Use your validation for like measuring |
|
|
|
00:54:20.280 --> 00:54:22.660 |
|
performance and selection and then just |
|
|
|
00:54:22.660 --> 00:54:24.070 |
|
leave |
|
|
|
00:54:24.070 --> 00:54:25.700 |
|
the models trained on your |
|
|
|
00:54:25.700 --> 00:54:25.960 |
|
Training set. |
|
|
|
00:54:28.300 --> 00:54:30.010 |
|
And then once you've got your final |
|
|
|
00:54:30.010 --> 00:54:32.170 |
|
model, you just test it on the test set |
|
|
|
00:54:32.170 --> 00:54:33.680 |
|
and then that's the measure of the |
|
|
|
00:54:33.680 --> 00:54:34.539 |
|
performance of your model. |
|
|
|
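NOTE |
|
Editor's aside: a minimal validation sweep in the spirit just described — |
doubling lambda each step and keeping the value with the best validation |
score; in scikit-learn, C plays the role of 1 / lambda. The data and the |
range of values are illustrative. |
|
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25,
                                            random_state=0)

best_lam, best_acc = None, -np.inf
for lam in [1/8, 1/4, 1/2, 1, 2, 4, 8]:          # doubling each step
    clf = LogisticRegression(C=1.0 / lam).fit(X_tr, y_tr)
    acc = clf.score(X_val, y_val)                # validation performance
    if acc > best_acc:
        best_lam, best_acc = lam, acc
print("chosen lambda:", best_lam, "val accuracy:", best_acc)
```
|
|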
00:54:36.890 --> 00:54:38.525 |
|
So you can start. |
|
|
|
00:54:38.525 --> 00:54:41.020 |
|
So as I said, you typically will keep |
|
|
|
00:54:41.020 --> 00:54:42.080 |
|
on like multiplying your |
|
|
|
00:54:42.080 --> 00:54:44.190 |
|
hyperparameters by some factor rather |
|
|
|
00:54:44.190 --> 00:54:45.380 |
|
than doing a Linear search. |
|
|
|
00:54:46.390 --> 00:54:48.510 |
|
You can also start broad and then narrow. |
|
|
|
00:54:48.510 --> 00:54:51.405 |
|
So for example, if I found that 1/4 and |
|
|
|
00:54:51.405 --> 00:54:54.320 |
|
1/2 were the best two values, but it |
|
|
|
00:54:54.320 --> 00:54:55.570 |
|
seemed like there was actually like a |
|
|
|
00:54:55.570 --> 00:54:56.960 |
|
pretty big difference between |
|
|
|
00:54:56.960 --> 00:54:58.560 |
|
neighboring values, then I could then |
|
|
|
00:54:58.560 --> 00:55:01.640 |
|
try like 3/8 and keep on subdividing it |
|
|
|
00:55:01.640 --> 00:55:04.270 |
|
until I feel like I've squeezed |
|
|
|
00:55:04.270 --> 00:55:05.790 |
|
what I can out of that hyperparameter. |
|
|
|
00:55:07.080 --> 00:55:09.750 |
|
Also, if you're searching over many |
|
|
|
00:55:09.750 --> 00:55:13.450 |
|
Parameters simultaneously, the natural |
|
|
|
00:55:13.450 --> 00:55:14.679 |
|
thing that you would do is you would do |
|
|
|
00:55:14.680 --> 00:55:16.420 |
|
a grid search where you do for each |
|
|
|
00:55:16.420 --> 00:55:19.380 |
|
Lambda and for each alpha, and for each |
|
|
|
00:55:19.380 --> 00:55:21.510 |
|
beta you search over some range and try |
|
|
|
00:55:21.510 --> 00:55:23.520 |
|
all combinations of things. |
|
|
|
00:55:23.520 --> 00:55:25.145 |
|
That's actually really inefficient. |
|
|
|
00:55:25.145 --> 00:55:28.377 |
|
The best thing to do is to randomly |
|
|
|
00:55:28.377 --> 00:55:30.720 |
|
select your alpha, beta, gamma, or |
|
|
|
00:55:30.720 --> 00:55:32.790 |
|
whatever things you're searching over, |
|
|
|
00:55:32.790 --> 00:55:34.440 |
|
randomly select them within the |
|
|
|
00:55:34.440 --> 00:55:35.410 |
|
candidate range. |
|
|
|
00:55:36.790 --> 00:55:42.020 |
|
By probabilistic sampling and then try |
|
|
|
00:55:42.020 --> 00:55:44.286 |
|
like 100 different variations and then |
|
|
|
00:55:44.286 --> 00:55:46.173 |
|
choose the best combination. |
|
|
|
00:55:46.173 --> 00:55:48.880 |
|
And the reason for that is that often |
|
|
|
00:55:48.880 --> 00:55:50.530 |
|
the Parameters don't depend that |
|
|
|
00:55:50.530 --> 00:55:51.550 |
|
strongly on each other. |
|
|
|
00:55:52.140 --> 00:55:54.450 |
|
And often some Parameters will be |
|
|
|
00:55:54.450 --> 00:55:55.920 |
|
much more important than others. |
|
|
|
00:55:56.730 --> 00:55:58.620 |
|
And so if you randomly sample in the |
|
|
|
00:55:58.620 --> 00:56:00.440 |
|
range, if you have multiple Parameters, |
|
|
|
00:56:00.440 --> 00:56:02.270 |
|
then you get to try a lot more |
|
|
|
00:56:02.270 --> 00:56:04.315 |
|
different values of each parameter than |
|
|
|
00:56:04.315 --> 00:56:05.540 |
|
if you're doing a grid search. |
|
|
|
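NOTE |
|
Editor's aside: a sketch of random hyperparameter search as just described. |
The val_score function is a stand-in for "train with these parameters and |
score on validation data" — a hypothetical placeholder, not a real API. |
|
```python
import numpy as np

rng = np.random.default_rng(0)

def sample_params():
    # each hyperparameter drawn independently, on a log scale
    return {"lam": 10 ** rng.uniform(-3, 1),
            "alpha": 10 ** rng.uniform(-2, 2)}

def val_score(params):
    # placeholder for "train with these params, score on validation data"
    return -(np.log10(params["lam"]) + 1.0) ** 2

trials = [sample_params() for _ in range(100)]
best = max(trials, key=val_score)
print("best combination found:", best)
```
|
|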
00:56:09.500 --> 00:56:11.270 |
|
So validation. |
|
|
|
00:56:11.390 --> 00:56:11.980 |
|
|
|
|
|
00:56:13.230 --> 00:56:14.870 |
|
You can also do cross validation. |
|
|
|
00:56:14.870 --> 00:56:16.520 |
|
That's just if you |
|
|
|
00:56:16.520 --> 00:56:19.173 |
|
split your data set into multiple parts |
|
|
|
00:56:19.173 --> 00:56:22.330 |
|
and each time you train on N minus |
|
|
|
00:56:22.330 --> 00:56:24.642 |
|
one parts and then test on the Nth |
|
|
|
00:56:24.642 --> 00:56:27.420 |
|
part and then you cycle through which |
|
|
|
00:56:27.420 --> 00:56:28.840 |
|
part you use for validation. |
|
|
|
00:56:29.650 --> 00:56:30.860 |
|
And then you Average all your |
|
|
|
00:56:30.860 --> 00:56:31.775 |
|
validation performance. |
|
|
|
00:56:31.775 --> 00:56:33.960 |
|
So you might do this if you have a very |
|
|
|
00:56:33.960 --> 00:56:36.280 |
|
limited Training set, so that it's |
|
|
|
00:56:36.280 --> 00:56:38.270 |
|
really hard to get both Training |
|
|
|
00:56:38.270 --> 00:56:39.740 |
|
Parameters and get a measure of the |
|
|
|
00:56:39.740 --> 00:56:41.770 |
|
performance with that one Training set, |
|
|
|
00:56:41.770 --> 00:56:43.620 |
|
and so you can |
|
|
|
00:56:44.820 --> 00:56:47.600 |
|
then make more efficient use of |
|
|
|
00:56:47.600 --> 00:56:48.840 |
|
your Training data this way. |
|
|
|
00:56:48.840 --> 00:56:49.870 |
|
Sample-efficient use. |
|
|
|
00:56:50.650 --> 00:56:52.110 |
|
And the extreme you can do leave one |
|
|
|
00:56:52.110 --> 00:56:53.780 |
|
out cross validation where you train |
|
|
|
00:56:53.780 --> 00:56:55.777 |
|
with all your data except for one and |
|
|
|
00:56:55.777 --> 00:56:58.050 |
|
then test on that one and then you |
|
|
|
00:56:58.050 --> 00:57:00.965 |
|
cycle which point is used for |
|
|
|
00:57:00.965 --> 00:57:03.749 |
|
validation through all the data |
|
|
|
00:57:03.750 --> 00:57:04.300 |
|
samples. |
|
|
|
00:57:06.440 --> 00:57:09.770 |
|
This is only practical if you're |
|
|
|
00:57:09.770 --> 00:57:11.229 |
|
doing like Nearest neighbor for example |
|
|
|
00:57:11.230 --> 00:57:12.890 |
|
where Training takes no time, then |
|
|
|
00:57:12.890 --> 00:57:14.259 |
|
that's easy to do. |
|
|
|
00:57:14.260 --> 00:57:16.859 |
|
Or if you're able to adjust your model |
|
|
|
00:57:16.860 --> 00:57:19.657 |
|
by adjusting it for the influence of 1 |
|
|
|
00:57:19.657 --> 00:57:19.885 |
|
sample. |
|
|
|
00:57:19.885 --> 00:57:21.550 |
|
If you can like take out one sample |
|
|
|
00:57:21.550 --> 00:57:23.518 |
|
really easily and adjust your model |
|
|
|
00:57:23.518 --> 00:57:24.740 |
|
then you might be able to do this, |
|
|
|
00:57:24.740 --> 00:57:26.455 |
|
which you could do with Naive Bayes for |
|
|
|
00:57:26.455 --> 00:57:27.060 |
|
example as well. |
|
|
|
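NOTE |
|
Editor's aside: what the cross-validation loop just described looks like |
with scikit-learn — 5 folds here: train on 4 parts, validate on the 5th, |
and rotate. The dataset choice is illustrative. |
|
```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

# 5-fold cross validation: train on 4 parts, validate on the 5th, rotate,
# then average the validation scores.
scores = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=5)
print(scores.mean())

# The extreme case is leave-one-out (cv=LeaveOneOut() from
# sklearn.model_selection); practical mainly when retraining is cheap,
# e.g. nearest neighbor.
```
|
|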
00:57:32.060 --> 00:57:33.460 |
|
Right, so Summary of Logistic |
|
|
|
00:57:33.460 --> 00:57:35.180 |
|
Regression. |
|
|
|
00:57:35.180 --> 00:57:37.790 |
|
Key assumptions are that this log odds |
|
|
|
00:57:37.790 --> 00:57:40.460 |
|
ratio can be expressed as a linear |
|
|
|
00:57:40.460 --> 00:57:41.560 |
|
combination of features. |
|
|
|
00:57:42.470 --> 00:57:44.589 |
|
So this probability of y = K given X |
|
|
|
00:57:44.590 --> 00:57:46.710 |
|
over probability of Y not equal to K |
|
|
|
00:57:46.710 --> 00:57:47.730 |
|
given X — the log of that |
|
|
|
00:57:48.470 --> 00:57:51.770 |
|
is just a Linear model, W transpose X. |
|
|
|
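NOTE |
|
Editor's aside: written out, the key assumption just stated is: |
|
```latex
\log \frac{P(y = k \mid \mathbf{x})}{P(y \neq k \mid \mathbf{x})}
  = \mathbf{w}^{\top}\mathbf{x} + b
```
|
|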
00:57:53.350 --> 00:57:55.990 |
|
I've got one coefficient per feature |
|
|
|
00:57:55.990 --> 00:57:57.700 |
|
that's my model Parameters, plus maybe |
|
|
|
00:57:57.700 --> 00:57:59.950 |
|
a bias term which the bias is modeling |
|
|
|
00:57:59.950 --> 00:58:00.850 |
|
like the class prior. |
|
|
|
00:58:02.320 --> 00:58:04.690 |
|
I can Choose L1 or L2 or both |
|
|
|
00:58:06.110 --> 00:58:08.110 |
|
Regularization, and some weight on those. |
|
|
|
00:58:09.810 --> 00:58:11.070 |
|
So this really — this |
|
|
|
00:58:11.070 --> 00:58:13.090 |
|
works well if you've got a lot of |
|
|
|
00:58:13.090 --> 00:58:14.470 |
|
features, because again, it's much more |
|
|
|
00:58:14.470 --> 00:58:16.100 |
|
powerful in a high dimensional space. |
|
|
|
00:58:16.840 --> 00:58:18.740 |
|
And it's OK if some of those features |
|
|
|
00:58:18.740 --> 00:58:20.520 |
|
are irrelevant or redundant, whereas |
|
|
|
00:58:20.520 --> 00:58:22.110 |
|
things like Naive Bayes will get |
|
|
|
00:58:22.110 --> 00:58:24.010 |
|
tripped up by irrelevant or redundant |
|
|
|
00:58:24.010 --> 00:58:24.360 |
|
features. |
|
|
|
00:58:25.480 --> 00:58:28.210 |
|
And it provides a good estimate of the |
|
|
|
00:58:28.210 --> 00:58:29.380 |
|
label likelihood. |
|
|
|
00:58:29.380 --> 00:58:32.290 |
|
So it tends to give you a well |
|
|
|
00:58:32.290 --> 00:58:34.233 |
|
calibrated classifier, which means that |
|
|
|
00:58:34.233 --> 00:58:36.425 |
|
if you look at its confidence, if the |
|
|
|
00:58:36.425 --> 00:58:39.520 |
|
confidence is .8, then like 80% of the |
|
|
|
00:58:39.520 --> 00:58:41.279 |
|
times that the confidence is .8, it |
|
|
|
00:58:41.280 --> 00:58:41.960 |
|
will be correct. |
|
|
|
00:58:42.710 --> 00:58:43.300 |
|
Roughly. |
|
|
|
00:58:44.800 --> 00:58:46.150 |
|
When not to use it, and Weaknesses: |
|
|
|
00:58:46.150 --> 00:58:47.689 |
|
If the features are low dimensional, |
|
|
|
00:58:47.690 --> 00:58:49.410 |
|
then the Linear function is not likely |
|
|
|
00:58:49.410 --> 00:58:50.600 |
|
to be expressive enough. |
|
|
|
00:58:50.600 --> 00:58:52.824 |
|
So usually if your features are low |
|
|
|
00:58:52.824 --> 00:58:54.395 |
|
dimensional to start with, you actually |
|
|
|
00:58:54.395 --> 00:58:56.055 |
|
like turn them into high dimensional |
|
|
|
00:58:56.055 --> 00:58:59.480 |
|
features first, like by doing trees or |
|
|
|
00:58:59.480 --> 00:59:01.820 |
|
other ways of like turning continuous |
|
|
|
00:59:01.820 --> 00:59:03.690 |
|
values into a lot of discrete values. |
|
|
|
00:59:04.310 --> 00:59:05.900 |
|
And then you apply your Linear |
|
|
|
00:59:05.900 --> 00:59:06.450 |
|
classifier. |
|
|
|
00:59:10.310 --> 00:59:11.890 |
|
Right, so I was going to do like a |
|
|
|
00:59:11.890 --> 00:59:13.600 |
|
Pause thing here, but since we only |
|
|
|
00:59:13.600 --> 00:59:16.490 |
|
have 15 minutes left, I will use this |
|
|
|
00:59:16.490 --> 00:59:18.470 |
|
as a Review question for the start of |
|
|
|
00:59:18.470 --> 00:59:20.850 |
|
the next lecture. |
|
|
|
00:59:20.850 --> 00:59:22.830 |
|
And I do want to get into |
|
|
|
00:59:22.830 --> 00:59:25.820 |
|
Linear Regression, so apologies for a |
|
|
|
00:59:26.860 --> 00:59:28.010 |
|
fairly heavy |
|
|
|
00:59:29.390 --> 00:59:30.620 |
|
75 minutes. |
|
|
|
00:59:33.310 --> 00:59:34.229 |
|
Yeah, there's a lot of math. |
|
|
|
00:59:34.230 --> 00:59:37.080 |
|
There will be a lot of math every |
|
|
|
00:59:37.080 --> 00:59:38.755 |
|
Lecture, pretty much. |
|
|
|
00:59:38.755 --> 00:59:40.120 |
|
There's never not. |
|
|
|
00:59:40.970 --> 00:59:42.075 |
|
There's always Linear. |
|
|
|
00:59:42.075 --> 00:59:43.920 |
|
There's always Linear linear algebra, |
|
|
|
00:59:43.920 --> 00:59:45.060 |
|
calculus, probability. |
|
|
|
00:59:45.060 --> 00:59:47.920 |
|
It's part of every part of machine |
|
|
|
00:59:47.920 --> 00:59:48.210 |
|
learning. |
|
|
|
00:59:49.250 --> 00:59:50.380 |
|
So. |
|
|
|
00:59:50.700 --> 00:59:52.002 |
|
Alright, so Linear Regression. |
|
|
|
00:59:52.002 --> 00:59:53.470 |
|
Linear Regression is actually a little |
|
|
|
00:59:53.470 --> 00:59:55.790 |
|
bit more intuitive I think than Linear |
|
|
|
00:59:55.790 --> 00:59:57.645 |
|
Logistic Regression, because |
|
|
|
00:59:57.645 --> 00:59:59.600 |
|
your Linear function is just like a |
|
|
|
00:59:59.600 --> 01:00:01.440 |
|
line; you're just fitting the data, and |
|
|
|
01:00:01.440 --> 01:00:02.570 |
|
we see this all the time. |
|
|
|
01:00:02.570 --> 01:00:04.236 |
|
Like if you use Excel you can do a |
|
|
|
01:00:04.236 --> 01:00:05.380 |
|
Linear fit to your plot. |
|
|
|
01:00:06.120 --> 01:00:08.420 |
|
And there's a lot of reasons that you |
|
|
|
01:00:08.420 --> 01:00:09.850 |
|
want to use Linear Regression. |
|
|
|
01:00:09.850 --> 01:00:11.940 |
|
You might want to just like explain a |
|
|
|
01:00:11.940 --> 01:00:12.580 |
|
trend. |
|
|
|
01:00:12.580 --> 01:00:15.010 |
|
You might want to extrapolate the data |
|
|
|
01:00:15.010 --> 01:00:18.330 |
|
to say, if my chirp Frequency were like 25, |
|
|
|
01:00:18.330 --> 01:00:21.530 |
|
then what is the likely |
|
|
|
01:00:21.530 --> 01:00:21.970 |
|
Temperature? |
|
|
|
01:00:23.780 --> 01:00:25.265 |
|
Or — |
|
|
|
01:00:25.265 --> 01:00:26.950 |
|
You may actually want to do Prediction |
|
|
|
01:00:26.950 --> 01:00:28.159 |
|
if you have a lot of features and |
|
|
|
01:00:28.160 --> 01:00:29.580 |
|
you're trying to predict a single |
|
|
|
01:00:29.580 --> 01:00:30.740 |
|
variable. |
|
|
|
01:00:30.740 --> 01:00:32.650 |
|
Again, here I'm only showing 2D plots, |
|
|
|
01:00:32.650 --> 01:00:34.500 |
|
but you can, like in your Temperature |
|
|
|
01:00:34.500 --> 01:00:36.110 |
|
Regression problem, you can have lots |
|
|
|
01:00:36.110 --> 01:00:37.600 |
|
of features and use the Linear model |
|
|
|
01:00:37.600 --> 01:00:37.800 |
|
on them. |
|
|
|
01:00:39.630 --> 01:00:41.046 |
|
The Linear Regression, you're trying to |
|
|
|
01:00:41.046 --> 01:00:42.750 |
|
fit Linear coefficients to features to |
|
|
|
01:00:42.750 --> 01:00:44.920 |
|
predict a continuous variable, and if |
|
|
|
01:00:44.920 --> 01:00:46.545 |
|
you're trying to fit multiple |
|
|
|
01:00:46.545 --> 01:00:48.560 |
|
continuous variables, then you do, then |
|
|
|
01:00:48.560 --> 01:00:49.920 |
|
you have multiple Linear models. |
|
|
|
01:00:52.450 --> 01:00:55.900 |
|
So this is evaluated by like root mean |
|
|
|
01:00:55.900 --> 01:00:57.940 |
|
squared error, the sum of squared |
|
|
|
01:00:57.940 --> 01:00:59.570 |
|
differences between the points. |
|
|
|
01:01:01.560 --> 01:01:02.930 |
|
Square root of that. |
|
|
|
01:01:02.930 --> 01:01:04.942 |
|
Or it could be like the median absolute |
|
|
|
01:01:04.942 --> 01:01:06.890 |
|
error, which is the absolute difference |
|
|
|
01:01:06.890 --> 01:01:08.858 |
|
between the points and the median of |
|
|
|
01:01:08.858 --> 01:01:10.907 |
|
that, various combinations of that. |
|
|
|
01:01:10.907 --> 01:01:13.079 |
|
And then here I'm showing the R2 |
|
|
|
01:01:13.080 --> 01:01:15.680 |
|
residual which is essentially the |
|
|
|
01:01:15.680 --> 01:01:19.460 |
|
variance or the sum of squared error of |
|
|
|
01:01:19.460 --> 01:01:20.490 |
|
the points. |
|
|
|
01:01:21.110 --> 01:01:24.550 |
|
From the predicted line divided by the |
|
|
|
01:01:24.550 --> 01:01:27.897 |
|
sum of squared difference between the |
|
|
|
01:01:27.897 --> 01:01:29.771 |
|
points and the average of the points — |
|
|
|
01:01:29.771 --> 01:01:31.378 |
|
the predicted values and the target |
|
|
|
01:01:31.378 --> 01:01:33.252 |
|
values, and the average of the target |
|
|
|
01:01:33.252 --> 01:01:33.519 |
|
values. |
|
|
|
01:01:35.360 --> 01:01:37.750 |
|
It's 1 minus that thing, and so this is |
|
|
|
01:01:37.750 --> 01:01:39.825 |
|
essentially the amount of variance that |
|
|
|
01:01:39.825 --> 01:01:42.810 |
|
is explained by your Linear model. |
|
|
|
01:01:43.550 --> 01:01:44.690 |
|
That's the R2. |
|
|
|
01:01:45.960 --> 01:01:48.460 |
|
And if R2 is close to zero, then it |
|
|
|
01:01:48.460 --> 01:01:50.810 |
|
means that you |
|
|
|
01:01:50.810 --> 01:01:52.680 |
|
can't really linearly explain your |
|
|
|
01:01:52.680 --> 01:01:54.880 |
|
target variable very well from the |
|
|
|
01:01:54.880 --> 01:01:55.440 |
|
features. |
|
|
|
01:01:56.470 --> 01:01:58.390 |
|
If it's close to one, it means that you |
|
|
|
01:01:58.390 --> 01:02:00.060 |
|
can explain it almost perfectly. |
|
|
|
01:02:00.060 --> 01:02:01.310 |
|
In other words, you can get an almost |
|
|
|
01:02:01.310 --> 01:02:03.440 |
|
perfect Prediction compared to the |
|
|
|
01:02:03.440 --> 01:02:04.230 |
|
original variance. |
|
|
|
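NOTE |
|
Editor's aside: the metrics just described, computed directly on |
hypothetical predictions — RMSE, median absolute error, and R² as one |
minus the residual sum of squares over the total sum of squares. |
|
```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0, 9.0])    # hypothetical targets
y_pred = np.array([2.8, 5.3, 6.6, 9.4])    # hypothetical predictions

rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
med_abs = np.median(np.abs(y_pred - y_true))

ss_res = np.sum((y_true - y_pred) ** 2)         # error left by the model
ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # variance around the mean
r2 = 1.0 - ss_res / ss_tot                      # fraction of variance explained
```
|
|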
01:02:05.570 --> 01:02:08.330 |
|
So you can see here that this isn't |
|
|
|
01:02:08.330 --> 01:02:09.060 |
|
really — |
|
|
|
01:02:09.060 --> 01:02:10.500 |
|
If you look at the points, there's |
|
|
|
01:02:10.500 --> 01:02:12.060 |
|
actually a curve to it, so there's |
|
|
|
01:02:12.060 --> 01:02:14.203 |
|
probably a better fit than this Linear |
|
|
|
01:02:14.203 --> 01:02:14.649 |
|
model. |
|
|
|
01:02:14.650 --> 01:02:16.220 |
|
But the Linear model still isn't too |
|
|
|
01:02:16.220 --> 01:02:16.670 |
|
bad. |
|
|
|
01:02:16.670 --> 01:02:18.789 |
|
We have an R squared of .87. |
|
|
|
01:02:20.350 --> 01:02:23.330 |
|
Here the Linear model seems pretty |
|
|
|
01:02:23.330 --> 01:02:25.410 |
|
decent, but there's a lot of |
|
|
|
01:02:25.410 --> 01:02:25.920 |
|
noise. |
|
|
|
01:02:25.920 --> 01:02:28.200 |
|
But there's a lot of variance to the |
|
|
|
01:02:28.200 --> 01:02:28.570 |
|
data. |
|
|
|
01:02:28.570 --> 01:02:30.632 |
|
Even for this exact same data, exact |
|
|
|
01:02:30.632 --> 01:02:32.210 |
|
same Frequency, there's many different |
|
|
|
01:02:32.210 --> 01:02:32.660 |
|
temperatures. |
|
|
|
01:02:33.430 --> 01:02:35.400 |
|
And so here the amount of variance that |
|
|
|
01:02:35.400 --> 01:02:37.010 |
|
can be explained is 68%. |
|
|
|
01:02:42.160 --> 01:02:43.010 |
|
The Linear. |
|
|
|
01:02:44.090 --> 01:02:44.630 |
|
Whoops. |
|
|
|
01:02:45.760 --> 01:02:48.400 |
|
This should actually say Linear Regression |
|
|
|
01:02:48.400 --> 01:02:49.670 |
|
algorithm, not Logistic. |
|
|
|
01:02:52.200 --> 01:02:54.090 |
|
So the Linear Regression algorithm. |
|
|
|
01:02:54.090 --> 01:02:55.520 |
|
It's an easy mistake to make because |
|
|
|
01:02:55.520 --> 01:02:56.570 |
|
they look almost the same. |
|
|
|
01:02:57.300 --> 01:02:59.800 |
|
is just that I'm Minimizing — |
|
|
|
01:02:59.800 --> 01:03:01.440 |
|
Now I'm just minimizing the squared |
|
|
|
01:03:01.440 --> 01:03:03.580 |
|
difference between the Linear model and |
|
|
|
01:03:03.580 --> 01:03:04.630 |
|
the |
|
|
|
01:03:05.480 --> 01:03:08.640 |
|
target value over all of the |
|
|
|
01:03:09.380 --> 01:03:11.050 |
|
x_n's. So also — |
|
|
|
01:03:11.970 --> 01:03:13.280 |
|
let me fix this. |
|
|
|
01:03:17.040 --> 01:03:19.170 |
|
So this should be X. |
|
|
|
01:03:21.580 --> 01:03:21.900 |
|
OK. |
|
|
|
01:03:23.800 --> 01:03:25.740 |
|
Right, so I'm minimizing the sum of |
|
|
|
01:03:25.740 --> 01:03:27.820 |
|
squared error here between the |
|
|
|
01:03:27.820 --> 01:03:29.718 |
|
predicted value and the true value, and |
|
|
|
01:03:29.718 --> 01:03:32.280 |
|
you could have different variations on |
|
|
|
01:03:32.280 --> 01:03:32.482 |
|
that. |
|
|
|
01:03:32.482 --> 01:03:34.140 |
|
You could minimize the sum of absolute |
|
|
|
01:03:34.140 --> 01:03:35.825 |
|
error, which is a harder thing to |
|
|
|
01:03:35.825 --> 01:03:38.030 |
|
minimize but more robust to outliers. |
|
|
|
01:03:38.030 --> 01:03:39.340 |
|
And then I also have this |
|
|
|
01:03:39.340 --> 01:03:41.520 |
|
regularization term. The Prediction is |
|
|
|
01:03:41.520 --> 01:03:43.340 |
|
just the sum of weights times the |
|
|
|
01:03:43.340 --> 01:03:45.950 |
|
features or W transpose X. |
|
|
|
01:03:45.950 --> 01:03:47.500 |
|
So straightforward. |
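|
NOTE |
Written out, the objective being described is the standard regularized |
least squares (notation mine, matching the narration): |
\min_w \sum_n (w^\top x_n - y_n)^2 + \lambda R(w) |
with R(w) = \|w\|_2^2 for L2 regularization or \|w\|_1 for L1. |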
|
|
|
01:03:50.060 --> 01:03:52.780 |
|
In terms of the optimization, it's just |
|
|
|
01:03:52.780 --> 01:03:55.070 |
|
if you have L2 regularization, then |
|
|
|
01:03:55.070 --> 01:03:55.920 |
|
it's just a |
|
|
|
01:03:57.260 --> 01:03:59.130 |
|
least squares optimization. |
|
|
|
01:03:59.810 --> 01:04:00.320 |
|
So. |
|
|
|
01:04:01.360 --> 01:04:03.050 |
|
I did like a sort of brief |
|
|
|
01:04:03.620 --> 01:04:06.760 |
|
derivation, just Minimizing that |
|
|
|
01:04:06.760 --> 01:04:07.970 |
|
function, taking the derivative, |
|
|
|
01:04:07.970 --> 01:04:08.790 |
|
setting it equal to 0. |
|
|
|
01:04:09.640 --> 01:04:12.180 |
|
At the end I will skip most of the |
|
|
|
01:04:12.180 --> 01:04:13.770 |
|
steps because it's just. |
|
|
|
01:04:14.830 --> 01:04:15.905 |
|
It's the least squares problem. |
|
|
|
01:04:15.905 --> 01:04:17.520 |
|
It shows up in a lot of cases and I |
|
|
|
01:04:17.520 --> 01:04:19.020 |
|
didn't want to focus on it. |
|
|
|
01:04:19.700 --> 01:04:21.079 |
|
At the end you will get this thing. |
|
|
|
01:04:21.080 --> 01:04:24.000 |
|
So you'll say that A is the thing that |
|
|
|
01:04:24.000 --> 01:04:25.810 |
|
minimizes this squared term. |
|
|
|
01:04:27.340 --> 01:04:28.810 |
|
Or this is just a different way of |
|
|
|
01:04:28.810 --> 01:04:31.508 |
|
writing that problem and so this is an |
|
|
|
01:04:31.508 --> 01:04:32.970 |
|
N by M matrix. |
|
|
|
01:04:32.970 --> 01:04:36.506 |
|
So these are your N examples and M |
|
|
|
01:04:36.506 --> 01:04:36.984 |
|
features. |
|
|
|
01:04:36.984 --> 01:04:38.690 |
|
This is the thing that we're |
|
|
|
01:04:38.690 --> 01:04:39.420 |
|
optimizing. |
|
|
|
01:04:39.420 --> 01:04:41.590 |
|
It's an M by 1 vector if I have M |
|
|
|
01:04:41.590 --> 01:04:41.890 |
|
features. |
|
|
|
01:04:42.630 --> 01:04:44.900 |
|
These are my values that I want to |
|
|
|
01:04:44.900 --> 01:04:45.540 |
|
Predict. |
|
|
|
01:04:45.540 --> 01:04:47.200 |
|
This is an N by 1 vector. |
|
|
|
01:04:47.200 --> 01:04:49.420 |
|
That's my Different labels for the |
|
|
|
01:04:49.420 --> 01:04:50.370 |
|
N examples. |
|
|
|
01:04:50.950 --> 01:04:53.550 |
|
And then I'm squaring that term in |
|
|
|
01:04:53.550 --> 01:04:54.700 |
|
matrix form. |
|
|
|
01:04:55.570 --> 01:04:58.577 |
|
And the solution is just that a is |
|
|
|
01:04:58.577 --> 01:05:01.125 |
|
the pseudo inverse of X times Y, where the |
|
|
|
01:05:01.125 --> 01:05:02.920 |
|
pseudo inverse is given here. |
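|
NOTE |
Spelled out, that closed-form solution is the standard least squares |
result (not copied from the slide), assuming X^\top X is invertible: |
a = X^+ y, \quad X^+ = (X^\top X)^{-1} X^\top |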
|
|
|
01:05:05.640 --> 01:05:08.470 |
|
And again if you have. |
|
|
|
01:05:09.510 --> 01:05:10.400 |
|
So. |
|
|
|
01:05:11.060 --> 01:05:13.180 |
|
The regularization is exactly the same. |
|
|
|
01:05:13.180 --> 01:05:15.455 |
|
You usually use L2 or L1 |
|
|
|
01:05:15.455 --> 01:05:16.900 |
|
regularization and they do the same |
|
|
|
01:05:16.900 --> 01:05:18.050 |
|
things that they did in Logistic |
|
|
|
01:05:18.050 --> 01:05:18.335 |
|
Regression. |
|
|
|
01:05:18.335 --> 01:05:19.890 |
|
They want the weights to be small, but |
|
|
|
01:05:19.890 --> 01:05:23.280 |
|
L1 is OK with some sparse |
|
|
|
01:05:23.280 --> 01:05:25.186 |
|
higher values, where L2 wants all the |
|
|
|
01:05:25.186 --> 01:05:25.850 |
|
weights to be small. |
|
|
|
01:05:27.820 --> 01:05:30.020 |
|
So L2 Linear Regression is pretty |
|
|
|
01:05:30.020 --> 01:05:31.540 |
|
easy to implement, it's just going to |
|
|
|
01:05:31.540 --> 01:05:37.020 |
|
be like in pseudocode or roughly exact |
|
|
|
01:05:37.020 --> 01:05:37.290 |
|
code. |
|
|
|
01:05:37.970 --> 01:05:41.530 |
|
It would just be pseudo inverse of X times Y. |
|
|
|
01:05:41.530 --> 01:05:42.190 |
|
That's it. |
|
|
|
01:05:42.190 --> 01:05:44.360 |
|
So W equals pseudo inverse of X times Y. |
|
|
|
01:05:45.070 --> 01:05:47.700 |
|
And if you add some regularization |
|
|
|
01:05:47.700 --> 01:05:50.080 |
|
term, you just have to add to X a little |
|
|
|
01:05:50.080 --> 01:05:51.830 |
|
bit and add on to that. |
|
|
|
01:05:51.830 --> 01:05:53.330 |
|
The target for W is 0. |
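|
NOTE |
A minimal NumPy sketch of this trick, assuming rows of X are examples and |
lam is the regularization strength (names are mine, not from the lecture): |
import numpy as np |
def fit_ridge(X, y, lam=1.0): |
    # Append sqrt(lam) * I rows to X, and zeros to y, so the |
    # regularizer becomes part of an ordinary least squares problem. |
    m = X.shape[1] |
    X_aug = np.vstack([X, np.sqrt(lam) * np.eye(m)]) |
    y_aug = np.concatenate([y, np.zeros(m)]) |
    # W equals the pseudo-inverse of X times Y. |
    return np.linalg.pinv(X_aug) @ y_aug |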
|
|
|
01:05:55.330 --> 01:05:55.940 |
|
And. |
|
|
|
01:05:56.740 --> 01:05:58.610 |
|
L1 regularization is actually a pretty |
|
|
|
01:05:58.610 --> 01:06:00.850 |
|
tricky optimization problem, but I |
|
|
|
01:06:00.850 --> 01:06:02.920 |
|
would just say you can also use the |
|
|
|
01:06:02.920 --> 01:06:04.620 |
|
library for either of these. |
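|
NOTE |
For example, in Python with scikit-learn (one such library; the import |
path is real, the data names X_train, y_train, X_test are placeholders): |
from sklearn.linear_model import Ridge, Lasso |
ridge = Ridge(alpha=1.0).fit(X_train, y_train)  # L2-regularized |
lasso = Lasso(alpha=0.1).fit(X_train, y_train)  # L1-regularized |
y_pred = ridge.predict(X_test) |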
|
|
|
01:06:04.620 --> 01:06:07.260 |
|
So, similar to Logistic Regression, |
|
|
|
01:06:07.260 --> 01:06:08.890 |
|
Linear Regression is ubiquitous. |
|
|
|
01:06:08.890 --> 01:06:10.470 |
|
No matter what programming language you're |
|
|
|
01:06:10.470 --> 01:06:12.190 |
|
using, there's going to be a library |
|
|
|
01:06:12.190 --> 01:06:14.310 |
|
that you can use to solve this problem. |
|
|
|
01:06:15.410 --> 01:06:18.517 |
|
So when you decide whether you should |
|
|
|
01:06:18.517 --> 01:06:20.400 |
|
implement something by hand, or know |
|
|
|
01:06:20.400 --> 01:06:22.202 |
|
how to implement it by hand, or whether |
|
|
|
01:06:22.202 --> 01:06:24.240 |
|
you should just use a model, it's kind |
|
|
|
01:06:24.240 --> 01:06:25.353 |
|
of a function of like. |
|
|
|
01:06:25.353 --> 01:06:27.360 |
|
How complicated is that optimization |
|
|
|
01:06:27.360 --> 01:06:30.200 |
|
problem? Also: |
|
|
|
01:06:30.200 --> 01:06:32.350 |
|
Is it like a really standard problem |
|
|
|
01:06:32.350 --> 01:06:34.320 |
|
where you're pretty much guaranteed |
|
|
|
01:06:34.320 --> 01:06:35.350 |
|
that, for your own |
|
|
|
01:06:36.270 --> 01:06:37.260 |
|
custom problem, |
|
|
|
01:06:37.260 --> 01:06:39.530 |
|
you'll be able to just use a library to |
|
|
|
01:06:39.530 --> 01:06:40.410 |
|
solve it. |
|
|
|
01:06:40.410 --> 01:06:41.920 |
|
Or is it something where there's a lot |
|
|
|
01:06:41.920 --> 01:06:43.380 |
|
of customization that's typically |
|
|
|
01:06:43.380 --> 01:06:45.170 |
|
involved, like for a Naive Bayes for |
|
|
|
01:06:45.170 --> 01:06:45.620 |
|
example. |
|
|
|
01:06:47.590 --> 01:06:48.560 |
|
And. |
|
|
|
01:06:49.670 --> 01:06:51.250 |
|
And that's basically it. |
|
|
|
01:06:51.250 --> 01:06:53.750 |
|
So in cases where the optimization is |
|
|
|
01:06:53.750 --> 01:06:55.750 |
|
hard and there's not much customization |
|
|
|
01:06:55.750 --> 01:06:57.680 |
|
to be done and it's a really well |
|
|
|
01:06:57.680 --> 01:07:00.140 |
|
established problem, then you might as |
|
|
|
01:07:00.140 --> 01:07:01.536 |
|
well just use a model that's out there |
|
|
|
01:07:01.536 --> 01:07:02.900 |
|
and not worry about the. |
|
|
|
01:07:03.800 --> 01:07:05.050 |
|
Details of optimization. |
|
|
|
01:07:07.130 --> 01:07:08.520 |
|
The one thing that's important to know |
|
|
|
01:07:08.520 --> 01:07:11.150 |
|
is that sometimes |
|
|
|
01:07:11.150 --> 01:07:12.480 |
|
it's helpful to transform the |
|
|
|
01:07:12.480 --> 01:07:13.050 |
|
variables. |
|
|
|
01:07:13.920 --> 01:07:15.520 |
|
So it might be that originally your |
|
|
|
01:07:15.520 --> 01:07:18.460 |
|
model is not very linearly predictive, |
|
|
|
01:07:18.460 --> 01:07:19.250 |
|
so. |
|
|
|
01:07:20.660 --> 01:07:24.330 |
|
Here I have a frequency of word usage |
|
|
|
01:07:24.330 --> 01:07:25.160 |
|
in Shakespeare. |
|
|
|
01:07:26.220 --> 01:07:29.270 |
|
And on the X axis is the rank of how |
|
|
|
01:07:29.270 --> 01:07:31.360 |
|
common that word is. |
|
|
|
01:07:31.360 --> 01:07:34.537 |
|
So the most common word occurs 14,000 |
|
|
|
01:07:34.537 --> 01:07:37.062 |
|
times, the second most common word |
|
|
|
01:07:37.062 --> 01:07:39.290 |
|
occurs 4000 times, the third most |
|
|
|
01:07:39.290 --> 01:07:41.190 |
|
common word occurs 2000 times. |
|
|
|
01:07:41.960 --> 01:07:42.732 |
|
And so on. |
|
|
|
01:07:42.732 --> 01:07:45.300 |
|
So it keeps on dropping by a big |
|
|
|
01:07:45.300 --> 01:07:46.490 |
|
fraction every time. |
|
|
|
01:07:47.420 --> 01:07:49.020 |
|
Most common word might be thy or |
|
|
|
01:07:49.020 --> 01:07:49.500 |
|
something. |
|
|
|
01:07:50.570 --> 01:07:53.864 |
|
So if I try to do a Linear fit to that, |
|
|
|
01:07:53.864 --> 01:07:55.620 |
|
it's not really a good fit. |
|
|
|
01:07:55.620 --> 01:07:57.670 |
|
It's obviously like not really lying |
|
|
|
01:07:57.670 --> 01:07:59.085 |
|
along those points at all. |
|
|
|
01:07:59.085 --> 01:08:01.220 |
|
It's way underestimating for the small |
|
|
|
01:08:01.220 --> 01:08:03.140 |
|
values and way overestimating where |
|
|
|
01:08:03.140 --> 01:08:06.230 |
|
the rank is high. Or, reverse that: |
|
|
|
01:08:06.230 --> 01:08:06.990 |
|
way underestimating. |
|
|
|
01:08:07.990 --> 01:08:09.810 |
|
It's underestimating both of those. |
|
|
|
01:08:09.810 --> 01:08:11.680 |
|
It's only overestimating this range. |
|
|
|
01:08:12.470 --> 01:08:13.010 |
|
And. |
|
|
|
01:08:13.880 --> 01:08:17.030 |
|
But if I like think about it, I can see |
|
|
|
01:08:17.030 --> 01:08:18.450 |
|
that there's some kind of logarithmic |
|
|
|
01:08:18.450 --> 01:08:20.350 |
|
behavior here, where it's always |
|
|
|
01:08:20.350 --> 01:08:22.840 |
|
decreasing by some fraction rather than |
|
|
|
01:08:22.840 --> 01:08:24.540 |
|
decreasing by a constant amount. |
|
|
|
01:08:25.830 --> 01:08:28.809 |
|
And so if I replot this as a log log |
|
|
|
01:08:28.810 --> 01:08:31.100 |
|
plot where I have the log rank on the X |
|
|
|
01:08:31.100 --> 01:08:33.940 |
|
axis and the log number of appearances. |
|
|
|
01:08:34.610 --> 01:08:36.000 |
|
On the Y axis. |
|
|
|
01:08:36.000 --> 01:08:39.680 |
|
Then I have this nice Linear behavior |
|
|
|
01:08:39.680 --> 01:08:42.030 |
|
and so now I can fit a linear model to |
|
|
|
01:08:42.030 --> 01:08:43.000 |
|
my log log plot. |
|
|
|
01:08:43.860 --> 01:08:47.040 |
|
And then I can in order to do that, I |
|
|
|
01:08:47.040 --> 01:08:49.380 |
|
would just then have essentially. |
|
|
|
01:08:52.910 --> 01:08:56.150 |
|
I would say like let's say X hat. |
|
|
|
01:08:57.550 --> 01:09:01.610 |
|
Equals log of X where X is the rank. |
|
|
|
01:09:03.380 --> 01:09:06.800 |
|
And then Y hat equals. |
|
|
|
01:09:07.650 --> 01:09:10.690 |
|
W transpose or here there's only One X, |
|
|
|
01:09:10.690 --> 01:09:13.000 |
|
but leave it in vector format anyway. |
|
|
|
01:09:13.000 --> 01:09:14.770 |
|
W transpose X hat. |
|
|
|
01:09:17.320 --> 01:09:19.950 |
|
And then Y, which is the original thing |
|
|
|
01:09:19.950 --> 01:09:22.060 |
|
that I wanted to Predict, is just the |
|
|
|
01:09:22.060 --> 01:09:23.910 |
|
exponential of Y hat. |
|
|
|
01:09:25.030 --> 01:09:28.070 |
|
Since Y was the. |
|
|
|
01:09:29.110 --> 01:09:31.750 |
|
Since Y hat is the log Frequency. |
|
|
|
01:09:33.680 --> 01:09:35.970 |
|
So I can just learn this Linear model, |
|
|
|
01:09:35.970 --> 01:09:37.870 |
|
but then I can easily transform the |
|
|
|
01:09:37.870 --> 01:09:38.620 |
|
variables. |
|
|
|
01:09:39.290 --> 01:09:42.406 |
|
Get my prediction of the log number of |
|
|
|
01:09:42.406 --> 01:09:43.870 |
|
appearances and then transform that |
|
|
|
01:09:43.870 --> 01:09:47.350 |
|
back into the like regular number of |
|
|
|
01:09:47.350 --> 01:09:47.760 |
|
appearances. |
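|
NOTE |
A small sketch of this transform-then-fit recipe in NumPy; the first three |
counts reuse the 14,000 / 4,000 / 2,000 figures from the lecture, the rest |
are made up for illustration: |
import numpy as np |
counts = np.array([14000.0, 4000.0, 2000.0, 1200.0, 800.0, 550.0]) |
rank = np.arange(1, len(counts) + 1) |
x_hat = np.log(rank)     # transformed feature: log rank |
y_hat = np.log(counts)   # transformed target: log frequency |
w, b = np.polyfit(x_hat, y_hat, 1)  # linear fit in log-log space |
# Transform the prediction back to a regular number of appearances. |
pred_rank_10 = np.exp(w * np.log(10) + b) |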
|
|
|
01:09:53.160 --> 01:09:55.890 |
|
It's also worth noting that if you are |
|
|
|
01:09:55.890 --> 01:09:58.460 |
|
Minimizing a squared loss. |
|
|
|
01:09:59.120 --> 01:10:01.760 |
|
Then you're going to be |
|
|
|
01:10:01.760 --> 01:10:04.860 |
|
sensitive to outliers, as in this |
|
|
|
01:10:04.860 --> 01:10:07.240 |
|
example from the textbook, and a |
|
|
|
01:10:07.240 --> 01:10:08.820 |
|
lot of these plots are examples from |
|
|
|
01:10:08.820 --> 01:10:09.960 |
|
the Forsyth textbook. |
|
|
|
01:10:12.120 --> 01:10:13.286 |
|
I've got these points here. |
|
|
|
01:10:13.286 --> 01:10:15.379 |
|
I've got the exact same points here, |
|
|
|
01:10:15.380 --> 01:10:18.290 |
|
but added one outlying point that's |
|
|
|
01:10:18.290 --> 01:10:19.050 |
|
way off the line. |
|
|
|
01:10:19.890 --> 01:10:22.360 |
|
And you can see that totally messed up |
|
|
|
01:10:22.360 --> 01:10:23.206 |
|
my fit. |
|
|
|
01:10:23.206 --> 01:10:24.990 |
|
Like, now that fit hardly goes through |
|
|
|
01:10:24.990 --> 01:10:28.040 |
|
anything, just from that one point. |
|
|
|
01:10:28.040 --> 01:10:29.020 |
|
That's way off base. |
|
|
|
01:10:30.070 --> 01:10:32.763 |
|
And so that's really a problem with the |
|
|
|
01:10:32.763 --> 01:10:33.149 |
|
optimization. |
|
|
|
01:10:33.149 --> 01:10:35.930 |
|
With the optimization objective, if I |
|
|
|
01:10:35.930 --> 01:10:38.362 |
|
have a squared error, then I really, |
|
|
|
01:10:38.362 --> 01:10:40.150 |
|
really, really hate points that are far |
|
|
|
01:10:40.150 --> 01:10:42.670 |
|
from the line, so that one point is |
|
|
|
01:10:42.670 --> 01:10:44.620 |
|
able to pull this whole line towards |
|
|
|
01:10:44.620 --> 01:10:46.630 |
|
it, because this squared penalty is |
|
|
|
01:10:46.630 --> 01:10:48.750 |
|
just so big if it's that far away. |
|
|
|
01:10:49.950 --> 01:10:51.980 |
|
But if I have an L1, if I'm Minimizing |
|
|
|
01:10:51.980 --> 01:10:55.380 |
|
the L1 difference, then this will |
|
|
|
01:10:55.380 --> 01:10:55.920 |
|
not happen. |
|
|
|
01:10:55.920 --> 01:10:57.900 |
|
I would end up with roughly the same |
|
|
|
01:10:57.900 --> 01:10:58.680 |
|
plot. |
|
|
|
01:10:59.330 --> 01:11:02.380 |
|
Or the other way of dealing with it is |
|
|
|
01:11:02.380 --> 01:11:05.960 |
|
to do something like M-estimation, |
|
|
|
01:11:05.960 --> 01:11:08.670 |
|
where I'm also estimating a weight for |
|
|
|
01:11:08.670 --> 01:11:10.310 |
|
each point of how well it fits into the |
|
|
|
01:11:10.310 --> 01:11:12.270 |
|
model, and then at the end of that |
|
|
|
01:11:12.270 --> 01:11:13.730 |
|
estimation this will get very little |
|
|
|
01:11:13.730 --> 01:11:15.250 |
|
weight and then I'll also end up with |
|
|
|
01:11:15.250 --> 01:11:16.120 |
|
the original line. |
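|
NOTE |
A rough sketch of that reweighting idea, as iteratively reweighted least |
squares with a Huber-style weight (the threshold k and iteration count are |
arbitrary choices, not from the lecture): |
import numpy as np |
def robust_fit(X, y, k=1.0, iters=10): |
    w = np.linalg.pinv(X) @ y  # start from the plain least squares fit |
    for _ in range(iters): |
        r = np.abs(X @ w - y)  # residual for each point |
        # Points far from the line get weight k/|r| instead of 1. |
        pw = np.where(r <= k, 1.0, k / np.maximum(r, 1e-12)) |
        s = np.sqrt(pw) |
        # Weighted least squares: scale rows of X and y, then solve. |
        w = np.linalg.pinv(s[:, None] * X) @ (s * y) |
    return w |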
|
|
|
01:11:17.220 --> 01:11:19.270 |
|
So I will talk more about or I plan |
|
|
|
01:11:19.270 --> 01:11:21.880 |
|
anyway to talk more about like robust |
|
|
|
01:11:21.880 --> 01:11:24.480 |
|
fitting later in the semester, but I |
|
|
|
01:11:24.480 --> 01:11:25.790 |
|
just wanted to make you aware of this |
|
|
|
01:11:25.790 --> 01:11:26.180 |
|
issue. |
|
|
|
01:11:32.600 --> 01:11:34.260 |
|
Linear. |
|
|
|
01:11:34.260 --> 01:11:34.630 |
|
OK. |
|
|
|
01:11:34.630 --> 01:11:37.170 |
|
So just comparing these algorithms |
|
|
|
01:11:37.170 --> 01:11:37.700 |
|
we've seen. |
|
|
|
01:11:38.480 --> 01:11:41.635 |
|
So, between Linear Regression, KNN, |
|
|
|
01:11:41.635 --> 01:11:42.770 |
|
and Naive Bayes. |
|
|
|
01:11:42.770 --> 01:11:45.660 |
|
KNN is the most nonlinear of them, so |
|
|
|
01:11:45.660 --> 01:11:47.530 |
|
you can fit nonlinear functions with |
|
|
|
01:11:47.530 --> 01:11:47.850 |
|
KNN. |
|
|
|
01:11:49.240 --> 01:11:50.880 |
|
Linear Regression is the only one that |
|
|
|
01:11:50.880 --> 01:11:51.665 |
|
can extrapolate. |
|
|
|
01:11:51.665 --> 01:11:54.250 |
|
So for a function like this, like, KNN |
|
|
|
01:11:54.250 --> 01:11:56.290 |
|
and Naive Bayes will still give me some |
|
|
|
01:11:56.290 --> 01:11:58.230 |
|
value that's within the range of values |
|
|
|
01:11:58.230 --> 01:11:59.350 |
|
that I have observed. |
|
|
|
01:11:59.350 --> 01:12:02.330 |
|
So if I have a frequency of like 5 or |
|
|
|
01:12:02.330 --> 01:12:03.090 |
|
25. |
|
|
|
01:12:04.000 --> 01:12:06.620 |
|
KNN is still going to give me like a |
|
|
|
01:12:06.620 --> 01:12:08.716 |
|
Temperature that's in this range or in |
|
|
|
01:12:08.716 --> 01:12:09.209 |
|
this range. |
|
|
|
01:12:10.260 --> 01:12:11.960 |
|
Where Linear Regression can |
|
|
|
01:12:11.960 --> 01:12:13.863 |
|
extrapolate, it can actually make a |
|
|
|
01:12:13.863 --> 01:12:15.730 |
|
better like, assuming that it continues |
|
|
|
01:12:15.730 --> 01:12:17.320 |
|
to be a Linear relationship, a better |
|
|
|
01:12:17.320 --> 01:12:19.230 |
|
prediction for the extreme values that |
|
|
|
01:12:19.230 --> 01:12:20.380 |
|
were not observed in Training. |
|
|
|
01:12:22.370 --> 01:12:26.670 |
|
Linear Regression is compared to. |
|
|
|
01:12:27.970 --> 01:12:31.460 |
|
Compared to KNN, Linear Regression is |
|
|
|
01:12:31.460 --> 01:12:33.225 |
|
higher bias and lower variance. |
|
|
|
01:12:33.225 --> 01:12:35.140 |
|
It's a more constrained model than KNN |
|
|
|
01:12:35.140 --> 01:12:37.816 |
|
because it's constrained to this Linear |
|
|
|
01:12:37.816 --> 01:12:39.680 |
|
model, where KNN is nonlinear. |
|
|
|
01:12:41.140 --> 01:12:43.040 |
|
Linear Regression is more useful to |
|
|
|
01:12:43.040 --> 01:12:46.439 |
|
explain a relationship than KNN or |
|
|
|
01:12:46.440 --> 01:12:47.220 |
|
Naive Bayes. |
|
|
|
01:12:47.220 --> 01:12:49.530 |
|
You can see things like well as the |
|
|
|
01:12:49.530 --> 01:12:51.550 |
|
frequency increases by one then my |
|
|
|
01:12:51.550 --> 01:12:53.280 |
|
Temperature tends to increase by three |
|
|
|
01:12:53.280 --> 01:12:54.325 |
|
or whatever it is. |
|
|
|
01:12:54.325 --> 01:12:56.420 |
|
So you get like a very simple |
|
|
|
01:12:56.420 --> 01:12:57.960 |
|
explanation that relates your |
|
|
|
01:12:57.960 --> 01:12:59.030 |
|
features to your data. |
|
|
|
01:12:59.030 --> 01:13:00.770 |
|
So that's why you do like a trend fit |
|
|
|
01:13:00.770 --> 01:13:01.650 |
|
in your Excel plot. |
|
|
|
01:13:04.020 --> 01:13:05.930 |
|
Linear compared to Gaussian Naive Bayes, |
|
|
|
01:13:05.930 --> 01:13:08.485 |
|
Linear Regression is more powerful in |
|
|
|
01:13:08.485 --> 01:13:10.700 |
|
the sense that it should always fit the |
|
|
|
01:13:10.700 --> 01:13:12.350 |
|
Training data better because it has |
|
|
|
01:13:12.350 --> 01:13:13.990 |
|
more freedom to adjust its |
|
|
|
01:13:13.990 --> 01:13:14.700 |
|
coefficients. |
|
|
|
01:13:16.340 --> 01:13:17.820 |
|
But it doesn't necessarily mean that |
|
|
|
01:13:17.820 --> 01:13:19.030 |
|
will fit the test data better. |
|
|
|
01:13:19.030 --> 01:13:20.980 |
|
So if your data is really Gaussian, |
|
|
|
01:13:20.980 --> 01:13:22.830 |
|
then Gaussian Naive Bayes would be the best |
|
|
|
01:13:22.830 --> 01:13:23.510 |
|
thing you could do. |
|
|
|
01:13:28.290 --> 01:13:34.480 |
|
So the key assumption is basically that Y can be |
|
|
|
01:13:34.480 --> 01:13:35.980 |
|
predicted by a Linear combination of |
|
|
|
01:13:35.980 --> 01:13:36.590 |
|
features. |
|
|
|
01:13:37.570 --> 01:13:38.354 |
|
You can. |
|
|
|
01:13:38.354 --> 01:13:40.450 |
|
You want to use it if you want to |
|
|
|
01:13:40.450 --> 01:13:42.380 |
|
extrapolate or visualize or quantify |
|
|
|
01:13:42.380 --> 01:13:44.903 |
|
correlations or relationships, or if |
|
|
|
01:13:44.903 --> 01:13:46.710 |
|
you have many features, which can make it a very |
|
|
|
01:13:46.710 --> 01:13:47.620 |
|
powerful predictor. |
|
|
|
01:13:48.580 --> 01:13:50.410 |
|
And you don't want to use it obviously |
|
|
|
01:13:50.410 --> 01:13:51.860 |
|
if the relationships are very nonlinear |
|
|
|
01:13:51.860 --> 01:13:53.540 |
|
or else you need to apply a |
|
|
|
01:13:53.540 --> 01:13:54.700 |
|
transformation first. |
|
|
|
01:13:56.520 --> 01:13:58.850 |
|
I'll be done in just one second. |
|
|
|
01:13:59.270 --> 01:14:02.490 |
|
And so these are used so widely that I |
|
|
|
01:14:02.490 --> 01:14:03.420 |
|
couldn't think of. |
|
|
|
01:14:03.420 --> 01:14:05.480 |
|
I felt like coming up with an example |
|
|
|
01:14:05.480 --> 01:14:07.230 |
|
of when they're used would not |
|
|
|
01:14:07.230 --> 01:14:10.010 |
|
be the right thing to do, |
|
|
|
01:14:10.010 --> 01:14:11.940 |
|
because they're used millions of times, |
|
|
|
01:14:11.940 --> 01:14:14.360 |
|
like almost all the time you're doing |
|
|
|
01:14:14.360 --> 01:14:16.970 |
|
Linear Regression or Linear or Logistic |
|
|
|
01:14:16.970 --> 01:14:17.550 |
|
Regression. |
|
|
|
01:14:18.510 --> 01:14:20.300 |
|
If you have a neural network, the last |
|
|
|
01:14:20.300 --> 01:14:22.130 |
|
layer is a Logistic regressor. |
|
|
|
01:14:22.130 --> 01:14:24.240 |
|
So they're used like really, really widely. |
|
|
|
01:14:24.240 --> 01:14:24.735 |
|
They're the. |
|
|
|
01:14:24.735 --> 01:14:26.080 |
|
They're the bread and butter of machine |
|
|
|
01:14:26.080 --> 01:14:26.410 |
|
learning. |
|
|
|
01:14:28.310 --> 01:14:29.010 |
|
I'm going to. |
|
|
|
01:14:29.010 --> 01:14:30.480 |
|
I'll Recap this at the start of the |
|
|
|
01:14:30.480 --> 01:14:31.040 |
|
next class. |
|
|
|
01:14:31.820 --> 01:14:34.715 |
|
And I'll talk about, I'll go through |
|
|
|
01:14:34.715 --> 01:14:36.110 |
|
the review of homework one at the |
|
|
|
01:14:36.110 --> 01:14:37.530 |
|
start of the next class as well. |
|
|
|
01:14:37.530 --> 01:14:39.840 |
|
This is just basically information, |
|
|
|
01:14:39.840 --> 01:14:41.560 |
|
summary of information that's already |
|
|
|
01:14:41.560 --> 01:14:42.539 |
|
given to you in the homework |
|
|
|
01:14:42.540 --> 01:14:42.880 |
|
assignment. |
|
|
|
01:14:44.960 --> 01:14:45.315 |
|
Alright. |
|
|
|
01:14:45.315 --> 01:14:47.160 |
|
So next week I'll just go through that |
|
|
|
01:14:47.160 --> 01:14:49.610 |
|
review and then I'll talk about trees |
|
|
|
01:14:49.610 --> 01:14:51.390 |
|
and I'll talk about Ensembles. |
|
|
|
01:14:51.390 --> 01:14:54.580 |
|
And remember that your homework one is |
|
|
|
01:14:54.580 --> 01:14:56.620 |
|
due on February 6, so a week from |
|
|
|
01:14:56.620 --> 01:14:57.500 |
|
Monday. |
|
|
|
01:14:57.500 --> 01:14:58.160 |
|
Thank you. |
|
|
|
01:15:03.740 --> 01:15:04.530 |
|
Question about. |
|
|
|
01:15:06.630 --> 01:15:10.140 |
|
I observed the Training data and I |
|
|
|
01:15:10.140 --> 01:15:13.110 |
|
think this occurrence is not simply one |
|
|
|
01:15:13.110 --> 01:15:13.770 |
|
or zero. |
|
|
|
01:15:13.770 --> 01:15:16.570 |
|
So how should we count the occurrence |
|
|
|
01:15:16.570 --> 01:15:17.940 |
|
on each of the? |
|
|
|
01:15:20.610 --> 01:15:24.257 |
|
So first you have to threshold it, |
|
|
|
01:15:24.257 --> 01:15:28.690 |
|
so you say like X train equals |
|
|
|
01:15:29.340 --> 01:15:30.810 |
|
X train |
|
|
|
01:15:31.780 --> 01:15:33.580 |
|
greater than 0.5. |
|
|
|
01:15:34.750 --> 01:15:35.896 |
|
So that's what I mean by thresholding |
|
|
|
01:15:35.896 --> 01:15:38.450 |
|
and now this will be zeros and |
|
|
|
01:15:38.450 --> 01:15:40.820 |
|
ones and so now you can count. |
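|
NOTE |
As a sketch, assuming x_train holds grayscale values in [0, 1] (the |
variable name and shape are placeholders): |
import numpy as np |
x_train = np.random.rand(1000, 784)   # stand-in for the real training data |
x_train = x_train > 0.5               # threshold: now strictly 0s and 1s |
counts = x_train.sum(axis=0)          # occurrence count for each feature |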
|
|
|
01:15:42.360 --> 01:15:44.530 |
|
So that's how we. |
|
|
|
01:15:46.270 --> 01:15:48.550 |
|
Now you can count it, yeah? |
|
|
|
01:15:50.090 --> 01:15:51.270 |
|
Hi, I'm not sure if. |
|
|
|
01:16:01.130 --> 01:16:01.790 |
|
So. |
|
|
|
01:16:03.040 --> 01:16:05.420 |
|
In terms of so if you think it's the |
|
|
|
01:16:05.420 --> 01:16:07.347 |
|
case that there's like a lot of. |
|
|
|
01:16:07.347 --> 01:16:09.089 |
|
So first, if you think there's a lot of |
|
|
|
01:16:09.090 --> 01:16:11.500 |
|
noisy features that aren't very useful |
|
|
|
01:16:11.500 --> 01:16:13.200 |
|
and you have limited data, then L1 |
|
|
|
01:16:13.200 --> 01:16:15.400 |
|
might be better because it will be |
|
|
|
01:16:15.400 --> 01:16:17.480 |
|
focused more on a few Useful features. |
|
|
|
01:16:18.780 --> 01:16:21.150 |
|
The other is that if you have. |
|
|
|
01:16:23.080 --> 01:16:24.960 |
|
If you want to select what are the most |
|
|
|
01:16:24.960 --> 01:16:26.820 |
|
important features, then L1 is |
|
|
|
01:16:26.820 --> 01:16:27.450 |
|
better. |
|
|
|
01:16:27.450 --> 01:16:28.750 |
|
It can do it and L2 can't. |
|
|
|
01:16:30.170 --> 01:16:32.650 |
|
Otherwise, you often want to use L2 |
|
|
|
01:16:32.650 --> 01:16:34.370 |
|
just because the optimization is a lot |
|
|
|
01:16:34.370 --> 01:16:34.940 |
|
faster. |
|
|
|
01:16:34.940 --> 01:16:37.580 |
|
L1 is a harder optimization problem |
|
|
|
01:16:37.580 --> 01:16:39.440 |
|
and it will take a lot longer. |
|
|
|
01:16:40.190 --> 01:16:41.840 |
|
From what I'm understanding, L1 is |
|
|
|
01:16:41.840 --> 01:16:43.210 |
|
only better when there are limited |
|
|
|
01:16:43.210 --> 01:16:44.150 |
|
features and limited. |
|
|
|
01:16:45.210 --> 01:16:48.160 |
|
If you think that some features are |
|
|
|
01:16:48.160 --> 01:16:49.850 |
|
very valuable and there's a lot of |
|
|
|
01:16:49.850 --> 01:16:51.396 |
|
other weak features, then it can give |
|
|
|
01:16:51.396 --> 01:16:52.630 |
|
you a better result. |
|
|
|
01:16:53.350 --> 01:16:53.870 |
|
|
|
|
|
01:16:54.490 --> 01:16:56.260 |
|
Or if you want to do feature selection. |
|
|
|
01:16:56.260 --> 01:16:59.300 |
|
But in most practical cases you will |
|
|
|
01:16:59.300 --> 01:17:01.450 |
|
get fairly similar accuracy from the |
|
|
|
01:17:01.450 --> 01:17:01.800 |
|
two. |
|
|
|
01:17:05.690 --> 01:17:07.740 |
|
Y is equal to 1 in this case would be. |
|
|
|
01:17:14.630 --> 01:17:15.660 |
|
If it's binary. |
|
|
|
01:17:17.460 --> 01:17:20.820 |
|
So if it's binary, then the score of Y: |
|
|
|
01:17:20.820 --> 01:17:24.030 |
|
the score for 0 |
|
|
|
01:17:24.700 --> 01:17:28.010 |
|
is the negative of the score for one. |
|
|
|
01:17:29.240 --> 01:17:31.730 |
|
So if it's binary then these relate |
|
|
|
01:17:31.730 --> 01:17:34.080 |
|
because this would be e to the W |
|
|
|
01:17:34.080 --> 01:17:34.690 |
|
transpose X. |
|
|
|
01:17:36.590 --> 01:17:40.100 |
|
Over e to the W transpose X |
|
|
|
01:17:40.100 --> 01:17:41.360 |
|
plus... wait. |
|
|
|
01:17:41.360 --> 01:17:42.130 |
|
Am I doing that right? |
|
|
|
01:17:49.990 --> 01:17:51.077 |
|
Sorry, I forgot. |
|
|
|
01:17:51.077 --> 01:17:52.046 |
|
I can't explain. |
|
|
|
01:17:52.046 --> 01:17:54.050 |
|
I forgot how to explain like why this |
|
|
|
01:17:54.050 --> 01:17:56.059 |
|
is the same under the binary case. |
|
|
|
01:17:56.060 --> 01:17:58.633 |
|
OK, but they would be the same |
|
|
|
01:17:58.633 --> 01:17:59.678 |
|
under the binary case. |
|
|
|
01:17:59.678 --> 01:18:01.010 |
|
Yeah, they're still there. |
|
|
|
01:18:01.010 --> 01:18:02.440 |
|
It ends up working out to be the same |
|
|
|
01:18:02.440 --> 01:18:02.990 |
|
equation. |
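|
NOTE |
The identity being gestured at is standard two-class softmax algebra (not |
from the slides): with binary scores s_1 = w^\top x and s_0 = -w^\top x, |
P(y = 1 \mid x) = \frac{e^{s_1}}{e^{s_1} + e^{s_0}} |
  = \frac{1}{1 + e^{-(s_1 - s_0)}} = \frac{1}{1 + e^{-2 w^\top x}}, |
which is the Logistic Regression sigmoid applied to 2 w^\top x. |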
|
|
|
01:18:03.420 --> 01:18:04.580 |
|
You're welcome. |
|
|
|
01:18:17.130 --> 01:18:17.650 |
|
Convert this. |
|
|
|
01:18:38.230 --> 01:18:39.650 |
|
So you. |
|
|
|
01:18:40.770 --> 01:18:41.750 |
|
I'm not sure if I understood. |
|
|
|
01:18:41.750 --> 01:18:43.950 |
|
You said from audio you want to do |
|
|
|
01:18:43.950 --> 01:18:44.360 |
|
what? |
|
|
|
01:18:45.560 --> 01:18:48.660 |
|
I'm sitting on a beach this sentence. |
|
|
|
01:18:49.440 --> 01:18:51.700 |
|
Or you are sitting OK. |
|
|
|
01:18:52.980 --> 01:18:53.450 |
|
OK. |
|
|
|
01:18:54.820 --> 01:18:57.130 |
|
My model or app should convert it as a. |
|
|
|
01:19:00.490 --> 01:19:01.280 |
|
So that person. |
|
|
|
01:19:05.870 --> 01:19:08.090 |
|
You want to generate a video from a |
|
|
|
01:19:08.090 --> 01:19:08.840 |
|
speech. |
|
|
|
01:19:12.670 --> 01:19:12.920 |
|
Right. |
|
|
|
01:19:12.920 --> 01:19:14.760 |
|
That's like really, really complicated. |
|
|
|
01:19:16.390 --> 01:19:17.070 |
|
So. |
|
|
|
|