|
WEBVTT Kind: captions; Language: en-US |
|
|
|
NOTE |
|
Created on 2024-02-07T20:53:55.0930204Z by ClassTranscribe |
|
|
|
00:01:12.090 --> 00:01:13.230 |
|
Alright, good morning everybody. |
|
|
|
00:01:15.530 --> 00:01:20.650 |
|
So I saw in response to the feedback, I |
|
|
|
00:01:20.650 --> 00:01:22.790 |
|
got some feedback on the course.
|
|
|
00:01:23.690 --> 00:01:26.200 |
|
Overall, there's of course a mix of |
|
|
|
00:01:26.200 --> 00:01:28.650 |
|
responses, but on average people
|
|
|
00:01:28.650 --> 00:01:30.810 |
|
feel like it's moving a little fast and |
|
|
|
00:01:30.810 --> 00:01:33.040 |
|
also that it's challenging.
|
|
|
00:01:33.980 --> 00:01:37.350 |
|
So I wanted to take some time to like |
|
|
|
00:01:37.350 --> 00:01:39.430 |
|
consolidate and to talk about some of |
|
|
|
00:01:39.430 --> 00:01:40.890 |
|
the most important points
|
|
|
00:01:41.790 --> 00:01:45.160 |
|
that we've covered so far, and then
|
|
|
00:01:45.160 --> 00:01:46.930 |
|
I'll do that for the first half of the |
|
|
|
00:01:46.930 --> 00:01:49.280 |
|
lecture, and then I'm also going to go |
|
|
|
00:01:49.280 --> 00:01:53.105 |
|
through a detailed example using code |
|
|
|
00:01:53.105 --> 00:01:55.200 |
|
to solve a particular problem. |
|
|
|
00:01:59.460 --> 00:02:00.000 |
|
So. |
|
|
|
00:02:00.690 --> 00:02:01.340 |
|
Let me see. |
|
|
|
00:02:04.270 --> 00:02:04.800 |
|
All right. |
|
|
|
00:02:06.140 --> 00:02:09.360 |
|
So this is mostly the same as a slide
|
|
|
00:02:09.360 --> 00:02:10.750 |
|
that I showed in the intro. |
|
|
|
00:02:10.750 --> 00:02:12.750 |
|
This is machine learning in general. |
|
|
|
00:02:12.750 --> 00:02:15.800 |
|
You've got some raw features and so far |
|
|
|
00:02:15.800 --> 00:02:17.430 |
|
we've covered cases where we have
|
|
|
00:02:17.430 --> 00:02:20.190 |
|
discrete and continuous values and also |
|
|
|
00:02:20.190 --> 00:02:21.780 |
|
some simple images in terms of the |
|
|
|
00:02:21.780 --> 00:02:22.670 |
|
MNIST characters.
|
|
|
00:02:23.600 --> 00:02:25.590 |
|
And we have some kind of. |
|
|
|
00:02:25.590 --> 00:02:28.000 |
|
Sometimes we process those features in |
|
|
|
00:02:28.000 --> 00:02:29.740 |
|
some way we have like what's called an |
|
|
|
00:02:29.740 --> 00:02:31.970 |
|
encoder or we have feature transforms. |
|
|
|
00:02:32.790 --> 00:02:34.290 |
|
We've only gotten into that a little |
|
|
|
00:02:34.290 --> 00:02:34.500 |
|
bit. |
|
|
|
00:02:35.210 --> 00:02:38.410 |
|
In terms of the decision trees, which |
|
|
|
00:02:38.410 --> 00:02:39.670 |
|
you can view as a kind of feature |
|
|
|
00:02:39.670 --> 00:02:40.530 |
|
transformation. |
|
|
|
00:02:41.290 --> 00:02:43.440 |
|
And feature selection using L1-regularized
|
|
|
00:02:43.440 --> 00:02:44.350 |
|
logistic regression.
|
|
|
00:02:44.980 --> 00:02:47.030 |
|
So the job of the encoder is to take |
|
|
|
00:02:47.030 --> 00:02:48.690 |
|
your raw features and turn them into |
|
|
|
00:02:48.690 --> 00:02:50.570 |
|
something that's more easily. |
|
|
|
00:02:51.340 --> 00:02:53.270 |
|
That more easily yields a predictor. |
|
|
|
00:02:54.580 --> 00:02:56.180 |
|
Then you have a decoder, the thing that
|
|
|
00:02:56.180 --> 00:02:58.290 |
|
predicts from your encoded features, |
|
|
|
00:02:58.290 --> 00:03:00.510 |
|
and we've covered pretty much all the |
|
|
|
00:03:00.510 --> 00:03:02.550 |
|
methods here except for SVM, which |
|
|
|
00:03:02.550 --> 00:03:04.910 |
|
we're doing next week. |
|
|
|
00:03:05.830 --> 00:03:08.110 |
|
And so we've got a linear regressor, a
|
|
|
00:03:08.110 --> 00:03:09.952 |
|
logistic regressor, nearest neighbor |
|
|
|
00:03:09.952 --> 00:03:11.430 |
|
and probabilistic models. |
|
|
|
00:03:11.430 --> 00:03:13.035 |
|
Now there's lots of different kinds of |
|
|
|
00:03:13.035 --> 00:03:13.650 |
|
probabilistic models. |
|
|
|
00:03:13.650 --> 00:03:15.930 |
|
We only talked about a couple; one of
|
|
|
00:03:15.930 --> 00:03:17.420 |
|
them is Naive Bayes.
|
|
|
00:03:18.750 --> 00:03:20.350 |
|
But still, we've touched on this. |
|
|
|
00:03:21.360 --> 00:03:22.690 |
|
And then you have a prediction and |
|
|
|
00:03:22.690 --> 00:03:24.047 |
|
there's lots of different things you |
|
|
|
00:03:24.047 --> 00:03:24.585 |
|
can predict. |
|
|
|
00:03:24.585 --> 00:03:26.480 |
|
You can predict a category or a |
|
|
|
00:03:26.480 --> 00:03:28.050 |
|
continuous value, which is what we've |
|
|
|
00:03:28.050 --> 00:03:29.205 |
|
talked about so far.
|
|
|
00:03:29.205 --> 00:03:31.420 |
|
You could also be generating clusters |
|
|
|
00:03:31.420 --> 00:03:35.595 |
|
or pixel labels or poses or other kinds |
|
|
|
00:03:35.595 --> 00:03:36.460 |
|
of predictions. |
|
|
|
00:03:37.600 --> 00:03:40.275 |
|
And in training, you've got some data |
|
|
|
00:03:40.275 --> 00:03:42.280 |
|
and target labels, and you're trying to |
|
|
|
00:03:42.280 --> 00:03:44.060 |
|
update your model's parameters to
|
|
|
00:03:44.060 --> 00:03:46.200 |
|
get the best prediction possible, where |
|
|
|
00:03:46.200 --> 00:03:48.434 |
|
you want to really not only maximize |
|
|
|
00:03:48.434 --> 00:03:50.619 |
|
your prediction on the training data, |
|
|
|
00:03:50.620 --> 00:03:52.970 |
|
but also, more importantly, to
|
|
|
00:03:52.970 --> 00:03:55.520 |
|
minimize your expected error on the |
|
|
|
00:03:55.520 --> 00:03:56.170 |
|
test data. |
|
|
|
00:03:59.950 --> 00:04:02.520 |
|
So one important part of machine |
|
|
|
00:04:02.520 --> 00:04:04.255 |
|
learning is learning a model. |
|
|
|
00:04:04.255 --> 00:04:04.650 |
|
So. |
|
|
|
00:04:05.430 --> 00:04:08.385 |
|
Here this is like this kind of. |
|
|
|
00:04:08.385 --> 00:04:10.610 |
|
This function, in one form or another |
|
|
|
00:04:10.610 --> 00:04:12.140 |
|
will be part of every machine learning |
|
|
|
00:04:12.140 --> 00:04:14.180 |
|
algorithm, where you're trying to fit
|
|
|
00:04:14.180 --> 00:04:17.720 |
|
some model f(x; θ),
|
|
|
00:04:18.360 --> 00:04:20.500 |
|
where x is the raw features,
|
|
|
00:04:21.890 --> 00:04:23.600 |
|
theta are the parameters that you're
|
|
|
00:04:23.600 --> 00:04:25.440 |
|
going to optimize to fit your
|
|
|
00:04:25.440 --> 00:04:26.840 |
|
model.
|
|
|
00:04:27.940 --> 00:04:31.080 |
|
And y is the prediction that you're
|
|
|
00:04:31.080 --> 00:04:31.750 |
|
trying to make? |
|
|
|
00:04:31.750 --> 00:04:33.359 |
|
So you're given. |
|
|
|
00:04:33.360 --> 00:04:35.260 |
|
In supervised learning you're given |
|
|
|
00:04:35.260 --> 00:04:40.100 |
|
pairs (x, y) of some features and labels.
|
|
|
00:04:40.990 --> 00:04:42.430 |
|
And then you're trying to solve for |
|
|
|
00:04:42.430 --> 00:04:45.570 |
|
parameters that minimize your loss,
|
|
|
00:04:45.570 --> 00:04:49.909 |
|
and your loss is an
|
|
|
00:04:49.910 --> 00:04:51.628 |
|
objective function that you're trying |
|
|
|
00:04:51.628 --> 00:04:54.130 |
|
to reduce, and it usually has two |
|
|
|
00:04:54.130 --> 00:04:54.860 |
|
components. |
|
|
|
00:04:54.860 --> 00:04:56.550 |
|
One component is that you want your
|
|
|
00:04:56.550 --> 00:04:59.140 |
|
predictions on the training data to be |
|
|
|
00:04:59.140 --> 00:05:00.490 |
|
as good as possible. |
|
|
|
00:05:00.490 --> 00:05:03.066 |
|
For example, you might say that you |
|
|
|
00:05:03.066 --> 00:05:05.525 |
|
want to maximize the probability of |
|
|
|
00:05:05.525 --> 00:05:07.210 |
|
your labels given your features. |
|
|
|
00:05:07.840 --> 00:05:10.930 |
|
Or, equivalently, you want to minimize |
|
|
|
00:05:10.930 --> 00:05:12.795 |
|
the negative sum of log likelihood of |
|
|
|
00:05:12.795 --> 00:05:14.450 |
|
your labels given your features. |
|
|
|
00:05:14.450 --> 00:05:16.762 |
|
This is the same as maximizing the |
|
|
|
00:05:16.762 --> 00:05:17.650 |
|
likelihood of the labels. |
|
|
|
00:05:18.280 --> 00:05:22.360 |
|
But we often want to minimize things, |
|
|
|
00:05:22.360 --> 00:05:24.679 |
|
so we take the negative log.
|
|
|
00:05:24.680 --> 00:05:26.581 |
|
Minimizing the negative log is the same |
|
|
|
00:05:26.581 --> 00:05:30.056 |
|
as maximizing the log, and since the
|
|
|
00:05:30.056 --> 00:05:30.369 |
|
log is monotonic,
|
|
|
00:05:30.369 --> 00:05:33.500 |
|
the max of the log is the same as the
|
|
|
00:05:33.500 --> 00:05:35.040 |
|
max of the value.
|
|
|
00:05:35.850 --> 00:05:37.510 |
|
And this form tends to be easier to |
|
|
|
00:05:37.510 --> 00:05:38.030 |
|
optimize. |
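
NOTE
[Added sketch] The equivalence described above, written out (standard identities, not from the slide): maximizing the likelihood \prod_i P(y_i \mid x_i; \theta) is the same as maximizing \sum_i \log P(y_i \mid x_i; \theta), which is the same as minimizing the negative log likelihood,
\hat{\theta} = \arg\min_\theta \; -\sum_i \log P(y_i \mid x_i; \theta).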
|
|
|
00:05:40.730 --> 00:05:41.730 |
|
The second term. |
|
|
|
00:05:41.730 --> 00:05:43.720 |
|
So we want to maximize the likelihood |
|
|
|
00:05:43.720 --> 00:05:45.590 |
|
of the labels given the data, but we |
|
|
|
00:05:45.590 --> 00:05:49.000 |
|
also want a second term.
|
|
|
00:05:49.000 --> 00:05:51.750 |
|
We often want to impose some kinds of |
|
|
|
00:05:51.750 --> 00:05:53.450 |
|
constraints or some kinds of |
|
|
|
00:05:53.450 --> 00:05:56.020 |
|
preferences for the parameters of our |
|
|
|
00:05:56.020 --> 00:05:56.450 |
|
model. |
|
|
|
00:05:57.210 --> 00:05:58.240 |
|
So. |
|
|
|
00:05:58.430 --> 00:05:59.010 |
|
And. |
|
|
|
00:06:00.730 --> 00:06:02.549 |
|
So a common thing is that we want to |
|
|
|
00:06:02.550 --> 00:06:04.449 |
|
say that we
|
|
|
00:06:04.449 --> 00:06:06.119 |
|
want to minimize the sum of the |
|
|
|
00:06:06.120 --> 00:06:07.465 |
|
parameters squared, or we want to
|
|
|
00:06:07.465 --> 00:06:09.148 |
|
minimize the sum of the absolute values |
|
|
|
00:06:09.148 --> 00:06:10.282 |
|
of the parameters. |
|
|
|
00:06:10.282 --> 00:06:11.815 |
|
So this is called regularization. |
|
|
|
00:06:11.815 --> 00:06:13.565 |
|
Or if you have a probabilistic model, |
|
|
|
00:06:13.565 --> 00:06:16.490 |
|
that might be in the form of a prior on |
|
|
|
00:06:16.490 --> 00:06:19.200 |
|
the statistics that you're estimating. |
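
NOTE
[Added sketch] Putting the two components described above into one standard formula (the lambda weighting is an assumption of this sketch, not stated explicitly in the lecture):
L(\theta) = -\sum_i \log P(y_i \mid x_i; \theta) + \lambda R(\theta),
with R(\theta) = \|\theta\|_2^2 (sum of squared parameters, L2/ridge) or R(\theta) = \|\theta\|_1 (sum of absolute values, L1/lasso).
In the probabilistic view, the L2 penalty corresponds to a Gaussian prior on the parameters and the L1 penalty to a Laplace prior.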
|
|
|
00:06:20.910 --> 00:06:22.850 |
|
So the regularization and priors |
|
|
|
00:06:22.850 --> 00:06:24.720 |
|
indicate some kind of preference for
|
|
|
00:06:24.720 --> 00:06:26.110 |
|
particular solutions. |
|
|
|
00:06:26.940 --> 00:06:28.700 |
|
And they tend to improve |
|
|
|
00:06:28.700 --> 00:06:29.330 |
|
generalization. |
|
|
|
00:06:29.330 --> 00:06:31.700 |
|
And in some cases they're necessary to |
|
|
|
00:06:31.700 --> 00:06:33.520 |
|
obtain a unique solution. |
|
|
|
00:06:33.520 --> 00:06:35.430 |
|
Like there might be many linear models |
|
|
|
00:06:35.430 --> 00:06:37.510 |
|
that can separate one class from
|
|
|
00:06:37.510 --> 00:06:40.380 |
|
another, and without regularization you |
|
|
|
00:06:40.380 --> 00:06:41.820 |
|
have no way of choosing among those |
|
|
|
00:06:41.820 --> 00:06:42.510 |
|
different models. |
|
|
|
00:06:42.510 --> 00:06:45.660 |
|
The regularization specifies a |
|
|
|
00:06:45.660 --> 00:06:46.720 |
|
particular solution. |
|
|
|
00:06:48.250 --> 00:06:50.690 |
|
And this is more important the
|
|
|
00:06:50.690 --> 00:06:52.040 |
|
less data you have. |
|
|
|
00:06:52.950 --> 00:06:55.450 |
|
or the more features or the larger your
|
|
|
00:06:55.450 --> 00:06:56.060 |
|
problem is. |
|
|
|
00:07:00.900 --> 00:07:03.240 |
|
Once we've trained a model,
|
|
|
00:07:03.240 --> 00:07:05.020 |
|
then we want to do prediction using |
|
|
|
00:07:05.020 --> 00:07:06.033 |
|
that model. |
|
|
|
00:07:06.033 --> 00:07:08.300 |
|
So in prediction we're given some new |
|
|
|
00:07:08.300 --> 00:07:09.200 |
|
set of features. |
|
|
|
00:07:09.860 --> 00:07:10.980 |
|
It will be the same. |
|
|
|
00:07:10.980 --> 00:07:14.216 |
|
So in training we might have seen 500 |
|
|
|
00:07:14.216 --> 00:07:16.870 |
|
examples, and for each of those |
|
|
|
00:07:16.870 --> 00:07:19.191 |
|
examples 10 features and some label |
|
|
|
00:07:19.191 --> 00:07:20.716 |
|
you're trying to predict. |
|
|
|
00:07:20.716 --> 00:07:23.480 |
|
So in testing you'll have a set of |
|
|
|
00:07:23.480 --> 00:07:25.860 |
|
testing examples, and each one will |
|
|
|
00:07:25.860 --> 00:07:27.272 |
|
also have the same number of features. |
|
|
|
00:07:27.272 --> 00:07:29.028 |
|
So it might have 10 features as well, |
|
|
|
00:07:29.028 --> 00:07:30.771 |
|
and you're trying to predict the same |
|
|
|
00:07:30.771 --> 00:07:31.019 |
|
label. |
|
|
|
00:07:31.020 --> 00:07:32.810 |
|
But in testing you don't give the model |
|
|
|
00:07:32.810 --> 00:07:34.265 |
|
your label, you're trying to output the |
|
|
|
00:07:34.265 --> 00:07:34.510 |
|
label. |
|
|
|
00:07:35.550 --> 00:07:37.990 |
|
So in testing, we're given some test |
|
|
|
00:07:37.990 --> 00:07:42.687 |
|
sample with input features x_t, and if
|
|
|
00:07:42.687 --> 00:07:44.084 |
|
we're doing a regression, then we're |
|
|
|
00:07:44.084 --> 00:07:45.474 |
|
trying to output y_t directly.
|
|
|
00:07:45.474 --> 00:07:48.050 |
|
So we're trying to say, predict the |
|
|
|
00:07:48.050 --> 00:07:49.830 |
|
stock price or temperature or something |
|
|
|
00:07:49.830 --> 00:07:50.520 |
|
like that. |
|
|
|
00:07:50.520 --> 00:07:52.334 |
|
If we're doing classification, we're |
|
|
|
00:07:52.334 --> 00:07:54.210 |
|
trying to output the likelihood of a |
|
|
|
00:07:54.210 --> 00:07:56.065 |
|
particular category or the most likely |
|
|
|
00:07:56.065 --> 00:07:56.470 |
|
category. |
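
NOTE
[Added sketch] A minimal example of the train-then-predict flow just described, assuming scikit-learn; the data here is random and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression
X_train = np.random.rand(500, 10)        # 500 training examples, 10 features
y_class = np.random.randint(0, 2, 500)   # categorical labels
y_value = np.random.rand(500)            # continuous targets
X_test = np.random.rand(50, 10)          # test features only; the label is what we predict
clf = LogisticRegression().fit(X_train, y_class)
print(clf.predict(X_test))               # most likely category
print(clf.predict_proba(X_test))         # likelihood of each category
reg = LinearRegression().fit(X_train, y_value)
print(reg.predict(X_test))               # continuous value (e.g., a price or temperature)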
|
|
|
00:08:03.280 --> 00:08:03.780 |
|
And. |
|
|
|
00:08:04.810 --> 00:08:08.590 |
|
So then, if we're trying to
|
|
|
00:08:08.590 --> 00:08:10.490 |
|
develop a machine learning algorithm. |
|
|
|
00:08:11.240 --> 00:08:13.780 |
|
Then we go through this model |
|
|
|
00:08:13.780 --> 00:08:15.213 |
|
evaluation process. |
|
|
|
00:08:15.213 --> 00:08:18.660 |
|
So the first step is that we need to |
|
|
|
00:08:18.660 --> 00:08:19.930 |
|
collect some data. |
|
|
|
00:08:19.930 --> 00:08:21.848 |
|
So if we're creating a new problem, |
|
|
|
00:08:21.848 --> 00:08:25.900 |
|
then we might need to capture
|
|
|
00:08:25.900 --> 00:08:28.940 |
|
images or record observations or |
|
|
|
00:08:28.940 --> 00:08:30.950 |
|
download information from the Internet, |
|
|
|
00:08:30.950 --> 00:08:31.930 |
|
or whatever. |
|
|
|
00:08:31.930 --> 00:08:33.688 |
|
One way or another, you need to get |
|
|
|
00:08:33.688 --> 00:08:34.092 |
|
some data. |
|
|
|
00:08:34.092 --> 00:08:36.232 |
|
You need to get labels for that data. |
|
|
|
00:08:36.232 --> 00:08:37.550 |
|
So it might include. |
|
|
|
00:08:37.550 --> 00:08:38.970 |
|
You might need to do some manual |
|
|
|
00:08:38.970 --> 00:08:39.494 |
|
annotation. |
|
|
|
00:08:39.494 --> 00:08:41.590 |
|
You might need to
|
|
|
00:08:41.650 --> 00:08:44.080 |
|
crowdsource or use platforms to get
|
|
|
00:08:44.080 --> 00:08:44.820 |
|
the labels. |
|
|
|
00:08:44.820 --> 00:08:46.760 |
|
At the end of this you'll have a whole |
|
|
|
00:08:46.760 --> 00:08:49.708 |
|
set of samples X and Y, where X are
|
|
|
00:08:49.708 --> 00:08:51.355 |
|
the features that you want to use to |
|
|
|
00:08:51.355 --> 00:08:53.070 |
|
make a prediction and Y are the
|
|
|
00:08:53.070 --> 00:08:54.625 |
|
predictions that you want to make. |
|
|
|
00:08:54.625 --> 00:08:57.442 |
|
And then you split that data into a |
|
|
|
00:08:57.442 --> 00:09:00.190 |
|
training, a validation, and a test set
|
|
|
00:09:00.190 --> 00:09:01.630 |
|
where you're going to use the training |
|
|
|
00:09:01.630 --> 00:09:03.175 |
|
set to optimize parameters, validation |
|
|
|
00:09:03.175 --> 00:09:06.130 |
|
set to choose your best model and |
|
|
|
00:09:06.130 --> 00:09:08.070 |
|
testing for your final evaluation and |
|
|
|
00:09:08.070 --> 00:09:08.680 |
|
performance. |
|
|
|
00:09:10.180 --> 00:09:12.330 |
|
So once you have the data, you might |
|
|
|
00:09:12.330 --> 00:09:14.134 |
|
spend some time inspecting the features |
|
|
|
00:09:14.134 --> 00:09:16.605 |
|
and trying to understand the problem a |
|
|
|
00:09:16.605 --> 00:09:17.203 |
|
little bit better. |
|
|
|
00:09:17.203 --> 00:09:19.570 |
|
You might do some little tests
|
|
|
00:09:19.570 --> 00:09:23.960 |
|
to see how like baselines work and how |
|
|
|
00:09:23.960 --> 00:09:27.320 |
|
certain features predict the label. |
|
|
|
00:09:28.410 --> 00:09:29.600 |
|
And then you'll decide on some |
|
|
|
00:09:29.600 --> 00:09:31.190 |
|
candidate models and parameters. |
|
|
|
00:09:31.870 --> 00:09:34.610 |
|
Then for each candidate you would train |
|
|
|
00:09:34.610 --> 00:09:36.970 |
|
the parameters using the train set. |
|
|
|
00:09:37.720 --> 00:09:39.970 |
|
And you'll evaluate your trained model |
|
|
|
00:09:39.970 --> 00:09:41.170 |
|
on the validation set. |
|
|
|
00:09:41.910 --> 00:09:43.870 |
|
And then you choose the best model |
|
|
|
00:09:43.870 --> 00:09:45.630 |
|
based on your validation performance. |
|
|
|
00:09:46.470 --> 00:09:48.800 |
|
And then you evaluate it on the test |
|
|
|
00:09:48.800 --> 00:09:49.040 |
|
set. |
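
NOTE
[Added sketch] The candidate-selection loop above as code, assuming scikit-learn and that features X and labels y are already loaded; the candidate models are illustrative.
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2)
X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval, test_size=0.25)
candidates = [KNeighborsClassifier(n_neighbors=k) for k in (1, 5, 15)] + [LogisticRegression(C=1.0)]
val_scores = [m.fit(X_train, y_train).score(X_val, y_val) for m in candidates]  # choose on validation
best = candidates[val_scores.index(max(val_scores))]
print(best.score(X_test, y_test))  # final evaluation on the test set, used only once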
|
|
|
00:09:50.320 --> 00:09:54.160 |
|
And sometimes, very often you have like |
|
|
|
00:09:54.160 --> 00:09:55.320 |
|
a train, val, and test set.
|
|
|
00:09:55.320 --> 00:09:56.920 |
|
But an alternative is that you could do |
|
|
|
00:09:56.920 --> 00:09:59.320 |
|
cross validation, which I'll show an |
|
|
|
00:09:59.320 --> 00:10:02.000 |
|
example of, where you just split your |
|
|
|
00:10:02.000 --> 00:10:05.305 |
|
whole set into 10 parts and each time |
|
|
|
00:10:05.305 --> 00:10:07.423 |
|
you train on 9 parts and test on the |
|
|
|
00:10:07.423 --> 00:10:08.130 |
|
10th part. |
|
|
|
00:10:08.130 --> 00:10:09.430 |
|
That becomes useful.
|
|
|
00:10:09.430 --> 00:10:11.410 |
|
If you have like a very limited amount |
|
|
|
00:10:11.410 --> 00:10:13.070 |
|
of data then that can help you make the |
|
|
|
00:10:13.070 --> 00:10:14.360 |
|
best use of your limited data. |
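
NOTE
[Added sketch] The 10-fold cross-validation just described, assuming scikit-learn with X, y as before:
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
scores = cross_val_score(LogisticRegression(), X, y, cv=10)  # train on 9 parts, test on the 10th, 10 times
print(scores.mean(), scores.std())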
|
|
|
00:10:16.340 --> 00:10:18.040 |
|
So typically when you're evaluating the |
|
|
|
00:10:18.040 --> 00:10:19.500 |
|
performance, you're going to measure |
|
|
|
00:10:19.500 --> 00:10:21.370 |
|
the error or the accuracy, like root
|
|
|
00:10:21.370 --> 00:10:23.250 |
|
mean squared error or accuracy, or the |
|
|
|
00:10:23.250 --> 00:10:24.810 |
|
amount of variance you can explain. |
|
|
|
00:10:26.070 --> 00:10:27.130 |
|
Or you could be doing. |
|
|
|
00:10:27.130 --> 00:10:28.555 |
|
If you're doing like a retrieval task, |
|
|
|
00:10:28.555 --> 00:10:30.190 |
|
you might do precision recall. |
|
|
|
00:10:30.190 --> 00:10:31.600 |
|
So there's a variety of metrics that |
|
|
|
00:10:31.600 --> 00:10:32.510 |
|
depend on the problem. |
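
NOTE
[Added sketch] The metrics mentioned above as scikit-learn calls; y_true/y_pred (regression) and y_true_labels/y_pred_labels (classification) are assumed to exist.
from sklearn.metrics import mean_squared_error, r2_score, accuracy_score, precision_score, recall_score
rmse = mean_squared_error(y_true, y_pred) ** 0.5       # root mean squared error
r2 = r2_score(y_true, y_pred)                          # fraction of variance explained
acc = accuracy_score(y_true_labels, y_pred_labels)     # classification accuracy
prec = precision_score(y_true_labels, y_pred_labels)   # precision (retrieval-style tasks)
rec = recall_score(y_true_labels, y_pred_labels)       # recall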
|
|
|
00:10:36.890 --> 00:10:37.390 |
|
So. |
|
|
|
00:10:38.160 --> 00:10:39.730 |
|
When we're trying to think about like |
|
|
|
00:10:39.730 --> 00:10:41.530 |
|
these ML algorithms, there's actually
|
|
|
00:10:41.530 --> 00:10:43.400 |
|
a lot of different things that we |
|
|
|
00:10:43.400 --> 00:10:44.170 |
|
should consider. |
|
|
|
00:10:45.300 --> 00:10:48.187 |
|
One of them is like, what is the model? |
|
|
|
00:10:48.187 --> 00:10:50.330 |
|
What kinds of things can it represent? |
|
|
|
00:10:50.330 --> 00:10:52.139 |
|
For example, a linear model, as a
|
|
|
00:10:52.140 --> 00:10:55.350 |
|
classifier, means that all the
|
|
|
00:10:55.350 --> 00:10:57.382 |
|
data that's on one side of the |
|
|
|
00:10:57.382 --> 00:10:58.893 |
|
hyperplane is going to be assigned to |
|
|
|
00:10:58.893 --> 00:11:00.619 |
|
one class, and all the data on the |
|
|
|
00:11:00.620 --> 00:11:02.066 |
|
other side of the hyperplane will be |
|
|
|
00:11:02.066 --> 00:11:04.210 |
|
assigned to another class, whereas for
|
|
|
00:11:04.210 --> 00:11:06.610 |
|
nearest neighbor you can have much more |
|
|
|
00:11:06.610 --> 00:11:08.150 |
|
flexible decision boundaries. |
|
|
|
00:11:10.010 --> 00:11:11.270 |
|
You can also think about. |
|
|
|
00:11:11.270 --> 00:11:13.440 |
|
Maybe the model implies that some kinds |
|
|
|
00:11:13.440 --> 00:11:16.160 |
|
of functions are preferred over others. |
|
|
|
00:11:18.810 --> 00:11:20.470 |
|
You think about like what is your |
|
|
|
00:11:20.470 --> 00:11:21.187 |
|
objective function? |
|
|
|
00:11:21.187 --> 00:11:22.870 |
|
So what is it that you're trying to |
|
|
|
00:11:22.870 --> 00:11:25.100 |
|
minimize, and what kinds of like values |
|
|
|
00:11:25.100 --> 00:11:26.060 |
|
does that imply? |
|
|
|
00:11:26.060 --> 00:11:26.960 |
|
So do you prefer? |
|
|
|
00:11:26.960 --> 00:11:27.890 |
|
Does it mean? |
|
|
|
00:11:27.890 --> 00:11:29.840 |
|
Does your regularization, for example, |
|
|
|
00:11:29.840 --> 00:11:32.620 |
|
mean that you prefer that you're using |
|
|
|
00:11:32.620 --> 00:11:34.230 |
|
a few features or that you have low |
|
|
|
00:11:34.230 --> 00:11:35.520 |
|
weight on a lot of features? |
|
|
|
00:11:36.270 --> 00:11:39.126 |
|
Are you trying to minimize a likelihood |
|
|
|
00:11:39.126 --> 00:11:42.190 |
|
or maximize the likelihood, or are you |
|
|
|
00:11:42.190 --> 00:11:45.250 |
|
trying to just get high enough |
|
|
|
00:11:45.250 --> 00:11:46.899 |
|
confidence on each example to get |
|
|
|
00:11:46.900 --> 00:11:47.610 |
|
things correct? |
|
|
|
00:11:49.430 --> 00:11:50.850 |
|
And it's important to note that the |
|
|
|
00:11:50.850 --> 00:11:53.080 |
|
objective function often does not match |
|
|
|
00:11:53.080 --> 00:11:54.290 |
|
your final evaluation. |
|
|
|
00:11:54.290 --> 00:11:57.590 |
|
So nobody really trains a model to |
|
|
|
00:11:57.590 --> 00:12:00.170 |
|
minimize the classification error, even |
|
|
|
00:12:00.170 --> 00:12:01.960 |
|
though they often evaluate based on |
|
|
|
00:12:01.960 --> 00:12:03.000 |
|
classification error. |
|
|
|
00:12:03.940 --> 00:12:06.576 |
|
And there's two reasons for that. |
|
|
|
00:12:06.576 --> 00:12:09.388 |
|
So one reason is that it's really hard |
|
|
|
00:12:09.388 --> 00:12:11.550 |
|
to minimize classification error over |
|
|
|
00:12:11.550 --> 00:12:13.510 |
|
the training set, because a small change in
|
|
|
00:12:13.510 --> 00:12:15.000 |
|
parameters may not change your |
|
|
|
00:12:15.000 --> 00:12:15.680 |
|
classification error. |
|
|
|
00:12:15.680 --> 00:12:18.200 |
|
So it's hard for an optimization
|
|
|
00:12:18.200 --> 00:12:19.850 |
|
algorithm to figure out how it should |
|
|
|
00:12:19.850 --> 00:12:21.400 |
|
change to minimize that error. |
|
|
|
00:12:22.430 --> 00:12:25.823 |
|
The second reason is that there might |
|
|
|
00:12:25.823 --> 00:12:27.730 |
|
be many different models that can have |
|
|
|
00:12:27.730 --> 00:12:29.620 |
|
similar classification error, the same |
|
|
|
00:12:29.620 --> 00:12:31.980 |
|
classification error, and so you need |
|
|
|
00:12:31.980 --> 00:12:33.560 |
|
some way of choosing among them. |
|
|
|
00:12:33.560 --> 00:12:35.670 |
|
So in many algorithms, the
|
|
|
00:12:35.670 --> 00:12:37.422 |
|
objective function will also say that |
|
|
|
00:12:37.422 --> 00:12:39.160 |
|
you want to be very confident about |
|
|
|
00:12:39.160 --> 00:12:41.274 |
|
your examples, not just that you want
|
|
|
00:12:41.274 --> 00:12:42.010 |
|
to be correct. |
|
|
|
00:12:45.380 --> 00:12:47.140 |
|
The third thing is that you would think |
|
|
|
00:12:47.140 --> 00:12:50.070 |
|
about how you can optimize the model. |
|
|
|
00:12:50.070 --> 00:12:51.610 |
|
So does it. |
|
|
|
00:12:51.680 --> 00:12:56.200 |
|
For example, for logistic
|
|
|
00:12:56.200 --> 00:12:56.880 |
|
regression. |
|
|
|
00:12:58.760 --> 00:13:01.480 |
|
You're able to reach a global optimum. |
|
|
|
00:13:01.480 --> 00:13:04.220 |
|
It's a convex problem so that you're |
|
|
|
00:13:04.220 --> 00:13:06.290 |
|
going to find the best solution, whereas
|
|
|
00:13:06.290 --> 00:13:08.020 |
|
for something like a neural network it may
|
|
|
00:13:08.020 --> 00:13:09.742 |
|
not be possible to get the best |
|
|
|
00:13:09.742 --> 00:13:11.000 |
|
solution, but you can usually get a |
|
|
|
00:13:11.000 --> 00:13:11.860 |
|
pretty good solution. |
|
|
|
00:13:12.680 --> 00:13:14.430 |
|
You also will think about like how long |
|
|
|
00:13:14.430 --> 00:13:17.260 |
|
does it take to train and how does that |
|
|
|
00:13:17.260 --> 00:13:18.709 |
|
depend on the number of examples and |
|
|
|
00:13:18.709 --> 00:13:19.950 |
|
the number of features. |
|
|
|
00:13:19.950 --> 00:13:22.010 |
|
So later we'll talk about
|
|
|
00:13:22.010 --> 00:13:25.260 |
|
SVMs, and for kernelized SVMs one of the
|
|
|
00:13:25.260 --> 00:13:27.560 |
|
problems is that the training is
|
|
|
00:13:27.560 --> 00:13:29.761 |
|
quadratic in the number of examples, so |
|
|
|
00:13:29.761 --> 00:13:32.600 |
|
it becomes pretty expensive, at least
|
|
|
00:13:32.600 --> 00:13:34.976 |
|
according to the earlier optimization |
|
|
|
00:13:34.976 --> 00:13:35.582 |
|
algorithms. |
|
|
|
00:13:35.582 --> 00:13:38.120 |
|
So some algorithms can be used with a |
|
|
|
00:13:38.120 --> 00:13:39.710 |
|
lot of examples, but some are just too |
|
|
|
00:13:39.710 --> 00:13:40.370 |
|
expensive. |
|
|
|
00:13:40.440 --> 00:13:40.880 |
|
Yeah. |
|
|
|
00:13:43.520 --> 00:13:47.060 |
|
So the objective function is essentially
|
|
|
00:13:47.060 --> 00:13:48.120 |
|
your loss.
|
|
|
00:13:48.120 --> 00:13:50.470 |
|
So it usually has that data term where |
|
|
|
00:13:50.470 --> 00:13:51.540 |
|
you're trying to maximize the |
|
|
|
00:13:51.540 --> 00:13:52.910 |
|
likelihood of the data or the labels |
|
|
|
00:13:52.910 --> 00:13:53.960 |
|
given the data. |
|
|
|
00:13:53.960 --> 00:13:56.075 |
|
And it has some regularization term |
|
|
|
00:13:56.075 --> 00:13:58.130 |
|
that says that you prefer some models |
|
|
|
00:13:58.130 --> 00:13:58.670 |
|
over others. |
|
|
|
00:14:05.090 --> 00:14:07.890 |
|
So yeah, feel free to please do ask as |
|
|
|
00:14:07.890 --> 00:14:11.140 |
|
many questions as pop into your mind. |
|
|
|
00:14:11.140 --> 00:14:13.010 |
|
I'm happy to answer them and I want to |
|
|
|
00:14:13.010 --> 00:14:14.992 |
|
make sure, hopefully at the end of this |
|
|
|
00:14:14.992 --> 00:14:17.670 |
|
lecture, or if you
|
|
|
00:14:17.670 --> 00:14:18.630 |
|
review the lecture
|
|
|
00:14:18.630 --> 00:14:20.345 |
|
again, I hope that all of this stuff is
|
|
|
00:14:20.345 --> 00:14:22.340 |
|
like really clear, and if it's not, |
|
|
|
00:14:22.340 --> 00:14:26.847 |
|
just don't be afraid to ask
|
|
|
00:14:26.847 --> 00:14:28.680 |
|
questions in office hours or after |
|
|
|
00:14:28.680 --> 00:14:29.660 |
|
class or whatever. |
|
|
|
00:14:31.920 --> 00:14:34.065 |
|
So then finally, how does the |
|
|
|
00:14:34.065 --> 00:14:34.670 |
|
prediction work? |
|
|
|
00:14:34.670 --> 00:14:36.340 |
|
So then you want to think about like |
|
|
|
00:14:36.340 --> 00:14:37.740 |
|
can I make a prediction really quickly? |
|
|
|
00:14:37.740 --> 00:14:39.730 |
|
So like for a nearest neighbor it's not |
|
|
|
00:14:39.730 --> 00:14:41.579 |
|
necessarily so quick, but for the |
|
|
|
00:14:41.580 --> 00:14:43.050 |
|
linear models it's pretty fast. |
|
|
|
00:14:44.750 --> 00:14:46.580 |
|
Can I find the most likely prediction |
|
|
|
00:14:46.580 --> 00:14:48.260 |
|
according to my model? |
|
|
|
00:14:48.260 --> 00:14:50.390 |
|
So sometimes even after you've |
|
|
|
00:14:50.390 --> 00:14:53.790 |
|
optimized your model, you don't have a |
|
|
|
00:14:53.790 --> 00:14:55.530 |
|
guarantee that you can generate the |
|
|
|
00:14:55.530 --> 00:14:57.410 |
|
best solution for a new sample. |
|
|
|
00:14:57.410 --> 00:14:59.930 |
|
So for example with these image |
|
|
|
00:14:59.930 --> 00:15:02.090 |
|
generation algorithms:
|
|
|
00:15:02.890 --> 00:15:05.060 |
|
Even after you optimize your model |
|
|
|
00:15:05.060 --> 00:15:08.150 |
|
given some phrase, you're not |
|
|
|
00:15:08.150 --> 00:15:09.720 |
|
necessarily going to generate the most |
|
|
|
00:15:09.720 --> 00:15:11.630 |
|
likely image given that phrase. |
|
|
|
00:15:11.630 --> 00:15:13.710 |
|
You'll just generate like an image that |
|
|
|
00:15:13.710 --> 00:15:16.199 |
|
is like consistent with the phrase |
|
|
|
00:15:16.200 --> 00:15:18.010 |
|
according to some scoring function. |
|
|
|
00:15:18.010 --> 00:15:20.810 |
|
So not all models can even be perfectly |
|
|
|
00:15:20.810 --> 00:15:22.040 |
|
optimized for prediction. |
|
|
|
00:15:23.100 --> 00:15:25.110 |
|
And then finally, does my algorithm |
|
|
|
00:15:25.110 --> 00:15:27.180 |
|
output confidence as well as |
|
|
|
00:15:27.180 --> 00:15:27.710 |
|
prediction? |
|
|
|
00:15:27.710 --> 00:15:30.770 |
|
Usually it's helpful if your model not |
|
|
|
00:15:30.770 --> 00:15:32.193 |
|
only gives you an answer, but also |
|
|
|
00:15:32.193 --> 00:15:33.930 |
|
gives you a confidence in how right
|
|
|
00:15:33.930 --> 00:15:34.790 |
|
that answer is. |
|
|
|
00:15:35.420 --> 00:15:37.580 |
|
And it's nice if that confidence is |
|
|
|
00:15:37.580 --> 00:15:38.030 |
|
accurate. |
|
|
|
00:15:39.240 --> 00:15:41.580 |
|
Meaning that if it says that you've got |
|
|
|
00:15:41.580 --> 00:15:44.000 |
|
like a 99% chance of being correct, |
|
|
|
00:15:44.000 --> 00:15:46.250 |
|
then hopefully 99 out of 100 times |
|
|
|
00:15:46.250 --> 00:15:48.640 |
|
you'll be correct in that situation. |
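
NOTE
[Added sketch] One way to check the calibration property just described, assuming scikit-learn and a fitted classifier clf with held-out X_test, y_test:
from sklearn.calibration import calibration_curve
probs = clf.predict_proba(X_test)[:, 1]                    # predicted probability of the positive class
frac_pos, mean_pred = calibration_curve(y_test, probs, n_bins=10)
# For a well-calibrated model, frac_pos is close to mean_pred in each bin:
# among samples predicted around 0.99, about 99% should actually be positive.
print(list(zip(mean_pred, frac_pos)))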
|
|
|
00:15:55.440 --> 00:15:57.234 |
|
So we looked at. |
|
|
|
00:15:57.234 --> 00:15:59.300 |
|
We looked at several different |
|
|
|
00:15:59.300 --> 00:16:00.870 |
|
classification algorithms. |
|
|
|
00:16:01.560 --> 00:16:04.440 |
|
And so here they're all compared |
|
|
|
00:16:04.440 --> 00:16:05.890 |
|
side-by-side according to some |
|
|
|
00:16:05.890 --> 00:16:06.290 |
|
criteria. |
|
|
|
00:16:06.290 --> 00:16:08.130 |
|
So we can think about like what type of |
|
|
|
00:16:08.130 --> 00:16:10.290 |
|
algorithm it is. A nearest neighbor
|
|
|
00:16:10.290 --> 00:16:12.480 |
|
is instance-based, and the
|
|
|
00:16:12.480 --> 00:16:14.120 |
|
parameters are the instances |
|
|
|
00:16:14.120 --> 00:16:14.740 |
|
themselves. |
|
|
|
00:16:14.740 --> 00:16:17.870 |
|
Or there's a linear model, or
|
|
|
00:16:17.870 --> 00:16:19.450 |
|
something that's parametric that you're |
|
|
|
00:16:19.450 --> 00:16:20.590 |
|
trying to fit to your data. |
|
|
|
00:16:22.150 --> 00:16:24.170 |
|
Naive Bayes is probabilistic, as is
|
|
|
00:16:24.170 --> 00:16:26.060 |
|
logistic regression, but in
|
|
|
00:16:26.910 --> 00:16:29.090 |
|
Naive Bayes, you're maximizing the |
|
|
|
00:16:29.090 --> 00:16:31.210 |
|
likelihood of your features given the |
|
|
|
00:16:31.210 --> 00:16:33.020 |
|
label, or, sorry, I mean,
|
|
|
00:16:33.020 --> 00:16:34.270 |
|
you're maximizing the likelihood of
|
|
|
00:16:34.270 --> 00:16:35.720 |
|
your features and the label. |
|
|
|
00:16:36.600 --> 00:16:37.230 |
|
|
|
|
|
00:16:38.790 --> 00:16:40.800 |
|
Under the assumption that your features |
|
|
|
00:16:40.800 --> 00:16:42.610 |
|
are independent given the label. |
|
|
|
00:16:43.450 --> 00:16:45.450 |
|
Whereas in logistic regression you're
|
|
|
00:16:45.450 --> 00:16:47.695 |
|
directly maximizing the likelihood of |
|
|
|
00:16:47.695 --> 00:16:48.970 |
|
the label given the data. |
|
|
|
00:16:51.820 --> 00:16:53.880 |
|
They both often end up being linear |
|
|
|
00:16:53.880 --> 00:16:55.750 |
|
models, but you're modeling different |
|
|
|
00:16:55.750 --> 00:16:57.659 |
|
things in these two in these two |
|
|
|
00:16:57.660 --> 00:16:58.170 |
|
settings. |
|
|
|
00:16:58.790 --> 00:17:01.880 |
|
And in logistic regression, the model |
|
|
|
00:17:01.880 --> 00:17:04.460 |
|
has a linear part. So, I just wrote
|
|
|
00:17:04.460 --> 00:17:05.890 |
|
logistic regression, but often we're |
|
|
|
00:17:05.890 --> 00:17:07.176 |
|
doing linear logistic regression. |
|
|
|
00:17:07.176 --> 00:17:09.490 |
|
The linear part is that we're saying
|
|
|
00:17:09.490 --> 00:17:11.993 |
|
that the logit function is linear:
|
|
|
00:17:11.993 --> 00:17:15.896 |
|
the log ratio of the probability of
|
|
|
00:17:15.896 --> 00:17:19.830 |
|
label equals one given the features
|
|
|
00:17:19.830 --> 00:17:21.460 |
|
over probability of label equals zero |
|
|
|
00:17:21.460 --> 00:17:22.319 |
|
given the features. |
|
|
|
00:17:22.319 --> 00:17:24.323 |
|
That thing is the linear thing that |
|
|
|
00:17:24.323 --> 00:17:24.970 |
|
we're fitting. |
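
NOTE
[Added formula] The linear logit just described, written out:
\log \frac{P(y=1 \mid x)}{P(y=0 \mid x)} = w^\top x + b,
which is equivalent to P(y=1 \mid x) = \sigma(w^\top x + b) = 1 / (1 + e^{-(w^\top x + b)}).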
|
|
|
00:17:27.290 --> 00:17:28.700 |
|
And then we talked about decision |
|
|
|
00:17:28.700 --> 00:17:29.350 |
|
trees. |
|
|
|
00:17:29.350 --> 00:17:31.706 |
|
I would also say that's a kind of a |
|
|
|
00:17:31.706 --> 00:17:33.040 |
|
probabilistic function in the sense |
|
|
|
00:17:33.040 --> 00:17:35.555 |
|
that we're choosing our splits to |
|
|
|
00:17:35.555 --> 00:17:38.700 |
|
maximize the mutual information or to, |
|
|
|
00:17:38.700 --> 00:17:41.200 |
|
sorry, to maximize the information gain,
|
|
|
00:17:41.200 --> 00:17:44.870 |
|
which minimizes the conditional entropy.
|
|
|
00:17:44.870 --> 00:17:47.780 |
|
And that's like a probabilistic basis |
|
|
|
00:17:47.780 --> 00:17:49.400 |
|
for the optimization. |
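
NOTE
[Added formula] The split criterion just described, written out: for a split of the data S at a node into subsets S_v,
\mathrm{InfoGain}(S, \text{split}) = H(Y) - \sum_v \frac{|S_v|}{|S|}\, H(Y \mid S_v),
where H is the entropy; since H(Y) is fixed at the node, maximizing the information gain is the same as minimizing the weighted conditional entropy.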
|
|
|
00:17:50.080 --> 00:17:51.810 |
|
And then at the end of the prediction, |
|
|
|
00:17:51.810 --> 00:17:53.560 |
|
you would typically be estimating the |
|
|
|
00:17:53.560 --> 00:17:55.330 |
|
probability of each label given the |
|
|
|
00:17:55.330 --> 00:17:57.170 |
|
data that has fallen into some leaf |
|
|
|
00:17:57.170 --> 00:17:57.430 |
|
node. |
|
|
|
00:17:59.490 --> 00:18:01.024 |
|
But that has quite different rules than |
|
|
|
00:18:01.024 --> 00:18:01.460 |
|
the other. |
|
|
|
00:18:01.460 --> 00:18:03.260 |
|
So nearest neighbor is just going to be |
|
|
|
00:18:03.260 --> 00:18:05.189 |
|
like finding the sample that has the |
|
|
|
00:18:05.190 --> 00:18:06.750 |
|
closest distance. |
|
|
|
00:18:06.750 --> 00:18:08.422 |
|
Naive Bayes and logistic regression |
|
|
|
00:18:08.422 --> 00:18:11.363 |
|
will be these probability functions |
|
|
|
00:18:11.363 --> 00:18:13.540 |
|
that will tend to give you like linear |
|
|
|
00:18:13.540 --> 00:18:14.485 |
|
classifiers. |
|
|
|
00:18:14.485 --> 00:18:17.480 |
|
And Decision Tree has these conjunctive |
|
|
|
00:18:17.480 --> 00:18:19.840 |
|
rules that you say if this feature is |
|
|
|
00:18:19.840 --> 00:18:22.249 |
|
greater than this value then you go |
|
|
|
00:18:22.249 --> 00:18:22.615 |
|
this way. |
|
|
|
00:18:22.615 --> 00:18:23.955 |
|
And then if this other thing happens |
|
|
|
00:18:23.955 --> 00:18:26.090 |
|
then you go another way and then at the |
|
|
|
00:18:26.090 --> 00:18:29.350 |
|
end you can express that as a series of |
|
|
|
00:18:29.350 --> 00:18:29.700 |
|
rules. |
|
|
|
00:18:29.750 --> 00:18:31.425 |
|
Where you have a bunch of and |
|
|
|
00:18:31.425 --> 00:18:32.850 |
|
conditions, and if all of those |
|
|
|
00:18:32.850 --> 00:18:34.220 |
|
conditions are met, then you make a |
|
|
|
00:18:34.220 --> 00:18:35.290 |
|
particular prediction. |
|
|
|
00:18:38.370 --> 00:18:40.150 |
|
So these algorithms have different |
|
|
|
00:18:40.150 --> 00:18:42.480 |
|
strengths, like nearest neighbor has |
|
|
|
00:18:42.480 --> 00:18:45.547 |
|
low bias, so that means that you can |
|
|
|
00:18:45.547 --> 00:18:47.340 |
|
almost always get perfect training |
|
|
|
00:18:47.340 --> 00:18:47.970 |
|
accuracy. |
|
|
|
00:18:47.970 --> 00:18:49.706 |
|
You can fit like almost anything with |
|
|
|
00:18:49.706 --> 00:18:50.279 |
|
nearest neighbor. |
|
|
|
00:18:52.310 --> 00:18:54.725 |
|
On the other hand, I guess I didn't put |
|
|
|
00:18:54.725 --> 00:18:56.640 |
|
it here, but a limitation is that it has
|
|
|
00:18:56.640 --> 00:18:57.300 |
|
high variance. |
|
|
|
00:18:58.000 --> 00:18:59.650 |
|
You might get very different prediction |
|
|
|
00:18:59.650 --> 00:19:01.590 |
|
functions if you resample your data. |
|
|
|
00:19:03.390 --> 00:19:05.150 |
|
It has no training time. |
|
|
|
00:19:06.230 --> 00:19:08.200 |
|
It's very widely applicable and it's |
|
|
|
00:19:08.200 --> 00:19:08.900 |
|
very simple. |
|
|
|
00:19:09.690 --> 00:19:12.110 |
|
Another limitation is that it can take |
|
|
|
00:19:12.110 --> 00:19:13.780 |
|
a long time to do inference, but if you |
|
|
|
00:19:13.780 --> 00:19:15.642 |
|
use approximate nearest neighbor |
|
|
|
00:19:15.642 --> 00:19:17.790 |
|
inference, which we'll talk about |
|
|
|
00:19:17.790 --> 00:19:21.230 |
|
later, then it can be like relatively |
|
|
|
00:19:21.230 --> 00:19:21.608 |
|
fast. |
|
|
|
00:19:21.608 --> 00:19:23.881 |
|
You can do approximate nearest neighbor |
|
|
|
00:19:23.881 --> 00:19:26.600 |
|
in log N time, where N is the number of |
|
|
|
00:19:26.600 --> 00:19:29.310 |
|
training samples, whereas so far we're
|
|
|
00:19:29.310 --> 00:19:31.470 |
|
just doing brute force, which is linear |
|
|
|
00:19:31.470 --> 00:19:32.460 |
|
in the number of samples. |
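
NOTE
[Added sketch] A contrast of brute-force vs. tree-based neighbor search, assuming scikit-learn with the earlier X_train, y_train, X_test. A KD-tree gives roughly log N queries in low dimensions; the approximate methods the lecture refers to (e.g., locality-sensitive hashing) are what extend this idea to high dimensions.
from sklearn.neighbors import KNeighborsClassifier
brute = KNeighborsClassifier(n_neighbors=1, algorithm='brute').fit(X_train, y_train)    # linear in N per query
tree = KNeighborsClassifier(n_neighbors=1, algorithm='kd_tree').fit(X_train, y_train)   # ~log N per query (low dimension)
print(brute.predict(X_test))
print(tree.predict(X_test))   # same predictions, different query cost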
|
|
|
00:19:34.620 --> 00:19:35.770 |
|
Naive bayes. |
|
|
|
00:19:35.770 --> 00:19:37.980 |
|
The strengths are that you can estimate |
|
|
|
00:19:37.980 --> 00:19:39.950 |
|
these parameters reasonably well from |
|
|
|
00:19:39.950 --> 00:19:40.680 |
|
limited data. |
|
|
|
00:19:41.690 --> 00:19:43.000 |
|
It's also pretty simple. |
|
|
|
00:19:43.000 --> 00:19:45.380 |
|
It's fast to train, and the downside is |
|
|
|
00:19:45.380 --> 00:19:48.030 |
|
that it has limited modeling power, so even
|
|
|
00:19:48.030 --> 00:19:49.876 |
|
on the training set you often won't get |
|
|
|
00:19:49.876 --> 00:19:52.049 |
|
0 error or even close to 0 error. |
|
|
|
00:19:53.520 --> 00:19:55.290 |
|
Logistic regression is really powerful |
|
|
|
00:19:55.290 --> 00:19:57.250 |
|
in high dimensions, so remember that |
|
|
|
00:19:57.250 --> 00:19:59.050 |
|
even though it's a linear classifier, |
|
|
|
00:19:59.050 --> 00:20:01.400 |
|
which feels like it can't do much in |
|
|
|
00:20:01.400 --> 00:20:04.830 |
|
terms of separation in high dimensions, |
|
|
|
00:20:04.830 --> 00:20:05.530 |
|
you can. |
|
|
|
00:20:05.530 --> 00:20:07.330 |
|
These classifiers are actually very |
|
|
|
00:20:07.330 --> 00:20:07.850 |
|
powerful. |
|
|
|
00:20:08.510 --> 00:20:10.710 |
|
If you have a 1000-dimensional feature
|
|
|
00:20:11.330 --> 00:20:13.930 |
|
and you have 1000 data points, then you
|
|
|
00:20:13.930 --> 00:20:16.094 |
|
can assign those data points arbitrary |
|
|
|
00:20:16.094 --> 00:20:18.210 |
|
labels, arbitrary binary labels, and |
|
|
|
00:20:18.210 --> 00:20:19.590 |
|
still get a perfect classifier. |
|
|
|
00:20:19.590 --> 00:20:21.770 |
|
You're guaranteed a perfect classifier |
|
|
|
00:20:21.770 --> 00:20:23.050 |
|
in terms of the training data. |
|
|
|
00:20:23.860 --> 00:20:26.740 |
|
Now, that power is always a
|
|
|
00:20:26.740 --> 00:20:27.750 |
|
double edged sword. |
|
|
|
00:20:27.750 --> 00:20:29.740 |
|
If you have a powerful classifier, it
|
|
|
00:20:29.740 --> 00:20:32.040 |
|
means you can fit your training data |
|
|
|
00:20:32.040 --> 00:20:34.140 |
|
really well, but it also means that |
|
|
|
00:20:34.140 --> 00:20:35.850 |
|
you're more susceptible to overfitting |
|
|
|
00:20:35.850 --> 00:20:37.510 |
|
your training data, which means that |
|
|
|
00:20:37.510 --> 00:20:38.510 |
|
you perform well on
|
|
|
00:20:39.460 --> 00:20:41.160 |
|
the training data, but your test
|
|
|
00:20:41.160 --> 00:20:43.170 |
|
performance is not so good, you get |
|
|
|
00:20:43.170 --> 00:20:43.940 |
|
higher test error. |
|
|
|
00:20:45.780 --> 00:20:47.830 |
|
It's also widely applicable. |
|
|
|
00:20:47.830 --> 00:20:50.480 |
|
It produces good confidence estimates, |
|
|
|
00:20:50.480 --> 00:20:52.130 |
|
so that can be helpful if you want to |
|
|
|
00:20:52.130 --> 00:20:54.170 |
|
know whether the prediction is correct. |
|
|
|
00:20:54.780 --> 00:20:56.640 |
|
And it gives you fast prediction |
|
|
|
00:20:56.640 --> 00:20:57.840 |
|
because it's the linear model. |
|
|
|
00:20:59.470 --> 00:21:01.470 |
|
Similar to nearest neighbor, it has a
|
|
|
00:21:01.470 --> 00:21:03.380 |
|
limitation that it relies on good input |
|
|
|
00:21:03.380 --> 00:21:04.330 |
|
features. |
|
|
|
00:21:04.330 --> 00:21:05.730 |
|
So for nearest neighbor, if you have a
|
|
|
00:21:05.730 --> 00:21:06.160 |
|
simple. |
|
|
|
00:21:07.240 --> 00:21:10.040 |
|
If you have a simple distance function |
|
|
|
00:21:10.040 --> 00:21:13.660 |
|
like Euclidean distance, that assumes
|
|
|
00:21:13.660 --> 00:21:15.665 |
|
that all your features are scaled so |
|
|
|
00:21:15.665 --> 00:21:17.110 |
|
that they're on comparable scales
|
|
|
00:21:17.110 --> 00:21:18.930 |
|
to each other, and that they're all |
|
|
|
00:21:18.930 --> 00:21:19.540 |
|
predictive. |
|
|
|
00:21:20.400 --> 00:21:22.310 |
|
Whereas logistic regression doesn't
|
|
|
00:21:22.310 --> 00:21:23.970 |
|
make assumptions that strong. |
|
|
|
00:21:23.970 --> 00:21:25.799 |
|
It can kind of choose which features to |
|
|
|
00:21:25.800 --> 00:21:27.420 |
|
use and it can rescale them |
|
|
|
00:21:27.420 --> 00:21:29.790 |
|
essentially.
|
|
|
00:21:29.790 --> 00:21:33.230 |
|
But it's not able to model like joint |
|
|
|
00:21:33.230 --> 00:21:35.425 |
|
combinations of features, so the |
|
|
|
00:21:35.425 --> 00:21:37.360 |
|
features should be individually useful. |
|
|
|
00:21:39.270 --> 00:21:41.340 |
|
And then finally, decision trees are |
|
|
|
00:21:41.340 --> 00:21:42.930 |
|
good because they can provide an |
|
|
|
00:21:42.930 --> 00:21:44.600 |
|
explainable decision function. |
|
|
|
00:21:44.600 --> 00:21:47.040 |
|
You get these nice rules that are easy |
|
|
|
00:21:47.040 --> 00:21:47.750 |
|
to communicate. |
|
|
|
00:21:48.360 --> 00:21:49.740 |
|
It's also widely applicable. |
|
|
|
00:21:49.740 --> 00:21:51.400 |
|
You can use it on continuous or discrete
|
|
|
00:21:51.400 --> 00:21:52.040 |
|
data. |
|
|
|
00:21:52.040 --> 00:21:54.162 |
|
You don't need to scale the features. |
|
|
|
00:21:54.162 --> 00:21:55.740 |
|
It's like it doesn't really matter if |
|
|
|
00:21:55.740 --> 00:21:57.930 |
|
you multiply the features by 10, it |
|
|
|
00:21:57.930 --> 00:21:59.230 |
|
just means that you'd be choosing a |
|
|
|
00:21:59.230 --> 00:22:00.790 |
|
threshold that's 10 times bigger. |
|
|
|
00:22:01.820 --> 00:22:03.510 |
|
And you can deal with a mix of discrete |
|
|
|
00:22:03.510 --> 00:22:05.720 |
|
and continuous variables. |
|
|
|
00:22:05.720 --> 00:22:07.380 |
|
The downside is that. |
|
|
|
00:22:08.330 --> 00:22:11.780 |
|
One tree by itself either tends to |
|
|
|
00:22:11.780 --> 00:22:14.170 |
|
generalize poorly, meaning like you |
|
|
|
00:22:14.170 --> 00:22:15.870 |
|
train a full tree and you do perfect |
|
|
|
00:22:15.870 --> 00:22:18.140 |
|
training, but you get bad test error. |
|
|
|
00:22:18.770 --> 00:22:20.240 |
|
Or you tend to underfit the data. |
|
|
|
00:22:20.240 --> 00:22:21.910 |
|
If you train a short tree then you |
|
|
|
00:22:21.910 --> 00:22:23.510 |
|
don't get very good training or test |
|
|
|
00:22:23.510 --> 00:22:23.770 |
|
error. |
|
|
|
00:22:24.650 --> 00:22:26.920 |
|
And so a single tree by itself is not |
|
|
|
00:22:26.920 --> 00:22:28.160 |
|
usually the best predictor. |
|
|
|
00:22:31.530 --> 00:22:34.085 |
|
So you can also think
|
|
|
00:22:34.085 --> 00:22:35.530 |
|
about these methods, I won't talk |
|
|
|
00:22:35.530 --> 00:22:37.366 |
|
through this whole slide, but you can |
|
|
|
00:22:37.366 --> 00:22:39.290 |
|
also think about the methods in terms |
|
|
|
00:22:39.290 --> 00:22:42.130 |
|
of like the learning objectives, the |
|
|
|
00:22:42.130 --> 00:22:44.556 |
|
training, like how you optimize those |
|
|
|
00:22:44.556 --> 00:22:46.350 |
|
learning objectives and then the |
|
|
|
00:22:46.350 --> 00:22:47.840 |
|
inference, how you make your final |
|
|
|
00:22:47.840 --> 00:22:48.430 |
|
prediction. |
|
|
|
00:22:49.040 --> 00:22:52.460 |
|
And so here I also included linear |
|
|
|
00:22:52.460 --> 00:22:54.870 |
|
SVMs, which we'll talk about next week,
|
|
|
00:22:54.870 --> 00:22:57.590 |
|
but you can see, for example,
|
|
|
00:22:59.260 --> 00:23:01.730 |
|
that, in terms of inference,
|
|
|
00:23:01.730 --> 00:23:04.200 |
|
linear SVM, logistic regression, Naive |
|
|
|
00:23:04.200 --> 00:23:06.790 |
|
Bayes are all linear models, at least |
|
|
|
00:23:06.790 --> 00:23:08.230 |
|
in the case where you're dealing with |
|
|
|
00:23:08.230 --> 00:23:11.190 |
|
discrete variables or Gaussians for Naive
|
|
|
00:23:11.190 --> 00:23:11.630 |
|
Bayes.
|
|
|
00:23:11.630 --> 00:23:13.948 |
|
But they have different ways, they have |
|
|
|
00:23:13.948 --> 00:23:15.695 |
|
different learning objectives and then |
|
|
|
00:23:15.695 --> 00:23:17.000 |
|
different ways of doing the training. |
|
|
|
00:23:22.330 --> 00:23:24.790 |
|
And then... Question? Go ahead.
|
|
|
00:23:36.030 --> 00:23:37.450 |
|
Yeah. |
|
|
|
00:23:37.450 --> 00:23:39.810 |
|
Thank you for the clarification, so. |
|
|
|
00:23:40.710 --> 00:23:42.850 |
|
So what I mean by saying it doesn't
|
|
|
00:23:42.850 --> 00:23:46.110 |
|
require feature scaling is that you
|
|
|
00:23:46.110 --> 00:23:47.909 |
|
could have one feature that ranges from |
|
|
|
00:23:47.910 --> 00:23:50.495 |
|
like zero to 1000 and another feature |
|
|
|
00:23:50.495 --> 00:23:52.160 |
|
that ranges from zero to 1. |
|
|
|
00:23:52.960 --> 00:23:56.090 |
|
And decision trees are perfectly fine |
|
|
|
00:23:56.090 --> 00:23:57.770 |
|
with that, because it can like freely |
|
|
|
00:23:57.770 --> 00:23:59.390 |
|
choose the threshold and stuff. |
|
|
|
00:23:59.390 --> 00:24:01.450 |
|
And if you multiply 1 feature value by |
|
|
|
00:24:01.450 --> 00:24:03.700 |
|
50, it doesn't really change the |
|
|
|
00:24:03.700 --> 00:24:05.643 |
|
function; it can still choose a
|
|
|
00:24:05.643 --> 00:24:07.300 |
|
threshold that's 50 times larger. |
|
|
|
00:24:08.050 --> 00:24:10.220 |
|
Whereas for nearest neighbor, for example, if
|
|
|
00:24:10.220 --> 00:24:13.084 |
|
one feature ranges from zero to 1000 and one
|
|
|
00:24:13.084 --> 00:24:15.880 |
|
ranges from zero to 1, then it's not |
|
|
|
00:24:15.880 --> 00:24:17.673 |
|
going to care at all about the zero to |
|
|
|
00:24:17.673 --> 00:24:19.270 |
|
1 feature because like that difference |
|
|
|
00:24:19.270 --> 00:24:21.790 |
|
of like 200 on the scale of zero to |
|
|
|
00:24:21.790 --> 00:24:23.738 |
|
1000 is going to overwhelm completely a |
|
|
|
00:24:23.738 --> 00:24:26.290 |
|
difference of 1 on the 0 to one |
|
|
|
00:24:26.290 --> 00:24:26.609 |
|
feature. |
|
|
|
00:24:35.130 --> 00:24:36.275 |
|
Right, it doesn't. |
|
|
|
00:24:36.275 --> 00:24:37.910 |
|
It's not influenced. |
|
|
|
00:24:37.910 --> 00:24:40.040 |
|
I guess it's not influenced by the |
|
|
|
00:24:40.040 --> 00:24:41.370 |
|
variance of the features, yeah. |
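
NOTE
[Added sketch] A small illustration of the scaling point above, assuming scikit-learn; the synthetic data makes only the small-scale feature predictive.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
rng = np.random.default_rng(0)
X = rng.random((400, 2)); X[:, 0] *= 1000          # feature 0 spans ~0-1000 (noise), feature 1 spans ~0-1
y = (X[:, 1] > 0.5).astype(int)                    # label depends only on the small-scale feature
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3).fit(Xtr, ytr)       # distance dominated by feature 0
tree = DecisionTreeClassifier(max_depth=2).fit(Xtr, ytr)      # just thresholds feature 1
print(knn.score(Xte, yte), tree.score(Xte, yte))   # kNN near chance, tree near 1.0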
|
|
|
00:24:46.320 --> 00:24:49.130 |
|
So I don't need to talk through
|
|
|
00:24:49.130 --> 00:24:51.260 |
|
all of this because even for |
|
|
|
00:24:51.260 --> 00:24:53.480 |
|
regression, most of these algorithms
|
|
|
00:24:53.480 --> 00:24:55.219 |
|
are the same and they have the same |
|
|
|
00:24:55.220 --> 00:24:56.710 |
|
strengths and the same weaknesses. |
|
|
|
00:24:57.500 --> 00:24:59.630 |
|
The only difference between regression |
|
|
|
00:24:59.630 --> 00:25:01.310 |
|
and classification is that you tend to |
|
|
|
00:25:01.310 --> 00:25:03.235 |
|
have a different loss function,
|
|
|
00:25:03.235 --> 00:25:04.820 |
|
because you're trying to predict a
|
|
|
00:25:04.820 --> 00:25:06.790 |
|
continuous value instead of predicting |
|
|
|
00:25:06.790 --> 00:25:09.590 |
|
a likelihood of a categorical value, or |
|
|
|
00:25:09.590 --> 00:25:11.240 |
|
trying to just output the categorical |
|
|
|
00:25:11.240 --> 00:25:12.000 |
|
value directly. |
|
|
|
00:25:14.330 --> 00:25:17.450 |
|
Linear regression, though, is one new
|
|
|
00:25:17.450 --> 00:25:18.290 |
|
algorithm here. |
|
|
|
00:25:18.980 --> 00:25:21.923 |
|
So in linear regression, you're trying |
|
|
|
00:25:21.923 --> 00:25:24.585 |
|
to fit the data, so you're not trying |
|
|
|
00:25:24.585 --> 00:25:24.940 |
|
to
|
|
|
00:25:26.480 --> 00:25:28.396 |
|
fit a probability model like
|
|
|
00:25:28.396 --> 00:25:29.590 |
|
linear logistic regression. |
|
|
|
00:25:29.590 --> 00:25:31.860 |
|
You're just trying to directly fit the |
|
|
|
00:25:31.860 --> 00:25:33.680 |
|
prediction given the data, and so you |
|
|
|
00:25:33.680 --> 00:25:35.575 |
|
have a linear function like w
|
|
|
00:25:35.575 --> 00:25:37.960 |
|
transpose x, or w transpose x plus b,
|
|
|
00:25:37.960 --> 00:25:41.120 |
|
that should ideally output y
|
|
|
00:25:41.120 --> 00:25:41.710 |
|
directly. |
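
NOTE
[Added formula] The linear regression model just described, written out:
\hat{y} = w^\top x + b,
typically fit by minimizing the squared error \sum_i (y_i - w^\top x_i - b)^2, optionally plus a regularizer such as \lambda \|w\|_2^2.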
|
|
|
00:25:43.830 --> 00:25:45.670 |
|
Similar to logistic
|
|
|
00:25:45.670 --> 00:25:47.030 |
|
regression, it's powerful in
|
|
|
00:25:47.030 --> 00:25:48.220 |
|
high dimensions, it's widely |
|
|
|
00:25:48.220 --> 00:25:48.820 |
|
applicable. |
|
|
|
00:25:48.820 --> 00:25:50.650 |
|
You get fast prediction. |
|
|
|
00:25:50.650 --> 00:25:52.770 |
|
Also, it can be useful to interpret the |
|
|
|
00:25:52.770 --> 00:25:54.300 |
|
coefficients to say like what the |
|
|
|
00:25:54.300 --> 00:25:56.040 |
|
correlations are of the features with |
|
|
|
00:25:56.040 --> 00:25:58.110 |
|
your prediction, or to see which |
|
|
|
00:25:58.110 --> 00:25:59.900 |
|
features are more predictive than |
|
|
|
00:25:59.900 --> 00:26:00.300 |
|
others. |
|
|
|
00:26:01.410 --> 00:26:03.440 |
|
And similar to logistic regression, it |
|
|
|
00:26:03.440 --> 00:26:06.140 |
|
relies to some extent on good features. |
|
|
|
00:26:06.140 --> 00:26:07.720 |
|
In fact, I would say even more. |
|
|
|
00:26:08.320 --> 00:26:12.220 |
|
Because this is assuming that Y is |
|
|
|
00:26:12.220 --> 00:26:15.040 |
|
going to be a linear function of X and |
|
|
|
00:26:15.040 --> 00:26:17.130 |
|
w, which is in a way a stronger
|
|
|
00:26:17.130 --> 00:26:18.140 |
|
assumption than that a
|
|
|
00:26:18.140 --> 00:26:20.670 |
|
binary classification will be a
|
|
|
00:26:20.670 --> 00:26:21.870 |
|
linear function of the features. |
|
|
|
00:26:23.360 --> 00:26:24.940 |
|
So you often have to do some kind of |
|
|
|
00:26:24.940 --> 00:26:26.950 |
|
feature transformations to make it work |
|
|
|
00:26:26.950 --> 00:26:27.220 |
|
well. |
|
|
|
00:26:28.520 --> 00:26:28.960 |
|
Question. |
|
|
|
00:26:40.800 --> 00:26:43.402 |
|
So, Naive Bayes.
|
|
|
00:26:43.402 --> 00:26:46.295 |
|
The example I gave was a semi-
|
|
|
00:26:46.295 --> 00:26:48.850 |
|
Naive Bayes algorithm for classifying |
|
|
|
00:26:48.850 --> 00:26:50.650 |
|
faces and cars. |
|
|
|
00:26:50.650 --> 00:26:52.618 |
|
So there they took groups of features |
|
|
|
00:26:52.618 --> 00:26:54.190 |
|
and modeled the probabilities of small |
|
|
|
00:26:54.190 --> 00:26:55.720 |
|
groups of features and then took the |
|
|
|
00:26:55.720 --> 00:26:57.090 |
|
product of those to give you your |
|
|
|
00:26:57.090 --> 00:26:58.190 |
|
probabilistic model. |
|
|
|
00:26:58.190 --> 00:27:01.770 |
|
I also would use like Naive Bayes if |
|
|
|
00:27:01.770 --> 00:27:03.719 |
|
I'm trying to do, say, color
|
|
|
00:27:03.720 --> 00:27:05.600 |
|
segmentation based on color and I need |
|
|
|
00:27:05.600 --> 00:27:08.000 |
|
to estimate the probability of color |
|
|
|
00:27:08.000 --> 00:27:09.490 |
|
given that it's in one region versus |
|
|
|
00:27:09.490 --> 00:27:11.470 |
|
another. I might assume
|
|
|
00:27:11.530 --> 00:27:15.320 |
|
that my color features, like the hue
|
|
|
00:27:15.320 --> 00:27:17.920 |
|
versus intensity for example, are |
|
|
|
00:27:17.920 --> 00:27:19.380 |
|
independent given the region that it |
|
|
|
00:27:19.380 --> 00:27:22.260 |
|
came from and so use that as part of my |
|
|
|
00:27:22.260 --> 00:27:23.760 |
|
probabilistic model for doing the |
|
|
|
00:27:23.760 --> 00:27:24.670 |
|
segmentation. |
|
|
|
00:27:25.880 --> 00:27:30.940 |
|
Logistic regression: any
|
|
|
00:27:30.940 --> 00:27:32.610 |
|
neural network is doing logistic |
|
|
|
00:27:32.610 --> 00:27:35.807 |
|
regression in the last layer. |
|
|
|
00:27:35.807 --> 00:27:38.703 |
|
So most things are using logistic |
|
|
|
00:27:38.703 --> 00:27:40.770 |
|
regression now as part of it. |
|
|
|
00:27:40.770 --> 00:27:42.775 |
|
So you can view like the early layers |
|
|
|
00:27:42.775 --> 00:27:44.674 |
|
as feature learning and the last layer |
|
|
|
00:27:44.674 --> 00:27:45.519 |
|
is logistic regression. |
|
|
|
00:27:46.490 --> 00:27:49.250 |
|
And then decision trees are. |
|
|
|
00:27:50.660 --> 00:27:52.200 |
|
We'll see an example. |
|
|
|
00:27:52.200 --> 00:27:53.670 |
|
It's used in the example I'm going to |
|
|
|
00:27:53.670 --> 00:27:55.723 |
|
give, but medical analysis is
|
|
|
00:27:55.723 --> 00:27:57.680 |
|
a good one because you often want some |
|
|
|
00:27:57.680 --> 00:28:00.631 |
|
interpretable function as well as some |
|
|
|
00:28:00.631 --> 00:28:01.620 |
|
good prediction. |
|
|
|
00:28:03.820 --> 00:28:04.090 |
|
Yep. |
|
|
|
00:28:09.200 --> 00:28:11.450 |
|
All right, so one of the key
|
|
|
00:28:11.450 --> 00:28:15.360 |
|
concepts is like how performance varies |
|
|
|
00:28:15.360 --> 00:28:17.230 |
|
with the number of training samples. |
|
|
|
00:28:17.230 --> 00:28:20.080 |
|
So as you get more training data, you |
|
|
|
00:28:20.080 --> 00:28:21.670 |
|
should be able to fit a more accurate |
|
|
|
00:28:21.670 --> 00:28:22.120 |
|
model. |
|
|
|
00:28:23.310 --> 00:28:25.600 |
|
And so you would expect that your test |
|
|
|
00:28:25.600 --> 00:28:27.746 |
|
error should decrease as you get more |
|
|
|
00:28:27.746 --> 00:28:29.760 |
|
training samples, because if you have |
|
|
|
00:28:29.760 --> 00:28:33.640 |
|
only like 1 training sample, then you |
|
|
|
00:28:33.640 --> 00:28:34.700 |
|
don't know if that's like really |
|
|
|
00:28:34.700 --> 00:28:36.420 |
|
representative, if it's covering all |
|
|
|
00:28:36.420 --> 00:28:37.195 |
|
the different cases. |
|
|
|
00:28:37.195 --> 00:28:39.263 |
|
As you get more and more training |
|
|
|
00:28:39.263 --> 00:28:41.020 |
|
samples, you can fit more complex |
|
|
|
00:28:41.020 --> 00:28:43.858 |
|
models and you can be more assured that |
|
|
|
00:28:43.858 --> 00:28:46.110 |
|
the training samples that you've seen |
|
|
|
00:28:46.110 --> 00:28:47.850 |
|
fully represent the distribution that |
|
|
|
00:28:47.850 --> 00:28:48.710 |
|
you'll see in testing. |
|
|
|
00:28:50.040 --> 00:28:52.040 |
|
But as you get more training, it |
|
|
|
00:28:52.040 --> 00:28:53.700 |
|
becomes harder to fit the training |
|
|
|
00:28:53.700 --> 00:28:54.060 |
|
data. |
|
|
|
00:28:54.920 --> 00:28:57.655 |
|
So maybe a linear model can perfectly |
|
|
|
00:28:57.655 --> 00:29:00.340 |
|
classify like 500 examples, but it |
|
|
|
00:29:00.340 --> 00:29:02.350 |
|
can't perfectly classify 500 million |
|
|
|
00:29:02.350 --> 00:29:04.900 |
|
examples, even if they're even in the |
|
|
|
00:29:04.900 --> 00:29:05.430 |
|
training set. |
|
|
|
00:29:07.110 --> 00:29:10.420 |
|
As you get more data, the test
|
|
|
00:29:10.420 --> 00:29:12.630 |
|
and the training error will converge. |
|
|
|
00:29:13.380 --> 00:29:15.100 |
|
And if they're coming from exactly the |
|
|
|
00:29:15.100 --> 00:29:16.540 |
|
same distribution, then they'll |
|
|
|
00:29:16.540 --> 00:29:18.500 |
|
converge to exactly the same value. |
|
|
|
00:29:19.680 --> 00:29:21.030 |
|
Only if they come from different |
|
|
|
00:29:21.030 --> 00:29:22.790 |
|
distributions would you possibly have a |
|
|
|
00:29:22.790 --> 00:29:24.250 |
|
gap if you have infinite training |
|
|
|
00:29:24.250 --> 00:29:24.720 |
|
samples. |
|
|
|
00:29:25.330 --> 00:29:27.133 |
|
So we have these concepts of the test |
|
|
|
00:29:27.133 --> 00:29:27.411 |
|
error. |
|
|
|
00:29:27.411 --> 00:29:29.253 |
|
So that's the error on some samples |
|
|
|
00:29:29.253 --> 00:29:31.420 |
|
that are not used for training that are |
|
|
|
00:29:31.420 --> 00:29:34.360 |
|
randomly sampled from your distribution |
|
|
|
00:29:34.360 --> 00:29:35.020 |
|
of data. |
|
|
|
00:29:35.020 --> 00:29:38.744 |
|
The training error is the error on your |
|
|
|
00:29:38.744 --> 00:29:41.240 |
|
training set that is used to optimize |
|
|
|
00:29:41.240 --> 00:29:43.458 |
|
your model, and the generalization |
|
|
|
00:29:43.458 --> 00:29:46.803 |
|
error is the gap between the test and |
|
|
|
00:29:46.803 --> 00:29:49.237 |
|
the training error, so that the |
|
|
|
00:29:49.237 --> 00:29:51.672 |
|
generalization error is your error due |
|
|
|
00:29:51.672 --> 00:29:55.386 |
|
to an imperfect model fit, due |
|
|
|
00:29:55.386 --> 00:29:55.679 |
|
to |
|
|
|
00:29:55.750 --> 00:29:57.280 |
|
limited training samples. |
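
To make these error definitions concrete, here is a minimal sketch (using sklearn's make_classification as a stand-in dataset; none of these names come from the course code):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data, all drawn from one distribution.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

train_err = np.mean(clf.predict(X_tr) != y_tr)  # error on the samples used to fit
test_err = np.mean(clf.predict(X_te) != y_te)   # error on held-out samples
gen_gap = test_err - train_err                  # the generalization gap

print(f"train {train_err:.3f}  test {test_err:.3f}  gap {gen_gap:.3f}")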
|
|
|
00:30:04.950 --> 00:30:05.650 |
|
Question. |
|
|
|
00:30:07.940 --> 00:30:09.675 |
|
So there's test error. |
|
|
|
00:30:09.675 --> 00:30:12.620 |
|
So that's the I'll start with training. |
|
|
|
00:30:12.620 --> 00:30:14.070 |
|
OK, so first there's training error. |
|
|
|
00:30:14.810 --> 00:30:17.610 |
|
So training error is: you fit a |
|
|
|
00:30:17.610 --> 00:30:19.010 |
|
model on a training set, and then |
|
|
|
00:30:19.010 --> 00:30:20.540 |
|
you're evaluating the error on the same |
|
|
|
00:30:20.540 --> 00:30:21.230 |
|
training set. |
|
|
|
00:30:22.490 --> 00:30:24.620 |
|
So if your model is really powerful, |
|
|
|
00:30:24.620 --> 00:30:27.282 |
|
that training error might be 0, But if |
|
|
|
00:30:27.282 --> 00:30:29.220 |
|
it's if it's more limited, like Naive |
|
|
|
00:30:29.220 --> 00:30:32.090 |
|
Bayes, you'll often have nonzero error. |
|
|
|
00:30:32.950 --> 00:30:35.652 |
|
And since you |
|
|
|
00:30:35.652 --> 00:30:36.384 |
|
have some loss: |
|
|
|
00:30:36.384 --> 00:30:38.580 |
|
If you're optimizing a loss like the |
|
|
|
00:30:38.580 --> 00:30:41.160 |
|
probability, then there's always room |
|
|
|
00:30:41.160 --> 00:30:42.870 |
|
to improve that loss, so you'll always |
|
|
|
00:30:42.870 --> 00:30:45.430 |
|
have some loss on your |
|
|
|
00:30:45.430 --> 00:30:45.890 |
|
training set. |
|
|
|
00:30:47.970 --> 00:30:50.120 |
|
The test error is if you take that same |
|
|
|
00:30:50.120 --> 00:30:52.770 |
|
model, but now you evaluate it on other |
|
|
|
00:30:52.770 --> 00:30:54.516 |
|
samples from the distribution, other |
|
|
|
00:30:54.516 --> 00:30:56.040 |
|
test samples, and you compute an |
|
|
|
00:30:56.040 --> 00:30:56.835 |
|
expected error. |
|
|
|
00:30:56.835 --> 00:30:59.264 |
|
The average error over those test |
|
|
|
00:30:59.264 --> 00:31:01.172 |
|
samples, your test error. |
|
|
|
00:31:01.172 --> 00:31:03.330 |
|
You always expect your test error to be |
|
|
|
00:31:03.330 --> 00:31:04.480 |
|
higher than your training error. |
|
|
|
00:31:05.130 --> 00:31:06.400 |
|
Because you're. |
|
|
|
00:31:06.490 --> 00:31:07.000 |
|
Time. |
|
|
|
00:31:07.860 --> 00:31:10.140 |
|
Because your test error was not used to |
|
|
|
00:31:10.140 --> 00:31:11.530 |
|
optimize your model, but your training |
|
|
|
00:31:11.530 --> 00:31:12.000 |
|
error was. |
|
|
|
00:31:13.140 --> 00:31:15.260 |
|
And that gap between the test error and |
|
|
|
00:31:15.260 --> 00:31:16.260 |
|
the training error is the |
|
|
|
00:31:16.260 --> 00:31:17.320 |
|
generalization error. |
|
|
|
00:31:18.050 --> 00:31:20.560 |
|
So that's the error due to |
|
|
|
00:31:20.560 --> 00:31:23.680 |
|
the challenge of making predictions |
|
|
|
00:31:23.680 --> 00:31:25.330 |
|
about new samples that were not seen in |
|
|
|
00:31:25.330 --> 00:31:25.710 |
|
training. |
|
|
|
00:31:26.340 --> 00:31:27.510 |
|
That were not seen in training. |
|
|
|
00:31:29.880 --> 00:31:30.260 |
|
Question. |
|
|
|
00:31:33.240 --> 00:31:35.950 |
|
So overfit means that. |
|
|
|
00:31:35.950 --> 00:31:37.920 |
|
So this isn't the ideal plot for |
|
|
|
00:31:37.920 --> 00:31:38.610 |
|
overfitting, but. |
|
|
|
00:31:39.500 --> 00:31:41.520 |
|
Overfitting is that as your model gets |
|
|
|
00:31:41.520 --> 00:31:43.600 |
|
more complicated, your training error |
|
|
|
00:31:43.600 --> 00:31:45.115 |
|
should always go down. |
|
|
|
00:31:45.115 --> 00:31:48.510 |
|
You would expect it to go down. |
|
|
|
00:31:49.070 --> 00:31:52.200 |
|
If you, for example were to keep adding |
|
|
|
00:31:52.200 --> 00:31:55.040 |
|
features to your model, then the same |
|
|
|
00:31:55.040 --> 00:31:57.030 |
|
model should keep getting better on |
|
|
|
00:31:57.030 --> 00:31:58.550 |
|
your training set because you've got |
|
|
|
00:31:58.550 --> 00:32:00.235 |
|
more features with which to fit your |
|
|
|
00:32:00.235 --> 00:32:00.810 |
|
training data. |
|
|
|
00:32:02.050 --> 00:32:04.430 |
|
And maybe for a while your test error |
|
|
|
00:32:04.430 --> 00:32:06.320 |
|
will also go down because you genuinely |
|
|
|
00:32:06.320 --> 00:32:07.350 |
|
get a better predictor. |
|
|
|
00:32:08.190 --> 00:32:10.200 |
|
But then at some point, as you continue |
|
|
|
00:32:10.200 --> 00:32:12.500 |
|
to increase the complexity, the test |
|
|
|
00:32:12.500 --> 00:32:13.880 |
|
error will start going up. |
|
|
|
00:32:13.880 --> 00:32:15.260 |
|
Even though the training error keeps |
|
|
|
00:32:15.260 --> 00:32:17.540 |
|
going down, the test error goes up, and |
|
|
|
00:32:17.540 --> 00:32:18.690 |
|
that's the point at which you've |
|
|
|
00:32:18.690 --> 00:32:19.180 |
|
overfit. |
|
|
|
00:32:19.920 --> 00:32:21.604 |
|
So you can't. |
|
|
|
00:32:21.604 --> 00:32:22.165 |
|
A really, |
|
|
|
00:32:22.165 --> 00:32:24.600 |
|
really common conceptual |
|
|
|
00:32:24.600 --> 00:32:27.500 |
|
mistake that people make is to think |
|
|
|
00:32:27.500 --> 00:32:29.670 |
|
that once your training error is 0, |
|
|
|
00:32:29.670 --> 00:32:30.890 |
|
then you've overfit. |
|
|
|
00:32:30.890 --> 00:32:32.060 |
|
That's not overfitting. |
|
|
|
00:32:32.060 --> 00:32:32.515 |
|
You can't. |
|
|
|
00:32:32.515 --> 00:32:33.930 |
|
You can't look at your training error |
|
|
|
00:32:33.930 --> 00:32:35.789 |
|
by itself to say that you've overfit. |
|
|
|
00:32:36.560 --> 00:32:38.430 |
|
Overfitting is when your test error |
|
|
|
00:32:38.430 --> 00:32:40.480 |
|
starts to go up after increasing the |
|
|
|
00:32:40.480 --> 00:32:41.190 |
|
complexity. |
|
|
|
00:32:43.380 --> 00:32:44.950 |
|
So in your homework 2. |
|
|
|
00:32:45.850 --> 00:32:47.778 |
|
Trees are like a really good way to |
|
|
|
00:32:47.778 --> 00:32:49.235 |
|
look at overfitting because the |
|
|
|
00:32:49.235 --> 00:32:51.280 |
|
complexity is like the depth of the |
|
|
|
00:32:51.280 --> 00:32:52.983 |
|
tree or the number of nodes in the |
|
|
|
00:32:52.983 --> 00:32:53.329 |
|
tree. |
|
|
|
00:32:53.330 --> 00:32:56.530 |
|
So in your homework two, you're |
|
|
|
00:32:56.530 --> 00:32:58.930 |
|
going to look at overfitting and how |
|
|
|
00:32:58.930 --> 00:33:01.170 |
|
the training and test error varies as |
|
|
|
00:33:01.170 --> 00:33:02.510 |
|
you increase the complexity of your |
|
|
|
00:33:02.510 --> 00:33:03.080 |
|
classifiers. |
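
A minimal sketch of that kind of depth sweep (this is not the homework code, just an illustration using sklearn's decision trees on the dataset that comes up later in the lecture):

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for depth in [1, 2, 4, 8, 16, None]:  # None lets the tree grow fully
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    tr_err = np.mean(tree.predict(X_tr) != y_tr)
    te_err = np.mean(tree.predict(X_te) != y_te)
    # Training error keeps dropping with depth; overfitting is where test error turns back up.
    print(f"depth={depth}: train err {tr_err:.3f}, test err {te_err:.3f}")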
|
|
|
00:33:04.230 --> 00:33:04.550 |
|
Question. |
|
|
|
00:33:09.440 --> 00:33:09.880 |
|
Right. |
|
|
|
00:33:09.880 --> 00:33:10.820 |
|
Yeah, that's a good point. |
|
|
|
00:33:10.820 --> 00:33:13.380 |
|
So increasing the sample size does not |
|
|
|
00:33:13.380 --> 00:33:15.610 |
|
cause overfitting, but you will |
|
|
|
00:33:15.610 --> 00:33:21.280 |
|
always get, you should expect to get a |
|
|
|
00:33:21.280 --> 00:33:24.070 |
|
better fit to the true model, a closer |
|
|
|
00:33:24.070 --> 00:33:25.450 |
|
fit to the true model as you increase |
|
|
|
00:33:25.450 --> 00:33:26.340 |
|
the training size. |
|
|
|
00:33:26.340 --> 00:33:28.550 |
|
The reason that I say I keep on saying |
|
|
|
00:33:28.550 --> 00:33:31.860 |
|
expect and what that means is that if |
|
|
|
00:33:31.860 --> 00:33:34.416 |
|
you were to resample this problem, like |
|
|
|
00:33:34.416 --> 00:33:36.430 |
|
resample your data over and over again. |
|
|
|
00:33:36.590 --> 00:33:39.152 |
|
Then on average this will happen, but |
|
|
|
00:33:39.152 --> 00:33:41.289 |
|
in any particular scenario you can get |
|
|
|
00:33:41.290 --> 00:33:41.840 |
|
unlucky. |
|
|
|
00:33:41.840 --> 00:33:44.270 |
|
You could add like 5 training examples |
|
|
|
00:33:44.270 --> 00:33:46.490 |
|
and they're really non representative |
|
|
|
00:33:46.490 --> 00:33:48.620 |
|
by chance and they cause your model to |
|
|
|
00:33:48.620 --> 00:33:49.500 |
|
get worse. |
|
|
|
00:33:49.500 --> 00:33:51.080 |
|
So there's no guarantees. |
|
|
|
00:33:51.080 --> 00:33:53.365 |
|
But you can say more easily what will |
|
|
|
00:33:53.365 --> 00:33:55.980 |
|
happen in expectation, which means on |
|
|
|
00:33:55.980 --> 00:33:58.420 |
|
average under the same kinds of |
|
|
|
00:33:58.420 --> 00:33:59.100 |
|
situations. |
|
|
|
00:34:06.160 --> 00:34:10.527 |
|
Alright, so a lot |
|
|
|
00:34:10.527 --> 00:34:13.120 |
|
of people, a lot of |
|
|
|
00:34:13.120 --> 00:34:14.729 |
|
respondents to the survey said that. |
|
|
|
00:34:16.090 --> 00:34:17.850 |
|
Even when these concepts feel like they |
|
|
|
00:34:17.850 --> 00:34:20.910 |
|
make sense abstractly or theoretically, |
|
|
|
00:34:20.910 --> 00:34:22.540 |
|
it's not that easy to understand. |
|
|
|
00:34:22.540 --> 00:34:23.749 |
|
How do you actually put it into |
|
|
|
00:34:23.750 --> 00:34:25.660 |
|
practice and turn it into code? |
|
|
|
00:34:25.660 --> 00:34:27.750 |
|
So I want to work through a particular |
|
|
|
00:34:27.750 --> 00:34:29.200 |
|
example in some detail. |
|
|
|
00:34:30.090 --> 00:34:33.490 |
|
And the example I choose is this |
|
|
|
00:34:33.490 --> 00:34:35.550 |
|
Wisconsin breast cancer data set. |
|
|
|
00:34:36.450 --> 00:34:38.290 |
|
So this data set was collected in the |
|
|
|
00:34:38.290 --> 00:34:39.360 |
|
early 90s. |
|
|
|
00:34:40.440 --> 00:34:44.650 |
|
The motivation is that is that doctors |
|
|
|
00:34:44.650 --> 00:34:46.800 |
|
wanted to use this tool, called fine |
|
|
|
00:34:46.800 --> 00:34:50.410 |
|
needle aspirates to diagnose whether a |
|
|
|
00:34:50.410 --> 00:34:52.660 |
|
tumor is malignant or benign. |
|
|
|
00:34:53.900 --> 00:34:54.900 |
|
And doctors. |
|
|
|
00:34:54.900 --> 00:34:57.040 |
|
In some medical papers, doctors |
|
|
|
00:34:57.040 --> 00:35:01.360 |
|
reported a 94% accuracy in making this |
|
|
|
00:35:01.360 --> 00:35:02.540 |
|
diagnosis. |
|
|
|
00:35:02.540 --> 00:35:06.560 |
|
But the authors of this study, the |
|
|
|
00:35:06.560 --> 00:35:08.520 |
|
first author, is a medical doctor |
|
|
|
00:35:08.520 --> 00:35:08.980 |
|
himself. |
|
|
|
00:35:11.150 --> 00:35:12.490 |
|
Have like a few concerns. |
|
|
|
00:35:12.490 --> 00:35:14.210 |
|
One is that they want to see if you can |
|
|
|
00:35:14.210 --> 00:35:15.327 |
|
get a better accuracy. |
|
|
|
00:35:15.327 --> 00:35:17.983 |
|
Two, or maybe two and three, they want |
|
|
|
00:35:17.983 --> 00:35:19.560 |
|
to reduce the amount of expertise |
|
|
|
00:35:19.560 --> 00:35:21.160 |
|
that's needed in order to make a good |
|
|
|
00:35:21.160 --> 00:35:21.925 |
|
diagnosis. |
|
|
|
00:35:21.925 --> 00:35:24.080 |
|
And third, they suspect that these |
|
|
|
00:35:24.080 --> 00:35:26.620 |
|
reports may be biased because |
|
|
|
00:35:26.620 --> 00:35:29.065 |
|
they note that there tends to be like a |
|
|
|
00:35:29.065 --> 00:35:30.900 |
|
bias towards positive results. |
|
|
|
00:35:30.900 --> 00:35:34.638 |
|
I mean, yeah, there tends to be a bias |
|
|
|
00:35:34.638 --> 00:35:36.879 |
|
towards positive results and reports, |
|
|
|
00:35:36.880 --> 00:35:37.130 |
|
right? |
|
|
|
00:35:37.990 --> 00:35:40.140 |
|
People are more likely to report |
|
|
|
00:35:40.140 --> 00:35:41.436 |
|
something if they think it's good, then |
|
|
|
00:35:41.436 --> 00:35:43.240 |
|
if they get a disappointing outcome. |
|
|
|
00:35:44.810 --> 00:35:47.190 |
|
So they want to create computer based |
|
|
|
00:35:47.190 --> 00:35:49.250 |
|
tests that are more objective and |
|
|
|
00:35:49.250 --> 00:35:51.270 |
|
provide an effective diagnostic tool. |
|
|
|
00:35:52.830 --> 00:35:55.350 |
|
So they collected data from 569 |
|
|
|
00:35:55.350 --> 00:35:58.660 |
|
patients for developing the |
|
|
|
00:35:58.660 --> 00:36:00.584 |
|
algorithm and doing their first tests |
|
|
|
00:36:00.584 --> 00:36:02.525 |
|
and then they collected an additional |
|
|
|
00:36:02.525 --> 00:36:03.250 |
|
54. |
|
|
|
00:36:03.960 --> 00:36:06.570 |
|
Data from another 54 patients for their |
|
|
|
00:36:06.570 --> 00:36:07.290 |
|
final tests. |
|
|
|
00:36:08.850 --> 00:36:13.080 |
|
And so you can it's like important to |
|
|
|
00:36:13.080 --> 00:36:16.090 |
|
understand like how painstaking this |
|
|
|
00:36:16.090 --> 00:36:18.340 |
|
process is of collecting data. |
|
|
|
00:36:18.340 --> 00:36:18.740 |
|
So. |
|
|
|
00:36:19.470 --> 00:36:21.620 |
|
These are these are real people who |
|
|
|
00:36:21.620 --> 00:36:24.350 |
|
have tumors and they take medical |
|
|
|
00:36:24.350 --> 00:36:26.660 |
|
images of them and then they have some |
|
|
|
00:36:26.660 --> 00:36:28.730 |
|
interface where somebody can go in and |
|
|
|
00:36:28.730 --> 00:36:31.176 |
|
outline several of the cells, many of |
|
|
|
00:36:31.176 --> 00:36:32.530 |
|
the cells that were detected. |
|
|
|
00:36:33.930 --> 00:36:35.836 |
|
And then they have a. |
|
|
|
00:36:35.836 --> 00:36:38.220 |
|
Then they do like an automated analysis |
|
|
|
00:36:38.220 --> 00:36:40.060 |
|
of those outlines to compute different |
|
|
|
00:36:40.060 --> 00:36:42.100 |
|
features, like how what is the radius |
|
|
|
00:36:42.100 --> 00:36:43.853 |
|
of the cells and what's the area of the |
|
|
|
00:36:43.853 --> 00:36:45.250 |
|
cells and what's the compactness. |
|
|
|
00:36:46.420 --> 00:36:47.350 |
|
And then? |
|
|
|
00:36:47.450 --> 00:36:48.110 |
|
|
|
|
|
00:36:48.860 --> 00:36:51.460 |
|
As the final features, they look at |
|
|
|
00:36:51.460 --> 00:36:53.790 |
|
these characteristics of the cells. |
|
|
|
00:36:53.790 --> 00:36:54.810 |
|
They look at the average |
|
|
|
00:36:54.810 --> 00:36:57.162 |
|
characteristic, the characteristic of |
|
|
|
00:36:57.162 --> 00:36:59.620 |
|
the largest cell, the worst cell. |
|
|
|
00:37:00.340 --> 00:37:04.030 |
|
And the and then the standard deviation |
|
|
|
00:37:04.030 --> 00:37:05.340 |
|
of these characteristics. |
|
|
|
00:37:05.340 --> 00:37:06.730 |
|
So they're looking at trying to look at |
|
|
|
00:37:06.730 --> 00:37:09.250 |
|
like the distribution of these shape |
|
|
|
00:37:09.250 --> 00:37:11.680 |
|
properties of the cells in order to |
|
|
|
00:37:11.680 --> 00:37:13.410 |
|
determine if the cancerous cells are |
|
|
|
00:37:13.410 --> 00:37:14.390 |
|
malignant or benign. |
|
|
|
00:37:15.880 --> 00:37:18.820 |
|
So it's a pretty involved process to |
|
|
|
00:37:18.820 --> 00:37:19.620 |
|
collect that data. |
|
|
|
00:37:22.080 --> 00:37:22.420 |
|
|
|
|
|
00:38:00.720 --> 00:38:01.480 |
|
Right. |
|
|
|
00:38:01.480 --> 00:38:04.120 |
|
So what you would do? |
|
|
|
00:38:04.120 --> 00:38:08.160 |
|
And if you go for any kinds of tests, |
|
|
|
00:38:08.160 --> 00:38:10.000 |
|
you'll probably experience this to some |
|
|
|
00:38:10.000 --> 00:38:10.320 |
|
extent. |
|
|
|
00:38:11.820 --> 00:38:13.870 |
|
Like often, somebody will go, a |
|
|
|
00:38:13.870 --> 00:38:16.093 |
|
technician will go in, they see some |
|
|
|
00:38:16.093 --> 00:38:17.710 |
|
image, they take different measurements |
|
|
|
00:38:17.710 --> 00:38:18.350 |
|
on the image. |
|
|
|
00:38:19.090 --> 00:38:22.410 |
|
And then they can say then they may run |
|
|
|
00:38:22.410 --> 00:38:24.765 |
|
this like through some data analysis, |
|
|
|
00:38:24.765 --> 00:38:27.650 |
|
and either either they have rules in |
|
|
|
00:38:27.650 --> 00:38:29.640 |
|
their head for like what are acceptable |
|
|
|
00:38:29.640 --> 00:38:32.715 |
|
variations, or they run it through some |
|
|
|
00:38:32.715 --> 00:38:36.760 |
|
analysis and they'll say, they might |
|
|
|
00:38:36.760 --> 00:38:39.110 |
|
tell you have no cause for concern, or |
|
|
|
00:38:39.110 --> 00:38:41.474 |
|
there's like some cause for concern, or |
|
|
|
00:38:41.474 --> 00:38:43.350 |
|
like there's great cause for concern. |
|
|
|
00:38:44.140 --> 00:38:45.510 |
|
But if you have an algorithm that it |
|
|
|
00:38:45.510 --> 00:38:47.100 |
|
might tell you, in this case, for |
|
|
|
00:38:47.100 --> 00:38:49.630 |
|
example, what's the probability that |
|
|
|
00:38:49.630 --> 00:38:51.850 |
|
these cells are malignant versus |
|
|
|
00:38:51.850 --> 00:38:52.980 |
|
benign? |
|
|
|
00:38:52.980 --> 00:38:55.595 |
|
And then you might say, if there's a |
|
|
|
00:38:55.595 --> 00:38:57.730 |
|
30% chance that it's malignant, then |
|
|
|
00:38:57.730 --> 00:38:59.210 |
|
I'm going to recommend a biopsy. |
|
|
|
00:38:59.210 --> 00:39:02.160 |
|
So you want to have some confidence |
|
|
|
00:39:02.160 --> 00:39:03.140 |
|
with your prediction. |
|
|
|
00:39:04.210 --> 00:39:05.360 |
|
So in this. |
|
|
|
00:39:06.760 --> 00:39:08.392 |
|
In our analysis, we're not going to |
|
|
|
00:39:08.392 --> 00:39:11.020 |
|
look at the confidences too much for |
|
|
|
00:39:11.020 --> 00:39:12.010 |
|
simplicity. |
|
|
|
00:39:12.010 --> 00:39:15.457 |
|
But in the study they also will look, |
|
|
|
00:39:15.457 --> 00:39:18.340 |
|
they also look at the like specificity, |
|
|
|
00:39:18.340 --> 00:39:20.560 |
|
like how often can you do you |
|
|
|
00:39:20.560 --> 00:39:22.406 |
|
misdiagnose one way or the other and |
|
|
|
00:39:22.406 --> 00:39:24.155 |
|
they can use the confidence as part of |
|
|
|
00:39:24.155 --> 00:39:24.860 |
|
the recommendation. |
|
|
|
00:39:30.410 --> 00:39:35.273 |
|
Alright, so I'm going to go into this |
|
|
|
00:39:35.273 --> 00:39:37.140 |
|
and I think now is a good time to take |
|
|
|
00:39:37.140 --> 00:39:38.050 |
|
a minute or two. |
|
|
|
00:39:38.050 --> 00:39:39.515 |
|
You can think about this problem, how |
|
|
|
00:39:39.515 --> 00:39:40.250 |
|
you would solve it. |
|
|
|
00:39:40.250 --> 00:39:42.130 |
|
You've got 30 features, continuous |
|
|
|
00:39:42.130 --> 00:39:43.510 |
|
features, and you're trying to predict |
|
|
|
00:39:43.510 --> 00:39:44.450 |
|
malignant or benign. |
|
|
|
00:39:45.150 --> 00:39:48.480 |
|
And also feel free to stretch if |
|
|
|
00:39:48.480 --> 00:39:51.920 |
|
you need to prepare your mind for the |
|
|
|
00:39:51.920 --> 00:39:52.410 |
|
next half. |
|
|
|
00:40:20.140 --> 00:40:20.570 |
|
Question. |
|
|
|
00:40:36.560 --> 00:40:39.556 |
|
Decision trees, for example, do that, |
|
|
|
00:40:39.556 --> 00:40:42.250 |
|
and neural networks will also do that. |
|
|
|
00:40:42.250 --> 00:40:44.940 |
|
Or kernelized SVMs and nearest |
|
|
|
00:40:44.940 --> 00:40:45.374 |
|
neighbor. |
|
|
|
00:40:45.374 --> 00:40:47.950 |
|
They all depend jointly on the |
|
|
|
00:40:47.950 --> 00:40:48.560 |
|
features. |
|
|
|
00:40:51.930 --> 00:40:52.700 |
|
How does what? |
|
|
|
00:40:56.030 --> 00:40:58.985 |
|
I guess because the distance is. |
|
|
|
00:40:58.985 --> 00:41:01.517 |
|
That's a good point, yeah. |
|
|
|
00:41:01.517 --> 00:41:04.160 |
|
The KNN, I guess, it depends jointly on |
|
|
|
00:41:04.160 --> 00:41:05.790 |
|
them, but it's independently |
|
|
|
00:41:05.790 --> 00:41:07.020 |
|
considering those features. |
|
|
|
00:41:07.020 --> 00:41:08.180 |
|
That's right, yeah. |
|
|
|
00:41:20.030 --> 00:41:23.680 |
|
But it's often hard to |
|
|
|
00:41:23.680 --> 00:41:25.810 |
|
know what's relevant, and so it's nice. |
|
|
|
00:41:25.810 --> 00:41:27.510 |
|
The ideal is that you can just collect |
|
|
|
00:41:27.510 --> 00:41:28.840 |
|
a lot of things that you think might be |
|
|
|
00:41:28.840 --> 00:41:30.950 |
|
relevant and feed it into the algorithm |
|
|
|
00:41:30.950 --> 00:41:34.578 |
|
and not have to manually |
|
|
|
00:41:34.578 --> 00:41:36.640 |
|
prune it. |
|
|
|
00:41:42.050 --> 00:41:45.256 |
|
Yeah, so L1 is robust to irrelevant |
|
|
|
00:41:45.256 --> 00:41:47.780 |
|
features, but if you do L2, it's not so |
|
|
|
00:41:47.780 --> 00:41:49.340 |
|
robust to irrelevant features. |
|
|
|
00:41:49.340 --> 00:41:50.900 |
|
So that's like another property of the |
|
|
|
00:41:50.900 --> 00:41:52.160 |
|
algorithm is whether it has that |
|
|
|
00:41:52.160 --> 00:41:52.660 |
|
robustness. |
|
|
|
00:41:57.120 --> 00:41:59.780 |
|
Alright, so let me zoom in a little |
|
|
|
00:41:59.780 --> 00:42:00.280 |
|
bit. |
|
|
|
00:42:03.050 --> 00:42:04.260 |
|
I guess over here. |
|
|
|
00:42:10.690 --> 00:42:13.660 |
|
So we've got this data set. |
|
|
|
00:42:13.660 --> 00:42:15.710 |
|
Fortunately, in this case, I can load |
|
|
|
00:42:15.710 --> 00:42:17.900 |
|
the data set from sklearn datasets. |
|
|
|
00:42:19.720 --> 00:42:22.300 |
|
So here I have the initialization code |
|
|
|
00:42:22.300 --> 00:42:22.965 |
|
and in your homework |
|
|
|
00:42:22.965 --> 00:42:24.790 |
|
I provided this code to you as well |
|
|
|
00:42:24.790 --> 00:42:26.670 |
|
that initially like loads the data and |
|
|
|
00:42:26.670 --> 00:42:28.480 |
|
splits it up into different datasets. |
|
|
|
00:42:29.440 --> 00:42:32.010 |
|
But here I've just got my libraries |
|
|
|
00:42:32.010 --> 00:42:33.470 |
|
that I'm going to use. |
|
|
|
00:42:33.470 --> 00:42:37.960 |
|
I load the data. This data comes in |
|
|
|
00:42:37.960 --> 00:42:39.260 |
|
like a particular structure. |
|
|
|
00:42:39.260 --> 00:42:40.930 |
|
So I take out the features which are |
|
|
|
00:42:40.930 --> 00:42:43.740 |
|
capital X, the predictions which are Y. |
|
|
|
00:42:44.490 --> 00:42:45.940 |
|
And it also gives me names of the |
|
|
|
00:42:45.940 --> 00:42:49.120 |
|
features and names of the predictions |
|
|
|
00:42:49.120 --> 00:42:50.690 |
|
which are good for visualization. |
|
|
|
00:42:51.740 --> 00:42:53.330 |
|
So if I run this, it's going to start |
|
|
|
00:42:53.330 --> 00:42:55.328 |
|
an instance on Colab and then it's |
|
|
|
00:42:55.328 --> 00:42:57.366 |
|
going to download the data and print |
|
|
|
00:42:57.366 --> 00:42:59.900 |
|
out the shape of X and the shape of Y. |
|
|
|
00:42:59.900 --> 00:43:02.950 |
|
So I often like I print a lot of shapes |
|
|
|
00:43:02.950 --> 00:43:05.130 |
|
of variables when I'm doing stuff |
|
|
|
00:43:05.130 --> 00:43:07.880 |
|
because it helps me to make sure I |
|
|
|
00:43:07.880 --> 00:43:09.230 |
|
understand exactly what I loaded. |
|
|
|
00:43:09.230 --> 00:43:11.679 |
|
Like if I print out the shape and it's |
|
|
|
00:43:11.679 --> 00:43:14.006 |
|
if the shape of X is 1 by something |
|
|
|
00:43:14.006 --> 00:43:15.660 |
|
then I would be like maybe I took the |
|
|
|
00:43:15.660 --> 00:43:18.160 |
|
wrong like values from this data |
|
|
|
00:43:18.160 --> 00:43:18.630 |
|
structure. |
|
|
|
00:43:19.760 --> 00:43:23.580 |
|
Alright, so I've got 569 data points. |
|
|
|
00:43:23.580 --> 00:43:26.950 |
|
So remember that there were 569 samples |
|
|
|
00:43:26.950 --> 00:43:28.790 |
|
that were drawn at first that were used |
|
|
|
00:43:28.790 --> 00:43:30.350 |
|
for their training and algorithm |
|
|
|
00:43:30.350 --> 00:43:32.680 |
|
development, and then another 54 |
|
|
|
00:43:32.680 --> 00:43:34.340 |
|
or something that we use for testing. |
|
|
|
00:43:34.340 --> 00:43:36.380 |
|
The 54 are not released, they're not |
|
|
|
00:43:36.380 --> 00:43:37.170 |
|
part of this data set. |
|
|
|
00:43:38.230 --> 00:43:40.150 |
|
And then there's 30 features, there's |
|
|
|
00:43:40.150 --> 00:43:41.300 |
|
10 characteristics. |
|
|
|
00:43:41.970 --> 00:43:44.560 |
|
That correspond to the like the worst |
|
|
|
00:43:44.560 --> 00:43:46.230 |
|
case, the average case and the standard |
|
|
|
00:43:46.230 --> 00:43:46.760 |
|
deviation. |
|
|
|
00:43:47.470 --> 00:43:50.034 |
|
And I've got 569 labels, so number of |
|
|
|
00:43:50.034 --> 00:43:52.010 |
|
labels equals number of data points, so |
|
|
|
00:43:52.010 --> 00:43:52.500 |
|
that's good. |
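
Roughly, the loading-and-shape-check step being described might look like this (a sketch, not the exact notebook cell):

import numpy as np
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()          # sklearn's copy of the Wisconsin dataset
X = data.data                        # raw features
y = data.target                      # labels
feature_names = data.feature_names   # names of the 30 features
label_names = data.target_names      # ['malignant', 'benign']

print(X.shape)   # (569, 30): 569 patients, 30 features
print(y.shape)   # (569,): one label per patient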
|
|
|
00:43:54.430 --> 00:43:56.433 |
|
Now I can print out. |
|
|
|
00:43:56.433 --> 00:43:58.960 |
|
I usually will also like print out some |
|
|
|
00:43:58.960 --> 00:44:00.940 |
|
examples just to make sure that there's |
|
|
|
00:44:00.940 --> 00:44:01.585 |
|
nothing weird here. |
|
|
|
00:44:01.585 --> 00:44:04.125 |
|
I don't have any NaNs or anything like |
|
|
|
00:44:04.125 --> 00:44:04.330 |
|
that. |
|
|
|
00:44:05.190 --> 00:44:06.620 |
|
So here are the different feature |
|
|
|
00:44:06.620 --> 00:44:08.060 |
|
names. |
|
|
|
00:44:08.060 --> 00:44:11.080 |
|
Here I chose a few random example |
|
|
|
00:44:11.080 --> 00:44:11.760 |
|
indices. |
|
|
|
00:44:12.430 --> 00:44:14.980 |
|
And I can see, I can see some of the |
|
|
|
00:44:14.980 --> 00:44:15.740 |
|
feature values. |
|
|
|
00:44:15.740 --> 00:44:18.530 |
|
So there's no NaNs or infs or |
|
|
|
00:44:18.530 --> 00:44:19.570 |
|
anything like that in there. |
|
|
|
00:44:19.570 --> 00:44:20.400 |
|
That's good. |
|
|
|
00:44:20.400 --> 00:44:22.320 |
|
Also I can notice like. |
|
|
|
00:44:23.030 --> 00:44:25.974 |
|
Some of their values are like |
|
|
|
00:44:25.974 --> 00:44:30.416 |
|
1.2e2 or 1.1e3, so this is like |
|
|
|
00:44:30.416 --> 00:44:32.080 |
|
1000, while some other ones are really |
|
|
|
00:44:32.080 --> 00:44:36.134 |
|
small, like 1.188e-1. |
|
|
|
00:44:36.134 --> 00:44:37.910 |
|
So that's something to consider. |
|
|
|
00:44:37.910 --> 00:44:39.340 |
|
There's a pretty big range of the |
|
|
|
00:44:39.340 --> 00:44:40.230 |
|
feature values here. |
|
|
|
00:44:43.520 --> 00:44:45.600 |
|
So then another thing I'll do early is |
|
|
|
00:44:45.600 --> 00:44:48.050 |
|
say how common is each class, because |
|
|
|
00:44:48.050 --> 00:44:50.120 |
|
if like 99% of the examples are in one |
|
|
|
00:44:50.120 --> 00:44:51.745 |
|
class, that's something I need to keep |
|
|
|
00:44:51.745 --> 00:44:53.840 |
|
in mind versus a 50/50 split. |
|
|
|
00:44:55.290 --> 00:44:56.650 |
|
So in this case. |
|
|
|
00:44:56.750 --> 00:44:57.360 |
|
|
|
|
|
00:44:58.700 --> 00:45:02.810 |
|
37% of the examples have Class 0 and |
|
|
|
00:45:02.810 --> 00:45:04.600 |
|
63% have Class 1. |
|
|
|
00:45:05.630 --> 00:45:10.190 |
|
And if I think I printed the label |
|
|
|
00:45:10.190 --> 00:45:12.105 |
|
names, yeah, so the label names. |
|
|
|
00:45:12.105 --> 00:45:14.750 |
|
So 0 means malignant and one means |
|
|
|
00:45:14.750 --> 00:45:15.260 |
|
benign. |
|
|
|
00:45:15.940 --> 00:45:20.190 |
|
So in this sample, 37% are malignant |
|
|
|
00:45:20.190 --> 00:45:21.940 |
|
and 63% are benign. |
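
The class-frequency check itself is just a couple of lines, continuing from the loading sketch above:

# Fraction of each class; in this dataset label 0 is malignant, label 1 is benign.
print("P(y=0, malignant):", np.mean(y == 0))   # about 0.37
print("P(y=1, benign):   ", np.mean(y == 1))   # about 0.63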
|
|
|
00:45:24.410 --> 00:45:26.060 |
|
Now I'm going to create a training and |
|
|
|
00:45:26.060 --> 00:45:27.160 |
|
validation set. |
|
|
|
00:45:27.160 --> 00:45:29.410 |
|
So I define the number of training |
|
|
|
00:45:29.410 --> 00:45:31.720 |
|
samples 469. |
|
|
|
00:45:32.650 --> 00:45:35.845 |
|
I use a random seed and that's because |
|
|
|
00:45:35.845 --> 00:45:38.360 |
|
it might be that the training samples |
|
|
|
00:45:38.360 --> 00:45:40.141 |
|
are stored in some structured way. |
|
|
|
00:45:40.141 --> 00:45:42.125 |
|
Maybe they put all the examples with |
|
|
|
00:45:42.125 --> 00:45:44.260 |
|
zero first, label zero first and then |
|
|
|
00:45:44.260 --> 00:45:45.280 |
|
label one. |
|
|
|
00:45:45.280 --> 00:45:47.629 |
|
Or maybe they were structured in some |
|
|
|
00:45:47.630 --> 00:45:49.910 |
|
other way and I want it to be random, |
|
|
|
00:45:49.910 --> 00:45:51.800 |
|
so randomness is not something you can |
|
|
|
00:45:51.800 --> 00:45:52.720 |
|
leave to chance. |
|
|
|
00:45:52.720 --> 00:45:56.250 |
|
You need to use some permutation to |
|
|
|
00:45:56.250 --> 00:45:58.450 |
|
make sure that you get a random sample |
|
|
|
00:45:58.450 --> 00:45:59.040 |
|
of the data. |
|
|
|
00:46:00.580 --> 00:46:03.319 |
|
So I do a random permutation of the |
|
|
|
00:46:03.320 --> 00:46:05.840 |
|
same length as the number of indices. |
|
|
|
00:46:05.840 --> 00:46:08.280 |
|
I set a seed here because I just wanted |
|
|
|
00:46:08.280 --> 00:46:10.010 |
|
this to be repeatable for the purpose |
|
|
|
00:46:10.010 --> 00:46:11.890 |
|
of the class, and actually it's a good |
|
|
|
00:46:11.890 --> 00:46:14.310 |
|
idea to set a seed anyway so that. |
|
|
|
00:46:16.450 --> 00:46:18.540 |
|
Because it takes out one source of |
|
|
|
00:46:18.540 --> 00:46:20.210 |
|
variance for your debugging. |
|
|
|
00:46:21.980 --> 00:46:24.145 |
|
So I split it into a training set. |
|
|
|
00:46:24.145 --> 00:46:25.770 |
|
I took the first 469 examples |
|
|
|
00:46:26.750 --> 00:46:29.290 |
|
as my X train and Y train, and then I |
|
|
|
00:46:29.290 --> 00:46:32.420 |
|
took all the rest as my X val, Y val, |
|
|
|
00:46:32.420 --> 00:46:34.410 |
|
and by the 1st examples I mean the |
|
|
|
00:46:34.410 --> 00:46:36.060 |
|
first ones in this random |
|
|
|
00:46:36.060 --> 00:46:37.130 |
|
permutation list. |
|
|
|
00:46:38.330 --> 00:46:41.580 |
|
Now X train and Y train have. |
|
|
|
00:46:42.020 --> 00:46:47.790 |
|
I have 469 examples so 469 by 30. |
|
|
|
00:46:48.680 --> 00:46:51.575 |
|
And X val, Y val, which is the second |
|
|
|
00:46:51.575 --> 00:46:53.310 |
|
one has 100 examples. |
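
A sketch of that split, continuing from above, with a fixed seed so the permutation is repeatable (variable names are illustrative):

num_train = 469
np.random.seed(0)                     # fixed seed: one less source of variance while debugging
perm = np.random.permutation(len(y))  # a random ordering of all 569 indices

X_train, y_train = X[perm[:num_train]], y[perm[:num_train]]   # (469, 30) and (469,)
X_val, y_val = X[perm[num_train:]], y[perm[num_train:]]       # (100, 30) and (100,)
print(X_train.shape, X_val.shape)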
|
|
|
00:46:55.420 --> 00:46:58.375 |
|
Sometimes the first thing I'll do is |
|
|
|
00:46:58.375 --> 00:47:01.360 |
|
like a simple classifier just to see is |
|
|
|
00:47:01.360 --> 00:47:02.390 |
|
this problem trivial. |
|
|
|
00:47:02.390 --> 00:47:04.125 |
|
If I get like 0 error right away, then |
|
|
|
00:47:04.125 --> 00:47:06.780 |
|
I can just stop spending time on it. |
|
|
|
00:47:07.630 --> 00:47:10.909 |
|
So I made a nearest neighbor |
|
|
|
00:47:10.910 --> 00:47:11.620 |
|
classifier. |
|
|
|
00:47:11.620 --> 00:47:13.390 |
|
So I have nearest neighbor. |
|
|
|
00:47:13.390 --> 00:47:15.600 |
|
X train and Y train are fed in as well |
|
|
|
00:47:15.600 --> 00:47:16.340 |
|
as X test. |
|
|
|
00:47:17.640 --> 00:47:21.470 |
|
I pre-initialize my predictions, so I |
|
|
|
00:47:21.470 --> 00:47:23.560 |
|
initialize it with zeros. |
|
|
|
00:47:23.560 --> 00:47:25.990 |
|
For each test sample, I take the |
|
|
|
00:47:25.990 --> 00:47:27.540 |
|
difference from the test sample and all |
|
|
|
00:47:27.540 --> 00:47:29.140 |
|
the training samples. |
|
|
|
00:47:29.140 --> 00:47:30.940 |
|
Under the hood, NumPy will do |
|
|
|
00:47:30.940 --> 00:47:32.800 |
|
broadcasting, which means it will copy |
|
|
|
00:47:32.800 --> 00:47:36.085 |
|
this as necessary so that the X test |
|
|
|
00:47:36.085 --> 00:47:38.669 |
|
will be a 1 by 30 and it will copy it |
|
|
|
00:47:38.669 --> 00:47:42.560 |
|
so that it becomes a 469 by 30. |
|
|
|
00:47:43.860 --> 00:47:45.139 |
|
Then I take the difference. |
|
|
|
00:47:45.140 --> 00:47:46.330 |
|
It will be the difference of each |
|
|
|
00:47:46.330 --> 00:47:49.270 |
|
element of the features and samples. |
|
|
|
00:47:49.960 --> 00:47:51.840 |
|
Square it will be the square of each |
|
|
|
00:47:51.840 --> 00:47:54.660 |
|
element and then I sum over axis one |
|
|
|
00:47:54.660 --> 00:47:55.920 |
|
which is the 2nd axis. |
|
|
|
00:47:55.920 --> 00:47:57.210 |
|
Zero is the first axis. |
|
|
|
00:47:58.110 --> 00:47:59.790 |
|
So this will be the sum squared |
|
|
|
00:47:59.790 --> 00:48:00.830 |
|
distance of the features. |
|
|
|
00:48:01.890 --> 00:48:02.770 |
|
Euclidean distance. |
|
|
|
00:48:02.770 --> 00:48:04.390 |
|
You would also take the square root, |
|
|
|
00:48:04.390 --> 00:48:05.857 |
|
but I don't need to take the square |
|
|
|
00:48:05.857 --> 00:48:09.008 |
|
root because the minimum of the squared |
|
|
|
00:48:09.008 --> 00:48:11.104 |
|
distance is the same as the minimum |
|
|
|
00:48:11.104 --> 00:48:13.251 |
|
of the square root of the squared |
|
|
|
00:48:13.251 --> 00:48:13.519 |
|
distance. |
|
|
|
00:48:13.680 --> 00:48:13.890 |
|
Right. |
|
|
|
00:48:16.060 --> 00:48:19.780 |
|
J is the argmin of the distance, so I say J |
|
|
|
00:48:19.780 --> 00:48:21.495 |
|
equals the argmin of this distance. |
|
|
|
00:48:21.495 --> 00:48:23.130 |
|
So this will give me the index that had |
|
|
|
00:48:23.130 --> 00:48:24.010 |
|
the minimum distance. |
|
|
|
00:48:24.700 --> 00:48:26.420 |
|
If I needed more than one, I could use |
|
|
|
00:48:26.420 --> 00:48:29.500 |
|
argsort and then take like the first K |
|
|
|
00:48:29.500 --> 00:48:30.050 |
|
indices. |
|
|
|
00:48:31.000 --> 00:48:33.386 |
|
I assign the test label to that of |
|
|
|
00:48:33.386 --> 00:48:34.720 |
|
the training sample that had the |
|
|
|
00:48:34.720 --> 00:48:36.500 |
|
minimum distance, and I return it. |
|
|
|
00:48:36.500 --> 00:48:39.240 |
|
So nearest neighbor is pretty simple. |
|
|
|
00:48:40.800 --> 00:48:43.980 |
|
If you're a proficient coder, |
|
|
|
00:48:43.980 --> 00:48:46.410 |
|
it's like a two minutes or whatever to |
|
|
|
00:48:46.410 --> 00:48:46.790 |
|
code it. |
|
|
|
00:48:48.690 --> 00:48:52.140 |
|
Then I'm going to test it, so I then do |
|
|
|
00:48:52.140 --> 00:48:54.050 |
|
the prediction on the validation set. |
|
|
|
00:48:54.050 --> 00:48:55.230 |
|
Remember, nearest neighbor has no |
|
|
|
00:48:55.230 --> 00:48:56.870 |
|
training, so I have no training code |
|
|
|
00:48:56.870 --> 00:48:58.105 |
|
here, it's just really a prediction |
|
|
|
00:48:58.105 --> 00:48:58.430 |
|
code. |
|
|
|
00:48:59.450 --> 00:49:02.320 |
|
And now I compute my average accuracy, |
|
|
|
00:49:02.320 --> 00:49:05.309 |
|
which is the |
|
|
|
00:49:05.310 --> 00:49:08.500 |
|
mean of the times that the validation label is |
|
|
|
00:49:08.500 --> 00:49:09.760 |
|
equal to the predicted label. |
|
|
|
00:49:10.710 --> 00:49:12.230 |
|
And then the error is 1 minus the |
|
|
|
00:49:12.230 --> 00:49:13.490 |
|
accuracy, right? |
|
|
|
00:49:13.490 --> 00:49:14.040 |
|
So let's run it. |
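
Putting that walkthrough together, the nearest-neighbor sketch might look roughly like this (illustrative names, not the exact notebook code):

def nearest_neighbor_predict(X_train, y_train, X_test):
    y_pred = np.zeros(len(X_test), dtype=y_train.dtype)  # pre-initialize the predictions
    for i in range(len(X_test)):
        # Broadcasting: the (30,) test sample is expanded against the (469, 30) training matrix.
        diff = X_train - X_test[i]
        dist = np.sum(diff ** 2, axis=1)  # squared Euclidean distance to every training sample
        j = np.argmin(dist)               # index of the closest training sample
        y_pred[i] = y_train[j]            # copy its label
    return y_pred

y_pred = nearest_neighbor_predict(X_train, y_train, X_val)
err = 1 - np.mean(y_pred == y_val)        # error = 1 - accuracy
print("validation error:", err)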
|
|
|
00:49:16.480 --> 00:49:21.550 |
|
All right, so I got an error of 8% now. |
|
|
|
00:49:23.090 --> 00:49:24.060 |
|
I could quit here. |
|
|
|
00:49:24.060 --> 00:49:26.840 |
|
I could be like, OK, I'm done 8%, but I |
|
|
|
00:49:26.840 --> 00:49:28.150 |
|
shouldn't really be satisfied with |
|
|
|
00:49:28.150 --> 00:49:29.080 |
|
this, right? |
|
|
|
00:49:29.080 --> 00:49:32.400 |
|
So remember that in the study they |
|
|
|
00:49:32.400 --> 00:49:34.105 |
|
said that doctors were reporting that |
|
|
|
00:49:34.105 --> 00:49:37.380 |
|
they can get like 6% error, they had |
|
|
|
00:49:37.380 --> 00:49:38.810 |
|
94% accuracy. |
|
|
|
00:49:39.530 --> 00:49:41.906 |
|
And since I'm a machine learning |
|
|
|
00:49:41.906 --> 00:49:43.940 |
|
engineer, I'm armed |
|
|
|
00:49:43.940 --> 00:49:44.800 |
|
with data. |
|
|
|
00:49:44.800 --> 00:49:47.250 |
|
I should be able to outperform a |
|
|
|
00:49:47.250 --> 00:49:49.190 |
|
medical Doctor Who has years of |
|
|
|
00:49:49.190 --> 00:49:51.960 |
|
experience on the same problem. |
|
|
|
00:49:54.860 --> 00:49:56.800 |
|
Right, so all of his wits and |
|
|
|
00:49:56.800 --> 00:49:58.420 |
|
experience is just bringing a knife to |
|
|
|
00:49:58.420 --> 00:49:59.300 |
|
a gunfight. |
|
|
|
00:50:01.760 --> 00:50:02.410 |
|
I'm just kidding. |
|
|
|
00:50:03.810 --> 00:50:05.670 |
|
But seriously, like, I can probably do |
|
|
|
00:50:05.670 --> 00:50:06.130 |
|
better, right? |
|
|
|
00:50:06.130 --> 00:50:07.190 |
|
It's just my first attempt. |
|
|
|
00:50:07.900 --> 00:50:09.530 |
|
So let's look at the data a little bit |
|
|
|
00:50:09.530 --> 00:50:11.440 |
|
better, a little more in depth. |
|
|
|
00:50:12.340 --> 00:50:13.610 |
|
So remember that one thing we noticed |
|
|
|
00:50:13.610 --> 00:50:15.145 |
|
is that it looked like some feature |
|
|
|
00:50:15.145 --> 00:50:16.895 |
|
values were a lot larger than other |
|
|
|
00:50:16.895 --> 00:50:18.540 |
|
values, and nearest neighbor is not |
|
|
|
00:50:18.540 --> 00:50:19.716 |
|
very robust to that. |
|
|
|
00:50:19.716 --> 00:50:22.830 |
|
It might be like emphasizing the large |
|
|
|
00:50:22.830 --> 00:50:24.620 |
|
values much more, which might not be |
|
|
|
00:50:24.620 --> 00:50:25.840 |
|
the most important features. |
|
|
|
00:50:26.490 --> 00:50:28.390 |
|
So here I have a print statement. |
|
|
|
00:50:28.390 --> 00:50:30.210 |
|
The only thing fancy is that I use some |
|
|
|
00:50:30.210 --> 00:50:32.900 |
|
spacing thing to make it like evenly |
|
|
|
00:50:32.900 --> 00:50:33.420 |
|
spaced. |
|
|
|
00:50:34.040 --> 00:50:35.828 |
|
And I'm printing the means of the |
|
|
|
00:50:35.828 --> 00:50:37.330 |
|
features, the standard deviations of |
|
|
|
00:50:37.330 --> 00:50:39.710 |
|
the features, the means of the features |
|
|
|
00:50:39.710 --> 00:50:42.413 |
|
where y = 0, and the means of the |
|
|
|
00:50:42.413 --> 00:50:43.599 |
|
features where y = 1. |
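
That print loop might look roughly like this, continuing with the split from above (the column widths are just to keep things evenly spaced):

for i, name in enumerate(feature_names):
    mu = X_train[:, i].mean()              # overall mean of this feature
    sd = X_train[:, i].std()               # overall standard deviation
    mu0 = X_train[y_train == 0, i].mean()  # mean when the label is 0 (malignant)
    mu1 = X_train[y_train == 1, i].mean()  # mean when the label is 1 (benign)
    print(f"{name:25s} mean={mu:9.3f} std={sd:9.3f} mean|y=0={mu0:9.3f} mean|y=1={mu1:9.3f}")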
|
|
|
00:50:44.340 --> 00:50:46.250 |
|
So that can kind of tell me a couple |
|
|
|
00:50:46.250 --> 00:50:46.580 |
|
things. |
|
|
|
00:50:46.580 --> 00:50:48.100 |
|
One is like what is the scale of the |
|
|
|
00:50:48.100 --> 00:50:49.530 |
|
features, by looking at the standard |
|
|
|
00:50:49.530 --> 00:50:50.310 |
|
deviation and the mean. |
|
|
|
00:50:51.170 --> 00:50:54.050 |
|
Also, are the features like predictive |
|
|
|
00:50:54.050 --> 00:50:54.338 |
|
or not? |
|
|
|
00:50:54.338 --> 00:50:56.315 |
|
If I have a good spread of the means of |
|
|
|
00:50:56.315 --> 00:50:59.095 |
|
the two classes, I mean of y = |
|
|
|
00:50:59.095 --> 00:51:01.749 |
|
0 and y = 1, then it's predictive. |
|
|
|
00:51:01.750 --> 00:51:03.600 |
|
But if I have a small spread compared |
|
|
|
00:51:03.600 --> 00:51:05.530 |
|
to the standard deviation then it's not |
|
|
|
00:51:05.530 --> 00:51:06.240 |
|
very predictive. |
|
|
|
00:51:07.350 --> 00:51:10.150 |
|
Right, so for example, this feature |
|
|
|
00:51:10.150 --> 00:51:11.824 |
|
here, mean smoothness. |
|
|
|
00:51:11.824 --> 00:51:15.584 |
|
The mean is 0.1, the standard deviation is 0.01, |
|
|
|
00:51:15.584 --> 00:51:19.947 |
|
the mean for class 0 is 0.1, the mean for class 1 |
|
|
|
00:51:19.947 --> 00:51:20.690 |
|
is 0.09. |
|
|
|
00:51:20.690 --> 00:51:22.770 |
|
And you know, with three digits they |
|
|
|
00:51:22.770 --> 00:51:24.305 |
|
might look even closer. |
|
|
|
00:51:24.305 --> 00:51:26.092 |
|
So obviously mean |
|
|
|
00:51:26.092 --> 00:51:28.430 |
|
smoothness is not a very good feature, |
|
|
|
00:51:28.430 --> 00:51:31.340 |
|
it's not very predictive of the label. |
|
|
|
00:51:32.120 --> 00:51:35.050 |
|
Where if I look at something like. |
|
|
|
00:51:35.140 --> 00:51:35.930 |
|
|
|
|
|
00:51:37.780 --> 00:51:40.240 |
|
If I look at something like this, just |
|
|
|
00:51:40.240 --> 00:51:42.125 |
|
take the first one, the difference of |
|
|
|
00:51:42.125 --> 00:51:43.730 |
|
the means is more than one standard |
|
|
|
00:51:43.730 --> 00:51:47.620 |
|
deviation of the feature, and so mean |
|
|
|
00:51:47.620 --> 00:51:49.420 |
|
radius is like fairly predictive. |
|
|
|
00:51:51.210 --> 00:51:53.395 |
|
But my take-home from this is |
|
|
|
00:51:53.395 --> 00:51:56.480 |
|
that some features have means and |
|
|
|
00:51:56.480 --> 00:51:58.950 |
|
standard deviations that are sub one |
|
|
|
00:51:58.950 --> 00:51:59.730 |
|
less than one. |
|
|
|
00:52:00.400 --> 00:52:03.340 |
|
And others are in the hundreds, so |
|
|
|
00:52:03.340 --> 00:52:04.540 |
|
that's not good. |
|
|
|
00:52:04.540 --> 00:52:05.700 |
|
So I want to do some kind of |
|
|
|
00:52:05.700 --> 00:52:06.590 |
|
normalization. |
|
|
|
00:52:09.520 --> 00:52:11.857 |
|
So I'm going to normalize by the mean |
|
|
|
00:52:11.857 --> 00:52:13.820 |
|
and standard deviation, which means |
|
|
|
00:52:13.820 --> 00:52:16.537 |
|
that I subtract the mean and divide by |
|
|
|
00:52:16.537 --> 00:52:17.880 |
|
the standard deviation. |
|
|
|
00:52:17.880 --> 00:52:20.040 |
|
Importantly, you want to compute the |
|
|
|
00:52:20.040 --> 00:52:22.138 |
|
mean and the standard deviation once on |
|
|
|
00:52:22.138 --> 00:52:23.880 |
|
the training set and then apply the |
|
|
|
00:52:23.880 --> 00:52:25.531 |
|
same normalization to the training and |
|
|
|
00:52:25.531 --> 00:52:26.566 |
|
the validation set. |
|
|
|
00:52:26.566 --> 00:52:28.580 |
|
So you can't provide different |
|
|
|
00:52:28.580 --> 00:52:31.620 |
|
normalizations to different sets, or |
|
|
|
00:52:31.620 --> 00:52:33.080 |
|
else your features will |
|
|
|
00:52:33.080 --> 00:52:35.030 |
|
not be comparable, and it's a |
|
|
|
00:52:35.030 --> 00:52:35.640 |
|
bug. |
|
|
|
00:52:35.640 --> 00:52:37.360 |
|
So it won't work. |
|
|
|
00:52:38.650 --> 00:52:40.240 |
|
OK, so I compute the mean, compute the |
|
|
|
00:52:40.240 --> 00:52:41.720 |
|
standard deviation, take the difference, |
|
|
|
00:52:41.720 --> 00:52:43.160 |
|
divide by the standard deviation, and do the same |
|
|
|
00:52:43.160 --> 00:52:44.220 |
|
thing on my val set. |
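
A sketch of that normalization step: the statistics come from the training set only, and the same shift and scale are then applied to both sets.

x_mu = X_train.mean(axis=0)  # per-feature mean over the 469 training samples, shape (30,)
x_sd = X_train.std(axis=0)   # per-feature standard deviation, shape (30,)

# Apply the *training* statistics to both sets (broadcasting handles the shapes).
X_train_n = (X_train - x_mu) / x_sd
X_val_n = (X_val - x_mu) / x_sd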
|
|
|
00:52:44.990 --> 00:52:46.430 |
|
And there's nothing to print here, but |
|
|
|
00:52:46.430 --> 00:52:47.430 |
|
I need to run it. |
|
|
|
00:52:47.430 --> 00:52:48.000 |
|
Whoops. |
|
|
|
00:52:51.250 --> 00:52:52.380 |
|
All right, so now I'm going to repeat |
|
|
|
00:52:52.380 --> 00:52:53.150 |
|
my nearest neighbor. |
|
|
|
00:52:53.920 --> 00:52:54.866 |
|
OK, 4%. |
|
|
|
00:52:54.866 --> 00:52:57.336 |
|
So that's a lot better. Before I got |
|
|
|
00:52:57.336 --> 00:53:01.206 |
|
12%, I think 8%, yeah, so before I got |
|
|
|
00:53:01.206 --> 00:53:01.500 |
|
8%. |
|
|
|
00:53:02.130 --> 00:53:03.200 |
|
Now it's 4%. |
|
|
|
00:53:04.050 --> 00:53:04.720 |
|
So that's good. |
|
|
|
00:53:05.380 --> 00:53:07.040 |
|
But I still don't know if like nearest |
|
|
|
00:53:07.040 --> 00:53:07.850 |
|
neighbor is the best. |
|
|
|
00:53:07.850 --> 00:53:09.240 |
|
So I shouldn't just try like 1 |
|
|
|
00:53:09.240 --> 00:53:11.140 |
|
algorithm and then assume that's the |
|
|
|
00:53:11.140 --> 00:53:11.910 |
|
best I should get. |
|
|
|
00:53:11.910 --> 00:53:14.620 |
|
I should try other algorithms and try |
|
|
|
00:53:14.620 --> 00:53:16.280 |
|
to see if I can improve things further. |
|
|
|
00:53:17.510 --> 00:53:18.110 |
|
Question. |
|
|
|
00:53:24.670 --> 00:53:25.550 |
|
So the yes. |
|
|
|
00:53:25.550 --> 00:53:26.940 |
|
So the question is why did the error |
|
|
|
00:53:26.940 --> 00:53:28.170 |
|
rate get better? |
|
|
|
00:53:28.170 --> 00:53:30.950 |
|
And I think it's because under the |
|
|
|
00:53:30.950 --> 00:53:33.920 |
|
original features, these features like |
|
|
|
00:53:33.920 --> 00:53:38.000 |
|
mean area that have a huge range are |
|
|
|
00:53:38.000 --> 00:53:40.690 |
|
going to dominate the distances. |
|
|
|
00:53:40.690 --> 00:53:42.420 |
|
All of these features concavity, |
|
|
|
00:53:42.420 --> 00:53:45.470 |
|
compactness, concave points, symmetry, at |
|
|
|
00:53:45.470 --> 00:53:48.730 |
|
most will add a distance of .1 or |
|
|
|
00:53:48.730 --> 00:53:51.010 |
|
something like that where this mean |
|
|
|
00:53:51.010 --> 00:53:53.887 |
|
area is going to tend to add distances |
|
|
|
00:53:53.887 --> 00:53:54.430 |
|
of. |
|
|
|
00:53:54.490 --> 00:53:54.960 |
|
Hundreds. |
|
|
|
00:53:55.580 --> 00:53:58.620 |
|
And so if I don't normalize it, that |
|
|
|
00:53:58.620 --> 00:54:00.100 |
|
means that essentially I'm saying the |
|
|
|
00:54:00.100 --> 00:54:01.728 |
|
bigger the feature values, the more |
|
|
|
00:54:01.728 --> 00:54:02.990 |
|
important they are, or the more |
|
|
|
00:54:02.990 --> 00:54:04.307 |
|
variance in the feature values, the |
|
|
|
00:54:04.307 --> 00:54:05.049 |
|
more important they are. |
|
|
|
00:54:05.670 --> 00:54:07.340 |
|
And that's not based on any like |
|
|
|
00:54:07.340 --> 00:54:08.480 |
|
knowledge of the problem. |
|
|
|
00:54:08.480 --> 00:54:09.970 |
|
That was just because that's how the |
|
|
|
00:54:09.970 --> 00:54:10.720 |
|
data turned out. |
|
|
|
00:54:10.720 --> 00:54:12.560 |
|
And so I don't really trust that kind |
|
|
|
00:54:12.560 --> 00:54:14.210 |
|
of decision. |
|
|
|
00:54:16.270 --> 00:54:16.650 |
|
Go ahead. |
|
|
|
00:54:18.070 --> 00:54:18.350 |
|
OK. |
|
|
|
00:54:19.290 --> 00:54:20.240 |
|
You had a question? |
|
|
|
00:54:29.700 --> 00:54:32.490 |
|
So I compute the mean and this is |
|
|
|
00:54:32.490 --> 00:54:34.615 |
|
computing the mean over the first axis. |
|
|
|
00:54:34.615 --> 00:54:36.640 |
|
So it means that for every feature |
|
|
|
00:54:36.640 --> 00:54:38.700 |
|
value I compute the mean over all the |
|
|
|
00:54:38.700 --> 00:54:39.320 |
|
examples. |
|
|
|
00:54:40.110 --> 00:54:42.680 |
|
Of the training features XTR. |
|
|
|
00:54:43.450 --> 00:54:45.560 |
|
So I computed the mean, the expectation |
|
|
|
00:54:45.560 --> 00:54:49.370 |
|
or the arithmetic average of each |
|
|
|
00:54:49.370 --> 00:54:50.010 |
|
feature. |
|
|
|
00:54:50.990 --> 00:54:53.330 |
|
Over all the training samples, and then |
|
|
|
00:54:53.330 --> 00:54:56.500 |
|
I compute the standard deviation of each |
|
|
|
00:54:56.500 --> 00:54:58.140 |
|
feature over all the examples. |
|
|
|
00:54:58.140 --> 00:54:58.960 |
|
So that's the. |
|
|
|
00:55:00.570 --> 00:55:00.940 |
|
Right. |
|
|
|
00:55:03.150 --> 00:55:07.200 |
|
So remember that X train has this shape |
|
|
|
00:55:07.200 --> 00:55:11.920 |
|
469 by 30, so if I go down the first |
|
|
|
00:55:11.920 --> 00:55:14.480 |
|
axis then I'm changing the example. |
|
|
|
00:55:14.480 --> 00:55:17.330 |
|
So 0, 1, 2, 3, et cetera are different |
|
|
|
00:55:17.330 --> 00:55:18.100 |
|
examples. |
|
|
|
00:55:18.100 --> 00:55:20.363 |
|
And if I go down the second axis then |
|
|
|
00:55:20.363 --> 00:55:22.470 |
|
I'm going into different feature |
|
|
|
00:55:22.470 --> 00:55:22.960 |
|
columns. |
|
|
|
00:55:23.680 --> 00:55:25.760 |
|
And so I want to take the mean over the |
|
|
|
00:55:25.760 --> 00:55:27.524 |
|
examples for each feature. |
|
|
|
00:55:27.524 --> 00:55:30.113 |
|
And so I say axis equals zero for the |
|
|
|
00:55:30.113 --> 00:55:31.870 |
|
mean to take the mean over samples. |
|
|
|
00:55:31.870 --> 00:55:34.774 |
|
Otherwise I'll end up with a 1 by 30, |
|
|
|
00:55:34.774 --> 00:55:38.480 |
|
I mean with a 469 by 1, where I've |
|
|
|
00:55:38.480 --> 00:55:39.850 |
|
taken the average feature for each |
|
|
|
00:55:39.850 --> 00:55:40.380 |
|
example. |
|
|
|
00:55:46.980 --> 00:55:49.390 |
|
So if I say axis equals zero, it means |
|
|
|
00:55:49.390 --> 00:55:51.000 |
|
it will keep all the |
|
|
|
00:55:51.000 --> 00:55:52.400 |
|
remaining dimensions. |
|
|
|
00:55:52.750 --> 00:55:53.320 |
|
and |
|
|
|
00:55:54.040 --> 00:55:55.590 |
|
average over the first dimension. |
|
|
|
00:56:02.380 --> 00:56:04.870 |
|
So then this will be a 30 dimensional |
|
|
|
00:56:04.870 --> 00:56:06.080 |
|
vector X MU. |
|
|
|
00:56:07.050 --> 00:56:11.230 |
|
It will be the mean of each feature |
|
|
|
00:56:11.230 --> 00:56:12.060 |
|
over the samples. |
|
|
|
00:56:12.930 --> 00:56:14.540 |
|
And this is also a 30 dimensional |
|
|
|
00:56:14.540 --> 00:56:15.880 |
|
vector standard deviation. |
|
|
|
00:56:17.170 --> 00:56:19.300 |
|
And then I'm subtracting off the mean |
|
|
|
00:56:19.300 --> 00:56:21.185 |
|
and dividing by the standard deviation. |
|
|
|
00:56:21.185 --> 00:56:24.150 |
|
And Numpy is nice that even though X |
|
|
|
00:56:24.150 --> 00:56:28.355 |
|
train is 469 by 30 and X mu is just |
|
|
|
00:56:28.355 --> 00:56:28.840 |
|
30. |
|
|
|
00:56:29.030 --> 00:56:32.370 |
|
NumPy is smart, and it says you're |
|
|
|
00:56:32.370 --> 00:56:35.390 |
|
doing a 469-by-30 minus a 30. |
|
|
|
00:56:35.390 --> 00:56:39.060 |
|
So I need to copy that 30, 469 times, to |
|
|
|
00:56:39.060 --> 00:56:39.810 |
|
take the difference. |
|
|
|
00:56:41.550 --> 00:56:42.790 |
|
And same for the divide. |
|
|
|
00:56:42.790 --> 00:56:44.990 |
|
This is an element wise divide so it's |
|
|
|
00:56:44.990 --> 00:56:45.800 |
|
important to know. |
|
|
|
00:56:46.500 --> 00:56:48.340 |
|
There you can have like a matrix |
|
|
|
00:56:48.340 --> 00:56:50.710 |
|
multiplication or matrix inverse or you |
|
|
|
00:56:50.710 --> 00:56:53.006 |
|
can have an element wise multiplication |
|
|
|
00:56:53.006 --> 00:56:53.759 |
|
or inverse. |
|
|
|
00:56:54.570 --> 00:56:57.070 |
|
Usually like the simple operators are |
|
|
|
00:56:57.070 --> 00:56:58.320 |
|
element wise in Python. |
|
|
|
00:56:58.970 --> 00:57:01.485 |
|
So this means that for every element of |
|
|
|
00:57:01.485 --> 00:57:04.796 |
|
this matrix, I'm going to divide by the |
|
|
|
00:57:04.796 --> 00:57:06.940 |
|
corresponding |
|
|
|
00:57:06.940 --> 00:57:07.680 |
|
standard deviation. |
|
|
|
00:57:09.390 --> 00:57:10.690 |
|
And then I do the same thing for the |
|
|
|
00:57:10.690 --> 00:57:11.640 |
|
validation set. |
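
As a tiny shape check of what axis=0 and broadcasting are doing here (illustrative only):

A = np.arange(12.0).reshape(4, 3)   # pretend: 4 samples, 3 features
print(A.mean(axis=0).shape)         # (3,): one mean per feature, averaged over samples
print(A.mean(axis=1).shape)         # (4,): one mean per sample, averaged over features
print((A - A.mean(axis=0)).shape)   # (4, 3): the (3,) vector is broadcast across every row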
|
|
|
00:57:11.640 --> 00:57:12.960 |
|
And what was your question? |
|
|
|
00:57:22.780 --> 00:57:23.490 |
|
Yeah. |
|
|
|
00:57:32.420 --> 00:57:37.550 |
|
So L1: you'd use L1 regularization for linear |
|
|
|
00:57:37.550 --> 00:57:40.183 |
|
logistic regression, and that |
|
|
|
00:57:40.183 --> 00:57:43.110 |
|
will select |
|
|
|
00:57:43.110 --> 00:57:44.030 |
|
features for you. |
|
|
|
00:57:44.030 --> 00:57:46.110 |
|
You could also use L1 nearest neighbor |
|
|
|
00:57:46.110 --> 00:57:47.720 |
|
distance which would be less sensitive |
|
|
|
00:57:47.720 --> 00:57:48.110 |
|
to this. |
|
|
|
00:57:49.700 --> 00:57:52.150 |
|
But with this range of like .1 versus |
|
|
|
00:57:52.150 --> 00:57:54.590 |
|
like 500, it will still be that the |
|
|
|
00:57:54.590 --> 00:57:55.820 |
|
larger features will dominate. |
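
For reference, a hedged sketch of the L1-regularized logistic regression being mentioned, using sklearn's API on the normalized features from above (not code from the lecture):

from sklearn.linear_model import LogisticRegression

# The L1 penalty drives many weights exactly to zero, which acts like feature selection.
l1_clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
l1_clf.fit(X_train_n, y_train)
print("nonzero weights:", np.sum(l1_clf.coef_ != 0), "of", l1_clf.coef_.size)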
|
|
|
00:57:57.180 --> 00:57:57.430 |
|
Yep. |
|
|
|
00:57:59.850 --> 00:58:03.560 |
|
All right, so after I normalized, now |
|
|
|
00:58:03.560 --> 00:58:06.550 |
|
note that I'm passing in X train N, |
|
|
|
00:58:06.550 --> 00:58:08.670 |
|
where the N stands for norm for me, |
|
|
|
00:58:09.450 --> 00:58:10.380 |
|
and X val N. |
|
|
|
00:58:10.380 --> 00:58:12.240 |
|
Now I get lower error. |
|
|
|
00:58:12.830 --> 00:58:14.220 |
|
Alright, so now let's try a different |
|
|
|
00:58:14.220 --> 00:58:14.885 |
|
classifier. |
|
|
|
00:58:14.885 --> 00:58:17.340 |
|
Let's do Naive Bayes, and I'm going to |
|
|
|
00:58:17.340 --> 00:58:21.055 |
|
assume that each feature value given |
|
|
|
00:58:21.055 --> 00:58:23.399 |
|
the class is a Gaussian. |
|
|
|
00:58:23.399 --> 00:58:27.480 |
|
So given that y = 0 or y = 1, |
|
|
|
00:58:27.480 --> 00:58:30.232 |
|
then my probability of the feature is a |
|
|
|
00:58:30.232 --> 00:58:31.770 |
|
Gaussian with some mean and some |
|
|
|
00:58:31.770 --> 00:58:32.680 |
|
standard deviation. |
|
|
|
00:58:33.410 --> 00:58:35.640 |
|
Now for naive Bayes I need a training and |
|
|
|
00:58:35.640 --> 00:58:36.610 |
|
prediction function. |
|
|
|
00:58:37.590 --> 00:58:40.560 |
|
So I'm going to pass in my training |
|
|
|
00:58:40.560 --> 00:58:41.430 |
|
data X and Y. |
|
|
|
00:58:42.300 --> 00:58:44.760 |
|
Eps is some small number. I'm going to use |
|
|
|
00:58:44.760 --> 00:58:46.864 |
|
that as like a prior to add it to the |
|
|
|
00:58:46.864 --> 00:58:48.390 |
|
variance so that even if my feature |
|
|
|
00:58:48.390 --> 00:58:50.340 |
|
value has no variance in training, I'm |
|
|
|
00:58:50.340 --> 00:58:52.175 |
|
going to have some minimal variance so |
|
|
|
00:58:52.175 --> 00:58:54.450 |
|
that I don't have like a divide by zero |
|
|
|
00:58:54.450 --> 00:58:56.610 |
|
essentially where I'm not like over |
|
|
|
00:58:56.610 --> 00:59:00.600 |
|
relying on the variance that I observe. |
|
|
|
00:59:02.080 --> 00:59:03.960 |
|
All right, so initialize my MU and my |
|
|
|
00:59:03.960 --> 00:59:06.988 |
|
Sigma to be the number of features by |
|
|
|
00:59:06.988 --> 00:59:08.880 |
|
two, and the two is because there's two |
|
|
|
00:59:08.880 --> 00:59:10.360 |
|
classes, so I'm going to estimate this |
|
|
|
00:59:10.360 --> 00:59:10.960 |
|
for each class. |
|
|
|
00:59:12.250 --> 00:59:14.988 |
|
I compute my probability of the label |
|
|
|
00:59:14.988 --> 00:59:17.870 |
|
to be just the mean of y = 0. |
|
|
|
00:59:17.870 --> 00:59:19.180 |
|
So this is a probability that the label |
|
|
|
00:59:19.180 --> 00:59:20.000 |
|
is equal to 0. |
|
|
|
00:59:21.530 --> 00:59:23.820 |
|
And then for each feature, so the range |
|
|
|
00:59:23.820 --> 00:59:25.650 |
|
will be 0 to the number of features. |
|
|
|
00:59:26.510 --> 00:59:30.100 |
|
I compute the mean over the cases where |
|
|
|
00:59:30.100 --> 00:59:31.330 |
|
the label equals 0. |
|
|
|
00:59:32.660 --> 00:59:34.770 |
|
And the mean over the case where the |
|
|
|
00:59:34.770 --> 00:59:36.450 |
|
labels equals one. |
|
|
|
00:59:36.450 --> 00:59:37.990 |
|
And I could do this as like a |
|
|
|
00:59:37.990 --> 00:59:40.260 |
|
vectorized operation like over an axis, |
|
|
|
00:59:40.260 --> 00:59:41.970 |
|
but for clarity I did it this way. |
|
|
|
00:59:42.700 --> 00:59:43.350 |
|
With the for loop. |
|
|
|
00:59:45.040 --> 00:59:47.990 |
|
Compute the standard deviation where y = |
|
|
|
00:59:47.990 --> 00:59:50.827 |
|
0 and the standard deviation where y = 1 |
|
|
|
00:59:50.827 --> 00:59:52.520 |
|
and again like this epsilon will be |
|
|
|
00:59:52.520 --> 00:59:55.600 |
|
some small number that will just like |
|
|
|
00:59:55.600 --> 00:59:57.260 |
|
make sure that my variance isn't zero. |
|
|
|
00:59:57.260 --> 00:59:59.810 |
|
Or it says that I think there |
|
|
|
00:59:59.810 --> 01:00:01.030 |
|
might be a little bit more variance |
|
|
|
01:00:01.030 --> 01:00:01.740 |
|
than I observe. |
|
|
|
01:00:03.080 --> 01:00:03.600 |
|
And. |
|
|
|
01:00:04.420 --> 01:00:05.090 |
|
That's it. |
|
|
|
01:00:05.090 --> 01:00:07.570 |
|
So then I'll return my mean, standard |
|
|
|
01:00:07.570 --> 01:00:09.150 |
|
deviation and the probability of the |
|
|
|
01:00:09.150 --> 01:00:10.010 |
|
label. Question? |
|
|
|
01:00:12.500 --> 01:00:12.760 |
|
Sorry. |
|
|
|
01:00:21.950 --> 01:00:24.952 |
|
Because X shape one, so X shape zero is |
|
|
|
01:00:24.952 --> 01:00:26.505 |
|
the number of samples and X shape one |
|
|
|
01:00:26.505 --> 01:00:27.840 |
|
is the number of features. |
|
|
|
01:00:27.840 --> 01:00:30.810 |
|
And there's a mean |
|
|
|
01:00:30.810 --> 01:00:33.273 |
|
estimate for every feature, not for |
|
|
|
01:00:33.273 --> 01:00:34.050 |
|
every sample. |
|
|
|
01:00:35.780 --> 01:00:37.840 |
|
So this will be a number of features by |
|
|
|
01:00:37.840 --> 01:00:38.230 |
|
two. |
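
A sketch of a Gaussian naive Bayes training function along these lines (names and the exact epsilon handling are illustrative):

def naive_bayes_gaussian_train(X, y, eps=0.01):
    n_feats = X.shape[1]
    mu = np.zeros((n_feats, 2))     # per-feature mean, one column per class
    sigma = np.zeros((n_feats, 2))  # per-feature standard deviation, one column per class
    p0 = np.mean(y == 0)            # prior probability that the label is 0
    for f in range(n_feats):
        mu[f, 0] = X[y == 0, f].mean()
        mu[f, 1] = X[y == 1, f].mean()
        # eps keeps the variance away from zero even if a feature is constant in training.
        sigma[f, 0] = X[y == 0, f].std() + eps
        sigma[f, 1] = X[y == 1, f].std() + eps
    return mu, sigma, p0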
|
|
|
01:00:43.510 --> 01:00:44.720 |
|
Alright, and then I'm going to do |
|
|
|
01:00:44.720 --> 01:00:45.380 |
|
prediction. |
|
|
|
01:00:45.380 --> 01:00:48.200 |
|
So now I'll write my prediction code. |
|
|
|
01:00:48.200 --> 01:00:50.080 |
|
I now need to pass in the thing that I |
|
|
|
01:00:50.080 --> 01:00:50.930 |
|
want to predict for. |
|
|
|
01:00:51.620 --> 01:00:53.720 |
|
The means, the standard deviations, |
|
|
|
01:00:53.720 --> 01:00:55.840 |
|
and the P0 that I estimated from my |
|
|
|
01:00:55.840 --> 01:00:56.670 |
|
training function. |
|
|
|
01:00:57.640 --> 01:01:00.450 |
|
And I'm going to compute the log |
|
|
|
01:01:00.450 --> 01:01:04.460 |
|
probability of X and Y, not the |
|
|
|
01:01:04.460 --> 01:01:05.390 |
|
probability of X and Y. |
|
|
|
01:01:06.130 --> 01:01:07.889 |
|
And the reason for that is that if I |
|
|
|
01:01:07.890 --> 01:01:09.960 |
|
multiply a lot of small probabilities |
|
|
|
01:01:09.960 --> 01:01:11.706 |
|
together then I get a really small |
|
|
|
01:01:11.706 --> 01:01:11.972 |
|
number. |
|
|
|
01:01:11.972 --> 01:01:13.955 |
|
And if I have a lot of features like |
|
|
|
01:01:13.955 --> 01:01:16.418 |
|
you do for MNIST for example, then that |
|
|
|
01:01:16.418 --> 01:01:18.470 |
|
small number will eventually become |
|
|
|
01:01:18.470 --> 01:01:21.820 |
|
zero and like in terms of floating |
|
|
|
01:01:21.820 --> 01:01:23.889 |
|
point operations or it will become like |
|
|
|
01:01:23.890 --> 01:01:26.470 |
|
unwieldy small. |
|
|
|
01:01:26.470 --> 01:01:28.160 |
|
So you want to compute the log |
|
|
|
01:01:28.160 --> 01:01:29.460 |
|
probability, not the probability. |
|
|
|
01:01:30.460 --> 01:01:33.100 |
|
And maximizing the
|
|
|
01:01:33.100 --> 01:01:34.602 |
|
log probability is the same as |
|
|
|
01:01:34.602 --> 01:01:35.660 |
|
maximizing the probability. |
|
|
|
01:01:36.860 --> 01:01:38.560 |
|
So for each feature. |
|
|
|
01:01:39.350 --> 01:01:43.388 |
|
I add the log probability of the |
|
|
|
01:01:43.388 --> 01:01:46.726 |
|
feature given y = 0 or the feature |
|
|
|
01:01:46.726 --> 01:01:47.739 |
|
given y = 1. |
|
|
|
01:01:48.960 --> 01:01:53.265 |
|
And this is the log of the
|
|
|
01:01:53.265 --> 01:01:54.000 |
|
Gaussian function. |
|
|
|
01:01:54.000 --> 01:01:56.340 |
|
Just ignoring the constant multiplier |
|
|
|
01:01:56.340 --> 01:01:58.540 |
|
in the Gaussian function because that |
|
|
|
01:01:58.540 --> 01:02:01.300 |
|
won't be any different whether y = 0 |
|
|
|
01:02:01.300 --> 01:02:03.059 |
|
or 1; there's a one over
|
|
|
01:02:03.059 --> 01:02:04.310 |
|
square root of 2π times sigma.
|
|
|
01:02:06.200 --> 01:02:12.750 |
|
So this minus (mean minus X) squared divided
|
|
|
01:02:12.750 --> 01:02:14.140 |
|
by Sigma squared. |
|
|
|
01:02:14.140 --> 01:02:15.930 |
|
That's like in the exponent of the |
|
|
|
01:02:15.930 --> 01:02:16.490 |
|
Gaussian. |
|
|
|
01:02:16.490 --> 01:02:18.530 |
|
So when I take the log of it, I've just |
|
|
|
01:02:18.530 --> 01:02:19.860 |
|
got that exponent there. |
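
NOTE
For reference, the full log of the Gaussian density being summed here is, in LaTeX notation,
\log \mathcal{N}(x;\mu,\sigma^2) = -\frac{(x-\mu)^2}{2\sigma^2} - \log\left(\sigma\sqrt{2\pi}\right),
and any term that is identical for both labels can be dropped without changing the argmax.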
|
|
|
01:02:20.820 --> 01:02:25.040 |
|
So I'm adding that to my score of log |
|
|
|
01:02:25.040 --> 01:02:29.630 |
|
P(X | y = 0) and log P(X | y = 1).
|
|
|
01:02:32.780 --> 01:02:35.721 |
|
Then I'm adding my prior so to my 0 |
|
|
|
01:02:35.721 --> 01:02:38.204 |
|
score I add the log probability of y = |
|
|
|
01:02:38.204 --> 01:02:38.479 |
|
0. |
|
|
|
01:02:38.480 --> 01:02:41.440 |
|
And to my one score, I add the log
|
|
|
01:02:41.440 --> 01:02:44.230 |
|
probability of y = 1, which is just one |
|
|
|
01:02:44.230 --> 01:02:45.729 |
|
minus the probability of y = 0. |
|
|
|
01:02:46.780 --> 01:02:48.540 |
|
And then I take the argmax to get my |
|
|
|
01:02:48.540 --> 01:02:50.899 |
|
prediction and I'm taking the argmax |
|
|
|
01:02:50.900 --> 01:02:53.910 |
|
over axis one because that was my label |
|
|
|
01:02:53.910 --> 01:02:54.380 |
|
axis. |
|
|
|
01:02:55.170 --> 01:02:55.720 |
|
So. |
|
|
|
01:02:56.860 --> 01:02:58.875 |
|
So here the first axis is the number of |
|
|
|
01:02:58.875 --> 01:03:00.915 |
|
test samples, the second axis is the |
|
|
|
01:03:00.915 --> 01:03:01.860 |
|
number of labels. |
|
|
|
01:03:01.860 --> 01:03:04.470 |
|
I take the argmax over the labels to |
|
|
|
01:03:04.470 --> 01:03:07.820 |
|
get my maximum my most likely |
|
|
|
01:03:07.820 --> 01:03:09.510 |
|
prediction for every test sample. |
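
NOTE
A minimal sketch of the prediction step just described, reusing the mu, sigma, p_y0 returned by the training sketch above; it works in log space to avoid underflow, sums per-feature log-likelihoods, adds the log prior, and takes the argmax over the label axis. Unlike the lecture code, this sketch keeps the class-dependent -log(sigma) term and drops only the 1/sqrt(2*pi) constant; names are illustrative.
import numpy as np
def nb_gaussian_predict(X, mu, sigma, p_y0):
    scores = np.zeros((X.shape[0], 2))    # axis 0: test samples, axis 1: labels
    for c in (0, 1):
        # log Gaussian likelihood of each feature, summed over the feature axis
        scores[:, c] = np.sum(-((X - mu[:, c]) ** 2) / (2.0 * sigma[:, c] ** 2)
                              - np.log(sigma[:, c]), axis=1)
    scores[:, 0] += np.log(p_y0)          # add log P(y = 0)
    scores[:, 1] += np.log(1.0 - p_y0)    # add log P(y = 1) = log(1 - P(y = 0))
    return np.argmax(scores, axis=1)      # most likely label for every test sample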
|
|
|
01:03:13.750 --> 01:03:15.930 |
|
And then finally the code to call this |
|
|
|
01:03:15.930 --> 01:03:18.334 |
|
so I call my Naive Bayes
|
|
|
01:03:18.334 --> 01:03:21.650 |
|
Gaussian train function, and I use this as
|
|
|
01:03:21.650 --> 01:03:23.800 |
|
like my prior on the variance, my
|
|
|
01:03:23.800 --> 01:03:24.290 |
|
epsilon. |
|
|
|
01:03:25.400 --> 01:03:29.310 |
|
And then I'd call predict and I pass in |
|
|
|
01:03:29.310 --> 01:03:30.240 |
|
the validation data. |
|
|
|
01:03:31.200 --> 01:03:32.510 |
|
And then I measure my error. |
|
|
|
01:03:33.400 --> 01:03:35.130 |
|
And I'm going to do this. |
|
|
|
01:03:35.130 --> 01:03:36.970 |
|
So here's a question. |
|
|
|
01:03:36.970 --> 01:03:39.338 |
|
Do you think that here I'm doing it on |
|
|
|
01:03:39.338 --> 01:03:41.219 |
|
the non normalized features and here |
|
|
|
01:03:41.219 --> 01:03:43.182 |
|
I'm doing it on the normalized |
|
|
|
01:03:43.182 --> 01:03:43.509 |
|
features? |
|
|
|
01:03:44.380 --> 01:03:47.160 |
|
Do you think that those results will be |
|
|
|
01:03:47.160 --> 01:03:48.800 |
|
different or the same? |
|
|
|
01:03:48.800 --> 01:03:50.510 |
|
So how many people think that these |
|
|
|
01:03:50.510 --> 01:03:52.260 |
|
will be the same if I
|
|
|
01:03:52.960 --> 01:03:56.930 |
|
do Naive Bayes on non-rescaled, non-mean-
|
|
|
01:03:56.930 --> 01:04:00.130 |
|
normalized features versus normalized features?
|
|
|
01:04:01.370 --> 01:04:02.790 |
|
So how many people think it will be the |
|
|
|
01:04:02.790 --> 01:04:03.640 |
|
same result? |
|
|
|
01:04:05.470 --> 01:04:07.060 |
|
OK, how many people think it will be a |
|
|
|
01:04:07.060 --> 01:04:07.610 |
|
different result? |
|
|
|
01:04:10.570 --> 01:04:12.250 |
|
About 50-50.
|
|
|
01:04:12.250 --> 01:04:13.510 |
|
Alright, so let's see. |
|
|
|
01:04:13.510 --> 01:04:14.820 |
|
Let's see how it turns out. |
|
|
|
01:04:18.860 --> 01:04:20.980 |
|
So it's exactly the same, and it's |
|
|
|
01:04:20.980 --> 01:04:22.855 |
|
actually guaranteed to be exactly the |
|
|
|
01:04:22.855 --> 01:04:25.350 |
|
same in this case because. |
|
|
|
01:04:27.190 --> 01:04:28.790 |
|
Because if I scale or shift the |
|
|
|
01:04:28.790 --> 01:04:30.910 |
|
features, all it's going to do is |
|
|
|
01:04:30.910 --> 01:04:32.320 |
|
change my means and variances.
|
|
|
01:04:32.960 --> 01:04:34.420 |
|
But it will change it the same way for |
|
|
|
01:04:34.420 --> 01:04:36.500 |
|
each class, so the probability of the |
|
|
|
01:04:36.500 --> 01:04:38.450 |
|
features given the label
|
|
|
01:04:38.450 --> 01:04:40.540 |
|
doesn't change at all when I shift them |
|
|
|
01:04:40.540 --> 01:04:42.050 |
|
or scale them according to a Gaussian |
|
|
|
01:04:42.050 --> 01:04:42.990 |
|
distribution. |
|
|
|
01:04:42.990 --> 01:04:45.080 |
|
So that's why the feature normalization |
|
|
|
01:04:45.080 --> 01:04:46.790 |
|
isn't really necessary here for Naive |
|
|
|
01:04:46.790 --> 01:04:47.060 |
|
Bayes. |
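
NOTE
A small illustration of the invariance argument above, assuming the data is the scikit-learn copy of the Wisconsin breast-cancer set; shifting and rescaling the features moves the per-class means and variances in the same way for both classes, so Gaussian Naive Bayes predictions should agree on raw and standardized inputs (up to tiny numerical and smoothing effects). Variable names and the split are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import GaussianNB
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)       # fit the normalization on training data only
pred_raw = GaussianNB().fit(X_tr, y_tr).predict(X_va)
pred_std = GaussianNB().fit(scaler.transform(X_tr), y_tr).predict(scaler.transform(X_va))
print("prediction agreement:", np.mean(pred_raw == pred_std))
print("error raw:", np.mean(pred_raw != y_va), "error normalized:", np.mean(pred_std != y_va))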
|
|
|
01:04:48.890 --> 01:04:50.605 |
|
But it didn't do great.
|
|
|
01:04:50.605 --> 01:04:51.790 |
|
It doesn't usually. |
|
|
|
01:04:51.790 --> 01:04:52.870 |
|
So not a big surprise. |
|
|
|
01:04:54.240 --> 01:04:56.697 |
|
So then finally, let's do. |
|
|
|
01:04:56.697 --> 01:04:58.500 |
|
Let's put in a logistic there. |
|
|
|
01:04:58.500 --> 01:05:00.100 |
|
Let's do linear and logistic |
|
|
|
01:05:00.100 --> 01:05:03.060 |
|
regression, and I'm going to use the |
|
|
|
01:05:03.060 --> 01:05:03.770 |
|
model here. |
|
|
|
01:05:04.510 --> 01:05:06.700 |
|
So C = 1 is the default that's Lambda |
|
|
|
01:05:06.700 --> 01:05:07.650 |
|
equals one. |
|
|
|
01:05:07.650 --> 01:05:09.410 |
|
I'll give it plenty of iterations, just |
|
|
|
01:05:09.410 --> 01:05:10.750 |
|
make sure it can converge. |
|
|
|
01:05:10.750 --> 01:05:12.350 |
|
I fit it on the training data. |
|
|
|
01:05:13.230 --> 01:05:15.310 |
|
Test it on the validation data. |
|
|
|
01:05:15.310 --> 01:05:17.270 |
|
And here I'm going to compare if I
|
|
|
01:05:17.270 --> 01:05:19.230 |
|
don't normalize versus I normalize. |
|
|
|
01:05:23.690 --> 01:05:27.037 |
|
And so in this case I got 3% error when |
|
|
|
01:05:27.037 --> 01:05:29.907 |
|
I didn't normalize and I got 0% error |
|
|
|
01:05:29.907 --> 01:05:31.350 |
|
when I normalized. |
|
|
|
01:05:33.670 --> 01:05:34.990 |
|
So the normalization. |
|
|
|
01:05:34.990 --> 01:05:36.470 |
|
The reason it makes a difference in |
|
|
|
01:05:36.470 --> 01:05:39.070 |
|
this linear model is that I have some |
|
|
|
01:05:39.070 --> 01:05:40.100 |
|
regularization weight. |
|
|
|
01:05:40.770 --> 01:05:43.420 |
|
So if I set this to something really |
|
|
|
01:05:43.420 --> 01:05:46.780 |
|
big. Sklearn is a little awkward in
|
|
|
01:05:46.780 --> 01:05:48.620 |
|
that C is the inverse of Lambda. |
|
|
|
01:05:48.620 --> 01:05:50.970 |
|
So the higher this value is, the less |
|
|
|
01:05:50.970 --> 01:05:51.970 |
|
the regularization. |
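
NOTE
A minimal sketch of the comparison just described: sklearn's LogisticRegression with the default C = 1 (C is the inverse of the regularization weight lambda, so a larger C means less regularization), fit on raw versus standardized features. It reuses X_tr, X_va, y_tr, y_va, and scaler from the sketch above; names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
def logreg_error(X_tr, y_tr, X_va, y_va, C=1.0):
    model = LogisticRegression(C=C, max_iter=10000)   # plenty of iterations so it converges
    model.fit(X_tr, y_tr)
    return np.mean(model.predict(X_va) != y_va)
print("error, not normalized:", logreg_error(X_tr, y_tr, X_va, y_va))
print("error, normalized:    ",
      logreg_error(scaler.transform(X_tr), y_tr, scaler.transform(X_va), y_va))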
|
|
|
01:05:58.010 --> 01:06:00.710 |
|
I thought they would do something, but |
|
|
|
01:06:00.710 --> 01:06:01.240 |
|
it didn't. |
|
|
|
01:06:03.440 --> 01:06:05.290 |
|
That's not going to make a difference. |
|
|
|
01:06:06.730 --> 01:06:07.790 |
|
That's interesting actually. |
|
|
|
01:06:07.790 --> 01:06:08.970 |
|
I don't know why. |
|
|
|
01:06:09.730 --> 01:06:11.180 |
|
Maybe I maybe I got. |
|
|
|
01:06:11.180 --> 01:06:13.510 |
|
Let's see, let's make it really small |
|
|
|
01:06:13.510 --> 01:06:13.980 |
|
instead. |
|
|
|
01:06:24.460 --> 01:06:24.920 |
|
What's what? |
|
|
|
01:06:29.290 --> 01:06:32.130 |
|
So that definitely changed things, but |
|
|
|
01:06:32.130 --> 01:06:33.620 |
|
it made the normalization worse. |
|
|
|
01:06:33.620 --> 01:06:34.500 |
|
That's interesting. |
|
|
|
01:06:34.500 --> 01:06:36.420 |
|
OK, I cannot explain that off the top of
|
|
|
01:06:36.420 --> 01:06:37.200 |
|
my head. |
|
|
|
01:06:38.070 --> 01:06:41.200 |
|
But another thing is that if I do 0. |
|
|
|
01:06:42.740 --> 01:06:44.095 |
|
Wait, actually zero. |
|
|
|
01:06:44.095 --> 01:06:46.425 |
|
I don't remember again if which way? |
|
|
|
01:06:46.425 --> 01:06:47.710 |
|
I have to, yeah. |
|
|
|
01:06:48.470 --> 01:06:48.990 |
|
So. |
|
|
|
01:06:50.650 --> 01:06:52.340 |
|
You need like you need some |
|
|
|
01:06:52.340 --> 01:06:53.280 |
|
regularization. |
|
|
|
01:06:54.220 --> 01:06:55.780 |
|
Or else you get errors like that. |
|
|
|
01:06:58.220 --> 01:07:01.460 |
|
So not regularizing is not.
|
|
|
01:07:02.560 --> 01:07:05.650 |
|
Not regularizing is usually not an |
|
|
|
01:07:05.650 --> 01:07:05.980 |
|
option. |
|
|
|
01:07:05.980 --> 01:07:07.070 |
|
OK, never mind, all right. |
|
|
|
01:07:08.140 --> 01:07:10.723 |
|
Yeah, you guys can play with it if you |
|
|
|
01:07:10.723 --> 01:07:10.859 |
|
want. |
|
|
|
01:07:10.860 --> 01:07:11.323 |
|
I'm going to. |
|
|
|
01:07:11.323 --> 01:07:12.910 |
|
I just, I don't want to get stuck there |
|
|
|
01:07:12.910 --> 01:07:15.340 |
|
as getting too much into the weeds. |
|
|
|
01:07:16.530 --> 01:07:20.235 |
|
The normalization helped in the case of |
|
|
|
01:07:20.235 --> 01:07:22.370 |
|
the default regularization. |
|
|
|
01:07:24.010 --> 01:07:27.120 |
|
I can also plot a. |
|
|
|
01:07:27.790 --> 01:07:29.590 |
|
I can also do like other ways of |
|
|
|
01:07:29.590 --> 01:07:31.360 |
|
looking at the data. |
|
|
|
01:07:31.360 --> 01:07:32.550 |
|
Let's look at. |
|
|
|
01:07:32.550 --> 01:07:34.390 |
|
I'm going to change this since it was |
|
|
|
01:07:34.390 --> 01:07:35.520 |
|
kind of boring. |
|
|
|
01:07:37.500 --> 01:07:38.410 |
|
Let me just. |
|
|
|
01:07:38.500 --> 01:07:39.190 |
|
|
|
|
|
01:07:41.150 --> 01:07:41.510 |
|
Whoops. |
|
|
|
01:07:42.630 --> 01:07:44.430 |
|
It's not very interesting to
|
|
|
01:07:44.430 --> 01:07:46.340 |
|
look at an ROC curve if you get perfect
|
|
|
01:07:46.340 --> 01:07:46.910 |
|
prediction. |
|
|
|
01:07:48.670 --> 01:07:50.290 |
|
So let me just change this a little |
|
|
|
01:07:50.290 --> 01:07:50.640 |
|
bit. |
|
|
|
01:07:52.040 --> 01:07:54.870 |
|
So I'm going to look at the one where I |
|
|
|
01:07:54.870 --> 01:07:56.380 |
|
did not get perfect prediction.
|
|
|
01:07:57.840 --> 01:07:58.650 |
|
|
|
|
|
01:08:00.300 --> 01:08:00.830 |
|
Mexican. |
|
|
|
01:08:03.700 --> 01:08:07.390 |
|
Right, so this ROC curve shows me what happens
|
|
|
01:08:07.390 --> 01:08:09.320 |
|
if I choose different thresholds on my |
|
|
|
01:08:09.320 --> 01:08:10.000 |
|
confidence. |
|
|
|
01:08:10.870 --> 01:08:13.535 |
|
By default, you choose a confidence at |
|
|
|
01:08:13.535 --> 01:08:14.050 |
|
0.5.
|
|
|
01:08:14.050 --> 01:08:15.810 |
|
If the probability is greater than 0.5,
|
|
|
01:08:15.810 --> 01:08:17.810 |
|
then you assign it to the class that |
|
|
|
01:08:17.810 --> 01:08:19.069 |
|
had that greater probability. |
|
|
|
01:08:19.700 --> 01:08:21.440 |
|
But you can say for example if the |
|
|
|
01:08:21.440 --> 01:08:23.820 |
|
probability is greater than .3 then I'm |
|
|
|
01:08:23.820 --> 01:08:27.030 |
|
going to say it's like malignant and |
|
|
|
01:08:27.030 --> 01:08:28.150 |
|
otherwise it's benign. |
|
|
|
01:08:28.150 --> 01:08:29.740 |
|
So you can choose different thresholds. |
|
|
|
01:08:30.450 --> 01:08:31.990 |
|
Especially if there's a different |
|
|
|
01:08:31.990 --> 01:08:33.440 |
|
consequence to getting either one |
|
|
|
01:08:33.440 --> 01:08:36.100 |
|
wrong, like which there is for |
|
|
|
01:08:36.100 --> 01:08:37.260 |
|
malignant versus benign. |
|
|
|
01:08:38.080 --> 01:08:40.530 |
|
So you can look at this ROC curve which
|
|
|
01:08:40.530 --> 01:08:42.260 |
|
shows you the true positive rate and |
|
|
|
01:08:42.260 --> 01:08:43.990 |
|
the false positive rate for different |
|
|
|
01:08:43.990 --> 01:08:44.700 |
|
thresholds. |
|
|
|
01:08:45.460 --> 01:08:48.710 |
|
So I can choose a value such that I'll
|
|
|
01:08:48.710 --> 01:08:50.170 |
|
never have a. |
|
|
|
01:08:50.940 --> 01:08:52.910 |
|
Where here I define true positive as y |
|
|
|
01:08:52.910 --> 01:08:53.510 |
|
= 0. |
|
|
|
01:08:54.220 --> 01:08:56.190 |
|
So I can choose a threshold where. |
|
|
|
01:08:57.010 --> 01:08:59.930 |
|
I will get every single malignant case
|
|
|
01:08:59.930 --> 01:09:02.380 |
|
correct, but I'll have like 20% false |
|
|
|
01:09:02.380 --> 01:09:03.450 |
|
positives. |
|
|
|
01:09:03.450 --> 01:09:05.870 |
|
Or I can choose a case where I'll |
|
|
|
01:09:05.870 --> 01:09:07.360 |
|
sometimes make mistakes. |
|
|
|
01:09:07.360 --> 01:09:10.110 |
|
Thinking something malignant is not
|
|
|
01:09:10.110 --> 01:09:11.040 |
|
malignant. |
|
|
|
01:09:11.040 --> 01:09:15.360 |
|
But when it's benign, like 99% of the
|
|
|
01:09:15.360 --> 01:09:16.570 |
|
time I'll think it's benign. |
|
|
|
01:09:16.570 --> 01:09:18.815 |
|
So you can choose like you can kind of |
|
|
|
01:09:18.815 --> 01:09:19.450 |
|
choose your errors. |
|
|
|
01:09:25.800 --> 01:09:30.690 |
|
So this is so this like given some |
|
|
|
01:09:30.690 --> 01:09:33.080 |
|
point on this curve, it tells me the |
|
|
|
01:09:33.080 --> 01:09:35.120 |
|
true positive rate is the percent of |
|
|
|
01:09:35.120 --> 01:09:37.775 |
|
times that I correctly classify y equals
|
|
|
01:09:37.775 --> 01:09:39.379 |
|
zero as y = 0. |
|
|
|
01:09:40.330 --> 01:09:42.020 |
|
And the false positive rate is the |
|
|
|
01:09:42.020 --> 01:09:43.660 |
|
percent of times that I. |
|
|
|
01:09:45.460 --> 01:09:46.790 |
|
Classify. |
|
|
|
01:09:48.160 --> 01:09:50.400 |
|
Y = 1 as y = 0. |
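
NOTE
A minimal sketch of the ROC analysis just described, treating y = 0 (malignant) as the positive class and using the model's predicted probability of class 0 as the confidence score; sklearn.metrics.roc_curve sweeps the threshold and returns the false and true positive rates. It reuses the split and scaler from the earlier sketches; names are illustrative.
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc
model = LogisticRegression(max_iter=10000).fit(scaler.transform(X_tr), y_tr)
scores = model.predict_proba(scaler.transform(X_va))[:, 0]   # P(y = 0 | x) per validation sample
fpr, tpr, thresholds = roc_curve(y_va == 0, scores)          # positive class is y == 0 (malignant)
print("area under the curve:", auc(fpr, tpr))
plt.plot(fpr, tpr)
plt.xlabel("false positive rate (benign called malignant)")
plt.ylabel("true positive rate (malignant called malignant)")
plt.show()
# e.g. pick the lowest threshold with tpr == 1.0 to catch every malignant case,
# at the cost of a higher false positive rate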
|
|
|
01:09:54.870 --> 01:09:57.410 |
|
Alright, so I can also look at the |
|
|
|
01:09:57.410 --> 01:09:58.350 |
|
feature importance. |
|
|
|
01:09:58.350 --> 01:10:01.450 |
|
So if I do L1, so here I trained one |
|
|
|
01:10:01.450 --> 01:10:04.230 |
|
model with L2 logistic regression,
|
|
|
01:10:04.230 --> 01:10:06.586 |
|
and one with L1 logistic
|
|
|
01:10:06.586 --> 01:10:06.930 |
|
regression.
|
|
|
01:10:07.740 --> 01:10:08.860 |
|
And that makes me use a different |
|
|
|
01:10:08.860 --> 01:10:10.000 |
|
solver if it's L1. |
|
|
|
01:10:11.270 --> 01:10:13.980 |
|
So I can see the errors. |
|
|
|
01:10:14.070 --> 01:10:14.730 |
|
|
|
|
|
01:10:18.090 --> 01:10:19.505 |
|
A little weird but that error. |
|
|
|
01:10:19.505 --> 01:10:24.588 |
|
But OK, I can see the errors and I can |
|
|
|
01:10:24.588 --> 01:10:26.780 |
|
see the feature values. |
|
|
|
01:10:29.290 --> 01:10:32.870 |
|
So with L2 I get lots of low weights, |
|
|
|
01:10:32.870 --> 01:10:34.222 |
|
but none of them are zero. |
|
|
|
01:10:34.222 --> 01:10:37.750 |
|
With L1 I get lots of 0 weights and a
|
|
|
01:10:37.750 --> 01:10:39.160 |
|
few larger weights. |
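
NOTE
A minimal sketch of the L2 versus L1 weight comparison described above; the L1 penalty needs a solver that supports it (liblinear here). It reuses the standardized split from the earlier sketches; names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
Xs_tr, Xs_va = scaler.transform(X_tr), scaler.transform(X_va)
l2 = LogisticRegression(penalty="l2", max_iter=10000).fit(Xs_tr, y_tr)
l1 = LogisticRegression(penalty="l1", solver="liblinear").fit(Xs_tr, y_tr)
print("L2 error:", np.mean(l2.predict(Xs_va) != y_va), "L1 error:", np.mean(l1.predict(Xs_va) != y_va))
print("L2 weights:", np.round(l2.coef_.ravel(), 2))   # many small weights, none exactly zero
print("L1 weights:", np.round(l1.coef_.ravel(), 2))   # mostly exact zeros, a few larger weights
print("zero weights under L1:", int(np.sum(l1.coef_ == 0)), "of", l1.coef_.size)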
|
|
|
01:10:43.420 --> 01:10:44.910 |
|
And then I can also do some further |
|
|
|
01:10:44.910 --> 01:10:46.400 |
|
analysis looking at the tree. |
|
|
|
01:10:48.090 --> 01:10:50.090 |
|
So first I'll train a full tree. |
|
|
|
01:10:51.060 --> 01:10:53.010 |
|
And then next I'll train a tree with |
|
|
|
01:10:53.010 --> 01:10:54.370 |
|
Max depth equals 2. |
|
|
|
01:10:56.680 --> 01:11:00.006 |
|
So with the full tree I got error of |
|
|
|
01:11:00.006 --> 01:11:00.403 |
|
4%. |
|
|
|
01:11:00.403 --> 01:11:05.106 |
|
So it was not as
|
|
|
01:11:05.106 --> 01:11:06.590 |
|
good as the logistic regressor, but pretty
|
|
|
01:11:06.590 --> 01:11:06.930 |
|
decent. |
|
|
|
01:11:08.220 --> 01:11:09.500 |
|
But this tree is kind of hard to |
|
|
|
01:11:09.500 --> 01:11:09.940 |
|
interpret. |
|
|
|
01:11:09.940 --> 01:11:11.410 |
|
You wouldn't be able to give it to a |
|
|
|
01:11:11.410 --> 01:11:13.415 |
|
technician and say like use this tree |
|
|
|
01:11:13.415 --> 01:11:14.330 |
|
to make your decision. |
|
|
|
01:11:15.050 --> 01:11:17.020 |
|
The short tree had higher error, but |
|
|
|
01:11:17.020 --> 01:11:18.730 |
|
it's a lot simpler, so I can see its |
|
|
|
01:11:18.730 --> 01:11:20.530 |
|
first splitting on the perimeter of the |
|
|
|
01:11:20.530 --> 01:11:21.240 |
|
largest cells. |
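
NOTE
A minimal sketch of the two trees just described: a full-depth tree and a depth-2 tree, with export_text used to print the small tree's splits (e.g. the first split on a perimeter feature). It reuses the split from the earlier sketches; feature names come from the dataset and variable names are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text
feature_names = list(load_breast_cancer().feature_names)
full_tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
short_tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)
print("full tree error:   ", np.mean(full_tree.predict(X_va) != y_va))
print("depth-2 tree error:", np.mean(short_tree.predict(X_va) != y_va))
print(export_text(short_tree, feature_names=feature_names))   # small enough to hand to a person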
|
|
|
01:11:25.000 --> 01:11:27.510 |
|
And then finally, after doing all this |
|
|
|
01:11:27.510 --> 01:11:30.010 |
|
analysis, I'm going to do tenfold cross |
|
|
|
01:11:30.010 --> 01:11:32.780 |
|
validation using my best model. |
|
|
|
01:11:33.370 --> 01:11:35.590 |
|
So here I'll just compare L1 logistic |
|
|
|
01:11:35.590 --> 01:11:38.240 |
|
regression and nearest neighbor. |
|
|
|
01:11:39.160 --> 01:11:41.345 |
|
I am doing tenfold, so I'm going to do |
|
|
|
01:11:41.345 --> 01:11:45.126 |
|
10 estimates, one for each split.
|
|
|
01:11:45.126 --> 01:11:48.490 |
|
So the split will be after permutation. |
|
|
|
01:11:48.490 --> 01:11:53.120 |
|
The first split will take indices 0, 10, 20,
|
|
|
01:11:53.120 --> 01:11:56.414 |
|
or yeah, 0, 10, 20, 30, et cetera.
|
|
|
01:11:56.414 --> 01:12:00.540 |
|
The second split will take 1, 11, 21, the
|
|
|
01:12:00.540 --> 01:12:03.840 |
|
third will take 2, 12, 22, et cetera.
|
|
|
01:12:04.830 --> 01:12:07.050 |
|
Every time I use 90% of the data to |
|
|
|
01:12:07.050 --> 01:12:09.400 |
|
train and the remaining data to test. |
|
|
|
01:12:10.520 --> 01:12:12.510 |
|
And I'm doing that by just specifying |
|
|
|
01:12:12.510 --> 01:12:13.990 |
|
the data that I'm using to test and |
|
|
|
01:12:13.990 --> 01:12:15.930 |
|
then subtracting those indices to get |
|
|
|
01:12:15.930 --> 01:12:17.100 |
|
the data that I used to train. |
|
|
|
01:12:18.080 --> 01:12:21.396 |
|
Every time I normalize based on the |
|
|
|
01:12:21.396 --> 01:12:23.140 |
|
training data, normalize both my |
|
|
|
01:12:23.140 --> 01:12:24.554 |
|
training and validation data based on |
|
|
|
01:12:24.554 --> 01:12:26.180 |
|
the same training data for the current |
|
|
|
01:12:26.180 --> 01:12:26.540 |
|
split. |
|
|
|
01:12:27.600 --> 01:12:29.340 |
|
Then I train and evaluate my nearest |
|
|
|
01:12:29.340 --> 01:12:31.870 |
|
neighbor and logistic regressor. |
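
NOTE
A minimal sketch of the manual tenfold cross-validation just described: permute the indices once, let fold k validate on every 10th permuted index starting at k, fit the normalization on the training part only, and record each model's error per fold. The 1-nearest-neighbor choice and the L1 logistic regression settings are assumptions; names are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
X, y = load_breast_cancer(return_X_y=True)
perm = np.random.default_rng(0).permutation(len(y))
errs = {"nearest neighbor": [], "logistic regression": []}
for k in range(10):
    va_idx = perm[k::10]                              # fold 0: 0,10,20,...; fold 1: 1,11,21,...
    tr_idx = np.setdiff1d(perm, va_idx)               # the remaining ~90% is used to train
    scaler = StandardScaler().fit(X[tr_idx])          # normalize based on training data only
    Xtr, Xva = scaler.transform(X[tr_idx]), scaler.transform(X[va_idx])
    models = {"nearest neighbor": KNeighborsClassifier(n_neighbors=1),
              "logistic regression": LogisticRegression(penalty="l1", solver="liblinear")}
    for name, m in models.items():
        errs[name].append(np.mean(m.fit(Xtr, y[tr_idx]).predict(Xva) != y[va_idx]))
print({name: np.round(e, 3).tolist() for name, e in errs.items()})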
|
|
|
01:12:38.000 --> 01:12:39.230 |
|
So that was fast. |
|
|
|
01:12:40.850 --> 01:12:41.103 |
|
Right. |
|
|
|
01:12:41.103 --> 01:12:43.950 |
|
And so then I have my errors. |
|
|
|
01:12:43.950 --> 01:12:46.970 |
|
So one thing to note is that even
|
|
|
01:12:46.970 --> 01:12:48.250 |
|
though in that one case I was |
|
|
|
01:12:48.250 --> 01:12:50.310 |
|
evaluating before on that one split, my
|
|
|
01:12:50.310 --> 01:12:52.190 |
|
logistic regression error was zero, |
|
|
|
01:12:52.190 --> 01:12:53.670 |
|
it's not 0 every time. |
|
|
|
01:12:53.670 --> 01:12:56.984 |
|
It ranges from 0% to 5.3%.
|
|
|
01:12:56.984 --> 01:12:59.906 |
|
And my nearest neighbor error ranges
|
|
|
01:12:59.906 --> 01:13:02.980 |
|
from 0% to 8.7%, depending on the
|
|
|
01:13:02.980 --> 01:13:03.330 |
|
split. |
|
|
|
01:13:04.300 --> 01:13:06.085 |
|
So different samples of your training |
|
|
|
01:13:06.085 --> 01:13:08.592 |
|
and test data will give you different |
|
|
|
01:13:08.592 --> 01:13:09.866 |
|
error measurements.
|
|
|
01:13:09.866 --> 01:13:11.950 |
|
And so that's why like cross validation |
|
|
|
01:13:11.950 --> 01:13:14.300 |
|
can be a nice tool to give you not only |
|
|
|
01:13:14.300 --> 01:13:16.870 |
|
an expected error, but some variance on |
|
|
|
01:13:16.870 --> 01:13:18.140 |
|
the estimate of that error. |
|
|
|
01:13:19.000 --> 01:13:19.500 |
|
So. |
|
|
|
01:13:20.410 --> 01:13:23.330 |
|
My standard error of my estimate of the |
|
|
|
01:13:23.330 --> 01:13:26.195 |
|
mean, which is the standard deviation of
|
|
|
01:13:26.195 --> 01:13:28.390 |
|
my error estimates divided by the |
|
|
|
01:13:28.390 --> 01:13:29.720 |
|
square root of the number of samples.
|
|
|
01:13:30.680 --> 01:13:35.420 |
|
Is 0.9 for nearest neighbor and 0.6 for
|
|
|
01:13:35.420 --> 01:13:36.500 |
|
logistic regression. |
|
|
|
01:13:37.500 --> 01:13:39.270 |
|
And I can also use that to compute a |
|
|
|
01:13:39.270 --> 01:13:41.540 |
|
confidence interval by multiplying that |
|
|
|
01:13:41.540 --> 01:13:45.410 |
|
standard error by I forgot 1.96. |
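
NOTE
A minimal sketch of the standard error and confidence interval computation just described, continuing from the errs dictionary in the cross-validation sketch above: the standard error of the mean is the standard deviation of the per-fold errors divided by the square root of the number of folds, and the 95% interval is the mean plus or minus 1.96 standard errors.
import numpy as np
for name, e in errs.items():
    e = np.asarray(e)
    sem = e.std(ddof=1) / np.sqrt(len(e))   # standard error of the mean over the 10 folds
    low, high = e.mean() - 1.96 * sem, e.mean() + 1.96 * sem
    print(name, "mean error:", round(e.mean(), 3), "95% CI:", (round(low, 3), round(high, 3)))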
|
|
|
01:13:46.280 --> 01:13:49.330 |
|
So I can say like I'm 95% confident |
|
|
|
01:13:49.330 --> 01:13:51.930 |
|
that my logistic regression error is |
|
|
|
01:13:51.930 --> 01:13:56.440 |
|
somewhere between 1.2 and 3.4.
|
|
|
01:13:56.440 --> 01:14:00.040 |
|
Sorry, 1.2% and 3.4%.
|
|
|
01:14:02.360 --> 01:14:04.615 |
|
And my nearest neighbor error is higher |
|
|
|
01:14:04.615 --> 01:14:06.620 |
|
and I have like a bigger confidence |
|
|
|
01:14:06.620 --> 01:14:07.020 |
|
interval. |
|
|
|
01:14:09.360 --> 01:14:14.360 |
|
Now let's just compare very briefly how |
|
|
|
01:14:14.360 --> 01:14:14.860 |
|
that. |
|
|
|
01:14:15.610 --> 01:14:19.660 |
|
How the original paper did on this same |
|
|
|
01:14:19.660 --> 01:14:20.110 |
|
problem? |
|
|
|
01:14:23.320 --> 01:14:25.480 |
|
I just have one more slide, so don't |
|
|
|
01:14:25.480 --> 01:14:27.950 |
|
worry, we will finish. |
|
|
|
01:14:28.690 --> 01:14:30.360 |
|
Within a minute or so of runtime. |
|
|
|
01:14:31.200 --> 01:14:33.610 |
|
Alright, so in the paper they use an |
|
|
|
01:14:33.610 --> 01:14:36.300 |
|
MSM tree, which is that you have a |
|
|
|
01:14:36.300 --> 01:14:37.820 |
|
linear classifier. |
|
|
|
01:14:37.820 --> 01:14:39.240 |
|
Essentially that's used to do each |
|
|
|
01:14:39.240 --> 01:14:40.140 |
|
split of the tree. |
|
|
|
01:14:41.090 --> 01:14:42.720 |
|
But at the end of the day they choose |
|
|
|
01:14:42.720 --> 01:14:44.550 |
|
only one split, so it ends up being a |
|
|
|
01:14:44.550 --> 01:14:45.380 |
|
linear classifier. |
|
|
|
01:14:46.300 --> 01:14:49.633 |
|
There they are trying to minimize the |
|
|
|
01:14:49.633 --> 01:14:51.520 |
|
number of features as well as the |
|
|
|
01:14:51.520 --> 01:14:53.900 |
|
number of splitting planes in order to |
|
|
|
01:14:53.900 --> 01:14:55.550 |
|
improve generalization and make a |
|
|
|
01:14:55.550 --> 01:14:57.090 |
|
simple interpretable function. |
|
|
|
01:14:57.800 --> 01:14:59.370 |
|
So at the end of the day, they choose |
|
|
|
01:14:59.370 --> 01:15:01.105 |
|
just three features, mean texture, |
|
|
|
01:15:01.105 --> 01:15:02.780 |
|
worst area and worst smoothness. |
|
|
|
01:15:03.520 --> 01:15:04.420 |
|
And. |
|
|
|
01:15:05.930 --> 01:15:08.610 |
|
They used tenfold cross validation and |
|
|
|
01:15:08.610 --> 01:15:11.770 |
|
they got an error of 3% within a |
|
|
|
01:15:11.770 --> 01:15:15.570 |
|
confidence interval of plus or minus 1.5%.
|
|
|
01:15:15.570 --> 01:15:17.120 |
|
So pretty similar to what we got. |
|
|
|
01:15:17.120 --> 01:15:18.960 |
|
We got slightly lower error but we were |
|
|
|
01:15:18.960 --> 01:15:20.560 |
|
using more features in the logistic |
|
|
|
01:15:20.560 --> 01:15:21.090 |
|
regressor. |
|
|
|
01:15:21.910 --> 01:15:23.694 |
|
And then they tested it on their held |
|
|
|
01:15:23.694 --> 01:15:26.475 |
|
out set and they got a perfect accuracy |
|
|
|
01:15:26.475 --> 01:15:27.730 |
|
on the held out set. |
|
|
|
01:15:28.550 --> 01:15:29.849 |
|
Now that doesn't mean that their |
|
|
|
01:15:29.850 --> 01:15:31.670 |
|
accuracy is perfect, because their
|
|
|
01:15:31.670 --> 01:15:34.350 |
|
cross validation, if anything, is
|
|
|
01:15:34.350 --> 01:15:37.315 |
|
biased towards underestimating the
|
|
|
01:15:37.315 --> 01:15:37.570 |
|
error. |
|
|
|
01:15:37.570 --> 01:15:40.440 |
|
So I would say their error is like |
|
|
|
01:15:40.440 --> 01:15:43.870 |
|
roughly 1.5 to 4.5%, which is what they
|
|
|
01:15:43.870 --> 01:15:45.180 |
|
correctly report in the paper. |
|
|
|
01:15:46.950 --> 01:15:47.290 |
|
Right. |
|
|
|
01:15:47.290 --> 01:15:51.030 |
|
So we performed fairly similarly to the |
|
|
|
01:15:51.030 --> 01:15:51.705 |
|
analysis. |
|
|
|
01:15:51.705 --> 01:15:53.670 |
|
The nice thing is that now I can do |
|
|
|
01:15:53.670 --> 01:15:56.900 |
|
this like in under an hour if I want. |
|
|
|
01:15:56.900 --> 01:15:59.140 |
|
Whereas at that time it would be a lot
|
|
|
01:15:59.140 --> 01:16:01.380 |
|
more work to do that kind of analysis. |
|
|
|
01:16:02.330 --> 01:16:04.490 |
|
But they would also obviously want to
|
|
|
01:16:04.490 --> 01:16:06.410 |
|
be a lot more careful and do careful |
|
|
|
01:16:06.410 --> 01:16:07.780 |
|
analysis and make sure that this is |
|
|
|
01:16:07.780 --> 01:16:10.240 |
|
going to be like a useful tool for. |
|
|
|
01:16:10.320 --> 01:16:12.180 |
|
That kind of diagnosis.
|
|
|
01:16:14.130 --> 01:16:14.870 |
|
Hey. |
|
|
|
01:16:14.870 --> 01:16:16.400 |
|
So hopefully that was helpful. |
|
|
|
01:16:16.400 --> 01:16:19.700 |
|
And next week I am going to talk about |
|
|
|
01:16:19.700 --> 01:16:20.150 |
|
or not. |
|
|
|
01:16:20.150 --> 01:16:21.750 |
|
Next week it's only Tuesday. |
|
|
|
01:16:21.750 --> 01:16:23.550 |
|
On Thursday I'm going to talk about. |
|
|
|
01:16:23.550 --> 01:16:24.962 |
|
No, wait, what day is it? |
|
|
|
01:16:24.962 --> 01:16:25.250 |
|
Thursday. |
|
|
|
01:16:25.250 --> 01:16:25.868 |
|
OK, good. |
|
|
|
01:16:25.868 --> 01:16:27.020 |
|
It is next week. |
|
|
|
01:16:27.020 --> 01:16:28.810 |
|
Yeah, at least chat with time. |
|
|
|
01:16:30.520 --> 01:16:33.300 |
|
Next week I'll talk about ensembles and |
|
|
|
01:16:33.300 --> 01:16:35.310 |
|
SVM and stochastic gradient descent. |
|
|
|
01:16:35.310 --> 01:16:35.780 |
|
Thanks. |
|
|
|
01:16:35.780 --> 01:16:36.690 |
|
Have a good weekend. |
|
|
|
01:16:38.360 --> 01:16:40.130 |
|
And remember that homework one is due |
|
|
|
01:16:40.130 --> 01:16:40.830 |
|
Monday. |
|
|
|
01:16:41.650 --> 01:16:42.760 |
|
For those asking questions.
|
|
|
|