|
WEBVTT Kind: captions; Language: en-US |
|
|
|
NOTE |
|
Created on 2024-02-07T20:54:31.1029159Z by ClassTranscribe |
|
|
|
00:01:27.710 --> 00:01:28.060 |
|
All right. |
|
|
|
00:01:28.060 --> 00:01:28.990 |
|
Good morning, everybody. |
|
|
|
00:01:30.560 --> 00:01:32.070 |
|
Hope you had a good weekend. |
|
|
|
00:01:33.880 --> 00:01:35.350 |
|
Form relatively. |
|
|
|
00:01:37.950 --> 00:01:40.110 |
|
Alright, so I'm going to get started. |
|
|
|
00:01:40.110 --> 00:01:42.920 |
|
So in the previous lectures we've |
|
|
|
00:01:42.920 --> 00:01:44.820 |
|
mainly learned about how to build and |
|
|
|
00:01:44.820 --> 00:01:46.220 |
|
apply single models. |
|
|
|
00:01:46.220 --> 00:01:48.550 |
|
So we talked about nearest neighbor, |
|
|
|
00:01:48.550 --> 00:01:50.915 |
|
logistic regression, linear regression, |
|
|
|
00:01:50.915 --> 00:01:51.960 |
|
and trees. |
|
|
|
00:01:51.960 --> 00:01:54.609 |
|
And so now we're going to
|
|
|
00:01:55.570 --> 00:01:57.676 |
|
talk about how to build a collection of
|
|
|
00:01:57.676 --> 00:01:59.850 |
|
models and use them for prediction. |
|
|
|
00:01:59.850 --> 00:02:02.045 |
|
So that technique is called ensembles |
|
|
|
00:02:02.045 --> 00:02:05.280 |
|
and ensemble is when you build a bunch |
|
|
|
00:02:05.280 --> 00:02:07.420 |
|
of models and then you average their |
|
|
|
00:02:07.420 --> 00:02:09.430 |
|
predictions or you train them in a way |
|
|
|
00:02:09.430 --> 00:02:11.040 |
|
that they build on top of each other. |
|
|
|
00:02:12.270 --> 00:02:14.020 |
|
So some of you might remember this show |
|
|
|
00:02:14.020 --> 00:02:15.160 |
|
Who Wants to Be a Millionaire?
|
|
|
00:02:16.100 --> 00:02:18.520 |
|
The idea of this show is that there's a |
|
|
|
00:02:18.520 --> 00:02:20.490 |
|
contestant and they get asked a series |
|
|
|
00:02:20.490 --> 00:02:22.280 |
|
of questions and they have multiple |
|
|
|
00:02:22.280 --> 00:02:25.030 |
|
choice answers and if they get it right |
|
|
|
00:02:25.030 --> 00:02:27.020 |
|
then like the dollar value that they |
|
|
|
00:02:27.020 --> 00:02:29.429 |
|
would bring home increases, but if they |
|
|
|
00:02:29.430 --> 00:02:31.280 |
|
ever get it wrong, then they go home |
|
|
|
00:02:31.280 --> 00:02:31.910 |
|
with nothing. |
|
|
|
00:02:32.620 --> 00:02:35.150 |
|
And they had three forms of help. |
|
|
|
00:02:35.150 --> 00:02:37.070 |
|
One of the forms was that they could |
|
|
|
00:02:37.070 --> 00:02:39.380 |
|
eliminate 2 of the incorrect choices. |
|
|
|
00:02:40.230 --> 00:02:42.517 |
|
Another form is that they could call a |
|
|
|
00:02:42.517 --> 00:02:42.769 |
|
friend. |
|
|
|
00:02:42.770 --> 00:02:44.610 |
|
So they would have like people. |
|
|
|
00:02:44.610 --> 00:02:46.210 |
|
They would have friends at home that |
|
|
|
00:02:46.210 --> 00:02:48.695 |
|
they think have like various expertise. |
|
|
|
00:02:48.695 --> 00:02:51.135 |
|
And if they see a question that they |
|
|
|
00:02:51.135 --> 00:02:52.450 |
|
think is really hard and they're not |
|
|
|
00:02:52.450 --> 00:02:54.220 |
|
sure of the answer, they could choose |
|
|
|
00:02:54.220 --> 00:02:55.946 |
|
which friend to call to give them the |
|
|
|
00:02:55.946 --> 00:02:56.199 |
|
answer. |
|
|
|
00:02:57.660 --> 00:03:00.120 |
|
The third, the third form of help they |
|
|
|
00:03:00.120 --> 00:03:02.910 |
|
could get is poll the audience.
|
|
|
00:03:03.680 --> 00:03:06.475 |
|
They would ask the audience to vote on |
|
|
|
00:03:06.475 --> 00:03:07.520 |
|
the correct answer. |
|
|
|
00:03:08.120 --> 00:03:11.120 |
|
And the audience would all vote, and |
|
|
|
00:03:11.120 --> 00:03:12.530 |
|
then they could make a decision based |
|
|
|
00:03:12.530 --> 00:03:13.190 |
|
on that. |
|
|
|
00:03:14.020 --> 00:03:15.745 |
|
And they could use each of these forms |
|
|
|
00:03:15.745 --> 00:03:17.850 |
|
of help one time. |
|
|
|
00:03:18.780 --> 00:03:22.369 |
|
What do you which of these do you think |
|
|
|
00:03:22.370 --> 00:03:24.270 |
|
between poll the audience and call a
|
|
|
00:03:24.270 --> 00:03:24.900 |
|
friend? |
|
|
|
00:03:24.900 --> 00:03:28.369 |
|
Which of these do you think is
|
|
|
00:03:28.370 --> 00:03:30.590 |
|
more likely to give the correct answer? |
|
|
|
00:03:33.500 --> 00:03:35.020 |
|
Alright, so how many people think it's |
|
|
|
00:03:35.020 --> 00:03:36.250 |
|
poll the audience?
|
|
|
00:03:36.250 --> 00:03:39.710 |
|
How many people think it's phone a
|
|
|
00:03:39.710 --> 00:03:40.210 |
|
friend? |
|
|
|
00:03:42.060 --> 00:03:45.000 |
|
So the audience is correct, it's poll
|
|
|
00:03:45.000 --> 00:03:45.540 |
|
the audience. |
|
|
|
00:03:46.250 --> 00:03:49.975 |
|
But they did statistics. |
|
|
|
00:03:49.975 --> 00:03:52.910 |
|
They looked at analysis of the show and |
|
|
|
00:03:52.910 --> 00:03:55.110 |
|
on average the audience is correct 92% |
|
|
|
00:03:55.110 --> 00:03:56.240 |
|
of the time. |
|
|
|
00:03:57.050 --> 00:03:59.750 |
|
And call a friend is correct 66% of the |
|
|
|
00:03:59.750 --> 00:04:00.150 |
|
time. |
|
|
|
00:04:01.780 --> 00:04:04.500 |
|
So that might be kind of unintuitive, |
|
|
|
00:04:04.500 --> 00:04:06.970 |
|
especially the margin, because. |
|
|
|
00:04:08.210 --> 00:04:09.574 |
|
When you get to call a friend, you get |
|
|
|
00:04:09.574 --> 00:04:11.670 |
|
to call somebody who you think knows |
|
|
|
00:04:11.670 --> 00:04:13.620 |
|
about the particular subject matter. |
|
|
|
00:04:13.620 --> 00:04:15.300 |
|
So they're an expert. |
|
|
|
00:04:15.300 --> 00:04:16.562 |
|
You would expect that out of. |
|
|
|
00:04:16.562 --> 00:04:18.200 |
|
You would expect that they would be |
|
|
|
00:04:18.200 --> 00:04:20.160 |
|
much, much more informed than an |
|
|
|
00:04:20.160 --> 00:04:22.770 |
|
average audience member who is just |
|
|
|
00:04:22.770 --> 00:04:24.020 |
|
there to be entertained. |
|
|
|
00:04:24.880 --> 00:04:28.190 |
|
But the audience is actually much more |
|
|
|
00:04:28.190 --> 00:04:30.160 |
|
accurate, and that kind of
|
|
|
00:04:30.160 --> 00:04:32.330 |
|
demonstrates the power of ensembles |
|
|
|
00:04:32.330 --> 00:04:34.370 |
|
that averaging multiple weak |
|
|
|
00:04:34.370 --> 00:04:35.140 |
|
predictions
|
|
|
00:04:35.830 --> 00:04:38.720 |
|
is often more accurate than any single
|
|
|
00:04:38.720 --> 00:04:40.003 |
|
predictor, even if that single |
|
|
|
00:04:40.003 --> 00:04:41.150 |
|
predictor is pretty good. |
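
A quick way to see this effect numerically (not from the lecture; an illustrative simulation with made-up numbers — the audience size, a 70% per-member accuracy, and the 66% expert figure quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)
n_questions = 10_000
audience_size = 100                  # assumed audience size
p_member, p_expert = 0.70, 0.66      # assumed per-member accuracy; 0.66 matches the expert figure

# Each audience member answers independently; take the majority vote per question.
member_correct = rng.random((n_questions, audience_size)) < p_member
majority_correct = member_correct.sum(axis=1) > audience_size / 2

# A single, somewhat better-informed expert.
expert_correct = rng.random(n_questions) < p_expert

print("majority-vote accuracy:", majority_correct.mean())
print("single-expert accuracy:", expert_correct.mean())
```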
|
|
|
00:04:43.770 --> 00:04:46.464 |
|
It's possible to
|
|
|
00:04:46.464 --> 00:04:48.269 |
|
construct ensembles in different ways. |
|
|
|
00:04:48.270 --> 00:04:49.930 |
|
One of the ways is that you |
|
|
|
00:04:49.930 --> 00:04:51.745 |
|
independently train a bunch of |
|
|
|
00:04:51.745 --> 00:04:53.540 |
|
different models by resampling the data |
|
|
|
00:04:53.540 --> 00:04:55.830 |
|
or resampling features, and then you |
|
|
|
00:04:55.830 --> 00:04:57.846 |
|
average the predictions of those
|
|
|
00:04:57.846 --> 00:04:58.119 |
|
models. |
|
|
|
00:04:58.810 --> 00:05:00.780 |
|
Another is that you incrementally train |
|
|
|
00:05:00.780 --> 00:05:02.860 |
|
new models that try to fix the mistakes |
|
|
|
00:05:02.860 --> 00:05:04.350 |
|
of the previous models. |
|
|
|
00:05:04.350 --> 00:05:05.750 |
|
So we're going to talk about both of |
|
|
|
00:05:05.750 --> 00:05:06.170 |
|
those. |
|
|
|
00:05:06.790 --> 00:05:08.800 |
|
And they work on different principles. |
|
|
|
00:05:08.800 --> 00:05:10.758 |
|
There's different reasons why each one |
|
|
|
00:05:10.758 --> 00:05:13.460 |
|
is a reasonable choice.
|
|
|
00:05:16.420 --> 00:05:19.740 |
|
So the theory behind ensembles really |
|
|
|
00:05:19.740 --> 00:05:22.260 |
|
comes down to this theorem called the |
|
|
|
00:05:22.260 --> 00:05:24.480 |
|
bias-variance tradeoff.
|
|
|
00:05:25.110 --> 00:05:27.040 |
|
And this is a really fundamental |
|
|
|
00:05:27.040 --> 00:05:28.850 |
|
concept in machine learning. |
|
|
|
00:05:29.690 --> 00:05:31.730 |
|
And I'm not going to go through the |
|
|
|
00:05:31.730 --> 00:05:33.780 |
|
derivation of it, it's at this link |
|
|
|
00:05:33.780 --> 00:05:34.084 |
|
here. |
|
|
|
00:05:34.084 --> 00:05:34.692 |
|
It's not. |
|
|
|
00:05:34.692 --> 00:05:36.280 |
|
It's not really, it's something that |
|
|
|
00:05:36.280 --> 00:05:37.960 |
|
anyone could follow along, but it does |
|
|
|
00:05:37.960 --> 00:05:38.980 |
|
take a while to get through it. |
|
|
|
00:05:40.280 --> 00:05:41.880 |
|
But it's a really fundamental idea in |
|
|
|
00:05:41.880 --> 00:05:42.740 |
|
machine learning. |
|
|
|
00:05:42.740 --> 00:05:46.390 |
|
So one way that you can
|
|
|
00:05:46.390 --> 00:05:48.560 |
|
express it is in terms of the squared |
|
|
|
00:05:48.560 --> 00:05:49.610 |
|
error of prediction. |
|
|
|
00:05:50.620 --> 00:05:53.220 |
|
So for regression, but there's also |
|
|
|
00:05:53.220 --> 00:05:55.949 |
|
equivalent theorems for classification, |
|
|
|
00:05:55.949 --> 00:05:59.450 |
|
for 0-1 classification or for log
|
|
|
00:05:59.450 --> 00:06:00.760 |
|
probability loss. |
|
|
|
00:06:01.870 --> 00:06:04.460 |
|
And it all works out to the same thing, |
|
|
|
00:06:04.460 --> 00:06:06.080 |
|
which is that your expected test
|
|
|
00:06:06.080 --> 00:06:06.670 |
|
error. |
|
|
|
00:06:06.670 --> 00:06:08.599 |
|
So what this means is that. |
|
|
|
00:06:09.350 --> 00:06:11.490 |
|
If you were to randomly choose some |
|
|
|
00:06:11.490 --> 00:06:13.410 |
|
number of samples from the general |
|
|
|
00:06:13.410 --> 00:06:14.680 |
|
distribution of data. |
|
|
|
00:06:15.900 --> 00:06:18.530 |
|
Then the expected error that you would |
|
|
|
00:06:18.530 --> 00:06:20.410 |
|
get for the model that you've trained |
|
|
|
00:06:20.410 --> 00:06:24.230 |
|
on your sample of data compared to what |
|
|
|
00:06:24.230 --> 00:06:25.260 |
|
it should have predicted. |
|
|
|
00:06:26.680 --> 00:06:29.560 |
|
Has three different components, so one |
|
|
|
00:06:29.560 --> 00:06:30.910 |
|
component is the variance. |
|
|
|
00:06:31.590 --> 00:06:34.095 |
|
The variance is that if you sampled that
|
|
|
00:06:34.095 --> 00:06:36.510 |
|
same amount of data multiple times from |
|
|
|
00:06:36.510 --> 00:06:38.590 |
|
the general distribution, you'd get |
|
|
|
00:06:38.590 --> 00:06:40.390 |
|
different data samples and that would |
|
|
|
00:06:40.390 --> 00:06:41.920 |
|
lead to different models that make |
|
|
|
00:06:41.920 --> 00:06:43.660 |
|
different predictions on the same test |
|
|
|
00:06:43.660 --> 00:06:44.000 |
|
data. |
|
|
|
00:06:44.730 --> 00:06:46.710 |
|
So you have some variance in your |
|
|
|
00:06:46.710 --> 00:06:47.180 |
|
prediction. |
|
|
|
00:06:47.180 --> 00:06:48.470 |
|
That's due to the randomness of |
|
|
|
00:06:48.470 --> 00:06:49.600 |
|
sampling your model. |
|
|
|
00:06:49.600 --> 00:06:52.049 |
|
Or it could be due to if you have a |
|
|
|
00:06:52.050 --> 00:06:53.023 |
|
randomized optimization. |
|
|
|
00:06:53.023 --> 00:06:54.390 |
|
It could also be due to the |
|
|
|
00:06:54.390 --> 00:06:56.060 |
|
randomization of the optimization. |
|
|
|
00:06:57.910 --> 00:07:00.360 |
|
So this is the variance, mainly due to
|
|
|
00:07:00.360 --> 00:07:02.580 |
|
resampling the data used to train your model.
|
|
|
00:07:03.580 --> 00:07:05.760 |
|
Compared to your expected model. |
|
|
|
00:07:05.760 --> 00:07:08.919 |
|
So this is the average
|
|
|
00:07:08.920 --> 00:07:12.310 |
|
squared distance between the predictions
|
|
|
00:07:12.310 --> 00:07:15.240 |
|
of an individual model and the average |
|
|
|
00:07:15.240 --> 00:07:17.080 |
|
over all possible models that you would |
|
|
|
00:07:17.080 --> 00:07:18.500 |
|
learn from sampling the data many |
|
|
|
00:07:18.500 --> 00:07:18.940 |
|
times. |
|
|
|
00:07:20.570 --> 00:07:23.270 |
|
Then there's a term I'll skip over for now.
|
|
|
00:07:23.270 --> 00:07:25.347 |
|
Then there's a bias component squared. |
|
|
|
00:07:25.347 --> 00:07:28.690 |
|
So the bias is if you were to sample |
|
|
|
00:07:28.690 --> 00:07:31.820 |
|
the data infinite times, train your |
|
|
|
00:07:31.820 --> 00:07:33.375 |
|
infinite models and average them, then |
|
|
|
00:07:33.375 --> 00:07:35.497 |
|
you get this expected prediction. |
|
|
|
00:07:35.497 --> 00:07:37.940 |
|
So it's the average
|
|
|
00:07:37.940 --> 00:07:39.850 |
|
prediction of all of those infinite |
|
|
|
00:07:39.850 --> 00:07:41.240 |
|
models that you trained with the same |
|
|
|
00:07:41.240 --> 00:07:41.949 |
|
amount of data. |
|
|
|
00:07:43.010 --> 00:07:44.460 |
|
And if you look at the difference |
|
|
|
00:07:44.460 --> 00:07:46.790 |
|
between that and the true prediction, |
|
|
|
00:07:46.790 --> 00:07:48.030 |
|
then that's your bias. |
|
|
|
00:07:49.220 --> 00:07:53.070 |
|
So if you have no bias, then obviously |
|
|
|
00:07:53.070 --> 00:07:55.655 |
|
if you have no bias this would be 0. |
|
|
|
00:07:55.655 --> 00:07:57.379 |
|
If on average your models would |
|
|
|
00:07:57.380 --> 00:07:59.095 |
|
converge to the true answer, this will |
|
|
|
00:07:59.095 --> 00:07:59.700 |
|
be 0. |
|
|
|
00:07:59.700 --> 00:08:01.660 |
|
But if your models tend to predict too |
|
|
|
00:08:01.660 --> 00:08:04.050 |
|
high or too low on average, then this |
|
|
|
00:08:04.050 --> 00:08:05.110 |
|
will be nonzero. |
|
|
|
00:08:06.440 --> 00:08:07.970 |
|
And then finally there's the noise. |
|
|
|
00:08:07.970 --> 00:08:10.710 |
|
So this is kind of like the irreducible |
|
|
|
00:08:10.710 --> 00:08:13.000 |
|
error due to the problem that it might |
|
|
|
00:08:13.000 --> 00:08:14.780 |
|
be that for the exact same input |
|
|
|
00:08:14.780 --> 00:08:16.060 |
|
there's different outputs that are |
|
|
|
00:08:16.060 --> 00:08:17.380 |
|
possible, like if you're trying to |
|
|
|
00:08:17.380 --> 00:08:20.205 |
|
predict temperature or read characters |
|
|
|
00:08:20.205 --> 00:08:22.390 |
|
or something like that, the features |
|
|
|
00:08:22.390 --> 00:08:24.250 |
|
are not sufficient to completely |
|
|
|
00:08:24.250 --> 00:08:26.150 |
|
identify the correct answer. |
|
|
|
00:08:26.970 --> 00:08:29.390 |
|
So there's these three parts to the |
|
|
|
00:08:29.390 --> 00:08:29.690 |
|
error. |
|
|
|
00:08:29.690 --> 00:08:31.330 |
|
There's the variance due to limited |
|
|
|
00:08:31.330 --> 00:08:34.069 |
|
data in your models due to the |
|
|
|
00:08:34.070 --> 00:08:35.800 |
|
randomness in a model. |
|
|
|
00:08:35.800 --> 00:08:38.083 |
|
That's either due to randomly sampling |
|
|
|
00:08:38.083 --> 00:08:40.040 |
|
the data or due to your optimization. |
|
|
|
00:08:40.660 --> 00:08:42.340 |
|
There's the bias, which is due to the |
|
|
|
00:08:42.340 --> 00:08:44.770 |
|
inability of your model to fit the true |
|
|
|
00:08:44.770 --> 00:08:45.390 |
|
solution. |
|
|
|
00:08:46.080 --> 00:08:48.740 |
|
And there's a noise which is due to the |
|
|
|
00:08:48.740 --> 00:08:50.160 |
|
problem characteristics or the |
|
|
|
00:08:50.160 --> 00:08:51.840 |
|
inability to make a perfect prediction |
|
|
|
00:08:51.840 --> 00:08:52.600 |
|
from the features. |
|
|
|
00:08:54.920 --> 00:08:55.410 |
|
Yeah. |
|
|
|
00:08:57.940 --> 00:09:02.930 |
|
So here, y is a particular
|
|
|
00:09:04.210 --> 00:09:08.110 |
|
label assigned to x, and y
|
|
|
00:09:08.110 --> 00:09:12.260 |
|
bar is the average of all the labels |
|
|
|
00:09:12.260 --> 00:09:14.390 |
|
that could be assigned
|
|
|
00:09:14.390 --> 00:09:15.260 |
|
to x.
|
|
|
00:09:15.260 --> 00:09:18.337 |
|
So for example, imagine that
|
|
|
00:09:18.337 --> 00:09:20.700 |
|
you had the exact same, let's say you're
|
|
|
00:09:20.700 --> 00:09:22.600 |
|
predicting temperature based
|
|
|
00:09:22.600 --> 00:09:23.640 |
|
on the last five days. |
|
|
|
00:09:24.360 --> 00:09:26.480 |
|
And you saw that exact same scenario of |
|
|
|
00:09:26.480 --> 00:09:29.675 |
|
the last five days like 15 times, but |
|
|
|
00:09:29.675 --> 00:09:31.620 |
|
you had different next day |
|
|
|
00:09:31.620 --> 00:09:32.340 |
|
temperatures. |
|
|
|
00:09:32.960 --> 00:09:35.683 |
|
So y would be like one of those next
|
|
|
00:09:35.683 --> 00:09:37.190 |
|
day temperatures and y bar is the
|
|
|
00:09:37.190 --> 00:09:38.780 |
|
average of those next day temperatures? |
|
|
|
00:09:39.980 --> 00:09:40.460 |
|
Question. |
|
|
|
00:09:43.200 --> 00:09:44.820 |
|
How is your model? |
|
|
|
00:09:44.820 --> 00:09:48.684 |
|
So h_D is a model that's trained on a
|
|
|
00:09:48.684 --> 00:09:51.310 |
|
sample, on a dataset D drawn from the
|
|
|
00:09:51.310 --> 00:09:51.950 |
|
distribution. |
|
|
|
00:09:53.210 --> 00:09:56.310 |
|
And h bar is the average of all such
|
|
|
00:09:56.310 --> 00:09:56.680 |
|
models. |
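
In this notation (h_D is the model trained on one dataset D, h bar the average over all such models, and y bar the average label for x), the decomposition being described is the standard squared-error one. Written out in the usual textbook form (not copied from the slide):

```latex
\underbrace{\mathbb{E}_{x,y,D}\!\left[(h_D(x) - y)^2\right]}_{\text{expected test error}}
= \underbrace{\mathbb{E}_{x,D}\!\left[(h_D(x) - \bar{h}(x))^2\right]}_{\text{variance}}
+ \underbrace{\mathbb{E}_{x}\!\left[(\bar{h}(x) - \bar{y}(x))^2\right]}_{\text{bias}^2}
+ \underbrace{\mathbb{E}_{x,y}\!\left[(\bar{y}(x) - y)^2\right]}_{\text{noise}}
```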
|
|
|
00:10:03.740 --> 00:10:07.270 |
|
So the bias and variance is illustrated |
|
|
|
00:10:07.270 --> 00:10:08.215 |
|
here. |
|
|
|
00:10:08.215 --> 00:10:10.500 |
|
So imagine that you're trying to shoot |
|
|
|
00:10:10.500 --> 00:10:11.040 |
|
a target. |
|
|
|
00:10:11.700 --> 00:10:13.833 |
|
Then if you have low bias and low |
|
|
|
00:10:13.833 --> 00:10:15.243 |
|
variance, it means that all your shots |
|
|
|
00:10:15.243 --> 00:10:17.470 |
|
are clustered in the center of the |
|
|
|
00:10:17.470 --> 00:10:17.774 |
|
target. |
|
|
|
00:10:17.774 --> 00:10:20.265 |
|
If you have low bias and high variance |
|
|
|
00:10:20.265 --> 00:10:22.910 |
|
means that the average of your shots is |
|
|
|
00:10:22.910 --> 00:10:24.640 |
|
in the center of your target, but the |
|
|
|
00:10:24.640 --> 00:10:26.260 |
|
shots are more widely distributed. |
|
|
|
00:10:27.890 --> 00:10:31.360 |
|
If you have high bias and low variance, |
|
|
|
00:10:31.360 --> 00:10:33.210 |
|
it means that your shots are clustered |
|
|
|
00:10:33.210 --> 00:10:34.730 |
|
tight together, but they're off the |
|
|
|
00:10:34.730 --> 00:10:35.160 |
|
center. |
|
|
|
00:10:35.940 --> 00:10:37.580 |
|
And if you have high bias and high |
|
|
|
00:10:37.580 --> 00:10:40.298 |
|
variance, then your shots are both
|
|
|
00:10:40.298 --> 00:10:42.560 |
|
dispersed and off the center.
|
|
|
00:10:44.230 --> 00:10:45.920 |
|
So you can see from even from this |
|
|
|
00:10:45.920 --> 00:10:48.924 |
|
illustration that obviously low bias |
|
|
|
00:10:48.924 --> 00:10:51.840 |
|
and low variance is the best, but both |
|
|
|
00:10:51.840 --> 00:10:54.267 |
|
variance and bias caused some error, |
|
|
|
00:10:54.267 --> 00:10:56.590 |
|
and high bias and high variance has the |
|
|
|
00:10:56.590 --> 00:10:57.950 |
|
greatest average error. |
|
|
|
00:11:02.670 --> 00:11:04.988 |
|
You also often see it expressed in a
|
|
|
00:11:04.988 --> 00:11:07.147 |
|
plot like this, where you're looking at |
|
|
|
00:11:07.147 --> 00:11:09.654 |
|
your model complexity.
|
|
|
00:11:09.654 --> 00:11:10.990 |
|
This is kind of like a classic |
|
|
|
00:11:10.990 --> 00:11:13.580 |
|
overfitting plot, so this model |
|
|
|
00:11:13.580 --> 00:11:15.240 |
|
complexity could for example be the |
|
|
|
00:11:15.240 --> 00:11:16.440 |
|
height of your tree. |
|
|
|
00:11:17.540 --> 00:11:19.420 |
|
So if you train a tree with two leaf |
|
|
|
00:11:19.420 --> 00:11:22.930 |
|
nodes with just a height of 1, then |
|
|
|
00:11:22.930 --> 00:11:24.754 |
|
you're going to have a very low |
|
|
|
00:11:24.754 --> 00:11:25.016 |
|
variance. |
|
|
|
00:11:25.016 --> 00:11:26.900 |
|
If you were to resample the data many |
|
|
|
00:11:26.900 --> 00:11:29.259 |
|
times and train that short tree, you |
|
|
|
00:11:29.260 --> 00:11:30.790 |
|
would very likely get a very similar |
|
|
|
00:11:30.790 --> 00:11:33.304 |
|
tree every single time, so the variance |
|
|
|
00:11:33.304 --> 00:11:33.980 |
|
is low. |
|
|
|
00:11:33.980 --> 00:11:34.870 |
|
That's the blue curve. |
|
|
|
00:11:35.760 --> 00:11:37.100 |
|
But the bias is high. |
|
|
|
00:11:37.100 --> 00:11:38.580 |
|
You're unlikely to make very good |
|
|
|
00:11:38.580 --> 00:11:40.070 |
|
predictions with that really short |
|
|
|
00:11:40.070 --> 00:11:40.880 |
|
tree. |
|
|
|
00:11:40.880 --> 00:11:43.275 |
|
Even if you averaged an infinite number |
|
|
|
00:11:43.275 --> 00:11:44.189 |
|
of them,
|
|
|
00:11:44.189 --> 00:11:45.570 |
|
you would still have a lot of error.
|
|
|
00:11:46.960 --> 00:11:49.520 |
|
As you increase the depth of the tree, |
|
|
|
00:11:49.520 --> 00:11:51.290 |
|
your bias drops. |
|
|
|
00:11:51.290 --> 00:11:53.232 |
|
You're able to make better predictions |
|
|
|
00:11:53.232 --> 00:11:56.030 |
|
on average.
|
|
|
00:11:57.250 --> 00:11:59.340 |
|
But the variance starts to increase. |
|
|
|
00:11:59.340 --> 00:12:01.030 |
|
The trees start to look more different |
|
|
|
00:12:01.030 --> 00:12:01.920 |
|
from each other. |
|
|
|
00:12:01.920 --> 00:12:04.780 |
|
So if you train a full tree so that |
|
|
|
00:12:04.780 --> 00:12:06.990 |
|
there's one data point per leaf node, |
|
|
|
00:12:06.990 --> 00:12:08.410 |
|
then the trees are going to look pretty |
|
|
|
00:12:08.410 --> 00:12:10.230 |
|
different when you resample the data |
|
|
|
00:12:10.230 --> 00:12:11.550 |
|
because you'll have different data |
|
|
|
00:12:11.550 --> 00:12:12.080 |
|
samples. |
|
|
|
00:12:13.850 --> 00:12:16.460 |
|
So eventually, at some point you reach |
|
|
|
00:12:16.460 --> 00:12:19.616 |
|
some ideal situation where the bias |
|
|
|
00:12:19.616 --> 00:12:21.677 |
|
squared plus the variance
|
|
|
00:12:21.677 --> 00:12:23.940 |
|
is minimized, and that's when you'd |
|
|
|
00:12:23.940 --> 00:12:25.510 |
|
want to, like, stop if you're trying to |
|
|
|
00:12:25.510 --> 00:12:26.165 |
|
choose hyperparameters. |
|
|
|
00:12:26.165 --> 00:12:29.530 |
|
And if you train more complex models, |
|
|
|
00:12:29.530 --> 00:12:31.330 |
|
it's going to continue to reduce the |
|
|
|
00:12:31.330 --> 00:12:32.925 |
|
bias, but the increase in variance is |
|
|
|
00:12:32.925 --> 00:12:35.326 |
|
going to cause your test error to |
|
|
|
00:12:35.326 --> 00:12:35.629 |
|
increase. |
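
As a rough sketch of that depth sweep (the dataset and depths are made up, and this assumes scikit-learn is available): train error keeps dropping with depth while validation error eventually turns back up.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

for depth in [1, 2, 4, 8, 16, None]:          # None lets the tree grow to completion
    tree = DecisionTreeRegressor(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    train_err = mean_squared_error(y_tr, tree.predict(X_tr))   # falls as depth grows (bias drops)
    val_err = mean_squared_error(y_val, tree.predict(X_val))   # falls, then rises (variance grows)
    print(depth, round(train_err, 1), round(val_err, 1))
```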
|
|
|
00:12:39.100 --> 00:12:41.404 |
|
So if you're thinking about it in terms |
|
|
|
00:12:41.404 --> 00:12:45.510 |
|
of a single model, then
|
|
|
00:12:45.510 --> 00:12:47.111 |
|
you would be thinking about it in terms |
|
|
|
00:12:47.111 --> 00:12:49.190 |
|
of the plot that I just showed where |
|
|
|
00:12:49.190 --> 00:12:50.690 |
|
you're trying to figure out like what |
|
|
|
00:12:50.690 --> 00:12:52.330 |
|
complexity, if it's a model that can |
|
|
|
00:12:52.330 --> 00:12:54.450 |
|
have varying complexity trees or neural |
|
|
|
00:12:54.450 --> 00:12:57.327 |
|
networks, like how complex should my |
|
|
|
00:12:57.327 --> 00:12:59.550 |
|
model be in order to best
|
|
|
00:13:00.440 --> 00:13:02.285 |
|
find the balance between the bias and
|
|
|
00:13:02.285 --> 00:13:02.950 |
|
the variance. |
|
|
|
00:13:03.710 --> 00:13:05.910 |
|
But ensembles have a different way to |
|
|
|
00:13:05.910 --> 00:13:08.050 |
|
directly combat the bias and the |
|
|
|
00:13:08.050 --> 00:13:10.430 |
|
variance, so I'm going to talk about a |
|
|
|
00:13:10.430 --> 00:13:12.470 |
|
few ensemble methods and how they |
|
|
|
00:13:12.470 --> 00:13:12.920 |
|
relate. |
|
|
|
00:13:16.400 --> 00:13:19.130 |
|
The first one:
|
|
|
00:13:19.130 --> 00:13:20.580 |
|
this is actually not one of these
|
|
|
00:13:20.580 --> 00:13:22.007 |
|
ensemble methods, but it is used in ensemble
|
|
|
00:13:22.007 --> 00:13:22.245 |
|
methods.
|
|
|
00:13:22.245 --> 00:13:23.690 |
|
It's the simplest of these, and it's |
|
|
|
00:13:23.690 --> 00:13:25.219 |
|
kind of the foundation of the ensemble |
|
|
|
00:13:25.220 --> 00:13:25.810 |
|
methods. |
|
|
|
00:13:25.810 --> 00:13:28.010 |
|
So it's a statistical technique called |
|
|
|
00:13:28.010 --> 00:13:28.710 |
|
bootstrapping. |
|
|
|
00:13:29.860 --> 00:13:32.740 |
|
Imagine that, for example, I wanted to |
|
|
|
00:13:32.740 --> 00:13:35.170 |
|
know what is the average age of |
|
|
|
00:13:35.170 --> 00:13:36.380 |
|
somebody in this class. |
|
|
|
00:13:37.610 --> 00:13:39.990 |
|
One way that I could do it is I could |
|
|
|
00:13:39.990 --> 00:13:42.323 |
|
ask each of you your ages and then I |
|
|
|
00:13:42.323 --> 00:13:43.840 |
|
could average it, and then that might |
|
|
|
00:13:43.840 --> 00:13:45.605 |
|
give me like an estimate for the |
|
|
|
00:13:45.605 --> 00:13:47.110 |
|
average age of all the students in the |
|
|
|
00:13:47.110 --> 00:13:47.450 |
|
class. |
|
|
|
00:13:48.720 --> 00:13:51.700 |
|
But maybe I not only want to know the |
|
|
|
00:13:51.700 --> 00:13:53.850 |
|
average age, but I also want some |
|
|
|
00:13:53.850 --> 00:13:56.020 |
|
confidence range on that average age. |
|
|
|
00:13:56.020 --> 00:13:58.210 |
|
And if all I do is I average all your |
|
|
|
00:13:58.210 --> 00:14:00.960 |
|
ages, that doesn't tell me how likely I |
|
|
|
00:14:00.960 --> 00:14:02.930 |
|
am to be within, say, like three years. |
|
|
|
00:14:04.000 --> 00:14:07.090 |
|
And so one way, one way that I can |
|
|
|
00:14:07.090 --> 00:14:09.950 |
|
solve that problem is with bootstrap |
|
|
|
00:14:09.950 --> 00:14:13.590 |
|
estimation where I resample the data |
|
|
|
00:14:13.590 --> 00:14:15.530 |
|
multiple times so I could choose. |
|
|
|
00:14:15.530 --> 00:14:18.800 |
|
I could take 50 samples and sample with |
|
|
|
00:14:18.800 --> 00:14:21.235 |
|
repetition so I could potentially call |
|
|
|
00:14:21.235 --> 00:14:22.350 |
|
the same person twice. |
|
|
|
00:14:23.160 --> 00:14:24.125 |
|
Ask your ages. |
|
|
|
00:14:24.125 --> 00:14:26.750 |
|
Ask the ages of 50 individuals. |
|
|
|
00:14:26.750 --> 00:14:28.140 |
|
Again, the same individual may be |
|
|
|
00:14:28.140 --> 00:14:28.870 |
|
repeated. |
|
|
|
00:14:28.870 --> 00:14:31.530 |
|
I take the average from that and repeat |
|
|
|
00:14:31.530 --> 00:14:33.810 |
|
that many times, and then I can look at |
|
|
|
00:14:33.810 --> 00:14:35.579 |
|
the variance of those estimates that I |
|
|
|
00:14:35.580 --> 00:14:35.800 |
|
get. |
|
|
|
00:14:36.470 --> 00:14:38.050 |
|
And then I can use the variance of |
|
|
|
00:14:38.050 --> 00:14:40.430 |
|
those estimates to get a confidence |
|
|
|
00:14:40.430 --> 00:14:42.570 |
|
range on my estimate of the mean. |
|
|
|
00:14:43.810 --> 00:14:47.080 |
|
So bootstrapping is a way to
|
|
|
00:14:47.190 --> 00:14:50.710 |
|
estimate a particular parameter, in
|
|
|
00:14:50.710 --> 00:14:53.035 |
|
this case the average age, as well as |
|
|
|
00:14:53.035 --> 00:14:55.040 |
|
my variance of my estimate of that |
|
|
|
00:14:55.040 --> 00:14:55.690 |
|
parameter. |
|
|
|
00:14:55.690 --> 00:14:58.550 |
|
So, like, how far off I would expect
|
|
|
00:14:58.550 --> 00:14:58.970 |
|
to be? |
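
A minimal sketch of that bootstrap procedure, with made-up ages and an arbitrary number of resampling rounds (the 50-sample draws with repetition mirror what's described below):

```python
import numpy as np

rng = np.random.default_rng(0)
ages = rng.integers(18, 30, size=200)        # stand-in for the real class ages

boot_means = []
for _ in range(1000):                        # number of bootstrap rounds (a choice, not fixed)
    resample = rng.choice(ages, size=50, replace=True)   # 50 draws, repeats allowed
    boot_means.append(resample.mean())

print("estimated mean age:", np.mean(boot_means))
print("spread (confidence) of that estimate:", np.std(boot_means))
```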
|
|
|
00:15:02.560 --> 00:15:04.300 |
|
We can apply that idea to |
|
|
|
00:15:04.300 --> 00:15:08.918 |
|
classification to try to produce a more |
|
|
|
00:15:08.918 --> 00:15:11.266 |
|
stable estimate of the mean or to |
|
|
|
00:15:11.266 --> 00:15:13.370 |
|
produce a more stable prediction. |
|
|
|
00:15:13.370 --> 00:15:15.270 |
|
In other words, to reduce the variance |
|
|
|
00:15:15.270 --> 00:15:17.930 |
|
of my classifiers given a particular |
|
|
|
00:15:17.930 --> 00:15:18.620 |
|
data sample. |
|
|
|
00:15:20.250 --> 00:15:23.010 |
|
So the method is called bagging, which |
|
|
|
00:15:23.010 --> 00:15:24.890 |
|
stands for bootstrap aggregating.
|
|
|
00:15:25.990 --> 00:15:27.390 |
|
And the idea is pretty simple. |
|
|
|
00:15:28.630 --> 00:15:32.340 |
|
For M different times (capital M), I'm
|
|
|
00:15:32.340 --> 00:15:34.730 |
|
going to train M classifiers.
|
|
|
00:15:35.430 --> 00:15:37.620 |
|
I draw some number of samples which |
|
|
|
00:15:37.620 --> 00:15:39.533 |
|
should be less than my total number of |
|
|
|
00:15:39.533 --> 00:15:40.800 |
|
samples, but I'm going to draw them |
|
|
|
00:15:40.800 --> 00:15:41.828 |
|
with replacement. |
|
|
|
00:15:41.828 --> 00:15:43.860 |
|
Draw with replacement means I can |
|
|
|
00:15:43.860 --> 00:15:45.310 |
|
choose the same sample twice. |
|
|
|
00:15:46.750 --> 00:15:48.410 |
|
Then I train a classifier on those |
|
|
|
00:15:48.410 --> 00:15:51.120 |
|
samples, and then at the end my final |
|
|
|
00:15:51.120 --> 00:15:54.290 |
|
classifier is an average of all of my |
|
|
|
00:15:54.290 --> 00:15:55.620 |
|
predictions from the individual |
|
|
|
00:15:55.620 --> 00:15:56.340 |
|
classifiers. |
|
|
|
00:15:57.080 --> 00:15:59.040 |
|
So if I'm doing regression, I would |
|
|
|
00:15:59.040 --> 00:16:01.940 |
|
just be averaging the continuous values |
|
|
|
00:16:01.940 --> 00:16:04.200 |
|
that the classifiers or regressors
|
|
|
00:16:04.200 --> 00:16:04.890 |
|
predicted. |
|
|
|
00:16:04.890 --> 00:16:07.555 |
|
If I'm doing classification, I would |
|
|
|
00:16:07.555 --> 00:16:10.116 |
|
average the probabilities or average |
|
|
|
00:16:10.116 --> 00:16:13.056 |
|
the most likely label from each of the |
|
|
|
00:16:13.056 --> 00:16:13.389 |
|
classifiers. |
|
|
|
00:16:14.380 --> 00:16:16.810 |
|
And there's lots of theory that shows |
|
|
|
00:16:16.810 --> 00:16:19.100 |
|
that this increases the stability of |
|
|
|
00:16:19.100 --> 00:16:21.500 |
|
the classifier and reduces the
|
|
|
00:16:21.500 --> 00:16:24.915 |
|
variance, and so the average of a bunch |
|
|
|
00:16:24.915 --> 00:16:26.630 |
|
of classifiers trained this way. |
|
|
|
00:16:27.300 --> 00:16:30.110 |
|
Typically outperform any individual |
|
|
|
00:16:30.110 --> 00:16:30.840 |
|
classifier. |
|
|
|
00:16:32.030 --> 00:16:33.870 |
|
And these classifiers will be different
|
|
|
00:16:33.870 --> 00:16:36.490 |
|
from each other because there's a |
|
|
|
00:16:36.490 --> 00:16:37.100 |
|
difference. |
|
|
|
00:16:37.100 --> 00:16:39.670 |
|
Because a different sample
|
|
|
00:16:39.670 --> 00:16:41.030 |
|
of data is drawn to train each |
|
|
|
00:16:41.030 --> 00:16:41.590 |
|
classifier. |
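
A hedged sketch of that bagging loop (the base model, M, and the draw size are placeholder choices; any learner could be substituted for the tree):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor   # stand-in base model

def bagging_fit(X, y, M=100, n_draws=None, seed=0):
    """Train M models, each on a bootstrap sample drawn with replacement."""
    rng = np.random.default_rng(seed)
    n = len(X)
    n_draws = n_draws or n
    models = []
    for _ in range(M):
        idx = rng.integers(0, n, size=n_draws)        # sample indices with replacement
        models.append(DecisionTreeRegressor().fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    """Final prediction = average of the individual models' predictions."""
    return np.mean([m.predict(X) for m in models], axis=0)
```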
|
|
|
00:16:45.070 --> 00:16:46.790 |
|
So that's the question. |
|
|
|
00:17:00.050 --> 00:17:02.463 |
|
So not yeah, but not features, it's |
|
|
|
00:17:02.463 --> 00:17:03.186 |
|
samples. |
|
|
|
00:17:03.186 --> 00:17:06.700 |
|
So I have say 1000 data samples. |
|
|
|
00:17:07.340 --> 00:17:10.770 |
|
And I draw say 900 data samples, but |
|
|
|
00:17:10.770 --> 00:17:13.467 |
|
they're not 900 out of the thousand, |
|
|
|
00:17:13.467 --> 00:17:16.190 |
|
it's 900 with repetition. |
|
|
|
00:17:16.190 --> 00:17:17.720 |
|
So there might be 1 sample that I |
|
|
|
00:17:17.720 --> 00:17:19.596 |
|
draw three times, others that I
|
|
|
00:17:19.596 --> 00:17:21.259 |
|
draw no times, others that I draw one |
|
|
|
00:17:21.260 --> 00:17:21.850 |
|
time. |
|
|
|
00:17:21.850 --> 00:17:23.700 |
|
So, in terms of, like,
|
|
|
00:17:23.700 --> 00:17:26.840 |
|
programming, you would just do a random |
|
|
|
00:17:26.840 --> 00:17:31.290 |
|
like 0 to 1, times N, and then turn
|
|
|
00:17:31.290 --> 00:17:33.397 |
|
it into an integer and then you get |
|
|
|
00:17:33.397 --> 00:17:35.159 |
|
a random sample with
|
|
|
00:17:35.160 --> 00:17:35.660 |
|
replacement. |
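
In code, that "random 0-to-1, times N, then integer" trick looks roughly like this (numpy shown as one possible choice):

```python
import numpy as np

N = 1000                                        # total number of data samples
idx = (np.random.rand(900) * N).astype(int)     # 900 draws with replacement; repeats allowed
# equivalently: idx = np.random.randint(0, N, size=900)
```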
|
|
|
00:17:46.940 --> 00:17:47.720 |
|
Typically. |
|
|
|
00:17:47.720 --> 00:17:49.626 |
|
So usually each of the classifiers is |
|
|
|
00:17:49.626 --> 00:17:50.820 |
|
of the same form. |
|
|
|
00:17:50.820 --> 00:17:51.190 |
|
Yep. |
|
|
|
00:17:53.550 --> 00:17:55.270 |
|
So this is the idea behind random |
|
|
|
00:17:55.270 --> 00:17:57.760 |
|
forests, which is a really powerful |
|
|
|
00:17:57.760 --> 00:17:59.940 |
|
classifier, but very easy to explain at |
|
|
|
00:17:59.940 --> 00:18:01.500 |
|
least once you once you know about |
|
|
|
00:18:01.500 --> 00:18:02.270 |
|
decision trees. |
|
|
|
00:18:03.780 --> 00:18:06.040 |
|
So in a random forest, you train a
|
|
|
00:18:06.040 --> 00:18:07.150 |
|
collection of trees. |
|
|
|
00:18:08.140 --> 00:18:09.970 |
|
For each tree that you're going to |
|
|
|
00:18:09.970 --> 00:18:11.786 |
|
train, you sample some fraction of the
|
|
|
00:18:11.786 --> 00:18:13.880 |
|
data, for example 90% of the data. |
|
|
|
00:18:13.880 --> 00:18:15.620 |
|
Sometimes people just sample all the |
|
|
|
00:18:15.620 --> 00:18:15.990 |
|
data. |
|
|
|
00:18:16.430 --> 00:18:19.948 |
|
Then you randomly sample some number of |
|
|
|
00:18:19.948 --> 00:18:20.325 |
|
features. |
|
|
|
00:18:20.325 --> 00:18:23.042 |
|
So for regression, one suggestion is to |
|
|
|
00:18:23.042 --> 00:18:24.648 |
|
use 1/3 of the features. |
|
|
|
00:18:24.648 --> 00:18:28.003 |
|
For classification,
|
|
|
00:18:28.003 --> 00:18:30.000 |
|
some suggestions are to use, like, the
|
|
|
00:18:30.000 --> 00:18:31.565 |
|
square root of the number of features. |
|
|
|
00:18:31.565 --> 00:18:32.240 |
|
So, for example,
|
|
|
00:18:32.970 --> 00:18:36.260 |
|
if there are 400 features, then you
|
|
|
00:18:36.260 --> 00:18:38.290 |
|
randomly sample 20 of them. |
|
|
|
00:18:38.290 --> 00:18:40.240 |
|
Or another suggestion is to use log |
|
|
|
00:18:40.240 --> 00:18:40.820 |
|
base 2. |
|
|
|
00:18:41.650 --> 00:18:43.389 |
|
It's not really that critical, but you |
|
|
|
00:18:43.389 --> 00:18:44.820 |
|
want you want the number of features |
|
|
|
00:18:44.820 --> 00:18:46.995 |
|
that you select to be much less than |
|
|
|
00:18:46.995 --> 00:18:48.430 |
|
the total number of features. |
|
|
|
00:18:49.110 --> 00:18:51.800 |
|
So here previously I was talking about |
|
|
|
00:18:51.800 --> 00:18:53.760 |
|
when I say sample the data, what I mean |
|
|
|
00:18:53.760 --> 00:18:55.870 |
|
is like is choosing a subset of |
|
|
|
00:18:55.870 --> 00:18:56.790 |
|
training samples. |
|
|
|
00:18:57.910 --> 00:19:00.290 |
|
But when I say sample the features, I |
|
|
|
00:19:00.290 --> 00:19:02.699 |
|
mean choose a subset of the features,
|
|
|
00:19:02.699 --> 00:19:05.365 |
|
the columns of your matrix, if
|
|
|
00:19:05.365 --> 00:19:06.914 |
|
the rows are samples and the columns |
|
|
|
00:19:06.914 --> 00:19:07.350 |
|
are features. |
|
|
|
00:19:09.360 --> 00:19:11.710 |
|
So you need to sample the features
|
|
|
00:19:11.710 --> 00:19:13.210 |
|
because otherwise if you train the tree |
|
|
|
00:19:13.210 --> 00:19:14.693 |
|
you're going to get the same result if |
|
|
|
00:19:14.693 --> 00:19:17.720 |
|
you're doing, like,
|
|
|
00:19:17.720 --> 00:19:19.440 |
|
maximizing mutual information for |
|
|
|
00:19:19.440 --> 00:19:19.890 |
|
example. |
|
|
|
00:19:20.700 --> 00:19:22.270 |
|
If you were to sample all your data and |
|
|
|
00:19:22.270 --> 00:19:23.600 |
|
all the features, you would just train |
|
|
|
00:19:23.600 --> 00:19:24.280 |
|
the same tree. |
|
|
|
00:19:25.070 --> 00:19:27.660 |
|
M times, and that would give you no
|
|
|
00:19:27.660 --> 00:19:28.160 |
|
benefit. |
|
|
|
00:19:28.900 --> 00:19:30.240 |
|
All right, so you randomly sample some |
|
|
|
00:19:30.240 --> 00:19:31.540 |
|
features, train a tree. |
|
|
|
00:19:32.240 --> 00:19:34.497 |
|
Optionally, you can estimate your |
|
|
|
00:19:34.497 --> 00:19:36.020 |
|
validation error on the data that |
|
|
|
00:19:36.020 --> 00:19:38.283 |
|
wasn't used to train that tree, and you |
|
|
|
00:19:38.283 --> 00:19:41.140 |
|
can use the average of those validation |
|
|
|
00:19:41.140 --> 00:19:44.513 |
|
errors in order to get an estimate of
|
|
|
00:19:44.513 --> 00:19:46.930 |
|
your error for your final
|
|
|
00:19:46.930 --> 00:19:47.480 |
|
collection. |
|
|
|
00:19:50.000 --> 00:19:51.886 |
|
And after you've trained all the trees, |
|
|
|
00:19:51.886 --> 00:19:54.610 |
|
you just do that 100 times or whatever. |
|
|
|
00:19:54.610 --> 00:19:55.920 |
|
It's completely independent. |
|
|
|
00:19:55.920 --> 00:19:58.330 |
|
So it's just like a very if you've got |
|
|
|
00:19:58.330 --> 00:19:59.920 |
|
code to train a tree, it's just a very |
|
|
|
00:19:59.920 --> 00:20:01.090 |
|
small loop. |
|
|
|
00:20:02.370 --> 00:20:04.990 |
|
And then at the end you average the |
|
|
|
00:20:04.990 --> 00:20:06.766 |
|
prediction of all the trees. |
|
|
|
00:20:06.766 --> 00:20:08.930 |
|
So usually you would train your trees |
|
|
|
00:20:08.930 --> 00:20:09.535 |
|
to completion. |
|
|
|
00:20:09.535 --> 00:20:12.160 |
|
So if you're doing like classification |
|
|
|
00:20:12.160 --> 00:20:14.850 |
|
or regression, in either case you would end up with
|
|
|
00:20:14.850 --> 00:20:16.480 |
|
a leaf node that contains one data |
|
|
|
00:20:16.480 --> 00:20:16.926 |
|
sample. |
|
|
|
00:20:16.926 --> 00:20:19.060 |
|
So you're training like very high |
|
|
|
00:20:19.060 --> 00:20:21.530 |
|
variance trees, they're deep trees. |
|
|
|
00:20:22.650 --> 00:20:24.760 |
|
That have low bias, they can fit the |
|
|
|
00:20:24.760 --> 00:20:27.580 |
|
training data perfectly, but. |
|
|
|
00:20:29.470 --> 00:20:31.027 |
|
But then you're going to average all of |
|
|
|
00:20:31.027 --> 00:20:31.235 |
|
them. |
|
|
|
00:20:31.235 --> 00:20:34.534 |
|
So you start out with high
|
|
|
00:20:34.534 --> 00:20:36.650 |
|
variance, low bias classifiers, and |
|
|
|
00:20:36.650 --> 00:20:37.743 |
|
then you average them. |
|
|
|
00:20:37.743 --> 00:20:40.044 |
|
So you end up with low bias, low |
|
|
|
00:20:40.044 --> 00:20:40.669 |
|
variance classifiers. |
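
Putting the recipe just described into a sketch (a hedged illustration: it samples a feature subset once per tree, as described here, whereas library implementations such as scikit-learn's resample features at every split, and it also assumes each bootstrap sample contains every class):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def forest_fit(X, y, n_trees=100, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = max(1, int(np.sqrt(d)))                   # ~sqrt(d) features, as suggested for classification
    forest = []
    for _ in range(n_trees):
        rows = rng.integers(0, n, size=n)             # resample the rows (with replacement)
        cols = rng.choice(d, size=k, replace=False)   # random subset of the feature columns
        tree = DecisionTreeClassifier().fit(X[rows][:, cols], y[rows])  # deep tree, grown out
        forest.append((tree, cols))
    return forest

def forest_predict_proba(forest, X):
    # average the per-tree class probabilities
    return np.mean([tree.predict_proba(X[:, cols]) for tree, cols in forest], axis=0)
```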
|
|
|
00:20:49.930 --> 00:20:51.310 |
|
Yes, for each tree. |
|
|
|
00:20:51.310 --> 00:20:52.460 |
|
Yeah, for each tree. |
|
|
|
00:20:52.630 --> 00:20:53.160 |
|
Yeah. |
|
|
|
00:20:59.180 --> 00:21:02.920 |
|
You increase the number of trees, yeah, |
|
|
|
00:21:02.920 --> 00:21:03.410 |
|
so. |
|
|
|
00:21:04.110 --> 00:21:07.720 |
|
If you if so, think of it this way. |
|
|
|
00:21:07.720 --> 00:21:12.075 |
|
If I were to try to
|
|
|
00:21:12.075 --> 00:21:14.995 |
|
estimate the sum of your ages, then as |
|
|
|
00:21:14.995 --> 00:21:17.900 |
|
I ask you your ages and add them up, the
|
|
|
00:21:17.900 --> 00:21:19.463 |
|
variance of the
|
|
|
00:21:19.463 --> 00:21:21.288 |
|
estimate of the sum is
|
|
|
00:21:21.288 --> 00:21:23.400 |
|
going to increase linearly, right? |
|
|
|
00:21:23.400 --> 00:21:26.680 |
|
It's going to keep on increasing until |
|
|
|
00:21:26.680 --> 00:21:30.660 |
|
sum is 100,000 ± 10,000 or something. |
|
|
|
00:21:31.480 --> 00:21:33.168 |
|
But if I'm trying to estimate the |
|
|
|
00:21:33.168 --> 00:21:35.700 |
|
average of your ages and I keep on |
|
|
|
00:21:35.700 --> 00:21:38.250 |
|
asking your ages, then my variance is |
|
|
|
00:21:38.250 --> 00:21:39.950 |
|
going to go down. So
|
|
|
00:21:39.950 --> 00:21:43.040 |
|
the variance of the sum is N times
|
|
|
00:21:43.040 --> 00:21:47.030 |
|
Sigma squared, but the variance of the |
|
|
|
00:21:47.030 --> 00:21:50.980 |
|
average is
|
|
|
00:21:50.980 --> 00:21:53.688 |
|
sigma squared over N.
|
|
|
00:21:53.688 --> 00:21:56.100 |
|
The variance of
|
|
|
00:21:56.100 --> 00:21:58.513 |
|
the average is Sigma squared over N, |
|
|
|
00:21:58.513 --> 00:22:01.269 |
|
but the variance of the sum is N
|
|
|
00:22:01.330 --> 00:22:02.500 |
|
times sigma squared.
|
|
|
00:22:04.490 --> 00:22:06.934 |
|
So the average reduces the variance. |
|
|
|
00:22:06.934 --> 00:22:08.135 |
|
Yeah, so if I. |
|
|
|
00:22:08.135 --> 00:22:09.960 |
|
So by averaging the trees I reduce the |
|
|
|
00:22:09.960 --> 00:22:10.160 |
|
variance. |
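
Written out, the sum-versus-average point for N independent estimates that each have variance sigma squared (the trees in a bagged ensemble are not perfectly independent, but this is the effect being appealed to):

```latex
\operatorname{Var}\!\Big(\sum_{i=1}^{N} x_i\Big) = N\sigma^2,
\qquad
\operatorname{Var}\!\Big(\tfrac{1}{N}\sum_{i=1}^{N} x_i\Big) = \frac{\sigma^2}{N}.
```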
|
|
|
00:22:14.870 --> 00:22:17.250 |
|
So that's random forests and I will |
|
|
|
00:22:17.250 --> 00:22:17.840 |
|
talk more. |
|
|
|
00:22:17.840 --> 00:22:20.467 |
|
I'll give an example of use of random |
|
|
|
00:22:20.467 --> 00:22:22.280 |
|
forests and I'll talk about like some |
|
|
|
00:22:22.280 --> 00:22:24.780 |
|
studies about the performance of |
|
|
|
00:22:24.780 --> 00:22:26.750 |
|
various classifiers including random |
|
|
|
00:22:26.750 --> 00:22:27.320 |
|
forests. |
|
|
|
00:22:27.320 --> 00:22:29.946 |
|
But before I do that, I want to talk |
|
|
|
00:22:29.946 --> 00:22:31.330 |
|
about boosting, which is the other |
|
|
|
00:22:31.330 --> 00:22:31.890 |
|
strategy. |
|
|
|
00:22:33.860 --> 00:22:36.080 |
|
So I have the boosting terms here as |
|
|
|
00:22:36.080 --> 00:22:36.490 |
|
well. |
|
|
|
00:22:37.730 --> 00:22:38.170 |
|
All right. |
|
|
|
00:22:38.170 --> 00:22:41.085 |
|
So the first version of boosting and |
|
|
|
00:22:41.085 --> 00:22:42.740 |
|
one other thing I want to say about |
|
|
|
00:22:42.740 --> 00:22:45.350 |
|
this is random forest was popularized |
|
|
|
00:22:45.350 --> 00:22:47.885 |
|
by this paper by Breiman in 2001.
|
|
|
00:22:47.885 --> 00:22:50.460 |
|
So decision trees go back to the 90s at |
|
|
|
00:22:50.460 --> 00:22:53.893 |
|
least, but they were never really, like |
|
|
|
00:22:53.893 --> 00:22:56.680 |
|
I said, they're good for helping
|
|
|
00:22:56.680 --> 00:22:59.750 |
|
make decisions that people can
|
|
|
00:22:59.750 --> 00:23:01.360 |
|
understand, that you can communicate |
|
|
|
00:23:01.360 --> 00:23:02.780 |
|
and explain like why it made this |
|
|
|
00:23:02.780 --> 00:23:03.130 |
|
decision. |
|
|
|
00:23:03.890 --> 00:23:05.710 |
|
And they're good for analyzing data, |
|
|
|
00:23:05.710 --> 00:23:07.040 |
|
but they're not really very good |
|
|
|
00:23:07.040 --> 00:23:08.770 |
|
classifiers or regressors compared to
|
|
|
00:23:08.770 --> 00:23:09.880 |
|
other methods that are out there. |
|
|
|
00:23:11.210 --> 00:23:14.390 |
|
But Breiman popularized random forests
|
|
|
00:23:14.390 --> 00:23:16.530 |
|
in 2001 and showed that the |
|
|
|
00:23:16.530 --> 00:23:19.050 |
|
combinations of trees is actually super |
|
|
|
00:23:19.050 --> 00:23:20.380 |
|
powerful and super useful. |
|
|
|
00:23:21.840 --> 00:23:23.770 |
|
And provides like the theory for why it |
|
|
|
00:23:23.770 --> 00:23:25.800 |
|
works and why you should be sampling |
|
|
|
00:23:25.800 --> 00:23:27.780 |
|
different subsets of features, and the |
|
|
|
00:23:27.780 --> 00:23:29.160 |
|
idea that you want the trees to be |
|
|
|
00:23:29.160 --> 00:23:30.000 |
|
decorrelated. |
|
|
|
00:23:31.000 --> 00:23:34.130 |
|
To make different predictions but also |
|
|
|
00:23:34.130 --> 00:23:34.800 |
|
be powerful. |
|
|
|
00:23:37.140 --> 00:23:37.710 |
|
Alright. |
|
|
|
00:23:37.710 --> 00:23:41.140 |
|
So the other strategy is boosting and |
|
|
|
00:23:41.140 --> 00:23:42.910 |
|
the first boosting paper I think was |
|
|
|
00:23:42.910 --> 00:23:44.630 |
|
Schapire in 1989.
|
|
|
00:23:45.500 --> 00:23:46.900 |
|
And that one was pretty simple.
|
|
|
00:23:47.680 --> 00:23:51.090 |
|
So the idea was that you first randomly |
|
|
|
00:23:51.090 --> 00:23:52.690 |
|
choose a set of samples. |
|
|
|
00:23:53.470 --> 00:23:55.280 |
|
Without replacement at this time. |
|
|
|
00:23:55.280 --> 00:23:57.970 |
|
So if you've got 1000, you randomly |
|
|
|
00:23:57.970 --> 00:24:00.133 |
|
choose, say, 800 of them without |
|
|
|
00:24:00.133 --> 00:24:00.539 |
|
replacement. |
|
|
|
00:24:01.440 --> 00:24:04.320 |
|
And you train a classifier on those |
|
|
|
00:24:04.320 --> 00:24:07.140 |
|
samples, that's the weak learner, C1. |
|
|
|
00:24:07.760 --> 00:24:10.170 |
|
So I've got the notation over here in |
|
|
|
00:24:10.170 --> 00:24:12.060 |
|
the literature you'll see things like |
|
|
|
00:24:12.060 --> 00:24:15.140 |
|
learner, hypothesis, classifier, they |
|
|
|
00:24:15.140 --> 00:24:16.130 |
|
all mean the same thing. |
|
|
|
00:24:16.130 --> 00:24:17.560 |
|
There's something that's some model |
|
|
|
00:24:17.560 --> 00:24:18.810 |
|
that's doing some prediction. |
|
|
|
00:24:19.960 --> 00:24:22.530 |
|
A weak learner is just a classifier |
|
|
|
00:24:22.530 --> 00:24:25.260 |
|
that can achieve less than 50% training |
|
|
|
00:24:25.260 --> 00:24:27.140 |
|
error over any training distribution. |
|
|
|
00:24:27.910 --> 00:24:30.120 |
|
So almost any classifier we would |
|
|
|
00:24:30.120 --> 00:24:32.217 |
|
consider is a weak learner. |
|
|
|
00:24:32.217 --> 00:24:34.000 |
|
As long as you can guarantee that it |
|
|
|
00:24:34.000 --> 00:24:35.970 |
|
will be able to get at least chance |
|
|
|
00:24:35.970 --> 00:24:38.030 |
|
performance in a two class problem, |
|
|
|
00:24:38.030 --> 00:24:39.309 |
|
then it's a weak learner. |
|
|
|
00:24:42.560 --> 00:24:45.286 |
|
A strong learner is a combination of |
|
|
|
00:24:45.286 --> 00:24:46.182 |
|
the weak learner. |
|
|
|
00:24:46.182 --> 00:24:47.852 |
|
It's a predictor that uses a |
|
|
|
00:24:47.852 --> 00:24:49.230 |
|
combination of the weak learners. |
|
|
|
00:24:49.230 --> 00:24:52.020 |
|
So first you train one classifier on a
|
|
|
00:24:52.020 --> 00:24:52.940 |
|
subset of the data. |
|
|
|
00:24:53.620 --> 00:24:55.936 |
|
Then you draw a new sample, and this |
|
|
|
00:24:55.936 --> 00:24:58.490 |
|
new sample is drawn so that half the |
|
|
|
00:24:58.490 --> 00:24:59.310 |
|
samples
|
|
|
00:25:00.010 --> 00:25:04.960 |
|
are misclassified by the first classifier
|
|
|
00:25:04.960 --> 00:25:06.640 |
|
and this can be drawn with replacement. |
|
|
|
00:25:07.460 --> 00:25:10.172 |
|
So half of your N2 samples were |
|
|
|
00:25:10.172 --> 00:25:12.310 |
|
misclassified by C1 and half of them |
|
|
|
00:25:12.310 --> 00:25:14.009 |
|
were not misclassified by C1. |
|
|
|
00:25:14.900 --> 00:25:17.230 |
|
And so now in this new sample of data. |
|
|
|
00:25:18.500 --> 00:25:21.220 |
|
Your classifier C1 had a 50/50 chance of
|
|
|
00:25:21.220 --> 00:25:22.910 |
|
getting it right by construction. |
|
|
|
00:25:22.980 --> 00:25:23.150 |
|
Right. |
|
|
|
00:25:23.880 --> 00:25:25.640 |
|
Then you train C2. |
|
|
|
00:25:27.060 --> 00:25:29.590 |
|
To try to like do well on this new |
|
|
|
00:25:29.590 --> 00:25:30.560 |
|
distribution. |
|
|
|
00:25:30.560 --> 00:25:32.590 |
|
So C2 has like a more difficult job, |
|
|
|
00:25:32.590 --> 00:25:33.970 |
|
it's going to focus on the things that |
|
|
|
00:25:33.970 --> 00:25:35.240 |
|
C1 found more difficult. |
|
|
|
00:25:37.140 --> 00:25:39.250 |
|
Then finally you take all the samples |
|
|
|
00:25:39.250 --> 00:25:41.830 |
|
that C1 and C2 disagree on, and you |
|
|
|
00:25:41.830 --> 00:25:43.590 |
|
train a third weak learner, a third
|
|
|
00:25:43.590 --> 00:25:45.740 |
|
classifier just on those examples. |
|
|
|
00:25:46.420 --> 00:25:49.470 |
|
And then at the end you take an average |
|
|
|
00:25:49.470 --> 00:25:50.500 |
|
of those votes. |
|
|
|
00:25:50.500 --> 00:25:52.621 |
|
So basically you have
|
|
|
00:25:52.621 --> 00:25:54.050 |
|
like one person who's making a |
|
|
|
00:25:54.050 --> 00:25:54.740 |
|
prediction. |
|
|
|
00:25:55.810 --> 00:25:57.946 |
|
You take half the predictions that |
|
|
|
00:25:57.946 --> 00:26:00.770 |
|
person made incorrect and half that |
|
|
|
00:26:00.770 --> 00:26:02.320 |
|
were correct, and then you get a second |
|
|
|
00:26:02.320 --> 00:26:04.192 |
|
person to make predictions just looking |
|
|
|
00:26:04.192 --> 00:26:05.690 |
|
at that at those samples. |
|
|
|
00:26:06.470 --> 00:26:08.130 |
|
Then you get a third person to be the |
|
|
|
00:26:08.130 --> 00:26:09.915 |
|
tiebreaker between the first two people |
|
|
|
00:26:09.915 --> 00:26:11.440 |
|
if they made if they had different |
|
|
|
00:26:11.440 --> 00:26:13.320 |
|
answers, and then you take a vote of |
|
|
|
00:26:13.320 --> 00:26:14.790 |
|
those three people as your final
|
|
|
00:26:14.790 --> 00:26:15.160 |
|
answer. |
|
|
|
00:26:16.780 --> 00:26:18.590 |
|
Where you can substitute classifier for |
|
|
|
00:26:18.590 --> 00:26:19.290 |
|
people. |
|
|
|
00:26:20.660 --> 00:26:22.100 |
|
So this is the boosting idea. |
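
A rough sketch of that three-learner scheme (the weak learner, the subset sizes, and the tie-breaking details are assumptions; it also assumes integer class labels and that C1 makes at least some mistakes):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boost_three(X, y, seed=0):
    rng = np.random.default_rng(seed)
    weak = lambda: DecisionTreeClassifier(max_depth=2)       # stand-in weak learner

    n = len(X)
    idx1 = rng.choice(n, size=int(0.8 * n), replace=False)   # first subset, without replacement
    c1 = weak().fit(X[idx1], y[idx1])

    # Second set: half misclassified by C1, half classified correctly (drawn with replacement).
    wrong = np.flatnonzero(c1.predict(X) != y)
    right = np.flatnonzero(c1.predict(X) == y)
    m = min(len(wrong), len(right))
    idx2 = np.concatenate([rng.choice(wrong, m), rng.choice(right, m)])
    c2 = weak().fit(X[idx2], y[idx2])

    # Third weak learner sees only the points where C1 and C2 disagree.
    disagree = np.flatnonzero(c1.predict(X) != c2.predict(X))
    c3 = weak().fit(X[disagree], y[disagree]) if len(disagree) else c1

    def predict(Xq):
        votes = np.stack([c.predict(Xq) for c in (c1, c2, c3)])
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)  # majority vote
    return predict
```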
|
|
|
00:26:23.100 --> 00:26:25.120 |
|
Now this actually became much more |
|
|
|
00:26:25.120 --> 00:26:27.000 |
|
popular when it was generalized a |
|
|
|
00:26:27.000 --> 00:26:28.480 |
|
little bit into this method called |
|
|
|
00:26:28.480 --> 00:26:31.450 |
|
Adaboost, which stands for adaptive |
|
|
|
00:26:31.450 --> 00:26:31.970 |
|
boosting. |
|
|
|
00:26:33.210 --> 00:26:33.650 |
|
So. |
|
|
|
00:26:34.390 --> 00:26:38.710 |
|
In adaptive boosting, instead of
|
|
|
00:26:38.710 --> 00:26:42.940 |
|
just directly sampling the data, you
|
|
|
00:26:42.940 --> 00:26:44.730 |
|
assign a weight to the data. |
|
|
|
00:26:44.730 --> 00:26:46.640 |
|
And I'll explain in the next slide, I |
|
|
|
00:26:46.640 --> 00:26:48.564 |
|
think more of what it means to like |
|
|
|
00:26:48.564 --> 00:26:49.860 |
|
weight the data when you're doing |
|
|
|
00:26:49.860 --> 00:26:50.850 |
|
parameter estimation. |
|
|
|
00:26:52.360 --> 00:26:55.200 |
|
But you assign new weights to
|
|
|
00:26:55.200 --> 00:26:57.357 |
|
the data so that under that |
|
|
|
00:26:57.357 --> 00:27:00.036 |
|
distribution the previous weak learner, |
|
|
|
00:27:00.036 --> 00:27:02.140 |
|
the previous classifier has chance |
|
|
|
00:27:02.140 --> 00:27:04.150 |
|
accuracy on that weighted distribution.
|
|
|
00:27:04.920 --> 00:27:07.775 |
|
So this was one way of achieving
|
|
|
00:27:07.775 --> 00:27:10.010 |
|
the same thing, where you just draw
|
|
|
00:27:10.010 --> 00:27:12.390 |
|
like whole samples so that the previous |
|
|
|
00:27:12.390 --> 00:27:14.150 |
|
weak learner had a 50/50 chance of
|
|
|
00:27:14.150 --> 00:27:16.000 |
|
getting those samples correct. |
|
|
|
00:27:16.830 --> 00:27:18.540 |
|
But you can instead assign a softer |
|
|
|
00:27:18.540 --> 00:27:20.510 |
|
weight to just say that some samples |
|
|
|
00:27:20.510 --> 00:27:23.160 |
|
matter more than others, so that on the |
|
|
|
00:27:23.160 --> 00:27:24.950 |
|
distribution the previous classifier |
|
|
|
00:27:24.950 --> 00:27:26.330 |
|
has a 50/50 chance.
|
|
|
00:27:27.900 --> 00:27:30.680 |
|
Then you train a new classifier on the |
|
|
|
00:27:30.680 --> 00:27:31.820 |
|
reweighted samples. |
|
|
|
00:27:32.440 --> 00:27:33.350 |
|
And then you iterate. |
|
|
|
00:27:33.350 --> 00:27:34.800 |
|
So then you reweigh them again and |
|
|
|
00:27:34.800 --> 00:27:36.340 |
|
train a new classifier and keep doing |
|
|
|
00:27:36.340 --> 00:27:36.850 |
|
that. |
|
|
|
00:27:36.850 --> 00:27:38.870 |
|
And then at the end you take a weighted |
|
|
|
00:27:38.870 --> 00:27:41.560 |
|
vote of all of the weak classifiers as |
|
|
|
00:27:41.560 --> 00:27:42.510 |
|
your final predictor. |
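
A hedged sketch of that reweight-and-combine loop, roughly the discrete AdaBoost update (the specific algorithm on the slides isn't reproduced here, so treat this as one standard variant; X and y are assumed to be numpy arrays with labels in {-1, +1}):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, T=50):
    n = len(X)
    w = np.full(n, 1.0 / n)                       # start with uniform weights on the data
    learners, alphas = [], []
    for _ in range(T):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)          # weak learner trained on the reweighted data
        pred = stump.predict(X)
        err = w[pred != y].sum()                  # weighted training error of this weak learner
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * pred)            # upweight mistakes, downweight correct samples
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(learners, alphas, X):
    # final prediction = weighted vote of all the weak classifiers
    return np.sign(sum(a * h.predict(X) for a, h in zip(alphas, learners)))
```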
|
|
|
00:27:43.430 --> 00:27:47.810 |
|
So each
|
|
|
00:27:47.810 --> 00:27:49.600 |
|
classifier is going to try to correct |
|
|
|
00:27:49.600 --> 00:27:50.760 |
|
the mistakes of the previous |
|
|
|
00:27:50.760 --> 00:27:53.090 |
|
classifiers, and then all of their |
|
|
|
00:27:53.090 --> 00:27:54.650 |
|
predictions are combined. |
|
|
|
00:27:55.920 --> 00:27:57.240 |
|
So I'm going to show a specific |
|
|
|
00:27:57.240 --> 00:27:59.650 |
|
algorithm in a moment, but first I want |
|
|
|
00:27:59.650 --> 00:28:00.520 |
|
to clarify. |
|
|
|
00:28:01.450 --> 00:28:03.610 |
|
what it means to do, like, a
|
|
|
00:28:03.610 --> 00:28:05.880 |
|
weighted estimation or weighting your |
|
|
|
00:28:05.880 --> 00:28:06.720 |
|
training samples. |
|
|
|
00:28:07.560 --> 00:28:09.600 |
|
So essentially it just means that some |
|
|
|
00:28:09.600 --> 00:28:11.795 |
|
samples count more than others towards |
|
|
|
00:28:11.795 --> 00:28:13.780 |
|
your parameter estimation or your |
|
|
|
00:28:13.780 --> 00:28:14.660 |
|
learning objective. |
|
|
|
00:28:15.410 --> 00:28:17.500 |
|
So let's say that we're trying to build |
|
|
|
00:28:17.500 --> 00:28:19.880 |
|
a naive Bayes classifier, and so we |
|
|
|
00:28:19.880 --> 00:28:21.870 |
|
need to estimate the probability that |
|
|
|
00:28:21.870 --> 00:28:24.745 |
|
some feature is equal to 0 given that |
|
|
|
00:28:24.745 --> 00:28:26.130 |
|
the label is equal to 0. |
|
|
|
00:28:26.130 --> 00:28:28.200 |
|
That's like one of the parameters of |
|
|
|
00:28:28.200 --> 00:28:28.940 |
|
our model. |
|
|
|
00:28:29.960 --> 00:28:32.250 |
|
If we have an unweighted distribution, |
|
|
|
00:28:32.250 --> 00:28:35.940 |
|
then that would be a count of how many |
|
|
|
00:28:35.940 --> 00:28:39.290 |
|
times the feature is equal to 0 and the |
|
|
|
00:28:39.290 --> 00:28:40.440 |
|
label is equal to 0. |
|
|
|
00:28:41.070 --> 00:28:43.380 |
|
Divided by a count of how many times |
|
|
|
00:28:43.380 --> 00:28:45.290 |
|
the label is equal to 0, right? |
|
|
|
00:28:45.290 --> 00:28:47.489 |
|
So that's the probability of X and Y
|
|
|
00:28:47.490 --> 00:28:49.112 |
|
essentially divided by probability of |
|
|
|
00:28:49.112 --> 00:28:49.380 |
|
Y. |
|
|
|
00:28:51.950 --> 00:28:53.940 |
|
times N on the numerator and
|
|
|
00:28:53.940 --> 00:28:54.720 |
|
denominator. |
|
|
|
00:28:56.520 --> 00:28:58.780 |
|
Then if I want to take a weighted |
|
|
|
00:28:58.780 --> 00:29:01.430 |
|
sample, if I wanted an estimate of a |
|
|
|
00:29:01.430 --> 00:29:03.490 |
|
weighted distribution, I have a weight |
|
|
|
00:29:03.490 --> 00:29:04.840 |
|
assigned to each of these training |
|
|
|
00:29:04.840 --> 00:29:07.570 |
|
samples, and that's often done so that |
|
|
|
00:29:07.570 --> 00:29:11.140 |
|
the weights sum up to one, but it |
|
|
|
00:29:11.140 --> 00:29:12.619 |
|
doesn't have to be, but they have to be |
|
|
|
00:29:12.620 --> 00:29:13.240 |
|
non negative. |
|
|
|
00:29:15.290 --> 00:29:16.950 |
|
OK, so I have a weight for each of these
|
|
|
00:29:16.950 --> 00:29:18.973 |
|
samples that says how important it is. |
|
|
|
00:29:18.973 --> 00:29:20.940 |
|
So when I count the number of times |
|
|
|
00:29:20.940 --> 00:29:25.320 |
|
that x_n = 0 and y_n = 0, then I am
|
|
|
00:29:25.320 --> 00:29:27.200 |
|
weighting those counts by w_n.
|
|
|
00:29:27.200 --> 00:29:29.140 |
|
So it's the sum of the weights
|
|
|
00:29:29.140 --> 00:29:31.185 |
|
for the samples in which this condition |
|
|
|
00:29:31.185 --> 00:29:33.698 |
|
is true divided by the sum of the |
|
|
|
00:29:33.698 --> 00:29:35.886 |
|
weights for which y_n is equal to 0.
|
|
|
00:29:35.886 --> 00:29:37.649 |
|
So that's my weighted estimate of that |
|
|
|
00:29:37.650 --> 00:29:38.260 |
|
statistic. |
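
That weighted estimate, as a small helper (hypothetical function name; numpy assumed):

```python
import numpy as np

def weighted_p_x0_given_y0(w, x, y):
    """Weighted estimate of P(X = 0 | Y = 0): sum of weights where both hold,
    divided by the sum of weights where y = 0."""
    w, x, y = np.asarray(w), np.asarray(x), np.asarray(y)
    return w[(x == 0) & (y == 0)].sum() / w[y == 0].sum()
```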
|
|
|
00:29:40.910 --> 00:29:41.470 |
|
Right. |
|
|
|
00:29:41.470 --> 00:29:42.960 |
|
So it's your turn. |
|
|
|
00:29:44.180 --> 00:29:46.470 |
|
Let's say that we have this table here. |
|
|
|
00:29:46.470 --> 00:29:48.810 |
|
So we've got weights on the left side, |
|
|
|
00:29:48.810 --> 00:29:51.850 |
|
X in the middle, Y on the right, and
|
|
|
00:29:51.850 --> 00:29:53.735 |
|
I'm trying to estimate probability of X |
|
|
|
00:29:53.735 --> 00:29:55.440 |
|
= 0 given y = 0. |
|
|
|
00:29:56.140 --> 00:29:57.950 |
|
So I'll give you a moment to think |
|
|
|
00:29:57.950 --> 00:29:58.690 |
|
about it. |
|
|
|
00:29:58.690 --> 00:30:00.590 |
|
First, what is the unweighted |
|
|
|
00:30:00.590 --> 00:30:03.040 |
|
distribution and then what is the |
|
|
|
00:30:03.040 --> 00:30:04.380 |
|
weighted distribution? |
|
|
|
00:30:12.540 --> 00:30:13.100 |
|
Right. |
|
|
|
00:30:20.290 --> 00:30:21.170 |
|
Me too. |
|
|
|
00:30:21.170 --> 00:30:23.410 |
|
My daughter woke me up at 4:00 AM and I |
|
|
|
00:30:23.410 --> 00:30:24.700 |
|
couldn't fall back asleep. |
|
|
|
00:30:39.450 --> 00:30:41.990 |
|
I will go through these. These are the
|
|
|
00:30:41.990 --> 00:30:43.920 |
|
examples, so I'll go through it. |
|
|
|
00:30:45.400 --> 00:30:45.930 |
|
Alright. |
|
|
|
00:30:48.690 --> 00:30:50.650 |
|
I'll step through it in a
|
|
|
00:30:50.650 --> 00:30:50.930 |
|
moment. |
|
|
|
00:30:52.270 --> 00:30:53.404 |
|
Alright, so let's do the. |
|
|
|
00:30:53.404 --> 00:30:55.090 |
|
Let's do the unweighted first. |
|
|
|
00:30:56.800 --> 00:31:00.940 |
|
So how many times does X equal 0 and y |
|
|
|
00:31:00.940 --> 00:31:01.480 |
|
= 0? |
|
|
|
00:31:03.440 --> 00:31:05.030 |
|
Right, three. |
|
|
|
00:31:05.030 --> 00:31:06.350 |
|
OK, so I'm going to have three on the |
|
|
|
00:31:06.350 --> 00:31:09.665 |
|
numerator and how many times does y = |
|
|
|
00:31:09.665 --> 00:31:10.120 |
|
0? |
|
|
|
00:31:12.070 --> 00:31:13.000 |
|
OK, right. |
|
|
|
00:31:13.000 --> 00:31:15.710 |
|
So unweighted is going to be 3 out of |
|
|
|
00:31:15.710 --> 00:31:16.500 |
|
five, right? |
|
|
|
00:31:18.560 --> 00:31:20.470 |
|
Now let's do the weighted. |
|
|
|
00:31:20.470 --> 00:31:22.990 |
|
So what's the sum of the weights where |
|
|
|
00:31:22.990 --> 00:31:25.309 |
|
X = 0 and y = 0? |
|
|
|
00:31:31.640 --> 00:31:35.026 |
|
So there's three rows where X = 0 and y |
|
|
|
00:31:35.026 --> 00:31:35.619 |
|
= 0. |
|
|
|
00:31:36.360 --> 00:31:36.830 |
|
Right. |
|
|
|
00:31:39.410 --> 00:31:40.990 |
|
Right, yeah, three. |
|
|
|
00:31:40.990 --> 00:31:42.742 |
|
So there's just these three rows, and |
|
|
|
00:31:42.742 --> 00:31:44.230 |
|
there's a .1 for each of them. |
|
|
|
00:31:44.940 --> 00:31:46.030 |
|
So that's .3. |
|
|
|
00:31:46.800 --> 00:31:49.830 |
|
And what is the total weight for y = 0? |
|
|
|
00:31:51.710 --> 00:31:52.960 |
|
Right .7. |
|
|
|
00:31:54.060 --> 00:31:55.960 |
|
So the weighted distribution. |
|
|
|
00:31:55.960 --> 00:31:57.456 |
|
My estimate on the weighted |
|
|
|
00:31:57.456 --> 00:31:58.920 |
|
distribution is 3 out of seven. |
|
|
|
00:32:00.000 --> 00:32:01.120 |
|
So that's how it works. |
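NOTE
A small Python sketch of the weighted versus unweighted estimate. The table below is made up to reproduce the numbers worked out here (the real table is on the slide): three rows with x=0, y=0 and weight 0.1 each, and a total weight of 0.7 for y=0.

    # columns: (weight, x, y); hypothetical data consistent with the lecture numbers
    data = [(0.1, 0, 0), (0.1, 0, 0), (0.1, 0, 0),
            (0.2, 1, 0), (0.2, 1, 0), (0.3, 0, 1)]

    unweighted = sum(1 for w, x, y in data if x == 0 and y == 0) / \
                 sum(1 for w, x, y in data if y == 0)
    weighted   = sum(w for w, x, y in data if x == 0 and y == 0) / \
                 sum(w for w, x, y in data if y == 0)
    print(unweighted)  # 3/5 = 0.6
    print(weighted)    # 0.3/0.7 ~= 0.43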
|
|
|
00:32:01.830 --> 00:32:04.770 |
|
And a lot of times we are |
|
|
|
00:32:04.770 --> 00:32:06.260 |
|
just estimating counts like this. |
|
|
|
00:32:06.260 --> 00:32:08.500 |
|
If we were training a shorter tree for |
|
|
|
00:32:08.500 --> 00:32:11.148 |
|
example, then we would be estimating |
|
|
|
00:32:11.148 --> 00:32:13.330 |
|
the probability of each class within |
|
|
|
00:32:13.330 --> 00:32:14.920 |
|
the leaf node, which would just be by |
|
|
|
00:32:14.920 --> 00:32:15.380 |
|
counting. |
|
|
|
00:32:17.040 --> 00:32:18.980 |
|
Other times, if you're doing like |
|
|
|
00:32:18.980 --> 00:32:21.515 |
|
logistic regression or had some other |
|
|
|
00:32:21.515 --> 00:32:24.000 |
|
kind of training or neural network, |
|
|
|
00:32:24.000 --> 00:32:26.660 |
|
then usually these weights would show |
|
|
|
00:32:26.660 --> 00:32:28.410 |
|
up as some kind of like weight on the |
|
|
|
00:32:28.410 --> 00:32:29.140 |
|
loss. |
|
|
|
00:32:29.140 --> 00:32:31.290 |
|
So we're going to talk about a |
|
|
|
00:32:31.290 --> 00:32:32.740 |
|
stochastic gradient descent. |
|
|
|
00:32:33.750 --> 00:32:35.110 |
|
Starting in the next class. |
|
|
|
00:32:35.720 --> 00:32:37.725 |
|
And a higher weight would just be like |
|
|
|
00:32:37.725 --> 00:32:39.440 |
|
a direct multiple on how much you |
|
|
|
00:32:39.440 --> 00:32:42.230 |
|
adjust your model parameters. |
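NOTE
A minimal sketch of what "weight on the loss" means; the numbers here are placeholders. Many scikit-learn estimators also accept a sample_weight argument in fit() for the same purpose.

    import numpy as np

    losses  = np.array([0.2, 1.5, 0.7])   # hypothetical per-sample losses (e.g. log loss)
    weights = np.array([0.1, 0.6, 0.3])   # per-sample weights, here summing to one

    # A weighted objective multiplies each sample's loss (and therefore its
    # gradient update) by its weight before averaging.
    weighted_loss = np.sum(weights * losses)
    plain_loss    = np.mean(losses)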
|
|
|
00:32:45.810 --> 00:32:47.920 |
|
So here's a specific algorithm called |
|
|
|
00:32:47.920 --> 00:32:49.040 |
|
AdaBoost. |
|
|
|
00:32:49.440 --> 00:32:52.289 |
|
Real AdaBoost, I mean, there's like a |
|
|
|
00:32:52.290 --> 00:32:53.816 |
|
ton of boosting algorithms. |
|
|
|
00:32:53.816 --> 00:32:56.037 |
|
There's like Discrete AdaBoost, Real |
|
|
|
00:32:56.037 --> 00:32:57.695 |
|
AdaBoost, LogitBoost. |
|
|
|
00:32:57.695 --> 00:32:59.186 |
|
I don't know. |
|
|
|
00:32:59.186 --> 00:33:01.880 |
|
There's like literally like probably 50 |
|
|
|
00:33:01.880 --> 00:33:02.260 |
|
of them. |
|
|
|
00:33:03.670 --> 00:33:05.660 |
|
But here's one of the mainstays. |
|
|
|
00:33:05.660 --> 00:33:08.930 |
|
So you start with the weights being |
|
|
|
00:33:08.930 --> 00:33:09.560 |
|
uniform. |
|
|
|
00:33:09.560 --> 00:33:11.700 |
|
They're one over N with N samples. |
|
|
|
00:33:11.700 --> 00:33:13.240 |
|
Then you're going to train M |
|
|
|
00:33:13.240 --> 00:33:14.160 |
|
classifiers. |
|
|
|
00:33:14.910 --> 00:33:17.605 |
|
You fit the classifier to obtain a |
|
|
|
00:33:17.605 --> 00:33:19.690 |
|
probability estimate, the probability |
|
|
|
00:33:19.690 --> 00:33:22.630 |
|
of the label being one based on the |
|
|
|
00:33:22.630 --> 00:33:23.620 |
|
weighted distribution. |
|
|
|
00:33:24.500 --> 00:33:26.130 |
|
So again, if you're doing trees, this |
|
|
|
00:33:26.130 --> 00:33:28.460 |
|
would be the fraction of samples in |
|
|
|
00:33:28.460 --> 00:33:30.040 |
|
each leaf node of the trees where the |
|
|
|
00:33:30.040 --> 00:33:31.000 |
|
label is equal to 1. |
|
|
|
00:33:31.850 --> 00:33:33.530 |
|
And where you'd be using a weighted |
|
|
|
00:33:33.530 --> 00:33:35.530 |
|
sample to compute that fraction, just |
|
|
|
00:33:35.530 --> 00:33:36.580 |
|
like we did in the last slide. |
|
|
|
00:33:37.750 --> 00:33:39.860 |
|
Then the prediction of this, the score |
|
|
|
00:33:39.860 --> 00:33:43.369 |
|
essentially for the label one is this |
|
|
|
00:33:43.370 --> 00:33:44.110 |
|
logit. |
|
|
|
00:33:44.110 --> 00:33:47.960 |
|
It's the log probability of the label |
|
|
|
00:33:47.960 --> 00:33:50.240 |
|
being one over the probability not |
|
|
|
00:33:50.240 --> 00:33:51.943 |
|
being one, which is 1 minus the |
|
|
|
00:33:51.943 --> 00:33:52.892 |
|
probability of it being one. |
|
|
|
00:33:52.892 --> 00:33:54.470 |
|
This is for a binary classifier. |
|
|
|
00:33:55.650 --> 00:33:57.570 |
|
That's 1/2 of that logit value. |
|
|
|
00:33:58.780 --> 00:34:03.040 |
|
And then I reweight the samples, and I |
|
|
|
00:34:03.040 --> 00:34:05.330 |
|
take the previous weight of each sample |
|
|
|
00:34:05.330 --> 00:34:07.240 |
|
and I multiply it by e to the |
|
|
|
00:34:07.240 --> 00:34:09.440 |
|
negative y_i f_m(x_i). |
|
|
|
00:34:09.440 --> 00:34:11.047 |
|
So this again is a score. |
|
|
|
00:34:11.047 --> 00:34:13.370 |
|
So this score defined this way, if it's |
|
|
|
00:34:13.370 --> 00:34:15.260 |
|
greater than zero that means that. |
|
|
|
00:34:16.090 --> 00:34:21.220 |
|
If y_i f_m is greater than zero, here y_i |
|
|
|
00:34:21.220 --> 00:34:24.144 |
|
is either -1 or 1, so -1 is the |
|
|
|
00:34:24.144 --> 00:34:25.900 |
|
negative label, one is the positive |
|
|
|
00:34:25.900 --> 00:34:26.200 |
|
label. |
|
|
|
00:34:26.910 --> 00:34:28.484 |
|
If this is greater than zero, that |
|
|
|
00:34:28.484 --> 00:34:30.449 |
|
means that I'm correct, and if it's |
|
|
|
00:34:30.450 --> 00:34:31.969 |
|
less than zero it means that I'm |
|
|
|
00:34:31.970 --> 00:34:32.960 |
|
incorrect. |
|
|
|
00:34:32.960 --> 00:34:34.846 |
|
So if I predict a score of 1, it means |
|
|
|
00:34:34.846 --> 00:34:36.540 |
|
that I think it's positive. |
|
|
|
00:34:36.540 --> 00:34:40.597 |
|
But if the label is -1, then y_i f_m is |
|
|
|
00:34:40.597 --> 00:34:41.850 |
|
-1, so. |
|
|
|
00:34:44.620 --> 00:34:48.350 |
|
So this e to the negative y_i f_m, if I'm correct |
|
|
|
00:34:48.350 --> 00:34:49.990 |
|
this is going to be less than one |
|
|
|
00:34:49.990 --> 00:34:53.450 |
|
because this is going to be e to the |
|
|
|
00:34:53.450 --> 00:34:54.860 |
|
negative of some value. |
|
|
|
00:34:55.970 --> 00:34:57.700 |
|
And if I'm incorrect, this is going to |
|
|
|
00:34:57.700 --> 00:34:58.600 |
|
be greater than one. |
|
|
|
00:34:59.270 --> 00:35:00.993 |
|
So if I'm correct, the weight is going |
|
|
|
00:35:00.993 --> 00:35:03.141 |
|
to go down, and if I'm incorrect the |
|
|
|
00:35:03.141 --> 00:35:04.070 |
|
weight is going to go up. |
|
|
|
00:35:04.830 --> 00:35:06.650 |
|
And if I'm like confidently correct, |
|
|
|
00:35:06.650 --> 00:35:07.908 |
|
then the weight's going to go down a |
|
|
|
00:35:07.908 --> 00:35:08.156 |
|
lot. |
|
|
|
00:35:08.156 --> 00:35:09.835 |
|
And if I'm confidently incorrect then |
|
|
|
00:35:09.835 --> 00:35:10.960 |
|
the weight is going to go up a lot. |
|
|
|
00:35:12.410 --> 00:35:13.480 |
|
That's kind of intuitive. |
|
|
|
00:35:14.120 --> 00:35:15.470 |
|
And then I just reweight. |
|
|
|
00:35:15.470 --> 00:35:17.630 |
|
I just sum my. |
|
|
|
00:35:18.910 --> 00:35:19.480 |
|
My weight. |
|
|
|
00:35:19.480 --> 00:35:22.050 |
|
I renormalize my weights, so I make it |
|
|
|
00:35:22.050 --> 00:35:23.460 |
|
so that the weights sum to one by |
|
|
|
00:35:23.460 --> 00:35:24.479 |
|
dividing by the sum. |
|
|
|
00:35:25.980 --> 00:35:27.630 |
|
So then I just iterate: I train a |
|
|
|
00:35:27.630 --> 00:35:29.235 |
|
new classifier on the weighted distribution, |
|
|
|
00:35:29.235 --> 00:35:31.430 |
|
recompute this, recompute the weights, |
|
|
|
00:35:31.430 --> 00:35:33.300 |
|
do that say 20 times. |
|
|
|
00:35:33.910 --> 00:35:36.642 |
|
And then at the end my classifier is. |
|
|
|
00:35:36.642 --> 00:35:38.607 |
|
My total score for the classifier is |
|
|
|
00:35:38.607 --> 00:35:40.430 |
|
the sum of the individual classifier |
|
|
|
00:35:40.430 --> 00:35:40.840 |
|
scores. |
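NOTE
A sketch of the boosting loop just described (Real AdaBoost style), using small scikit-learn trees as the weak learner. The clipping constant and tree depth are my choices, not the slide's, and labels are assumed to be in {-1, +1}.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def fit_boosted_trees(X, y, n_rounds=20, max_depth=2):
        y = np.asarray(y)                              # assumed to be -1 / +1
        n = len(y)
        w = np.full(n, 1.0 / n)                        # start with uniform weights
        trees = []
        for _ in range(n_rounds):
            tree = DecisionTreeClassifier(max_depth=max_depth)
            tree.fit(X, y, sample_weight=w)            # fit on the weighted distribution
            p = tree.predict_proba(X)[:, list(tree.classes_).index(1)]
            p = np.clip(p, 1e-6, 1 - 1e-6)             # avoid log(0)
            f = 0.5 * np.log(p / (1 - p))              # half the logit
            w = w * np.exp(-y * f)                     # correct -> down, incorrect -> up
            w = w / w.sum()                            # renormalize to sum to one
            trees.append(tree)
        return trees

    def boosted_score(trees, X):
        # total score = sum of the per-tree half-logits; its sign gives the label
        score = np.zeros(len(X))
        for tree in trees:
            p = np.clip(tree.predict_proba(X)[:, list(tree.classes_).index(1)],
                        1e-6, 1 - 1e-6)
            score += 0.5 * np.log(p / (1 - p))
        return score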
|
|
|
00:35:42.130 --> 00:35:43.300 |
|
So it's not too complicated. |
|
|
|
00:35:44.220 --> 00:35:47.163 |
|
The theory is somewhat complicated, so |
|
|
|
00:35:47.163 --> 00:35:49.310 |
|
the derivation of why this is the right |
|
|
|
00:35:49.310 --> 00:35:51.240 |
|
answer and what it's minimizing, and |
|
|
|
00:35:51.240 --> 00:35:52.500 |
|
that it's like doing logistic |
|
|
|
00:35:52.500 --> 00:35:54.840 |
|
regression, et cetera, that's all a |
|
|
|
00:35:54.840 --> 00:35:56.960 |
|
little bit more complicated, but it's |
|
|
|
00:35:56.960 --> 00:35:58.250 |
|
well worth reading if you're |
|
|
|
00:35:58.250 --> 00:35:58.660 |
|
interested. |
|
|
|
00:35:58.660 --> 00:36:00.046 |
|
So there's a link here. |
|
|
|
00:36:00.046 --> 00:36:02.085 |
|
This is my favorite boosting paper, |
|
|
|
00:36:02.085 --> 00:36:03.780 |
|
the additive logistic regression paper. |
|
|
|
00:36:04.510 --> 00:36:07.660 |
|
But this paper is also probably a good |
|
|
|
00:36:07.660 --> 00:36:08.080 |
|
one to read. |
|
|
|
00:36:08.080 --> 00:36:11.440 |
|
First, the intro to boosting by Freund |
|
|
|
00:36:11.440 --> 00:36:12.040 |
|
and Schapire. |
|
|
|
00:36:16.960 --> 00:36:18.910 |
|
So we can use this with trees. |
|
|
|
00:36:18.910 --> 00:36:21.420 |
|
We initialize the weights to be |
|
|
|
00:36:21.420 --> 00:36:22.190 |
|
uniform. |
|
|
|
00:36:22.190 --> 00:36:24.250 |
|
Then for each tree, usually you do like |
|
|
|
00:36:24.250 --> 00:36:24.840 |
|
maybe 20. |
|
|
|
00:36:25.520 --> 00:36:27.740 |
|
You train a small tree this time. |
|
|
|
00:36:28.880 --> 00:36:31.370 |
|
So you want to train a small tree, |
|
|
|
00:36:31.370 --> 00:36:33.550 |
|
because the idea of boosting is that |
|
|
|
00:36:33.550 --> 00:36:36.020 |
|
you're going to reduce the bias by |
|
|
|
00:36:36.020 --> 00:36:38.270 |
|
having each subsequent classifier fix |
|
|
|
00:36:38.270 --> 00:36:39.810 |
|
the mistakes of the previous ones. |
|
|
|
00:36:40.880 --> 00:36:44.580 |
|
So in random forests you have high |
|
|
|
00:36:44.580 --> 00:36:46.730 |
|
variance, low bias classifiers that |
|
|
|
00:36:46.730 --> 00:36:49.650 |
|
you average to get low bias, low |
|
|
|
00:36:49.650 --> 00:36:50.490 |
|
variance classifiers. |
|
|
|
00:36:51.170 --> 00:36:53.560 |
|
In boosting you have low variance, high |
|
|
|
00:36:53.560 --> 00:36:56.400 |
|
bias classifiers that you incrementally |
|
|
|
00:36:56.400 --> 00:36:58.730 |
|
train to end up with a low bias, low |
|
|
|
00:36:58.730 --> 00:36:59.580 |
|
variance classifier. |
|
|
|
00:37:01.600 --> 00:37:04.470 |
|
So you grow the tree to a depth, typically |
|
|
|
00:37:04.470 --> 00:37:05.620 |
|
two to four. |
|
|
|
00:37:05.620 --> 00:37:07.960 |
|
So often it might sound silly, but |
|
|
|
00:37:07.960 --> 00:37:09.690 |
|
often you only choose one feature and |
|
|
|
00:37:09.690 --> 00:37:11.096 |
|
split based on that, and you just have |
|
|
|
00:37:11.096 --> 00:37:13.020 |
|
like the shortest tree possible, a tree |
|
|
|
00:37:13.020 --> 00:37:16.050 |
|
with two leaf nodes, and you train 200 |
|
|
|
00:37:16.050 --> 00:37:16.910 |
|
of these trees. |
|
|
|
00:37:17.600 --> 00:37:19.975 |
|
That actually works surprisingly well. |
|
|
|
00:37:19.975 --> 00:37:22.810 |
|
It works quite well, but you might |
|
|
|
00:37:22.810 --> 00:37:23.840 |
|
train deeper trees. |
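NOTE
A runnable sketch of "200 trees with two leaf nodes" using scikit-learn; this uses gradient boosting rather than the AdaBoost variant sketched earlier, but it is the same many-shallow-trees idea, and the dataset here is synthetic.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

    # 200 depth-1 trees ("stumps")
    clf = GradientBoostingClassifier(n_estimators=200, max_depth=1).fit(Xtr, ytr)
    print(clf.score(Xva, yva))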
|
|
|
00:37:25.890 --> 00:37:28.880 |
|
So I've used this method for predicting |
|
|
|
00:37:28.880 --> 00:37:31.400 |
|
like whether pixels belong to the |
|
|
|
00:37:31.400 --> 00:37:34.300 |
|
ground or sky or et cetera, and I had |
|
|
|
00:37:34.300 --> 00:37:37.945 |
|
like trees that were of depth three, and |
|
|
|
00:37:37.945 --> 00:37:39.180 |
|
I trained 20 trees. |
|
|
|
00:37:40.810 --> 00:37:43.480 |
|
You estimate a logit |
|
|
|
00:37:43.480 --> 00:37:44.810 |
|
prediction at each leaf node. |
|
|
|
00:37:44.810 --> 00:37:46.840 |
|
So just based on the count of how many |
|
|
|
00:37:46.840 --> 00:37:48.860 |
|
times each class appears in each leaf |
|
|
|
00:37:48.860 --> 00:37:50.780 |
|
node, reweight the samples and repeat. |
|
|
|
00:37:52.060 --> 00:37:53.780 |
|
And then at the end you have the |
|
|
|
00:37:53.780 --> 00:37:55.290 |
|
prediction is the sum of the logit |
|
|
|
00:37:55.290 --> 00:37:56.610 |
|
predictions from all the trees. |
|
|
|
00:37:59.890 --> 00:38:02.470 |
|
So this is a. |
|
|
|
00:38:03.810 --> 00:38:07.490 |
|
There are a |
|
|
|
00:38:07.490 --> 00:38:09.590 |
|
couple of studies by Caruana |
|
|
|
00:38:09.590 --> 00:38:11.110 |
|
comparing different machine learning |
|
|
|
00:38:11.110 --> 00:38:11.600 |
|
methods. |
|
|
|
00:38:12.320 --> 00:38:14.720 |
|
On a bunch of different datasets, so |
|
|
|
00:38:14.720 --> 00:38:16.660 |
|
this one is from 2006. |
|
|
|
00:38:17.480 --> 00:38:20.300 |
|
So these are all different data sets. |
|
|
|
00:38:20.300 --> 00:38:21.750 |
|
It's not too important what they are. |
|
|
|
00:38:22.950 --> 00:38:24.610 |
|
In this case, they're kind of smaller |
|
|
|
00:38:24.610 --> 00:38:26.470 |
|
data sets, not too many |
|
|
|
00:38:26.470 --> 00:38:27.890 |
|
samples, not too many features. |
|
|
|
00:38:28.620 --> 00:38:31.520 |
|
And the scores are normalized so that |
|
|
|
00:38:31.520 --> 00:38:34.040 |
|
one is like the best achievable score |
|
|
|
00:38:34.040 --> 00:38:37.130 |
|
and I guess zero would be like chance. |
|
|
|
00:38:37.130 --> 00:38:39.940 |
|
So that way you can average the |
|
|
|
00:38:39.940 --> 00:38:41.890 |
|
performance across different data sets |
|
|
|
00:38:41.890 --> 00:38:43.300 |
|
in a more meaningful way than if you |
|
|
|
00:38:43.300 --> 00:38:44.660 |
|
were just averaging their errors. |
|
|
|
00:38:46.020 --> 00:38:47.760 |
|
So here this is like a normalized |
|
|
|
00:38:47.760 --> 00:38:50.200 |
|
accuracy, so higher is better. |
|
|
|
00:38:51.260 --> 00:38:54.700 |
|
And then this BTDT is boosted decision |
|
|
|
00:38:54.700 --> 00:38:56.760 |
|
tree, RF is random forest, and NN |
|
|
|
00:38:56.760 --> 00:38:59.020 |
|
is neural network, and SVM, which we'll |
|
|
|
00:38:59.020 --> 00:39:01.420 |
|
talk about Thursday, naive Bayes, |
|
|
|
00:39:01.420 --> 00:39:02.630 |
|
logistic regression. |
|
|
|
00:39:02.630 --> 00:39:05.580 |
|
So Naive Bayes is like pulling up the |
|
|
|
00:39:05.580 --> 00:39:06.980 |
|
rear, not doing so well. |
|
|
|
00:39:06.980 --> 00:39:08.055 |
|
It's at the very bottom. |
|
|
|
00:39:08.055 --> 00:39:10.236 |
|
Then logistic regression is just above |
|
|
|
00:39:10.236 --> 00:39:10.588 |
|
that. |
|
|
|
00:39:10.588 --> 00:39:12.370 |
|
Decision trees are just above that. |
|
|
|
00:39:13.160 --> 00:39:14.890 |
|
And then boosted stumps. |
|
|
|
00:39:14.890 --> 00:39:17.130 |
|
If you train a very shallow tree that |
|
|
|
00:39:17.130 --> 00:39:19.540 |
|
only has one feature in each tree, |
|
|
|
00:39:19.540 --> 00:39:20.810 |
|
that's the next best. |
|
|
|
00:39:20.810 --> 00:39:22.010 |
|
It's actually pretty similar to |
|
|
|
00:39:22.010 --> 00:39:22.930 |
|
logistic regression. |
|
|
|
00:39:24.050 --> 00:39:29.110 |
|
Then KNN, neural networks, SVMs. |
|
|
|
00:39:29.760 --> 00:39:32.860 |
|
And then the top is boosted decision |
|
|
|
00:39:32.860 --> 00:39:33.940 |
|
trees and random forests. |
|
|
|
00:39:34.680 --> 00:39:36.440 |
|
And there's different versions of this, |
|
|
|
00:39:36.440 --> 00:39:37.903 |
|
which is just like different ways of |
|
|
|
00:39:37.903 --> 00:39:39.130 |
|
trying to calibrate your final |
|
|
|
00:39:39.130 --> 00:39:40.550 |
|
prediction, which means trying to make |
|
|
|
00:39:40.550 --> 00:39:41.890 |
|
it a better fit of the probability. |
|
|
|
00:39:41.890 --> 00:39:44.055 |
|
But that's not our topic for now, so |
|
|
|
00:39:44.055 --> 00:39:45.290 |
|
that's kind of ignorable. |
|
|
|
00:39:46.110 --> 00:39:48.350 |
|
The main conclusion is that in |
|
|
|
00:39:48.350 --> 00:39:50.690 |
|
this competition among classifiers. |
|
|
|
00:39:51.340 --> 00:39:54.690 |
|
Boosted decision trees is #1 and |
|
|
|
00:39:54.690 --> 00:39:56.950 |
|
following very close behind is random |
|
|
|
00:39:56.950 --> 00:39:58.810 |
|
forests with almost the same average |
|
|
|
00:39:58.810 --> 00:39:59.180 |
|
score. |
|
|
|
00:40:00.070 --> 00:40:01.890 |
|
So these two ensemble methods of trees |
|
|
|
00:40:01.890 --> 00:40:03.070 |
|
are the two best methods. |
|
|
|
00:40:04.040 --> 00:40:05.030 |
|
According to the study. |
|
|
|
00:40:06.160 --> 00:40:07.990 |
|
Then in 2008 they did another |
|
|
|
00:40:07.990 --> 00:40:11.110 |
|
comparison on high dimensional data. |
|
|
|
00:40:12.360 --> 00:40:14.570 |
|
So here they had the features range |
|
|
|
00:40:14.570 --> 00:40:17.900 |
|
from around 700 features to 685,000 |
|
|
|
00:40:17.900 --> 00:40:18.870 |
|
features. |
|
|
|
00:40:19.750 --> 00:40:21.540 |
|
This is like IMDb where you're trying |
|
|
|
00:40:21.540 --> 00:40:25.490 |
|
to predict the rating of movies. |
|
|
|
00:40:25.490 --> 00:40:28.750 |
|
I think spam classification and other |
|
|
|
00:40:28.750 --> 00:40:29.210 |
|
problems. |
|
|
|
00:40:30.100 --> 00:40:32.340 |
|
And then again, they're comparing the |
|
|
|
00:40:32.340 --> 00:40:33.460 |
|
different approaches. |
|
|
|
00:40:33.460 --> 00:40:36.675 |
|
So again, boosted decision trees gets |
|
|
|
00:40:36.675 --> 00:40:38.400 |
|
the best score on average. |
|
|
|
00:40:38.400 --> 00:40:41.030 |
|
I don't know exactly how the weighting |
|
|
|
00:40:41.030 --> 00:40:42.480 |
|
is done here, they can be greater than |
|
|
|
00:40:42.480 --> 00:40:42.580 |
|
one. |
|
|
|
00:40:43.270 --> 00:40:45.410 |
|
It's probably |
|
|
|
00:40:45.410 --> 00:40:46.963 |
|
compared to some baseline. Boosted |
|
|
|
00:40:46.963 --> 00:40:48.610 |
|
decision trees get the best score on |
|
|
|
00:40:48.610 --> 00:40:49.340 |
|
average. |
|
|
|
00:40:49.340 --> 00:40:51.650 |
|
And random forests is number 2. |
|
|
|
00:40:51.650 --> 00:40:53.660 |
|
Again, it's naive Bayes on the bottom. |
|
|
|
00:40:53.750 --> 00:40:54.210 |
|
|
|
|
|
00:40:55.000 --> 00:40:56.420 |
|
Logistic regression does a bit better |
|
|
|
00:40:56.420 --> 00:40:57.780 |
|
on this high dimensional data. |
|
|
|
00:40:57.780 --> 00:40:59.420 |
|
Again, linear classifiers are more |
|
|
|
00:40:59.420 --> 00:41:00.950 |
|
powerful when you have more features, |
|
|
|
00:41:00.950 --> 00:41:03.980 |
|
but still not outperforming the |
|
|
|
00:41:03.980 --> 00:41:05.750 |
|
neural networks or SVM or random |
|
|
|
00:41:05.750 --> 00:41:06.140 |
|
forests. |
|
|
|
00:41:07.950 --> 00:41:10.620 |
|
But also, even though boosted decision |
|
|
|
00:41:10.620 --> 00:41:13.070 |
|
trees did the best on average, they're |
|
|
|
00:41:13.070 --> 00:41:15.150 |
|
not doing so well when you have tons of |
|
|
|
00:41:15.150 --> 00:41:15.940 |
|
features. |
|
|
|
00:41:15.940 --> 00:41:17.926 |
|
There, random forest is doing the |
|
|
|
00:41:17.926 --> 00:41:18.189 |
|
best. |
|
|
|
00:41:19.490 --> 00:41:22.200 |
|
And the reason for that is that boosted |
|
|
|
00:41:22.200 --> 00:41:27.580 |
|
decision trees have a weakness. |
|
|
|
00:41:27.810 --> 00:41:29.700 |
|
High. |
|
|
|
00:41:29.770 --> 00:41:30.380 |
|
|
|
|
|
00:41:31.500 --> 00:41:31.932 |
|
They have. |
|
|
|
00:41:31.932 --> 00:41:33.480 |
|
They have a weakness of tending to |
|
|
|
00:41:33.480 --> 00:41:35.100 |
|
overfit the data if they've got too |
|
|
|
00:41:35.100 --> 00:41:36.210 |
|
much flexibility. |
|
|
|
00:41:36.210 --> 00:41:39.049 |
|
So if you have 600,000 features and |
|
|
|
00:41:39.050 --> 00:41:40.512 |
|
you're trying to just fix the mistakes |
|
|
|
00:41:40.512 --> 00:41:42.930 |
|
of the previous classifier iteratively, |
|
|
|
00:41:42.930 --> 00:41:44.400 |
|
then there's a pretty good chance that |
|
|
|
00:41:44.400 --> 00:41:45.840 |
|
you could fix those mistakes for the |
|
|
|
00:41:45.840 --> 00:41:46.365 |
|
wrong reason. |
|
|
|
00:41:46.365 --> 00:41:47.970 |
|
And so they tend to be. |
|
|
|
00:41:47.970 --> 00:41:49.847 |
|
When you have a lot of features, you |
|
|
|
00:41:49.847 --> 00:41:52.596 |
|
end up with high variance, high |
|
|
|
00:41:52.596 --> 00:41:55.186 |
|
bias classifiers that you then reduce the |
|
|
|
00:41:55.186 --> 00:41:57.588 |
|
bias of, but you still end up with |
|
|
|
00:41:57.588 --> 00:41:59.840 |
|
high variance, low bias |
|
|
|
00:41:59.840 --> 00:42:00.710 |
|
classifiers. |
|
|
|
00:42:05.030 --> 00:42:07.480 |
|
So just to recap that boosted decision |
|
|
|
00:42:07.480 --> 00:42:09.150 |
|
trees and random forests work for |
|
|
|
00:42:09.150 --> 00:42:10.063 |
|
different reasons. |
|
|
|
00:42:10.063 --> 00:42:12.345 |
|
Boosted trees use a lot of small trees |
|
|
|
00:42:12.345 --> 00:42:14.430 |
|
to iteratively refine the prediction, |
|
|
|
00:42:14.430 --> 00:42:16.445 |
|
and combining the prediction from many |
|
|
|
00:42:16.445 --> 00:42:18.020 |
|
trees reduces the bias. |
|
|
|
00:42:18.020 --> 00:42:20.380 |
|
But they have a danger of overfitting |
|
|
|
00:42:20.380 --> 00:42:22.717 |
|
if you have too many trees, or the |
|
|
|
00:42:22.717 --> 00:42:24.640 |
|
trees are too big or you have too many |
|
|
|
00:42:24.640 --> 00:42:25.160 |
|
features. |
|
|
|
00:42:25.820 --> 00:42:28.470 |
|
Then they may not generalize that well. |
|
|
|
00:42:29.740 --> 00:42:32.170 |
|
Random forests use big trees, which |
|
|
|
00:42:32.170 --> 00:42:34.050 |
|
are low bias and high variance. |
|
|
|
00:42:34.050 --> 00:42:36.000 |
|
They average a lot of those tree |
|
|
|
00:42:36.000 --> 00:42:38.303 |
|
predictions, which reduces the |
|
|
|
00:42:38.303 --> 00:42:40.170 |
|
variance, and it's kind of hard to make |
|
|
|
00:42:40.170 --> 00:42:41.079 |
|
them not work. |
|
|
|
00:42:41.080 --> 00:42:42.900 |
|
They're not always like the very best |
|
|
|
00:42:42.900 --> 00:42:46.320 |
|
thing you can do, but they always, as |
|
|
|
00:42:46.320 --> 00:42:48.240 |
|
far as I can see and I've ever seen, |
|
|
|
00:42:48.240 --> 00:42:49.810 |
|
they always work like at least pretty |
|
|
|
00:42:49.810 --> 00:42:50.110 |
|
well. |
|
|
|
00:42:51.130 --> 00:42:52.790 |
|
As long as you just train enough trees. |
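NOTE
A small sketch of the random-forest half of this recap on synthetic data; the point is just that averaging more big, unpruned trees should not hurt the validation accuracy.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=50, random_state=0)
    Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

    # Deep trees, averaged: more trees should only help or plateau.
    for n_trees in (1, 10, 100, 300):
        rf = RandomForestClassifier(n_estimators=n_trees, random_state=0).fit(Xtr, ytr)
        print(n_trees, rf.score(Xva, yva))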
|
|
|
00:42:55.870 --> 00:42:56.906 |
|
Ensemble. |
|
|
|
00:42:56.906 --> 00:43:00.090 |
|
There's other kinds of ensembles too, |
|
|
|
00:43:00.090 --> 00:43:01.635 |
|
so you can average the predictions of |
|
|
|
00:43:01.635 --> 00:43:03.280 |
|
any classifiers as long as they're not |
|
|
|
00:43:03.280 --> 00:43:04.210 |
|
duplicates of each other. |
|
|
|
00:43:04.210 --> 00:43:05.323 |
|
If they're duplicates of each other, |
|
|
|
00:43:05.323 --> 00:43:07.150 |
|
you don't get any benefit, obviously, |
|
|
|
00:43:07.150 --> 00:43:08.260 |
|
because they'll just make the same |
|
|
|
00:43:08.260 --> 00:43:08.720 |
|
prediction. |
|
|
|
00:43:10.000 --> 00:43:12.170 |
|
So you can also apply this to deep |
|
|
|
00:43:12.170 --> 00:43:13.510 |
|
neural networks, for example. |
|
|
|
00:43:13.510 --> 00:43:15.650 |
|
So here is something showing that |
|
|
|
00:43:15.650 --> 00:43:19.120 |
|
cascades and averages: on average, |
|
|
|
00:43:19.120 --> 00:43:21.430 |
|
ensembles of classifiers outperform |
|
|
|
00:43:21.430 --> 00:43:23.260 |
|
single classifiers even when you're |
|
|
|
00:43:23.260 --> 00:43:25.470 |
|
considering the computation required |
|
|
|
00:43:25.470 --> 00:43:26.110 |
|
for them. |
|
|
|
00:43:27.550 --> 00:43:29.460 |
|
And a cascade is when you train one |
|
|
|
00:43:29.460 --> 00:43:30.340 |
|
classifier. |
|
|
|
00:43:31.050 --> 00:43:34.512 |
|
And then you let it make its confident |
|
|
|
00:43:34.512 --> 00:43:36.180 |
|
decisions, and then subsequent |
|
|
|
00:43:36.180 --> 00:43:38.240 |
|
classifiers only make decisions about |
|
|
|
00:43:38.240 --> 00:43:39.280 |
|
the less confident. |
|
|
|
00:43:40.500 --> 00:43:41.660 |
|
Examples. |
|
|
|
00:43:41.660 --> 00:43:42.870 |
|
And then you keep on doing that. |
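NOTE
A sketch of averaging the predicted probabilities of a few different (non-duplicate) classifiers; the particular models and the synthetic data are arbitrary choices. A cascade would instead let the first model commit on its confident examples and pass only the rest to the next one.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

    models = [LogisticRegression(max_iter=1000),
              RandomForestClassifier(random_state=0),
              KNeighborsClassifier()]
    probs = np.mean([m.fit(Xtr, ytr).predict_proba(Xva) for m in models], axis=0)
    print((probs.argmax(axis=1) == yva).mean())   # accuracy of the averaged ensemble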
|
|
|
00:43:46.120 --> 00:43:49.770 |
|
Let me give you a two-minute stretch |
|
|
|
00:43:49.770 --> 00:43:51.430 |
|
break before I go into a detailed |
|
|
|
00:43:51.430 --> 00:43:53.670 |
|
example of using random forests. |
|
|
|
00:43:54.690 --> 00:43:56.620 |
|
And you can think about this question |
|
|
|
00:43:56.620 --> 00:43:57.220 |
|
if you want. |
|
|
|
00:43:57.920 --> 00:44:00.120 |
|
So suppose you had an infinite size |
|
|
|
00:44:00.120 --> 00:44:03.100 |
|
audience, and they could |
|
|
|
00:44:03.100 --> 00:44:04.100 |
|
choose ABCD. |
|
|
|
00:44:05.500 --> 00:44:07.120 |
|
What is the situation where you're |
|
|
|
00:44:07.120 --> 00:44:08.845 |
|
guaranteed to have a correct answer? |
|
|
|
00:44:08.845 --> 00:44:11.410 |
|
What if, let's say, a randomly sampled |
|
|
|
00:44:11.410 --> 00:44:12.970 |
|
audience member is going to report an |
|
|
|
00:44:12.970 --> 00:44:14.800 |
|
answer y with probability p_y? |
|
|
|
00:44:15.770 --> 00:44:17.650 |
|
What guarantees a correct answer? |
|
|
|
00:44:17.650 --> 00:44:19.930 |
|
And let's say instead you choose a |
|
|
|
00:44:19.930 --> 00:44:21.850 |
|
friend which is a random member of the |
|
|
|
00:44:21.850 --> 00:44:22.830 |
|
audience in this case. |
|
|
|
00:44:23.570 --> 00:44:24.900 |
|
What's the probability that your |
|
|
|
00:44:24.900 --> 00:44:25.930 |
|
friend's answer is correct? |
|
|
|
00:44:26.560 --> 00:44:28.950 |
|
So think about those or don't. |
|
|
|
00:44:28.950 --> 00:44:30.280 |
|
It's up to you. |
|
|
|
00:44:30.280 --> 00:44:31.790 |
|
I'll give you the answer in 2 minutes. |
|
|
|
00:45:07.040 --> 00:45:09.180 |
|
Some people would, they would say like |
|
|
|
00:45:09.180 --> 00:45:11.130 |
|
cherry or yeah. |
|
|
|
00:45:13.980 --> 00:45:14.270 |
|
Yeah. |
|
|
|
00:45:15.730 --> 00:45:17.400 |
|
Or they might be color blind. |
|
|
|
00:45:18.390 --> 00:45:18.960 |
|
I see. |
|
|
|
00:45:24.750 --> 00:45:25.310 |
|
That's true. |
|
|
|
00:45:29.140 --> 00:45:31.120 |
|
It's actually pretty hard to not get a |
|
|
|
00:45:31.120 --> 00:45:32.550 |
|
correct answer, I would say. |
|
|
|
00:45:43.340 --> 00:45:46.300 |
|
For a correct decision, the weight goes |
|
|
|
00:45:46.300 --> 00:45:49.670 |
|
down because you want the subsequent |
|
|
|
00:45:49.670 --> 00:45:51.240 |
|
classifiers to focus more on the |
|
|
|
00:45:51.240 --> 00:45:52.050 |
|
mistakes. |
|
|
|
00:45:52.050 --> 00:45:56.300 |
|
So if it's incorrect then the weight |
|
|
|
00:45:56.300 --> 00:45:57.920 |
|
goes up so then it matters more to the |
|
|
|
00:45:57.920 --> 00:45:58.730 |
|
next classifier. |
|
|
|
00:46:02.730 --> 00:46:04.160 |
|
And if it's classified correctly, the weight goes...? |
|
|
|
00:46:06.000 --> 00:46:07.700 |
|
It could go back up, yeah. |
|
|
|
00:46:10.830 --> 00:46:12.670 |
|
The weights keep being multiplied by |
|
|
|
00:46:12.670 --> 00:46:14.500 |
|
that factor, so yeah. |
|
|
|
00:46:15.520 --> 00:46:15.870 |
|
Yeah. |
|
|
|
00:46:17.280 --> 00:46:17.700 |
|
You're welcome. |
|
|
|
00:46:25.930 --> 00:46:27.410 |
|
All right, time's up. |
|
|
|
00:46:28.930 --> 00:46:32.470 |
|
So what is like the weakest condition? |
|
|
|
00:46:32.470 --> 00:46:34.270 |
|
I should have made it a little harder. |
|
|
|
00:46:34.270 --> 00:46:35.900 |
|
Obviously there's one condition, which |
|
|
|
00:46:35.900 --> 00:46:37.450 |
|
is that every audience member knows the |
|
|
|
00:46:37.450 --> 00:46:37.820 |
|
answer. |
|
|
|
00:46:37.820 --> 00:46:38.380 |
|
That's easy. |
|
|
|
00:46:39.350 --> 00:46:41.160 |
|
But what's the weakest condition that |
|
|
|
00:46:41.160 --> 00:46:43.090 |
|
guarantees a correct answer? |
|
|
|
00:46:43.090 --> 00:46:45.725 |
|
So what has to be true for this answer |
|
|
|
00:46:45.725 --> 00:46:47.330 |
|
to be correct with an infinite audience |
|
|
|
00:46:47.330 --> 00:46:47.710 |
|
size? |
|
|
|
00:46:52.040 --> 00:46:52.530 |
|
Right. |
|
|
|
00:46:54.740 --> 00:46:56.290 |
|
Yes, one audience member. |
|
|
|
00:46:56.290 --> 00:46:57.810 |
|
No, that won't work. |
|
|
|
00:46:57.810 --> 00:46:59.550 |
|
So because then the probability would |
|
|
|
00:46:59.550 --> 00:47:03.790 |
|
be 0 right of the correct answer if all |
|
|
|
00:47:03.790 --> 00:47:05.470 |
|
the other audience members thought it |
|
|
|
00:47:05.470 --> 00:47:06.280 |
|
was a different answer. |
|
|
|
00:47:10.760 --> 00:47:12.740 |
|
If this size of the audience is one, |
|
|
|
00:47:12.740 --> 00:47:14.936 |
|
yeah, but you have an infinite size |
|
|
|
00:47:14.936 --> 00:47:15.940 |
|
audience in the problem. |
|
|
|
00:47:18.270 --> 00:47:18.770 |
|
Does anybody? |
|
|
|
00:47:18.770 --> 00:47:19.940 |
|
Yeah. |
|
|
|
00:47:23.010 --> 00:47:24.938 |
|
Yes, the probability of the correct |
|
|
|
00:47:24.938 --> 00:47:26.070 |
|
answer has to be the highest. |
|
|
|
00:47:26.070 --> 00:47:27.548 |
|
So if the probability of the correct |
|
|
|
00:47:27.548 --> 00:47:30.714 |
|
answer is say 26%, but the probability |
|
|
|
00:47:30.714 --> 00:47:33.220 |
|
of all the other answers is like just |
|
|
|
00:47:33.220 --> 00:47:35.923 |
|
under 25%, then you'll get the correct |
|
|
|
00:47:35.923 --> 00:47:36.226 |
|
answer. |
|
|
|
00:47:36.226 --> 00:47:38.578 |
|
So even though almost three out of four |
|
|
|
00:47:38.578 --> 00:47:41.013 |
|
of the audience members can be wrong, |
|
|
|
00:47:41.013 --> 00:47:41.569 |
|
it's. |
|
|
|
00:47:41.570 --> 00:47:43.378 |
|
I mean, it's possible for three out of |
|
|
|
00:47:43.378 --> 00:47:45.038 |
|
four of the audience members to be |
|
|
|
00:47:45.038 --> 00:47:46.698 |
|
wrong almost, but still get the correct |
|
|
|
00:47:46.698 --> 00:47:48.140 |
|
answer, still be guaranteed the |
|
|
|
00:47:48.140 --> 00:47:48.760 |
|
correct answer. |
|
|
|
00:47:50.250 --> 00:47:52.385 |
|
If you were to poll the infinite size |
|
|
|
00:47:52.385 --> 00:47:53.940 |
|
audience, of course with the limited |
|
|
|
00:47:53.940 --> 00:47:55.930 |
|
audience you also then have variance, |
|
|
|
00:47:55.930 --> 00:47:57.800 |
|
so you would want a bigger margin to be |
|
|
|
00:47:57.800 --> 00:47:58.190 |
|
confident. |
|
|
|
00:47:59.100 --> 00:48:01.480 |
|
And if a friend is a random member of |
|
|
|
00:48:01.480 --> 00:48:02.660 |
|
the audience, this is an easier |
|
|
|
00:48:02.660 --> 00:48:03.270 |
|
question. |
|
|
|
00:48:03.270 --> 00:48:05.190 |
|
Then what's the probability that your |
|
|
|
00:48:05.190 --> 00:48:06.290 |
|
friend's answer is correct? |
|
|
|
00:48:09.150 --> 00:48:09.440 |
|
Right. |
|
|
|
00:48:10.320 --> 00:48:11.852 |
|
Yeah, P of A, yeah. |
|
|
|
00:48:11.852 --> 00:48:13.830 |
|
So in this setting, so it's possible |
|
|
|
00:48:13.830 --> 00:48:15.898 |
|
that your friend could only have a 25% |
|
|
|
00:48:15.898 --> 00:48:17.650 |
|
chance of being correct, but the |
|
|
|
00:48:17.650 --> 00:48:19.595 |
|
audience has a 100% chance of being |
|
|
|
00:48:19.595 --> 00:48:19.859 |
|
correct. |
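NOTE
A quick simulation of this answer with the numbers from the discussion (26% versus just under 25%): a large enough poll picks the plurality answer essentially every time, while a single random "friend" is right only 26% of the time.

    import numpy as np

    rng = np.random.default_rng(0)
    p = [0.26, 0.25, 0.25, 0.24]      # answer 0 is correct and only slightly most likely

    votes = rng.choice(4, size=1_000_000, p=p)          # a very large audience
    print(np.bincount(votes, minlength=4).argmax())     # 0: the poll winner is correct

    friend = rng.choice(4, p=p)                         # one random audience member
    print(friend == 0)                                  # correct with probability only 0.26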
|
|
|
00:48:24.800 --> 00:48:26.830 |
|
Alright, so I'm going to give a |
|
|
|
00:48:26.830 --> 00:48:29.010 |
|
detailed example of random forests. |
|
|
|
00:48:29.010 --> 00:48:30.950 |
|
If you took computational photography |
|
|
|
00:48:30.950 --> 00:48:32.850 |
|
with me, then you saw this example, but |
|
|
|
00:48:32.850 --> 00:48:34.100 |
|
now you will see it in a new light. |
|
|
|
00:48:34.950 --> 00:48:37.960 |
|
And so this is the Kinect |
|
|
|
00:48:37.960 --> 00:48:38.490 |
|
algorithm. |
|
|
|
00:48:38.490 --> 00:48:40.220 |
|
So you guys might remember the Kinect |
|
|
|
00:48:40.220 --> 00:48:42.740 |
|
came out in around 2011. |
|
|
|
00:48:43.720 --> 00:48:46.080 |
|
For gaming and then was like widely |
|
|
|
00:48:46.080 --> 00:48:47.590 |
|
adopted by the robotics community |
|
|
|
00:48:47.590 --> 00:48:48.270 |
|
Question? |
|
|
|
00:48:56.480 --> 00:48:59.850 |
|
Alright, the answer is probability of a |
|
|
|
00:48:59.850 --> 00:49:04.080 |
|
can be just marginally above 25% and |
|
|
|
00:49:04.080 --> 00:49:06.360 |
|
the other probabilities are marginally |
|
|
|
00:49:06.360 --> 00:49:07.440 |
|
below 25%. |
|
|
|
00:49:09.310 --> 00:49:09.720 |
|
Yeah. |
|
|
|
00:49:11.560 --> 00:49:15.050 |
|
All right, so the Kinect came out, you |
|
|
|
00:49:15.050 --> 00:49:17.280 |
|
could play lots of games with it and it |
|
|
|
00:49:17.280 --> 00:49:18.570 |
|
was also used for robotics. |
|
|
|
00:49:18.570 --> 00:49:20.864 |
|
But for the games anyway, one of the |
|
|
|
00:49:20.864 --> 00:49:22.950 |
|
one of the key things they had to solve |
|
|
|
00:49:22.950 --> 00:49:23.943 |
|
was to. |
|
|
|
00:49:23.943 --> 00:49:26.635 |
|
So first the Kinect has it does some |
|
|
|
00:49:26.635 --> 00:49:28.120 |
|
like structured light thing in order to |
|
|
|
00:49:28.120 --> 00:49:28.990 |
|
get a depth image. |
|
|
|
00:49:29.660 --> 00:49:30.550 |
|
And then? |
|
|
|
00:49:30.720 --> 00:49:31.330 |
|
And. |
|
|
|
00:49:32.070 --> 00:49:34.040 |
|
And then the Kinect needs to estimate |
|
|
|
00:49:34.040 --> 00:49:37.000 |
|
body pose given the depth image, so |
|
|
|
00:49:37.000 --> 00:49:38.940 |
|
that it can tell if you're like dancing |
|
|
|
00:49:38.940 --> 00:49:40.810 |
|
correctly or doing the sport or |
|
|
|
00:49:40.810 --> 00:49:44.000 |
|
whatever corresponds to the game. |
|
|
|
00:49:45.020 --> 00:49:47.260 |
|
So given this depth image, you have to |
|
|
|
00:49:47.260 --> 00:49:50.580 |
|
try to predict for like what are the |
|
|
|
00:49:50.580 --> 00:49:52.300 |
|
key points of the body pose. |
|
|
|
00:49:52.300 --> 00:49:53.050 |
|
That's the problem. |
|
|
|
00:49:54.850 --> 00:49:56.840 |
|
And they need to do it really fast too, |
|
|
|
00:49:56.840 --> 00:49:59.230 |
|
because they're because they only get a |
|
|
|
00:49:59.230 --> 00:50:02.064 |
|
small fraction of the GPU of the Xbox |
|
|
|
00:50:02.064 --> 00:50:05.222 |
|
to do this, 2% of the GPU of the Xbox |
|
|
|
00:50:05.222 --> 00:50:06.740 |
|
to do this in real time. |
|
|
|
00:50:09.190 --> 00:50:12.370 |
|
So the basic algorithm is from. |
|
|
|
00:50:12.370 --> 00:50:15.450 |
|
This is described in this paper by |
|
|
|
00:50:15.450 --> 00:50:16.640 |
|
Microsoft Cambridge. |
|
|
|
00:50:17.400 --> 00:50:21.430 |
|
And the overall process is, you go |
|
|
|
00:50:21.430 --> 00:50:23.180 |
|
from a depth image and segment it. |
|
|
|
00:50:23.180 --> 00:50:25.950 |
|
Then you predict for each pixel which |
|
|
|
00:50:25.950 --> 00:50:28.200 |
|
of the body parts corresponds to that |
|
|
|
00:50:28.200 --> 00:50:29.200 |
|
pixel. |
|
|
|
00:50:29.200 --> 00:50:30.410 |
|
Is it like the right side of the face |
|
|
|
00:50:30.410 --> 00:50:31.380 |
|
or left side of the face? |
|
|
|
00:50:32.180 --> 00:50:34.540 |
|
And then you take those predictions and |
|
|
|
00:50:34.540 --> 00:50:36.210 |
|
combine them to get a key point |
|
|
|
00:50:36.210 --> 00:50:36.730 |
|
estimate. |
|
|
|
00:50:38.490 --> 00:50:39.730 |
|
So here's another view of it. |
|
|
|
00:50:40.400 --> 00:50:42.905 |
|
Given an RGB image (that's Jamie Shotton, |
|
|
|
00:50:42.905 --> 00:50:45.846 |
|
the first author) and a depth |
|
|
|
00:50:45.846 --> 00:50:46.223 |
|
image. |
|
|
|
00:50:46.223 --> 00:50:48.120 |
|
You don't use the RGB actually, you |
|
|
|
00:50:48.120 --> 00:50:49.983 |
|
just segment out the body from the |
|
|
|
00:50:49.983 --> 00:50:50.199 |
|
depth. |
|
|
|
00:50:50.200 --> 00:50:51.900 |
|
It's like the near pixels. |
|
|
|
00:50:52.670 --> 00:50:55.185 |
|
And then you label them into parts and |
|
|
|
00:50:55.185 --> 00:50:57.790 |
|
then you assign the joint positions. |
|
|
|
00:51:00.690 --> 00:51:03.489 |
|
So the reason this is kind of this is |
|
|
|
00:51:03.490 --> 00:51:05.050 |
|
pretty hard because you're going to |
|
|
|
00:51:05.050 --> 00:51:06.470 |
|
have a lot of different bodies and |
|
|
|
00:51:06.470 --> 00:51:08.370 |
|
orientations and poses and wearing |
|
|
|
00:51:08.370 --> 00:51:10.500 |
|
different kinds of clothes, and you |
|
|
|
00:51:10.500 --> 00:51:12.490 |
|
want this to work for everybody because |
|
|
|
00:51:12.490 --> 00:51:14.400 |
|
if it fails, then the game's not any |
|
|
|
00:51:14.400 --> 00:51:14.710 |
|
fun. |
|
|
|
00:51:15.740 --> 00:51:19.610 |
|
And so what they did is they collected |
|
|
|
00:51:19.610 --> 00:51:22.995 |
|
a lot of examples of motion capture |
|
|
|
00:51:22.995 --> 00:51:24.990 |
|
they had like different people do like |
|
|
|
00:51:24.990 --> 00:51:26.970 |
|
motion capture and got like real |
|
|
|
00:51:26.970 --> 00:51:30.190 |
|
examples and then they took those body |
|
|
|
00:51:30.190 --> 00:51:33.270 |
|
frames and rigged synthetic models. |
|
|
|
00:51:33.940 --> 00:51:35.700 |
|
And generated even more synthetic |
|
|
|
00:51:35.700 --> 00:51:37.550 |
|
examples of people in the same poses. |
|
|
|
00:51:38.150 --> 00:51:40.020 |
|
And on these synthetic examples, it was |
|
|
|
00:51:40.020 --> 00:51:41.945 |
|
easy to label the parts because they're |
|
|
|
00:51:41.945 --> 00:51:42.450 |
|
synthetic. |
|
|
|
00:51:42.450 --> 00:51:44.080 |
|
So they could just like essentially |
|
|
|
00:51:44.080 --> 00:51:46.740 |
|
texture the parts and then they would |
|
|
|
00:51:46.740 --> 00:51:48.880 |
|
know like which pixel corresponds to |
|
|
|
00:51:48.880 --> 00:51:49.410 |
|
each label. |
|
|
|
00:51:51.640 --> 00:51:53.930 |
|
So this is showing that the |
|
|
|
00:51:53.930 --> 00:51:58.010 |
|
same body part this wrist or hand here. |
|
|
|
00:51:58.740 --> 00:52:00.300 |
|
Can look quite different. |
|
|
|
00:52:00.300 --> 00:52:02.050 |
|
It's the same part in all of these |
|
|
|
00:52:02.050 --> 00:52:04.200 |
|
images, but depending on where it is |
|
|
|
00:52:04.200 --> 00:52:05.700 |
|
and how the body is posed, then the |
|
|
|
00:52:05.700 --> 00:52:06.820 |
|
image looks pretty different. |
|
|
|
00:52:06.820 --> 00:52:09.060 |
|
So this is a pretty challenging problem |
|
|
|
00:52:09.060 --> 00:52:11.590 |
|
to know that this pixel in the center |
|
|
|
00:52:11.590 --> 00:52:14.520 |
|
of the cross is the wrist. |
|
|
|
00:52:15.390 --> 00:52:16.090 |
|
Or the hand. |
|
|
|
00:52:19.180 --> 00:52:21.070 |
|
All right, so the thresholding of the |
|
|
|
00:52:21.070 --> 00:52:24.640 |
|
depth is relatively straightforward. |
|
|
|
00:52:24.640 --> 00:52:27.190 |
|
And then they need to learn to predict |
|
|
|
00:52:27.190 --> 00:52:30.599 |
|
for each pixel which of the |
|
|
|
00:52:30.600 --> 00:52:32.700 |
|
possible body parts that pixel |
|
|
|
00:52:32.700 --> 00:52:33.510 |
|
corresponds to. |
|
|
|
00:52:34.910 --> 00:52:37.015 |
|
And they use really simple features; the |
|
|
|
00:52:37.015 --> 00:52:41.500 |
|
features are either an offset feature, |
|
|
|
00:52:41.500 --> 00:52:43.270 |
|
where if you're trying to predict for |
|
|
|
00:52:43.270 --> 00:52:46.610 |
|
this pixel at the center, here you |
|
|
|
00:52:46.610 --> 00:52:49.570 |
|
shift by some number of pixels that is depth |
|
|
|
00:52:49.570 --> 00:52:51.650 |
|
dependent, so some pixels scaled by the depth. |
|
|
|
00:52:52.360 --> 00:52:54.100 |
|
In some direction, and you look at the |
|
|
|
00:52:54.100 --> 00:52:55.740 |
|
depth of that corresponding pixel, |
|
|
|
00:52:55.740 --> 00:52:58.230 |
|
which could be like a particular value |
|
|
|
00:52:58.230 --> 00:52:59.660 |
|
to indicate that it's off the body. |
|
|
|
00:53:01.290 --> 00:53:03.020 |
|
So if you're at this pixel and you use |
|
|
|
00:53:03.020 --> 00:53:05.205 |
|
this feature Theta one, then you end up |
|
|
|
00:53:05.205 --> 00:53:05.667 |
|
over here. |
|
|
|
00:53:05.667 --> 00:53:07.144 |
|
If you're looking at this pixel then |
|
|
|
00:53:07.144 --> 00:53:08.770 |
|
you end up on the head over here in |
|
|
|
00:53:08.770 --> 00:53:09.450 |
|
this example. |
|
|
|
00:53:10.350 --> 00:53:12.440 |
|
And then you have other features that |
|
|
|
00:53:12.440 --> 00:53:14.210 |
|
are based on the difference of depths. |
|
|
|
00:53:14.210 --> 00:53:16.870 |
|
So given some position, you look at 2 |
|
|
|
00:53:16.870 --> 00:53:19.000 |
|
offsets and take the difference of |
|
|
|
00:53:19.000 --> 00:53:19.600 |
|
those depths. |
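NOTE
A sketch of the two feature types just described, for a single pixel of a depth image. The exact normalization of the offset by depth and the background constant are my assumptions, not taken from the paper.

    import numpy as np

    BACKGROUND = 1e6   # large depth value for probes that fall off the body / image

    def depth_at(depth, x, y):
        h, w = depth.shape
        return depth[y, x] if (0 <= y < h and 0 <= x < w) else BACKGROUND

    def offset_feature(depth, x, y, u):
        # Probe the depth at an offset u=(du, dv) scaled by the depth at (x, y),
        # so the probe covers a similar physical distance near and far.
        d = depth_at(depth, x, y)
        return depth_at(depth, x + int(round(u[0] / d)), y + int(round(u[1] / d)))

    def difference_feature(depth, x, y, u, v):
        # Difference of depths at two scaled offsets, the second feature type.
        return offset_feature(depth, x, y, u) - offset_feature(depth, x, y, v)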
|
|
|
00:53:21.300 --> 00:53:23.260 |
|
And then you can generate like |
|
|
|
00:53:23.260 --> 00:53:25.020 |
|
basically infinite numbers of those |
|
|
|
00:53:25.020 --> 00:53:26.010 |
|
features, right? |
|
|
|
00:53:26.010 --> 00:53:27.895 |
|
There's like a lot of combinations of |
|
|
|
00:53:27.895 --> 00:53:29.655 |
|
features using different offsets that |
|
|
|
00:53:29.655 --> 00:53:30.485 |
|
you could create. |
|
|
|
00:53:30.485 --> 00:53:32.510 |
|
And they also have lots of data, which |
|
|
|
00:53:32.510 --> 00:53:34.500 |
|
as I mentioned came from mocap and then |
|
|
|
00:53:34.500 --> 00:53:35.260 |
|
synthetic data. |
|
|
|
00:53:36.390 --> 00:53:39.060 |
|
And so they train, they train random |
|
|
|
00:53:39.060 --> 00:53:42.990 |
|
forests based on these features on all |
|
|
|
00:53:42.990 --> 00:53:43.640 |
|
this data. |
|
|
|
00:53:43.640 --> 00:53:45.030 |
|
So again, they have millions of |
|
|
|
00:53:45.030 --> 00:53:45.900 |
|
examples. |
|
|
|
00:53:45.900 --> 00:53:47.995 |
|
They have like practically infinite |
|
|
|
00:53:47.995 --> 00:53:49.680 |
|
features, but you'd sample some number |
|
|
|
00:53:49.680 --> 00:53:50.930 |
|
of features and train a tree. |
|
|
|
00:53:53.210 --> 00:53:54.500 |
|
I think I just explained that. |
|
|
|
00:53:56.320 --> 00:53:58.270 |
|
Sorry, I got a little ahead of myself, |
|
|
|
00:53:58.270 --> 00:54:00.264 |
|
but this is just an illustration of |
|
|
|
00:54:00.264 --> 00:54:03.808 |
|
their training data, 500,000 frames and |
|
|
|
00:54:03.808 --> 00:54:07.414 |
|
then they got 3D models for 15 bodies |
|
|
|
00:54:07.414 --> 00:54:09.990 |
|
and then they synthesized all the |
|
|
|
00:54:09.990 --> 00:54:11.860 |
|
motion capture data on all of those |
|
|
|
00:54:11.860 --> 00:54:14.160 |
|
bodies to get their training data and |
|
|
|
00:54:14.160 --> 00:54:15.319 |
|
synthetic test data. |
|
|
|
00:54:16.200 --> 00:54:17.730 |
|
So this is showing similar synthetic |
|
|
|
00:54:17.730 --> 00:54:18.110 |
|
data. |
|
|
|
00:54:21.210 --> 00:54:24.110 |
|
And then their classifier |
|
|
|
00:54:24.110 --> 00:54:26.500 |
|
is a random forest, so again they just. |
|
|
|
00:54:26.570 --> 00:54:27.060 |
|
|
|
|
|
00:54:27.830 --> 00:54:31.095 |
|
Randomly sample a set of those possible |
|
|
|
00:54:31.095 --> 00:54:33.030 |
|
features, or generate a set of features |
|
|
|
00:54:33.030 --> 00:54:35.700 |
|
and randomly subsample their training |
|
|
|
00:54:35.700 --> 00:54:36.030 |
|
data. |
|
|
|
00:54:36.900 --> 00:54:39.315 |
|
And then train each tree to completion, |
|
|
|
00:54:39.315 --> 00:54:41.810 |
|
or maybe to a maximum |
|
|
|
00:54:41.810 --> 00:54:42.100 |
|
depth. |
|
|
|
00:54:42.100 --> 00:54:43.575 |
|
In this case you might not train to |
|
|
|
00:54:43.575 --> 00:54:44.820 |
|
completion since you may have like |
|
|
|
00:54:44.820 --> 00:54:45.680 |
|
millions of samples. |
|
|
|
00:54:46.770 --> 00:54:48.660 |
|
But you trained to some depth and then |
|
|
|
00:54:48.660 --> 00:54:50.570 |
|
each node will have some probability |
|
|
|
00:54:50.570 --> 00:54:52.160 |
|
estimate for each of the classes. |
|
|
|
00:54:52.970 --> 00:54:54.626 |
|
And then you generate a new tree and |
|
|
|
00:54:54.626 --> 00:54:56.400 |
|
you keep on doing that independently. |
|
|
|
00:54:57.510 --> 00:54:59.100 |
|
And then at the end your |
|
|
|
00:54:59.100 --> 00:55:01.282 |
|
predictor is an average of the |
|
|
|
00:55:01.282 --> 00:55:03.230 |
|
probabilities, the class probabilities |
|
|
|
00:55:03.230 --> 00:55:04.530 |
|
that each of the trees predicts. |
|
|
|
00:55:05.970 --> 00:55:09.780 |
|
So it may sound like at first glance |
|
|
|
00:55:09.780 --> 00:55:11.030 |
|
when you look at this you might think, |
|
|
|
00:55:11.030 --> 00:55:13.530 |
|
well this seems really slow, because in |
|
|
|
00:55:13.530 --> 00:55:14.880 |
|
order to |
|
|
|
00:55:15.410 --> 00:55:16.040 |
|
make a prediction, |
|
|
|
00:55:16.040 --> 00:55:17.936 |
|
you have to query all of these trees |
|
|
|
00:55:17.936 --> 00:55:19.760 |
|
and then sum up their responses. |
|
|
|
00:55:19.760 --> 00:55:21.940 |
|
But when you implement it on a GPU, |
|
|
|
00:55:21.940 --> 00:55:23.658 |
|
it's actually really fast because these |
|
|
|
00:55:23.658 --> 00:55:24.840 |
|
can all be done in parallel. |
|
|
|
00:55:24.840 --> 00:55:26.334 |
|
The trees don't depend on each other, |
|
|
|
00:55:26.334 --> 00:55:29.161 |
|
so you can do the inference on all the |
|
|
|
00:55:29.161 --> 00:55:31.045 |
|
trees simultaneously, and you can do |
|
|
|
00:55:31.045 --> 00:55:32.120 |
|
inference for all the pixels |
|
|
|
00:55:32.120 --> 00:55:33.600 |
|
simultaneously if you have enough |
|
|
|
00:55:33.600 --> 00:55:33.968 |
|
memory. |
|
|
|
00:55:33.968 --> 00:55:36.919 |
|
And so it actually can be done |
|
|
|
00:55:36.920 --> 00:55:38.225 |
|
remarkably fast. |
|
|
|
00:55:38.225 --> 00:55:41.300 |
|
So they can do this in real time using |
|
|
|
00:55:41.300 --> 00:55:43.506 |
|
2% of the computational resources of |
|
|
|
00:55:43.506 --> 00:55:44.280 |
|
the Xbox. |
|
|
|
00:55:48.160 --> 00:55:48.770 |
|
|
|
|
|
00:55:49.810 --> 00:55:53.730 |
|
And then finally they would get the, so |
|
|
|
00:55:53.730 --> 00:55:54.700 |
|
I'll show it here. |
|
|
|
00:55:54.700 --> 00:55:56.249 |
|
So first they are like labeling the |
|
|
|
00:55:56.250 --> 00:55:57.465 |
|
pixels like this. |
|
|
|
00:55:57.465 --> 00:56:01.607 |
|
So this is the, sorry, over here the |
|
|
|
00:56:01.607 --> 00:56:03.690 |
|
pixel labels can be like a little bit |
|
|
|
00:56:03.690 --> 00:56:05.410 |
|
noisy, but at |
|
|
|
00:56:05.410 --> 00:56:07.170 |
|
the end they don't need a pixel perfect |
|
|
|
00:56:07.170 --> 00:56:09.430 |
|
segmentation or pixel perfect labeling. |
|
|
|
00:56:10.060 --> 00:56:11.990 |
|
What they really care about is the |
|
|
|
00:56:11.990 --> 00:56:13.950 |
|
position of the joints, the 3D position |
|
|
|
00:56:13.950 --> 00:56:14.790 |
|
of the joints. |
|
|
|
00:56:15.710 --> 00:56:17.899 |
|
And so based on the depth and based on |
|
|
|
00:56:17.900 --> 00:56:19.416 |
|
which pixels are labeled with each |
|
|
|
00:56:19.416 --> 00:56:22.290 |
|
joint, they can get the average 3D |
|
|
|
00:56:22.290 --> 00:56:24.420 |
|
position of these labels. |
|
|
|
00:56:24.420 --> 00:56:27.280 |
|
And then they just put it like slightly |
|
|
|
00:56:27.280 --> 00:56:29.070 |
|
behind that in a joint dependent way. |
|
|
|
00:56:29.070 --> 00:56:31.429 |
|
So like if that's the average depth of |
|
|
|
00:56:31.429 --> 00:56:33.346 |
|
these pixels on my shoulder, then |
|
|
|
00:56:33.346 --> 00:56:34.860 |
|
the center of my shoulder is going to |
|
|
|
00:56:34.860 --> 00:56:36.950 |
|
be an inch and 1/2 behind that or |
|
|
|
00:56:36.950 --> 00:56:37.619 |
|
something like that. |
|
|
|
00:56:38.450 --> 00:56:40.600 |
|
So then you get the 3D position of my |
|
|
|
00:56:40.600 --> 00:56:41.030 |
|
shoulder. |
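NOTE
A sketch of turning per-pixel part labels into a joint position as just described (average the 3D points with a given label, then push the estimate back behind the surface); the push-back distance is a made-up per-joint constant.

    import numpy as np

    def joint_position(points_3d, labels, part_label, push_back=0.04):
        # points_3d: (N, 3) back-projected pixel positions; labels: (N,) predicted parts
        pts = points_3d[labels == part_label]
        if len(pts) == 0:
            return None
        center = pts.mean(axis=0)
        center[2] += push_back        # assume +z points away from the camera, in meters
        return center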
|
|
|
00:56:42.480 --> 00:56:44.303 |
|
And so even though their pixel |
|
|
|
00:56:44.303 --> 00:56:46.280 |
|
predictions might be a little noisy, |
|
|
|
00:56:46.280 --> 00:56:48.130 |
|
the joint predictions are more accurate |
|
|
|
00:56:48.130 --> 00:56:49.550 |
|
because they're based on a combination |
|
|
|
00:56:49.550 --> 00:56:50.499 |
|
of pixel predictions. |
|
|
|
00:56:54.090 --> 00:56:55.595 |
|
So here is showing the ground truth. |
|
|
|
00:56:55.595 --> 00:56:57.360 |
|
This is the depth image, this is a |
|
|
|
00:56:57.360 --> 00:57:00.160 |
|
pixel labels and then this is the joint |
|
|
|
00:57:00.160 --> 00:57:00.780 |
|
labels. |
|
|
|
00:57:01.450 --> 00:57:03.850 |
|
And then and. |
|
|
|
00:57:03.850 --> 00:57:06.005 |
|
This is showing the actual predictions |
|
|
|
00:57:06.005 --> 00:57:07.210 |
|
and some examples. |
|
|
|
00:57:09.420 --> 00:57:11.020 |
|
And here you can see the same thing. |
|
|
|
00:57:11.020 --> 00:57:13.630 |
|
So these are the input depth images. |
|
|
|
00:57:14.400 --> 00:57:16.480 |
|
This is the pixel predictions on those |
|
|
|
00:57:16.480 --> 00:57:17.210 |
|
depth images. |
|
|
|
00:57:17.860 --> 00:57:19.870 |
|
And then this is showing the estimated |
|
|
|
00:57:19.870 --> 00:57:22.385 |
|
pose from different perspectives so |
|
|
|
00:57:22.385 --> 00:57:24.910 |
|
that you can see it looks kind of |
|
|
|
00:57:24.910 --> 00:57:25.100 |
|
right. |
|
|
|
00:57:25.100 --> 00:57:26.780 |
|
So like in this case for example, it's |
|
|
|
00:57:26.780 --> 00:57:28.570 |
|
estimating that the person is standing |
|
|
|
00:57:28.570 --> 00:57:30.840 |
|
with his hands like out and slightly in |
|
|
|
00:57:30.840 --> 00:57:31.110 |
|
front. |
|
|
|
00:57:36.130 --> 00:57:38.440 |
|
And you can see if you vary the number |
|
|
|
00:57:38.440 --> 00:57:41.810 |
|
of training samples, you get like |
|
|
|
00:57:41.810 --> 00:57:42.670 |
|
pretty good. |
|
|
|
00:57:42.670 --> 00:57:45.860 |
|
I mean essentially what I would say is |
|
|
|
00:57:45.860 --> 00:57:47.239 |
|
that you need a lot of training samples |
|
|
|
00:57:47.240 --> 00:57:48.980 |
|
to do well in this task. |
|
|
|
00:57:49.660 --> 00:57:52.330 |
|
So as you start to get up to 100,000 or |
|
|
|
00:57:52.330 --> 00:57:53.640 |
|
a million training samples. |
|
|
|
00:57:54.300 --> 00:57:58.360 |
|
Your average accuracy gets up to 60%. |
|
|
|
00:57:59.990 --> 00:58:02.350 |
|
And 60% might not sound that good, but |
|
|
|
00:58:02.350 --> 00:58:04.339 |
|
it's actually fine because a lot of the |
|
|
|
00:58:04.340 --> 00:58:05.930 |
|
errors will just be on the margin where |
|
|
|
00:58:05.930 --> 00:58:08.050 |
|
you're like whether this pixel is the |
|
|
|
00:58:08.050 --> 00:58:09.500 |
|
upper arm or the shoulder. |
|
|
|
00:58:09.500 --> 00:58:13.110 |
|
And so the per pixel accuracy of 60% |
|
|
|
00:58:13.110 --> 00:58:14.420 |
|
gives you pretty accurate joint |
|
|
|
00:58:14.420 --> 00:58:15.030 |
|
positions. |
|
|
|
00:58:16.680 --> 00:58:18.460 |
|
One of the surprising things about the |
|
|
|
00:58:18.460 --> 00:58:21.979 |
|
paper was that the synthetic data was |
|
|
|
00:58:21.980 --> 00:58:24.000 |
|
so effective because in all past |
|
|
|
00:58:24.000 --> 00:58:26.322 |
|
research, pretty much when people use |
|
|
|
00:58:26.322 --> 00:58:27.720 |
|
synthetic data it didn't like |
|
|
|
00:58:27.720 --> 00:58:29.700 |
|
generalize to the test data. |
|
|
|
00:58:29.700 --> 00:58:30.940 |
|
And I think the reason that it |
|
|
|
00:58:30.940 --> 00:58:32.580 |
|
generalizes well in this case is that |
|
|
|
00:58:32.580 --> 00:58:34.830 |
|
depth data is a lot easier to simulate |
|
|
|
00:58:34.830 --> 00:58:35.290 |
|
than. |
|
|
|
00:58:35.930 --> 00:58:37.170 |
|
RGB data. |
|
|
|
00:58:37.170 --> 00:58:39.810 |
|
So now people have used RGB data |
|
|
|
00:58:39.810 --> 00:58:40.340 |
|
somewhat. |
|
|
|
00:58:40.340 --> 00:58:43.440 |
|
It's often used in autonomous vehicle |
|
|
|
00:58:43.440 --> 00:58:46.760 |
|
training, but at the time it had not |
|
|
|
00:58:46.760 --> 00:58:47.920 |
|
really been used effectively. |
|
|
|
00:58:58.700 --> 00:58:58.980 |
|
OK. |
|
|
|
00:59:00.020 --> 00:59:01.500 |
|
Is there any questions about that? |
|
|
|
00:59:04.850 --> 00:59:06.820 |
|
And then the last big thing I want to |
|
|
|
00:59:06.820 --> 00:59:08.140 |
|
do you're probably not. |
|
|
|
00:59:08.500 --> 00:59:11.210 |
|
Emotionally ready for homework 2 yet, |
|
|
|
00:59:11.210 --> 00:59:12.740 |
|
but I'll give it to you anyway. |
|
|
|
00:59:14.930 --> 00:59:16.510 |
|
Is to show you homework 2. |
|
|
|
00:59:25.020 --> 00:59:27.760 |
|
Alright, so at least in some parts of |
|
|
|
00:59:27.760 --> 00:59:30.070 |
|
this are going to be a bit familiar. |
|
|
|
00:59:32.020 --> 00:59:32.640 |
|
Yeah. |
|
|
|
00:59:32.640 --> 00:59:33.140 |
|
Thank you. |
|
|
|
00:59:34.070 --> 00:59:34.750 |
|
I always forget. |
|
|
|
00:59:35.730 --> 00:59:37.640 |
|
With that, let me get rid of that. |
|
|
|
00:59:38.500 --> 00:59:39.000 |
|
OK. |
|
|
|
00:59:42.850 --> 00:59:43.480 |
|
Damn it. |
|
|
|
00:59:51.800 --> 00:59:55.390 |
|
Alright, let's see me in a bit. |
|
|
|
00:59:56.330 --> 00:59:56.840 |
|
OK. |
|
|
|
00:59:57.980 --> 00:59:59.290 |
|
All right, so there's three parts of |
|
|
|
00:59:59.290 --> 00:59:59.900 |
|
this. |
|
|
|
00:59:59.900 --> 01:00:04.780 |
|
The first part is looking at the |
|
|
|
01:00:04.780 --> 01:00:06.920 |
|
effects of model complexity with tree |
|
|
|
01:00:06.920 --> 01:00:07.610 |
|
regressors. |
|
|
|
01:00:08.870 --> 01:00:12.560 |
|
So you train trees with different |
|
|
|
01:00:12.560 --> 01:00:13.190 |
|
depths. |
|
|
|
01:00:13.800 --> 01:00:17.380 |
|
And also random forests with |
|
|
|
01:00:17.380 --> 01:00:18.090 |
|
different depths. |
|
|
|
01:00:19.120 --> 01:00:22.745 |
|
And then you plot the error versus the |
|
|
|
01:00:22.745 --> 01:00:24.150 |
|
size. |
|
|
|
01:00:25.280 --> 01:00:26.440 |
|
So it's actually. |
|
|
|
01:00:26.440 --> 01:00:27.350 |
|
This is actually. |
|
|
|
01:00:29.290 --> 01:00:29.980 |
|
Pretty easy. |
|
|
|
01:00:29.980 --> 01:00:31.720 |
|
Code wise, it's, I'll show you. |
|
|
|
01:00:31.720 --> 01:00:34.240 |
|
It's just to see for |
|
|
|
01:00:34.240 --> 01:00:35.890 |
|
yourself like the effects of depth. |
|
|
|
01:00:37.260 --> 01:00:38.830 |
|
So in this case you don't need to |
|
|
|
01:00:38.830 --> 01:00:40.590 |
|
implement the trees or the random |
|
|
|
01:00:40.590 --> 01:00:41.920 |
|
forests, you can use the library. |
|
|
|
01:00:42.740 --> 01:00:43.940 |
|
So, and we're going to use the |
|
|
|
01:00:43.940 --> 01:00:44.640 |
|
temperature data. |
|
|
|
01:00:46.350 --> 01:00:48.910 |
|
Essentially you would iterate over |
|
|
|
01:00:48.910 --> 01:00:51.360 |
|
these Max depths which range from 2 to |
|
|
|
01:00:51.360 --> 01:00:52.020 |
|
32. |
|
|
|
01:00:52.970 --> 01:00:54.890 |
|
And then for each depth you would call |
|
|
|
01:00:54.890 --> 01:00:58.790 |
|
these functions and get the error and |
|
|
|
01:00:58.790 --> 01:01:00.300 |
|
then you can. |
|
|
|
01:01:01.500 --> 01:01:04.570 |
|
And then you can call this code to plot |
|
|
|
01:01:04.570 --> 01:01:05.030 |
|
the error. |
|
|
|
01:01:05.670 --> 01:01:07.610 |
|
And then you'll look at that plot, and |
|
|
|
01:01:07.610 --> 01:01:08.440 |
|
then you'll. |
|
|
|
01:01:09.250 --> 01:01:11.580 |
|
Provide the plot and answer some |
|
|
|
01:01:11.580 --> 01:01:12.120 |
|
questions. |
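NOTE
Roughly what the depth sweep for this part might look like; load_temperature_data and the error metric are placeholders for whatever the starter code actually provides, and the depth list is just the 2-to-32 range mentioned above.

    import matplotlib.pyplot as plt
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.tree import DecisionTreeRegressor

    (X_train, y_train), (X_val, y_val) = load_temperature_data()   # placeholder loader

    max_depths = [2, 4, 8, 16, 32]
    tree_err, forest_err = [], []
    for d in max_depths:
        tree = DecisionTreeRegressor(max_depth=d).fit(X_train, y_train)
        forest = RandomForestRegressor(max_depth=d, n_estimators=100).fit(X_train, y_train)
        tree_err.append(1 - tree.score(X_val, y_val))       # 1 - R^2 as a stand-in error
        forest_err.append(1 - forest.score(X_val, y_val))

    plt.plot(max_depths, tree_err, label="tree")
    plt.plot(max_depths, forest_err, label="random forest")
    plt.xlabel("max depth"); plt.ylabel("validation error"); plt.legend(); plt.show()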
|
|
|
01:01:12.720 --> 01:01:16.180 |
|
So in the report there's some questions |
|
|
|
01:01:16.180 --> 01:01:18.090 |
|
for you to answer based on your |
|
|
|
01:01:18.090 --> 01:01:18.820 |
|
analysis. |
|
|
|
01:01:20.350 --> 01:01:21.846 |
|
They're like, given a maximum depth |
|
|
|
01:01:21.846 --> 01:01:26.130 |
|
tree, which model has the lowest bias |
|
|
|
01:01:26.130 --> 01:01:28.089 |
|
for regression trees, what tree depth |
|
|
|
01:01:28.090 --> 01:01:29.900 |
|
achieves the minimum validation error? |
|
|
|
01:01:31.080 --> 01:01:33.440 |
|
Or which model is least prone to |
|
|
|
01:01:33.440 --> 01:01:34.810 |
|
overfitting, for example? |
|
|
|
01:01:37.480 --> 01:01:38.970 |
|
So that's the first problem. |
|
|
|
01:01:40.030 --> 01:01:41.530 |
|
The second problem, this is the one |
|
|
|
01:01:41.530 --> 01:01:43.485 |
|
that's going to take you the most time, |
|
|
|
01:01:43.485 --> 01:01:46.950 |
|
is using MLPs, so multilayer |
|
|
|
01:01:46.950 --> 01:01:48.390 |
|
perceptrons with MNIST. |
|
|
|
01:01:49.590 --> 01:01:52.770 |
|
It takes about 3 minutes to train it, |
|
|
|
01:01:52.770 --> 01:01:54.420 |
|
so it's not too bad compared to your |
|
|
|
01:01:54.420 --> 01:01:55.360 |
|
nearest neighbor training. |
|
|
|
01:01:56.310 --> 01:01:56.840 |
|
And. |
|
|
|
01:01:57.680 --> 01:02:01.610 |
|
And you need to basically |
|
|
|
01:02:01.610 --> 01:02:02.680 |
|
like. |
|
|
|
01:02:02.680 --> 01:02:05.225 |
|
We're going to use PyTorch, which is |
|
|
|
01:02:05.225 --> 01:02:06.800 |
|
like a really good package for deep |
|
|
|
01:02:06.800 --> 01:02:07.160 |
|
learning. |
|
|
|
01:02:08.180 --> 01:02:09.990 |
|
And you need to. |
|
|
|
01:02:11.750 --> 01:02:15.500 |
|
Fill out the forward function and |
|
|
|
01:02:16.850 --> 01:02:20.370 |
|
the model specification. |
|
|
|
01:02:20.370 --> 01:02:23.650 |
|
So I provide in the tips a link to a |
|
|
|
01:02:23.650 --> 01:02:25.500 |
|
tutorial and you can also look up other |
|
|
|
01:02:25.500 --> 01:02:28.320 |
|
tutorials that explain this. The tips |
|
|
|
01:02:28.320 --> 01:02:30.510 |
|
also give you kind of the basic code |
|
|
|
01:02:30.510 --> 01:02:30.930 |
|
structure. |
|
|
|
01:02:31.640 --> 01:02:33.850 |
|
But you can see like how these things |
|
|
|
01:02:33.850 --> 01:02:36.030 |
|
are coded, essentially that you define |
|
|
|
01:02:36.030 --> 01:02:37.280 |
|
the layers of the network here. |
|
|
|
01:02:37.870 --> 01:02:40.560 |
|
And then you define like how the data |
|
|
|
01:02:40.560 --> 01:02:42.030 |
|
progresses through the network to make |
|
|
|
01:02:42.030 --> 01:02:45.429 |
|
a prediction, and then you |
|
|
|
01:02:45.430 --> 01:02:46.430 |
|
can train your network. |
|
|
|
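NOTE
A minimal sketch of what "define the layers, then define the forward pass" looks like in PyTorch; the layer sizes and names here are assumptions, not the homework's required architecture:

import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        # Layers are declared once here...
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # ...and the forward pass says how data flows through them.
        x = x.view(x.size(0), -1)      # flatten 28x28 images to 784-dim vectors
        x = torch.relu(self.fc1(x))    # hidden layer with ReLU nonlinearity
        return self.fc2(x)             # unnormalized class scores (logits)

model = MLP()
scores = model(torch.randn(8, 1, 28, 28))   # a batch of 8 fake MNIST-sized images
print(scores.shape)                          # torch.Size([8, 10])
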
01:02:48.040 --> 01:02:49.410 |
|
Obviously we haven't talked about this |
|
|
|
01:02:49.410 --> 01:02:50.900 |
|
yet, so it might not make complete |
|
|
|
01:02:50.900 --> 01:02:52.200 |
|
sense yet, but it will. |
|
|
|
01:02:53.760 --> 01:02:55.048 |
|
So then you're going to train a |
|
|
|
01:02:55.048 --> 01:02:57.019 |
|
network, then you're going to try |
|
|
|
01:02:57.020 --> 01:02:58.638 |
|
different learning rates, and then |
|
|
|
01:02:58.638 --> 01:03:00.230 |
|
you're going to try to get the best |
|
|
|
01:03:00.230 --> 01:03:03.340 |
|
network you can with the target of 25% |
|
|
|
01:03:03.340 --> 01:03:04.000 |
|
validation error. |
|
|
|
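NOTE
A sketch of the training loop and learning-rate sweep described here, assuming an MLP class like the one sketched earlier and MNIST DataLoaders named train_loader / val_loader (hypothetical names; the assignment's starter code defines its own):

import torch
import torch.nn as nn

def train_and_validate(make_model, train_loader, val_loader, lr, epochs=3):
    """Train with plain SGD at the given learning rate; return validation error rate."""
    model = make_model()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    wrong = total = 0
    with torch.no_grad():
        for x, y in val_loader:
            wrong += (model(x).argmax(dim=1) != y).sum().item()
            total += y.numel()
    return wrong / total

# Hypothetical usage once the loaders exist:
# for lr in [0.001, 0.01, 0.1, 1.0]:
#     print(lr, train_and_validate(MLP, train_loader, val_loader, lr))
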
01:03:05.770 --> 01:03:07.150 |
|
And then the third problem. |
|
|
|
01:03:07.150 --> 01:03:09.450 |
|
We're looking at this new data set |
|
|
|
01:03:09.450 --> 01:03:11.820 |
|
called the Penguin data set, the Palmer |
|
|
|
01:03:11.820 --> 01:03:13.500 |
|
Archipelago Penguin data set. |
|
|
|
01:03:14.410 --> 01:03:16.800 |
|
And this is a data set of like some |
|
|
|
01:03:16.800 --> 01:03:18.500 |
|
various physical measurements of the |
|
|
|
01:03:18.500 --> 01:03:19.970 |
|
Penguins, whether they're male or |
|
|
|
01:03:19.970 --> 01:03:21.813 |
|
female, what island they came from, and |
|
|
|
01:03:21.813 --> 01:03:23.140 |
|
what kind of species it is. |
|
|
|
01:03:23.990 --> 01:03:25.800 |
|
So we created a clean version of the |
|
|
|
01:03:25.800 --> 01:03:28.510 |
|
data here. |
|
|
|
01:03:29.670 --> 01:03:31.500 |
|
And then we have like some starter code |
|
|
|
01:03:31.500 --> 01:03:32.380 |
|
to load that data. |
|
|
|
01:03:33.210 --> 01:03:35.370 |
|
And you're going to first visualize |
|
|
|
01:03:35.370 --> 01:03:36.470 |
|
some of the features. |
|
|
|
01:03:36.470 --> 01:03:40.270 |
|
So we did one example for you if you |
|
|
|
01:03:40.270 --> 01:03:41.970 |
|
look at the different species of |
|
|
|
01:03:41.970 --> 01:03:42.740 |
|
Penguins. |
|
|
|
01:03:44.890 --> 01:03:46.880 |
|
This is like a scatter plot of body |
|
|
|
01:03:46.880 --> 01:03:48.900 |
|
mass versus flipper length for some |
|
|
|
01:03:48.900 --> 01:03:49.980 |
|
different Penguins. |
|
|
|
01:03:49.980 --> 01:03:51.950 |
|
So you can see that this would be like |
|
|
|
01:03:51.950 --> 01:03:53.880 |
|
pretty good at distinguishing Gentoo |
|
|
|
01:03:53.880 --> 01:03:57.230 |
|
from Adelie and Chinstrap, but not so |
|
|
|
01:03:57.230 --> 01:03:59.030 |
|
good at distinguishing Chinstrap and |
|
|
|
01:03:59.030 --> 01:03:59.280 |
|
Adelie. |
|
|
|
01:03:59.280 --> 01:04:00.790 |
|
So you can do this for different |
|
|
|
01:04:00.790 --> 01:04:01.792 |
|
combinations of features. |
|
|
|
01:04:01.792 --> 01:04:03.120 |
|
There's not a lot of features. |
|
|
|
01:04:03.120 --> 01:04:03.989 |
|
I think there's 13. |
|
|
|
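NOTE
A sketch of the per-species scatter plot described above, assuming a cleaned CSV with columns like "species", "flipper_length_mm", and "body_mass_g" (the actual file name and column names in the starter code may differ):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("penguins_clean.csv")    # hypothetical file name
for species, grp in df.groupby("species"):
    plt.scatter(grp["flipper_length_mm"], grp["body_mass_g"], label=species, alpha=0.6)
plt.xlabel("flipper length (mm)")
plt.ylabel("body mass (g)")
plt.legend()
plt.show()
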
01:04:06.080 --> 01:04:07.020 |
|
And then? |
|
|
|
|
|
|
|
|
01:04:08.440 --> 01:04:10.140 |
|
And then in the report it asks |
|
|
|
01:04:10.140 --> 01:04:12.410 |
|
some analysis questions |
|
|
|
01:04:12.410 --> 01:04:14.060 |
|
based on that feature analysis. |
|
|
|
01:04:15.490 --> 01:04:17.410 |
|
Then the second question is to come up |
|
|
|
01:04:17.410 --> 01:04:19.889 |
|
with a really simple rule, a 2- |
|
|
|
01:04:19.890 --> 01:04:21.330 |
|
part rule that will allow you to |
|
|
|
01:04:21.330 --> 01:04:22.980 |
|
perfectly classify Gentoos. |
|
|
|
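NOTE
Just to show the shape of a "two-part rule" (an AND of two feature thresholds); the features and thresholds below are placeholders you would choose from your own feature plots, not the answer:

def two_part_rule(row, feat1, t1, feat2, t2):
    # Predict Gentoo only when both thresholds are exceeded.
    return (row[feat1] > t1) and (row[feat2] > t2)

# Hypothetical check against the dataframe from the earlier sketch:
# pred = df.apply(lambda r: two_part_rule(r, "flipper_length_mm", 205.0, "body_mass_g", 4000.0), axis=1)
# print((pred == (df["species"] == "Gentoo")).mean())
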
01:04:24.330 --> 01:04:27.170 |
|
And then the third part is to design an |
|
|
|
01:04:27.170 --> 01:04:29.385 |
|
ML model to maximize your accuracy on |
|
|
|
01:04:29.385 --> 01:04:30.160 |
|
this problem. |
|
|
|
01:04:30.160 --> 01:04:33.070 |
|
And you can use the |
|
|
|
01:04:33.070 --> 01:04:35.280 |
|
library to do cross validation. |
|
|
|
01:04:35.280 --> 01:04:37.610 |
|
So essentially you can use the |
|
|
|
01:04:37.610 --> 01:04:39.190 |
|
libraries for your models as well. |
|
|
|
01:04:39.190 --> 01:04:40.390 |
|
So you just need to choose the |
|
|
|
01:04:40.390 --> 01:04:42.100 |
|
parameters of your models and then try |
|
|
|
01:04:42.100 --> 01:04:43.569 |
|
to get the best performance you can. |
|
|
|
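NOTE
A sketch of library-based cross-validation for model selection with scikit-learn, assuming the dataframe from the earlier sketch and keeping only numeric features for simplicity (categorical columns such as island or sex would need encoding first):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = df.select_dtypes("number")    # numeric features only, for simplicity
y = df["species"]

for n_estimators in [10, 50, 200]:
    clf = RandomForestClassifier(n_estimators=n_estimators)
    scores = cross_val_score(clf, X, y, cv=5)    # 5-fold cross-validation accuracy
    print(n_estimators, round(scores.mean(), 3))
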
01:04:47.330 --> 01:04:49.180 |
|
Then the stretch goals are to improve |
|
|
|
01:04:49.180 --> 01:04:52.020 |
|
the MNIST results using MLPs, to find a second |
|
|
|
01:04:52.020 --> 01:04:54.330 |
|
rule for classifying Gentoos. |
|
|
|
01:04:55.050 --> 01:04:57.660 |
|
And then this one is positional |
|
|
|
01:04:57.660 --> 01:05:00.765 |
|
encoding, which is a way of like |
|
|
|
01:05:00.765 --> 01:05:03.130 |
|
encoding positions that lets networks |
|
|
|
01:05:03.130 --> 01:05:05.170 |
|
work better on it, but I won't go into |
|
|
|
01:05:05.170 --> 01:05:06.490 |
|
details there since we haven't talked |
|
|
|
01:05:06.490 --> 01:05:07.070 |
|
about networks. |
|
|
|
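NOTE
For reference, a sketch of the standard sinusoidal positional encoding (sin/cos features at multiple frequencies); whether the stretch goal uses exactly this form is an assumption:

import numpy as np

def positional_encoding(positions, d_model=16):
    # Map scalar positions to d_model-dimensional sin/cos features.
    pe = np.zeros((len(positions), d_model))
    for i in range(0, d_model, 2):
        freq = 1.0 / (10000 ** (i / d_model))
        pe[:, i] = np.sin(positions * freq)
        pe[:, i + 1] = np.cos(positions * freq)
    return pe

print(positional_encoding(np.arange(5)).shape)   # (5, 16)
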
01:05:09.040 --> 01:05:11.270 |
|
Any questions about homework 2? |
|
|
|
01:05:14.740 --> 01:05:16.100 |
|
There will be, yes. |
|
|
|
01:05:17.910 --> 01:05:18.190 |
|
OK. |
|
|
|
01:05:29.410 --> 01:05:29.700 |
|
No. |
|
|
|
01:05:29.700 --> 01:05:31.484 |
|
It says there that you don't need to |
|
|
|
01:05:31.484 --> 01:05:31.893 |
|
answer them. |
|
|
|
01:05:31.893 --> 01:05:34.470 |
|
You don't need to report on them. |
|
|
|
01:05:34.470 --> 01:05:36.450 |
|
So you should answer them in your head |
|
|
|
01:05:36.450 --> 01:05:37.936 |
|
and you'll learn more that way, but you |
|
|
|
01:05:37.936 --> 01:05:39.220 |
|
don't need to provide the answer. |
|
|
|
01:05:40.190 --> 01:05:40.710 |
|
Yeah. |
|
|
|
01:05:43.900 --> 01:05:44.230 |
|
Why? |
|
|
|
01:05:47.670 --> 01:05:48.830 |
|
Will not make a cost. |
|
|
|
01:05:51.690 --> 01:05:53.280 |
|
No, it won't hurt you either. |
|
|
|
01:05:54.650 --> 01:05:54.910 |
|
Yeah. |
|
|
|
01:05:55.930 --> 01:05:56.740 |
|
You're not required. |
|
|
|
01:05:56.740 --> 01:05:58.397 |
|
You're only required to fill out what's |
|
|
|
01:05:58.397 --> 01:05:59.172 |
|
in the template. |
|
|
|
01:05:59.172 --> 01:06:01.880 |
|
So sometimes I say to do like slightly |
|
|
|
01:06:01.880 --> 01:06:03.406 |
|
more than what's in the template. |
|
|
|
01:06:03.406 --> 01:06:05.300 |
|
The template is basically to show that |
|
|
|
01:06:05.300 --> 01:06:07.226 |
|
you've done it, so sometimes you can |
|
|
|
01:06:07.226 --> 01:06:08.520 |
|
show that you've done it without |
|
|
|
01:06:08.520 --> 01:06:09.840 |
|
providing all the details. |
|
|
|
01:06:09.840 --> 01:06:10.220 |
|
So. |
|
|
|
01:06:16.180 --> 01:06:17.810 |
|
So the question is, can you resubmit |
|
|
|
01:06:17.810 --> 01:06:18.570 |
|
the assignment? |
|
|
|
01:06:18.570 --> 01:06:20.363 |
|
I wouldn't really recommend it. |
|
|
|
01:06:20.363 --> 01:06:21.176 |
|
You would get. |
|
|
|
01:06:21.176 --> 01:06:23.570 |
|
So the way that it works is that at the |
|
|
|
01:06:23.570 --> 01:06:25.653 |
|
time that the TA, it's mainly the TA |
|
|
|
01:06:25.653 --> 01:06:27.459 |
|
who is grading, so at the time that the |
|
|
|
01:06:27.460 --> 01:06:28.250 |
|
TA is grading. |
|
|
|
01:06:29.270 --> 01:06:31.060 |
|
Whatever is submitted last will be |
|
|
|
01:06:31.060 --> 01:06:31.480 |
|
graded. |
|
|
|
01:06:32.390 --> 01:06:34.930 |
|
And whatever, like with whatever late |
|
|
|
01:06:34.930 --> 01:06:36.950 |
|
days have accrued for that |
|
|
|
01:06:36.950 --> 01:06:37.360 |
|
submission, |
|
|
|
01:06:37.360 --> 01:06:40.140 |
|
if it's late. So you can resubmit, but |
|
|
|
01:06:40.140 --> 01:06:41.590 |
|
then once they've graded, then it's |
|
|
|
01:06:41.590 --> 01:06:43.270 |
|
graded and then you can't resubmit |
|
|
|
01:06:43.270 --> 01:06:43.640 |
|
anymore. |
|
|
|
01:06:46.300 --> 01:06:47.150 |
|
There were. |
|
|
|
01:06:47.150 --> 01:06:48.910 |
|
We basically assume that if it's past |
|
|
|
01:06:48.910 --> 01:06:50.530 |
|
the deadline and you've submitted, then |
|
|
|
01:06:50.530 --> 01:06:54.580 |
|
we can grade it, and so it might get graded. And |
|
|
|
01:06:54.580 --> 01:06:56.750 |
|
generally if you want to get extra |
|
|
|
01:06:56.750 --> 01:06:57.170 |
|
points. |
|
|
|
01:06:57.900 --> 01:06:59.330 |
|
I would just recommend moving on to |
|
|
|
01:06:59.330 --> 01:07:01.053 |
|
homework two and do extra points for |
|
|
|
01:07:01.053 --> 01:07:02.430 |
|
homework two rather than getting stuck |
|
|
|
01:07:02.430 --> 01:07:03.925 |
|
on homework one and getting late days |
|
|
|
01:07:03.925 --> 01:07:06.040 |
|
and then having trouble getting |
|
|
|
01:07:06.040 --> 01:07:07.250 |
|
homework 2 done. |
|
|
|
01:07:13.630 --> 01:07:16.730 |
|
All right, so the things to remember |
|
|
|
01:07:16.730 --> 01:07:17.420 |
|
from this class. |
|
|
|
01:07:18.180 --> 01:07:20.180 |
|
Ensembles improve accuracy and |
|
|
|
01:07:20.180 --> 01:07:22.325 |
|
confidence estimates by reducing the |
|
|
|
01:07:22.325 --> 01:07:23.990 |
|
bias and/or the variance. |
|
|
|
01:07:23.990 --> 01:07:25.730 |
|
And there's like this really important |
|
|
|
01:07:25.730 --> 01:07:28.100 |
|
principle that test error can be |
|
|
|
01:07:28.100 --> 01:07:30.690 |
|
decomposed into variance, bias and |
|
|
|
01:07:30.690 --> 01:07:31.670 |
|
irreducible noise. |
|
|
|
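NOTE
Written out, the decomposition referred to here (with y = f(x) + noise, \hat{f} the learned model, and the expectation taken over training sets and noise) is, in LaTeX:

\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
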
01:07:32.680 --> 01:07:33.970 |
|
And trees and random |
|
|
|
01:07:33.970 --> 01:07:35.870 |
|
forests are really powerful and widely |
|
|
|
01:07:35.870 --> 01:07:38.000 |
|
applicable classifiers and regressors. |
|
|
|
01:07:39.990 --> 01:07:43.440 |
|
So in the next class I'm going to talk |
|
|
|
01:07:43.440 --> 01:07:45.765 |
|
about SVMs, support vector machines, |
|
|
|
01:07:45.765 --> 01:07:48.910 |
|
which were a very popular approach, and |
|
|
|
01:07:48.910 --> 01:07:50.830 |
|
stochastic gradient descent, which is a |
|
|
|
01:07:50.830 --> 01:07:52.310 |
|
method to optimize them that also |
|
|
|
01:07:52.310 --> 01:07:54.245 |
|
applies to neural nets and deep nets. |
|
|
|
01:07:54.245 --> 01:07:56.300 |
|
So thank you, I'll see you on Thursday. |
|
|
|
|
|
|
|