|
WEBVTT Kind: captions; Language: en-US |
|
|
|
NOTE |
|
Created on 2024-02-07T20:52:10.2470009Z by ClassTranscribe |
|
|
|
00:01:22.340 --> 00:01:22.750 |
|
Good morning. |
|
|
|
00:01:24.260 --> 00:01:27.280 |
|
Alright, so I'm going to just first |
|
|
|
00:01:27.280 --> 00:01:29.738 |
|
finish up what I was going
|
|
|
00:01:29.738 --> 00:01:31.660 |
|
to cover at the end of the last lecture |
|
|
|
00:01:31.660 --> 00:01:32.980 |
|
about KNN.
|
|
|
00:01:33.640 --> 00:01:36.550 |
|
And then I'll talk about probabilities |
|
|
|
00:01:36.550 --> 00:01:37.540 |
|
and Naive Bayes. |
|
|
|
00:01:38.260 --> 00:01:39.940 |
|
And so I wanted to give an example of |
|
|
|
00:01:39.940 --> 00:01:41.930 |
|
how KNN is used in practice.
|
|
|
00:01:42.530 --> 00:01:44.880 |
|
Here's one example of using it for face |
|
|
|
00:01:44.880 --> 00:01:45.920 |
|
recognition. |
|
|
|
00:01:46.750 --> 00:01:48.480 |
|
A lot of times when it's used in |
|
|
|
00:01:48.480 --> 00:01:50.030 |
|
practice, there's a lot of feature |
|
|
|
00:01:50.030 --> 00:01:51.780 |
|
learning that goes on ahead of the |
|
|
|
00:01:51.780 --> 00:01:52.588 |
|
nearest neighbor. |
|
|
|
00:01:52.588 --> 00:01:54.510 |
|
So nearest neighbor itself is really |
|
|
|
00:01:54.510 --> 00:01:55.125 |
|
simple. |
|
|
|
00:01:55.125 --> 00:01:58.530 |
|
Its efficacy depends on learning good
|
|
|
00:01:58.530 --> 00:02:00.039 |
|
representations so that
|
|
|
00:02:00.800 --> 00:02:02.640 |
|
data points that are near each other
|
|
|
00:02:02.640 --> 00:02:04.410 |
|
actually have similar labels. |
|
|
|
00:02:05.450 --> 00:02:07.385 |
|
Here's one example. |
|
|
|
00:02:07.385 --> 00:02:10.550 |
|
They want to try to be able to |
|
|
|
00:02:10.550 --> 00:02:12.330 |
|
recognize whether two faces are the |
|
|
|
00:02:12.330 --> 00:02:13.070 |
|
same person. |
|
|
|
00:02:13.820 --> 00:02:16.460 |
|
And so the method is that you detect
|
|
|
00:02:16.460 --> 00:02:18.940 |
|
facial features and then use those |
|
|
|
00:02:18.940 --> 00:02:21.630 |
|
feature detections to align the image |
|
|
|
00:02:21.630 --> 00:02:23.300 |
|
so that the face looks more frontal. |
|
|
|
00:02:24.060 --> 00:02:26.480 |
|
Then they use a CNN, a convolutional
|
|
|
00:02:26.480 --> 00:02:29.240 |
|
neural network, to train features that
|
|
|
00:02:29.240 --> 00:02:32.600 |
|
will be good for recognizing faces. |
|
|
|
00:02:32.600 --> 00:02:34.360 |
|
And the way they did that is that they |
|
|
|
00:02:34.360 --> 00:02:37.950 |
|
first collected hundreds of Faces from |
|
|
|
00:02:37.950 --> 00:02:40.300 |
|
a few thousand different people. |
|
|
|
00:02:40.300 --> 00:02:41.680 |
|
I think it was their employees of |
|
|
|
00:02:41.680 --> 00:02:42.250 |
|
Facebook. |
|
|
|
00:02:43.030 --> 00:02:46.420 |
|
And they trained a classifier to say |
|
|
|
00:02:46.420 --> 00:02:48.970 |
|
which, given a face, which of these |
|
|
|
00:02:48.970 --> 00:02:50.960 |
|
people does the face belong to. |
|
|
|
00:02:52.030 --> 00:02:54.340 |
|
And from that, they learn a |
|
|
|
00:02:54.340 --> 00:02:55.210 |
|
representation.
|
|
|
00:02:55.210 --> 00:02:57.030 |
|
Those classifiers aren't very useful, |
|
|
|
00:02:57.030 --> 00:02:59.300 |
|
because nobody's interested in,
|
|
|
00:02:59.300 --> 00:03:00.230 |
|
given a face,
|
|
|
00:03:00.230 --> 00:03:01.843 |
|
which of the Facebook employees it is.
|
|
|
00:03:01.843 --> 00:03:02.914 |
|
What they want to know is:
|
|
|
00:03:02.914 --> 00:03:04.932 |
|
is it you? They want to
|
|
|
00:03:04.932 --> 00:03:07.460 |
|
organize your photo album or see
|
|
|
00:03:07.460 --> 00:03:08.800 |
|
whether you've been tagged in another |
|
|
|
00:03:08.800 --> 00:03:09.960 |
|
photo, or something like that.
|
|
|
00:03:10.630 --> 00:03:12.050 |
|
And so then they throw out the |
|
|
|
00:03:12.050 --> 00:03:13.980 |
|
Classifier and they just use the |
|
|
|
00:03:13.980 --> 00:03:16.280 |
|
feature representation that was learned |
|
|
|
00:03:16.280 --> 00:03:21.070 |
|
and use nearest neighbor to identify a |
|
|
|
00:03:21.070 --> 00:03:22.510 |
|
person that's been detected in a |
|
|
|
00:03:22.510 --> 00:03:23.090 |
|
photograph. |
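
NOTE
To make the matching step concrete, here is a minimal sketch (not Facebook's actual code) of
nearest-neighbor identification, assuming each detected face has already been mapped to a
learned feature vector by the pretrained CNN:
  import numpy as np
  def identify(query_vec, gallery_vecs, gallery_names):
      # gallery_vecs: one feature vector per known face (e.g., 4096-D), stacked as rows
      dists = np.linalg.norm(gallery_vecs - query_vec, axis=1)  # distance to every stored face
      nearest = int(np.argmin(dists))                           # 1-nearest neighbor
      return gallery_names[nearest], dists[nearest]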
|
|
|
00:03:24.830 --> 00:03:26.540 |
|
So in their paper, they showed that |
|
|
|
00:03:26.540 --> 00:03:28.565 |
|
this performs similarly to humans in |
|
|
|
00:03:28.565 --> 00:03:30.470 |
|
this data set called Labeled Faces in the
|
|
|
00:03:30.470 --> 00:03:31.970 |
|
Wild, where you're trying to recognize
|
|
|
00:03:31.970 --> 00:03:32.560 |
|
celebrities. |
|
|
|
00:03:34.140 --> 00:03:35.770 |
|
But it can be used for many things. |
|
|
|
00:03:35.770 --> 00:03:37.516 |
|
So you can organize photo albums, you |
|
|
|
00:03:37.516 --> 00:03:40.360 |
|
can detect faces and then you try to |
|
|
|
00:03:40.360 --> 00:03:41.970 |
|
match faces across the photos.
|
|
|
00:03:41.970 --> 00:03:44.175 |
|
So then you can organize like which |
|
|
|
00:03:44.175 --> 00:03:46.320 |
|
photos have a particular person. |
|
|
|
00:03:47.070 --> 00:03:49.950 |
|
Again, you can identify celebrities
|
|
|
00:03:49.950 --> 00:03:51.860 |
|
or famous people by building up a |
|
|
|
00:03:51.860 --> 00:03:54.919 |
|
database of faces of famous people. |
|
|
|
00:03:55.870 --> 00:03:58.110 |
|
And you can also alert somebody
|
|
|
00:03:58.110 --> 00:04:00.100 |
|
if somebody else uploads a photo of |
|
|
|
00:04:00.100 --> 00:04:00.330 |
|
them. |
|
|
|
00:04:00.330 --> 00:04:02.922 |
|
So you can see if somebody uploads a |
|
|
|
00:04:02.922 --> 00:04:05.364 |
|
photo, then you can detect faces, you |
|
|
|
00:04:05.364 --> 00:04:07.830 |
|
can see what their friends network is, |
|
|
|
00:04:07.830 --> 00:04:10.056 |
|
see which of their friends' faces
|
|
|
00:04:10.056 --> 00:04:12.220 |
|
have been uploaded, and then detect the
|
|
|
00:04:12.220 --> 00:04:14.330 |
|
other users whose faces have been |
|
|
|
00:04:14.330 --> 00:04:16.580 |
|
uploaded and ask them for permission to |
|
|
|
00:04:16.580 --> 00:04:17.930 |
|
like make this photo public. |
|
|
|
00:04:19.750 --> 00:04:22.020 |
|
So this algorithm is actually used by |
|
|
|
00:04:22.020 --> 00:04:22.560 |
|
Facebook. |
|
|
|
00:04:22.560 --> 00:04:24.340 |
|
It has been for several years. |
|
|
|
00:04:24.340 --> 00:04:28.640 |
|
They're limiting some of its use more |
|
|
|
00:04:28.640 --> 00:04:30.544 |
|
recently, but
|
|
|
00:04:30.544 --> 00:04:32.010 |
|
it's been used really heavily.
|
|
|
00:04:32.680 --> 00:04:34.410 |
|
And of course they have expanded |
|
|
|
00:04:34.410 --> 00:04:36.365 |
|
training data because whenever anybody |
|
|
|
00:04:36.365 --> 00:04:37.940 |
|
uploads photos then they can |
|
|
|
00:04:37.940 --> 00:04:40.353 |
|
automatically detect them and add them |
|
|
|
00:04:40.353 --> 00:04:42.360 |
|
to the database. |
|
|
|
00:04:42.360 --> 00:04:45.150 |
|
So here the use of KNN is important
|
|
|
00:04:45.150 --> 00:04:47.220 |
|
because KNN doesn't require any |
|
|
|
00:04:47.220 --> 00:04:47.490 |
|
training. |
|
|
|
00:04:47.490 --> 00:04:49.295 |
|
So every time somebody uploads a new |
|
|
|
00:04:49.295 --> 00:04:50.930 |
|
face you can update the model just by |
|
|
|
00:04:50.930 --> 00:04:54.430 |
|
adding this 4096-dimensional
|
|
|
00:04:54.430 --> 00:04:56.646 |
|
feature vector that corresponds to the |
|
|
|
00:04:56.646 --> 00:05:00.230 |
|
face, and then use it, based on
|
|
|
00:05:00.230 --> 00:05:02.550 |
|
the friend networks, to
|
|
|
00:05:02.910 --> 00:05:04.840 |
|
recognize faces that are associated
|
|
|
00:05:04.840 --> 00:05:05.410 |
|
with somebody. |
|
|
|
00:05:07.530 --> 00:05:11.270 |
|
I won't take time to discuss it now, |
|
|
|
00:05:11.270 --> 00:05:13.473 |
|
but it's worth thinking about some of |
|
|
|
00:05:13.473 --> 00:05:15.710 |
|
the consequences of the way that the |
|
|
|
00:05:15.710 --> 00:05:17.888 |
|
algorithm was trained and the way that |
|
|
|
00:05:17.888 --> 00:05:18.620 |
|
it's deployed. |
|
|
|
00:05:18.620 --> 00:05:19.600 |
|
So for example. |
|
|
|
00:05:20.510 --> 00:05:22.680 |
|
If you think about that, it was that |
|
|
|
00:05:22.680 --> 00:05:24.650 |
|
the initial Features were learned on |
|
|
|
00:05:24.650 --> 00:05:26.030 |
|
Facebook employees. |
|
|
|
00:05:26.030 --> 00:05:27.440 |
|
That's not a very
|
|
|
00:05:28.070 --> 00:05:29.630 |
|
representative
|
|
|
00:05:29.630 --> 00:05:32.120 |
|
demographic of the US; the employees
|
|
|
00:05:32.120 --> 00:05:35.000 |
|
tend to be younger and
|
|
|
00:05:35.490 --> 00:05:38.446 |
|
probably skew towards male, might skew
|
|
|
00:05:38.446 --> 00:05:40.210 |
|
towards certain ethnicities. |
|
|
|
00:05:40.820 --> 00:05:43.210 |
|
And so the algorithm may be much better
|
|
|
00:05:43.210 --> 00:05:45.030 |
|
at recognizing some kinds of Faces than |
|
|
|
00:05:45.030 --> 00:05:46.016 |
|
other faces. |
|
|
|
00:05:46.016 --> 00:05:47.628 |
|
And then, of course, there's lots and |
|
|
|
00:05:47.628 --> 00:05:49.495 |
|
lots of ethical issues that surround |
|
|
|
00:05:49.495 --> 00:05:51.830 |
|
the use of face recognition and its |
|
|
|
00:05:51.830 --> 00:05:52.610 |
|
applications. |
|
|
|
00:05:53.930 --> 00:05:55.550 |
|
Of course, like in many ways, this is |
|
|
|
00:05:55.550 --> 00:05:58.150 |
|
used to help people maintain privacy. |
|
|
|
00:05:58.150 --> 00:06:00.080 |
|
But even the use of recognition at all |
|
|
|
00:06:00.080 --> 00:06:03.120 |
|
raises privacy concerns, and that's why |
|
|
|
00:06:03.120 --> 00:06:04.860 |
|
they've limited the use to some extent. |
|
|
|
00:06:06.470 --> 00:06:08.060 |
|
So just something to think about. |
|
|
|
00:06:09.980 --> 00:06:13.430 |
|
So just to recap KNN, the key
|
|
|
00:06:13.430 --> 00:06:16.480 |
|
assumption of KNN is that
|
|
|
00:06:16.480 --> 00:06:18.260 |
|
samples with similar
|
|
|
00:06:18.260 --> 00:06:19.730 |
|
features will have similar output |
|
|
|
00:06:19.730 --> 00:06:20.695 |
|
predictions. |
|
|
|
00:06:20.695 --> 00:06:23.290 |
|
And for most of the distance measures
|
|
|
00:06:23.290 --> 00:06:25.590 |
|
you implicitly assume that all the
|
|
|
00:06:25.590 --> 00:06:27.200 |
|
dimensions are equally important. |
|
|
|
00:06:27.200 --> 00:06:29.820 |
|
So it requires some kind of scaling or |
|
|
|
00:06:29.820 --> 00:06:31.500 |
|
learning to be really effective. |
|
|
|
00:06:33.540 --> 00:06:35.620 |
|
The parameters are just the data |
|
|
|
00:06:35.620 --> 00:06:36.080 |
|
itself. |
|
|
|
00:06:36.080 --> 00:06:37.870 |
|
You don't really have to learn any kind |
|
|
|
00:06:37.870 --> 00:06:40.526 |
|
of statistics of the data. |
|
|
|
00:06:40.526 --> 00:06:42.270 |
|
The data are the parameters. |
|
|
|
00:06:43.820 --> 00:06:46.160 |
|
The design choices are mainly the choice of K;
|
|
|
00:06:46.160 --> 00:06:48.130 |
|
if you have a higher K then you get
|
|
|
00:06:48.130 --> 00:06:49.360 |
|
smoother predictions.
|
|
|
00:06:50.340 --> 00:06:51.730 |
|
You can decide how you're going to |
|
|
|
00:06:51.730 --> 00:06:54.400 |
|
combine predictions if K is greater |
|
|
|
00:06:54.400 --> 00:06:56.750 |
|
than one, usually it's just voting or |
|
|
|
00:06:56.750 --> 00:06:57.280 |
|
averaging. |
|
|
|
00:06:58.610 --> 00:07:00.920 |
|
You can try to design the features and |
|
|
|
00:07:00.920 --> 00:07:03.450 |
|
that's where things can get a lot more |
|
|
|
00:07:03.450 --> 00:07:03.930 |
|
creative. |
|
|
|
00:07:04.680 --> 00:07:06.770 |
|
And you can choose a distance function.
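
NOTE
As a rough illustration of those design choices (K, how votes are combined, and the distance
function), a minimal KNN classifier might look like this sketch; the names are mine, not the
lecture's:
  import numpy as np
  from collections import Counter
  def knn_predict(X_train, y_train, x_query, k=3):
      dists = np.linalg.norm(X_train - x_query, axis=1)   # Euclidean distance to each sample
      neighbors = np.argsort(dists)[:k]                    # indices of the K nearest samples
      votes = Counter(y_train[i] for i in neighbors)
      return votes.most_common(1)[0][0]                    # majority vote (mean for regression)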
|
|
|
00:07:08.900 --> 00:07:12.370 |
|
So KNN is useful in many cases.
|
|
|
00:07:12.370 --> 00:07:14.520 |
|
So if you have very few examples per |
|
|
|
00:07:14.520 --> 00:07:16.605 |
|
class then it can be applied even if |
|
|
|
00:07:16.605 --> 00:07:17.320 |
|
you just have one. |
|
|
|
00:07:18.080 --> 00:07:20.290 |
|
It can also work if you have many |
|
|
|
00:07:20.290 --> 00:07:21.560 |
|
Examples per class. |
|
|
|
00:07:22.200 --> 00:07:24.910 |
|
It's best if the features are all |
|
|
|
00:07:24.910 --> 00:07:26.960 |
|
roughly equally important, because KNN
|
|
|
00:07:26.960 --> 00:07:28.540 |
|
itself doesn't really learn which |
|
|
|
00:07:28.540 --> 00:07:29.449 |
|
features are important. |
|
|
|
00:07:31.570 --> 00:07:33.910 |
|
It's good if the training data is |
|
|
|
00:07:33.910 --> 00:07:34.585 |
|
changing frequently. |
|
|
|
00:07:34.585 --> 00:07:37.520 |
|
In the face recognition example,
|
|
|
00:07:37.520 --> 00:07:38.830 |
|
there's no way that Facebook will |
|
|
|
00:07:38.830 --> 00:07:41.160 |
|
collect everybody's Faces up front. |
|
|
|
00:07:41.160 --> 00:07:43.030 |
|
People keep on joining and leaving the |
|
|
|
00:07:43.030 --> 00:07:45.480 |
|
social network, and so they
|
|
|
00:07:45.480 --> 00:07:47.080 |
|
don't want to have to keep retraining |
|
|
|
00:07:47.080 --> 00:07:49.850 |
|
models every time somebody uploads an
|
|
|
00:07:49.850 --> 00:07:52.005 |
|
image with a new face in it or tags a |
|
|
|
00:07:52.005 --> 00:07:52.615 |
|
new face. |
|
|
|
00:07:52.615 --> 00:07:54.990 |
|
And so the ability to instantly update |
|
|
|
00:07:54.990 --> 00:07:56.330 |
|
your model is very important. |
|
|
|
00:07:58.160 --> 00:07:59.850 |
|
You can apply it to classification or |
|
|
|
00:07:59.850 --> 00:08:01.740 |
|
regression whether you have discrete or |
|
|
|
00:08:01.740 --> 00:08:04.570 |
|
continuous values, and it's most
|
|
|
00:08:04.570 --> 00:08:06.020 |
|
powerful when you do some feature |
|
|
|
00:08:06.020 --> 00:08:08.180 |
|
learning as an upfront operation. |
|
|
|
00:08:10.130 --> 00:08:12.210 |
|
So there's cases where it has its |
|
|
|
00:08:12.210 --> 00:08:13.330 |
|
downsides though. |
|
|
|
00:08:13.330 --> 00:08:15.650 |
|
One is that if you have a lot of |
|
|
|
00:08:15.650 --> 00:08:18.250 |
|
examples that are available per class, |
|
|
|
00:08:18.250 --> 00:08:20.360 |
|
then usually training a Logistic |
|
|
|
00:08:20.360 --> 00:08:23.690 |
|
regressor or other linear classifier will
|
|
|
00:08:23.690 --> 00:08:26.200 |
|
outperform because it's able to learn |
|
|
|
00:08:26.200 --> 00:08:27.990 |
|
the importance of different Features. |
|
|
|
00:08:28.950 --> 00:08:32.125 |
|
Also, K&N requires that you store all |
|
|
|
00:08:32.125 --> 00:08:34.692 |
|
the training data and that may require |
|
|
|
00:08:34.692 --> 00:08:38.153 |
|
a lot of storage and it requires a lot |
|
|
|
00:08:38.153 --> 00:08:40.145 |
|
of computation, and that you have to |
|
|
|
00:08:40.145 --> 00:08:42.200 |
|
compare each new input to all of the |
|
|
|
00:08:42.200 --> 00:08:43.750 |
|
inputs in your training data. |
|
|
|
00:08:43.750 --> 00:08:45.525 |
|
So in the case of Facebook for example, |
|
|
|
00:08:45.525 --> 00:08:47.745 |
|
when somebody uploads a photo and
|
|
|
00:08:47.745 --> 00:08:49.780 |
|
they detect a face in somebody's image, |
|
|
|
00:08:49.780 --> 00:08:51.520 |
|
they don't need to compare it to the |
|
|
|
00:08:51.520 --> 00:08:53.410 |
|
other, like 2 billion Facebook users. |
|
|
|
00:08:53.410 --> 00:08:55.176 |
|
They would just compare it to people in
|
|
|
00:08:55.176 --> 00:08:56.570 |
|
the person's social network, which will |
|
|
|
00:08:56.570 --> 00:08:58.900 |
|
be a much smaller number of faces.
|
|
|
00:08:58.970 --> 00:09:01.240 |
|
So they're able to limit the |
|
|
|
00:09:01.240 --> 00:09:02.190 |
|
computation that way. |
|
|
|
00:09:05.940 --> 00:09:08.760 |
|
And then finally, to recap what we |
|
|
|
00:09:08.760 --> 00:09:12.180 |
|
learned on Thursday, there's a basic |
|
|
|
00:09:12.180 --> 00:09:14.420 |
|
machine learning process, which is that |
|
|
|
00:09:14.420 --> 00:09:16.170 |
|
you've got training data, validation |
|
|
|
00:09:16.170 --> 00:09:17.260 |
|
data, and test data.
|
|
|
00:09:18.160 --> 00:09:19.980 |
|
Given the training data, which are |
|
|
|
00:09:19.980 --> 00:09:22.730 |
|
pairs of Features and labels, you fit |
|
|
|
00:09:22.730 --> 00:09:25.060 |
|
the parameters of your Model. |
|
|
|
00:09:25.060 --> 00:09:26.950 |
|
Then you use the validation data to
|
|
|
00:09:26.950 --> 00:09:28.670 |
|
check how good the Model is and maybe |
|
|
|
00:09:28.670 --> 00:09:29.805 |
|
check many models. |
|
|
|
00:09:29.805 --> 00:09:31.960 |
|
You choose the best one and then you |
|
|
|
00:09:31.960 --> 00:09:33.590 |
|
get your final estimate of performance |
|
|
|
00:09:33.590 --> 00:09:34.410 |
|
on the test data.
|
|
|
00:09:36.790 --> 00:09:39.670 |
|
We talked about KNN, which is a simple
|
|
|
00:09:39.670 --> 00:09:42.040 |
|
but effective classifier and regressor
|
|
|
00:09:42.040 --> 00:09:44.140 |
|
that predicts the label of the most |
|
|
|
00:09:44.140 --> 00:09:45.540 |
|
similar training example.
|
|
|
00:09:46.770 --> 00:09:49.110 |
|
And then we talked about kind of |
|
|
|
00:09:49.110 --> 00:09:51.110 |
|
patterns of error and what causes |
|
|
|
00:09:51.110 --> 00:09:51.580 |
|
errors. |
|
|
|
00:09:51.580 --> 00:09:53.780 |
|
So it's important to remember that as |
|
|
|
00:09:53.780 --> 00:09:56.069 |
|
you get more training
|
|
|
00:09:56.070 --> 00:09:57.830 |
|
samples, you would expect that fitting |
|
|
|
00:09:57.830 --> 00:09:58.962 |
|
the training data gets harder. |
|
|
|
00:09:58.962 --> 00:10:01.500 |
|
So your training error will tend to go up, while
|
|
|
00:10:01.500 --> 00:10:03.390 |
|
your error on the test data will get
|
|
|
00:10:03.390 --> 00:10:05.535 |
|
lower because the training data better |
|
|
|
00:10:05.535 --> 00:10:07.010 |
|
represents the test data or better
|
|
|
00:10:07.010 --> 00:10:08.430 |
|
represents the full distribution. |
|
|
|
00:10:09.770 --> 00:10:11.840 |
|
And there's many reasons why at the end |
|
|
|
00:10:11.840 --> 00:10:13.250 |
|
of training your Algorithm, you're |
|
|
|
00:10:13.250 --> 00:10:14.720 |
|
still going to have error in most |
|
|
|
00:10:14.720 --> 00:10:15.220 |
|
cases. |
|
|
|
00:10:15.880 --> 00:10:17.400 |
|
It could be that the problem is |
|
|
|
00:10:17.400 --> 00:10:20.940 |
|
intrinsically difficult, or it's |
|
|
|
00:10:20.940 --> 00:10:22.590 |
|
impossible to have 0 error. |
|
|
|
00:10:22.590 --> 00:10:24.232 |
|
It could be that your model has
|
|
|
00:10:24.232 --> 00:10:24.845 |
|
limited power. |
|
|
|
00:10:24.845 --> 00:10:27.370 |
|
It could be that your Model has plenty |
|
|
|
00:10:27.370 --> 00:10:29.015 |
|
of power, but you have limited data so |
|
|
|
00:10:29.015 --> 00:10:30.710 |
|
you can't Estimate the parameters |
|
|
|
00:10:30.710 --> 00:10:31.290 |
|
exactly. |
|
|
|
00:10:32.050 --> 00:10:33.100 |
|
And it could be that there's |
|
|
|
00:10:33.100 --> 00:10:34.550 |
|
differences between the training and test
|
|
|
00:10:34.550 --> 00:10:35.280 |
|
distributions.
|
|
|
00:10:37.020 --> 00:10:38.980 |
|
And then finally it's important to |
|
|
|
00:10:38.980 --> 00:10:41.315 |
|
remember that this Model fitting, the |
|
|
|
00:10:41.315 --> 00:10:42.980 |
|
model design and fitting is just one |
|
|
|
00:10:42.980 --> 00:10:44.750 |
|
part of a larger process of collecting
|
|
|
00:10:44.750 --> 00:10:46.600 |
|
data and fitting it into an |
|
|
|
00:10:46.600 --> 00:10:47.610 |
|
application. |
|
|
|
00:10:47.610 --> 00:10:51.230 |
|
So, in Facebook's case
|
|
|
00:10:51.230 --> 00:10:54.160 |
|
for example, they had a pretraining stage,
|
|
|
00:10:54.160 --> 00:10:56.663 |
|
which is like training a classifier, and
|
|
|
00:10:56.663 --> 00:10:58.852 |
|
then they
|
|
|
00:10:58.852 --> 00:11:01.370 |
|
use it in a different way, as a nearest
|
|
|
00:11:01.370 --> 00:11:05.320 |
|
neighbor recognizer on their pool of |
|
|
|
00:11:05.320 --> 00:11:06.010 |
|
user data. |
|
|
|
00:11:07.070 --> 00:11:10.384 |
|
And so they're kind of building a model |
|
|
|
00:11:10.384 --> 00:11:11.212 |
|
and using it:
|
|
|
00:11:11.212 --> 00:11:13.700 |
|
they're building a model one way and
|
|
|
00:11:13.700 --> 00:11:15.150 |
|
then using it in a different way. |
|
|
|
00:11:15.150 --> 00:11:16.660 |
|
So often that's the case that you have |
|
|
|
00:11:16.660 --> 00:11:17.590 |
|
to kind of be creative
|
|
|
00:11:18.360 --> 00:11:20.580 |
|
about how you collect data and how you
|
|
|
00:11:20.580 --> 00:11:23.800 |
|
can get the model that you need to |
|
|
|
00:11:23.800 --> 00:11:24.860 |
|
solve your application. |
|
|
|
00:11:28.010 --> 00:11:30.033 |
|
Alright, so now I'm going to move on to |
|
|
|
00:11:30.033 --> 00:11:31.640 |
|
the main topic of today's lecture, |
|
|
|
00:11:31.640 --> 00:11:34.880 |
|
which is probabilities and the Naive
|
|
|
00:11:34.880 --> 00:11:35.935 |
|
Bayes classifier.
|
|
|
00:11:35.935 --> 00:11:39.690 |
|
So the Naive Bayes classifier is,
|
|
|
00:11:39.690 --> 00:11:41.220 |
|
unlike nearest neighbor, not
|
|
|
00:11:41.990 --> 00:11:44.020 |
|
usually like the final approach that
|
|
|
00:11:44.020 --> 00:11:46.080 |
|
somebody takes, but it's sometimes a |
|
|
|
00:11:46.080 --> 00:11:49.460 |
|
piece of how somebody is
|
|
|
00:11:49.460 --> 00:11:51.210 |
|
estimating probabilities as part of |
|
|
|
00:11:51.210 --> 00:11:51.870 |
|
their approach. |
|
|
|
00:11:52.690 --> 00:11:55.610 |
|
And it's a good introduction to |
|
|
|
00:11:55.610 --> 00:11:56.630 |
|
Probabilistic models. |
|
|
|
00:11:59.220 --> 00:12:02.525 |
|
So with the nearest neighbor |
|
|
|
00:12:02.525 --> 00:12:04.670 |
|
classifier, that's an instance based |
|
|
|
00:12:04.670 --> 00:12:05.960 |
|
Classifier, which means that you're |
|
|
|
00:12:05.960 --> 00:12:07.800 |
|
assigning labels just based on matching |
|
|
|
00:12:07.800 --> 00:12:08.515 |
|
other instances. |
|
|
|
00:12:08.515 --> 00:12:11.160 |
|
The instances, the data, are the model.
|
|
|
00:12:12.260 --> 00:12:14.590 |
|
Now we're going to start talking about |
|
|
|
00:12:14.590 --> 00:12:15.910 |
|
Probabilistic models. |
|
|
|
00:12:15.910 --> 00:12:18.290 |
|
In a Probabilistic Model, you choose |
|
|
|
00:12:18.290 --> 00:12:21.060 |
|
the label that is most likely given the |
|
|
|
00:12:21.060 --> 00:12:21.630 |
|
Features. |
|
|
|
00:12:21.630 --> 00:12:23.390 |
|
So that's kind of an intuitive thing to |
|
|
|
00:12:23.390 --> 00:12:25.510 |
|
do if you want to know. |
|
|
|
00:12:26.520 --> 00:12:28.690 |
|
If you're looking at an image and
|
|
|
00:12:28.690 --> 00:12:30.390 |
|
trying to classify it into a Digit, it |
|
|
|
00:12:30.390 --> 00:12:32.074 |
|
makes sense that you would assign it to |
|
|
|
00:12:32.074 --> 00:12:34.000 |
|
the Digit that is most likely given the |
|
|
|
00:12:34.000 --> 00:12:35.940 |
|
features, given the pixel intensities.
|
|
|
00:12:36.610 --> 00:12:38.170 |
|
But of course, like the challenge is |
|
|
|
00:12:38.170 --> 00:12:40.030 |
|
modeling this probability function, how |
|
|
|
00:12:40.030 --> 00:12:42.590 |
|
do you Model the probability of the |
|
|
|
00:12:42.590 --> 00:12:44.000 |
|
label given the data? |
|
|
|
00:12:45.340 --> 00:12:47.520 |
|
So this is just a very compact way of |
|
|
|
00:12:47.520 --> 00:12:48.135 |
|
writing that. |
|
|
|
00:12:48.135 --> 00:12:50.270 |
|
So I have Y star is the predicted |
|
|
|
00:12:50.270 --> 00:12:53.150 |
|
label, and that's equal to the argmax |
|
|
|
00:12:53.150 --> 00:12:53.836 |
|
over Y. |
|
|
|
00:12:53.836 --> 00:12:55.770 |
|
So it's the Y that maximizes |
|
|
|
00:12:55.770 --> 00:12:56.950 |
|
probability of Y given X. |
|
|
|
00:12:56.950 --> 00:12:59.250 |
|
So you assign the label that's most |
|
|
|
00:12:59.250 --> 00:13:00.590 |
|
likely given the data. |
|
|
|
00:13:03.170 --> 00:13:05.210 |
|
So I just want to do a very brief |
|
|
|
00:13:05.210 --> 00:13:08.240 |
|
review of some probability things. |
|
|
|
00:13:08.240 --> 00:13:10.730 |
|
Hopefully this looks familiar, but it's |
|
|
|
00:13:10.730 --> 00:13:12.920 |
|
still useful to refresh on it. |
|
|
|
00:13:13.720 --> 00:13:15.290 |
|
So first Joint and conditional |
|
|
|
00:13:15.290 --> 00:13:16.260 |
|
probability. |
|
|
|
00:13:16.260 --> 00:13:19.040 |
|
If you say probability of X and Y, then that
|
|
|
00:13:19.040 --> 00:13:20.900 |
|
means the probability that both of |
|
|
|
00:13:20.900 --> 00:13:24.180 |
|
those values are true at the same time, |
|
|
|
00:13:24.180 --> 00:13:25.030 |
|
so. |
|
|
|
00:13:26.330 --> 00:13:28.400 |
|
So if you say like the probability that |
|
|
|
00:13:28.400 --> 00:13:29.290 |
|
it's sunny. |
|
|
|
00:13:29.980 --> 00:13:32.540 |
|
And it's rainy, then that's probably a |
|
|
|
00:13:32.540 --> 00:13:33.910 |
|
very low probability, because those |
|
|
|
00:13:33.910 --> 00:13:35.700 |
|
usually don't happen at the same time. |
|
|
|
00:13:35.700 --> 00:13:37.635 |
|
The probability that both X and Y are true
|
|
|
00:13:37.635 --> 00:13:40.396 |
|
is equal to the probability of X
|
|
|
00:13:40.396 --> 00:13:42.179 |
|
given Y times probability of Y. |
|
|
|
00:13:42.180 --> 00:13:45.725 |
|
So probability of X given Y is the |
|
|
|
00:13:45.725 --> 00:13:48.700 |
|
probability that X is true given the |
|
|
|
00:13:48.700 --> 00:13:50.956 |
|
known values of Y times the probability |
|
|
|
00:13:50.956 --> 00:13:52.280 |
|
that Y is true. |
|
|
|
00:13:52.970 --> 00:13:54.789 |
|
And that's also equal to probability of |
|
|
|
00:13:54.790 --> 00:13:56.769 |
|
Y given X times probability of X. |
|
|
|
00:13:56.770 --> 00:13:59.450 |
|
So you can take a Joint probability and |
|
|
|
00:13:59.450 --> 00:14:01.580 |
|
turn it into a conditional probability |
|
|
|
00:14:01.580 --> 00:14:04.370 |
|
times the probability of the remaining
|
|
|
00:14:04.370 --> 00:14:06.190 |
|
variables, the conditioning variables.
|
|
|
00:14:07.010 --> 00:14:08.660 |
|
And you can apply that down a chain. |
|
|
|
00:14:08.660 --> 00:14:11.341 |
|
So probability of A, B, C is probability of
|
|
|
00:14:11.341 --> 00:14:13.531 |
|
A given B, C times probability of B given
|
|
|
00:14:13.531 --> 00:14:14.900 |
|
C times probability of C. |
|
|
|
00:14:17.320 --> 00:14:18.730 |
|
And then it's important to remember |
|
|
|
00:14:18.730 --> 00:14:21.110 |
|
Bayes rule, which is a way of relating |
|
|
|
00:14:21.110 --> 00:14:23.160 |
|
probability of X given Y and |
|
|
|
00:14:23.160 --> 00:14:24.869 |
|
probability of Y given X. |
|
|
|
00:14:25.520 --> 00:14:27.440 |
|
So probability of X given Y
|
|
|
00:14:28.100 --> 00:14:30.516 |
|
is equal to probability of Y given X
|
|
|
00:14:30.516 --> 00:14:32.222 |
|
times probability of X over probability |
|
|
|
00:14:32.222 --> 00:14:35.090 |
|
of Y and you can get that by saying |
|
|
|
00:14:35.090 --> 00:14:38.595 |
|
probability of X given Y is probability |
|
|
|
00:14:38.595 --> 00:14:41.599 |
|
of X and Y over probability of Y.
|
|
|
00:14:41.600 --> 00:14:43.730 |
|
So what was done here is you multiply |
|
|
|
00:14:43.730 --> 00:14:45.910 |
|
this by probability of Y and then |
|
|
|
00:14:45.910 --> 00:14:47.771 |
|
divide it by probability of Y and |
|
|
|
00:14:47.771 --> 00:14:49.501 |
|
probability of X given Y times |
|
|
|
00:14:49.501 --> 00:14:51.519 |
|
probability of Y is probability of X and Y.
|
|
|
00:14:52.600 --> 00:14:54.390 |
|
And then the probability of X and Y is
|
|
|
00:14:54.390 --> 00:14:56.030 |
|
broken out into probability of Y given |
|
|
|
00:14:56.030 --> 00:14:57.209 |
|
X times probability of X. |
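
NOTE
Written out, the rules just stated are:
  P(X, Y) = P(X | Y) P(Y) = P(Y | X) P(X)          (joint and conditional)
  P(A, B, C) = P(A | B, C) P(B | C) P(C)           (chain rule)
  P(X | Y) = P(Y | X) P(X) / P(Y)                  (Bayes' rule)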
|
|
|
00:14:59.150 --> 00:15:01.040 |
|
So often it's the case that you want to |
|
|
|
00:15:01.040 --> 00:15:03.484 |
|
kind of switch things: you have the label and
|
|
|
00:15:03.484 --> 00:15:06.339 |
|
you want to know the likelihood of the |
|
|
|
00:15:06.339 --> 00:15:08.350 |
|
features, and you have like a
|
|
|
00:15:08.350 --> 00:15:10.544 |
|
likelihood for that, but you want the
|
|
|
00:15:10.544 --> 00:15:11.830 |
|
likelihood the other way, the
|
|
|
00:15:11.830 --> 00:15:13.654 |
|
probability of the label given the |
|
|
|
00:15:13.654 --> 00:15:13.868 |
|
Features. |
|
|
|
00:15:13.868 --> 00:15:15.529 |
|
And so you use Bayes rule to kind of |
|
|
|
00:15:15.530 --> 00:15:17.550 |
|
turn the tables on your likelihood |
|
|
|
00:15:17.550 --> 00:15:17.950 |
|
function. |
|
|
|
00:15:20.620 --> 00:15:25.810 |
|
So, using these rules of
|
|
|
00:15:25.810 --> 00:15:26.530 |
|
probability. |
|
|
|
00:15:27.210 --> 00:15:29.830 |
|
We can show that if I want to find the |
|
|
|
00:15:29.830 --> 00:15:33.250 |
|
Y that maximizes the likelihood of the |
|
|
|
00:15:33.250 --> 00:15:34.690 |
|
label given the data. |
|
|
|
00:15:35.370 --> 00:15:38.490 |
|
That's equivalent to finding the Y that |
|
|
|
00:15:38.490 --> 00:15:41.240 |
|
maximizes the likelihood of the data |
|
|
|
00:15:41.240 --> 00:15:44.520 |
|
given the label times the probability |
|
|
|
00:15:44.520 --> 00:15:45.210 |
|
of the label. |
|
|
|
00:15:45.920 --> 00:15:47.690 |
|
So in other words, if you wanted to |
|
|
|
00:15:47.690 --> 00:15:50.030 |
|
say, well, what is the probability that |
|
|
|
00:15:50.030 --> 00:15:53.550 |
|
my face is Derek given my facial |
|
|
|
00:15:53.550 --> 00:15:54.220 |
|
features? |
|
|
|
00:15:54.950 --> 00:15:56.100 |
|
That's the top. |
|
|
|
00:15:56.100 --> 00:15:58.323 |
|
That's equivalent to saying what's the |
|
|
|
00:15:58.323 --> 00:16:00.400 |
|
probability that it's me without |
|
|
|
00:16:00.400 --> 00:16:02.635 |
|
looking at the Features times the |
|
|
|
00:16:02.635 --> 00:16:04.270 |
|
probability of my Features given that |
|
|
|
00:16:04.270 --> 00:16:04.870 |
|
it's me? |
|
|
|
00:16:04.870 --> 00:16:05.980 |
|
Those are the same. |
|
|
|
00:16:06.330 --> 00:16:09.770 |
|
The Y that maximizes that is
|
|
|
00:16:09.770 --> 00:16:11.150 |
|
going to be the same so. |
|
|
|
00:16:12.990 --> 00:16:15.230 |
|
And the reason for that is derived down |
|
|
|
00:16:15.230 --> 00:16:15.720 |
|
here. |
|
|
|
00:16:15.720 --> 00:16:17.473 |
|
So I can take Y given X. |
|
|
|
00:16:17.473 --> 00:16:20.686 |
|
So argmax of Y given X is the same as argmax
|
|
|
00:16:20.686 --> 00:16:23.029 |
|
of Y given X times probability of X. |
|
|
|
00:16:23.780 --> 00:16:26.000 |
|
And the reason for that is just that |
|
|
|
00:16:26.000 --> 00:16:27.880 |
|
probability of X doesn't depend on Y. |
|
|
|
00:16:27.880 --> 00:16:31.140 |
|
So I can multiply this thing
|
|
|
00:16:31.140 --> 00:16:33.092 |
|
in the argmax by anything that doesn't |
|
|
|
00:16:33.092 --> 00:16:35.410 |
|
depend on Y and it's going to be |
|
|
|
00:16:35.410 --> 00:16:37.890 |
|
unchanged because it's just going to. |
|
|
|
00:16:38.870 --> 00:16:41.460 |
|
The Y that maximizes it will be the
|
|
|
00:16:41.460 --> 00:16:41.780 |
|
same. |
|
|
|
00:16:43.410 --> 00:16:44.940 |
|
So then I turn that. |
|
|
|
00:16:45.530 --> 00:16:47.810 |
|
I turned that into the joint of Y and X and
|
|
|
00:16:47.810 --> 00:16:48.940 |
|
then I broke it out again. |
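
NOTE
The derivation being described, written out step by step:
  y* = argmax_y P(Y | X)
     = argmax_y P(Y | X) P(X)        (P(X) does not depend on Y)
     = argmax_y P(X, Y)
     = argmax_y P(X | Y) P(Y)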
|
|
|
00:16:49.900 --> 00:16:51.300 |
|
Right, so the reason why this is |
|
|
|
00:16:51.300 --> 00:16:54.430 |
|
important is that I can choose to |
|
|
|
00:16:54.430 --> 00:16:57.562 |
|
either Model directly the probability |
|
|
|
00:16:57.562 --> 00:17:00.659 |
|
of the label given the data, or I can |
|
|
|
00:17:00.659 --> 00:17:02.231 |
|
choose to model the probability of the
|
|
|
00:17:02.231 --> 00:17:03.129 |
|
data given the label. |
|
|
|
00:17:03.910 --> 00:17:06.172 |
|
In Naive Bayes, we're going to model
|
|
|
00:17:06.172 --> 00:17:07.950 |
|
the probability of the data given the label,
|
|
|
00:17:07.950 --> 00:17:09.510 |
|
and then in the next class we'll talk |
|
|
|
00:17:09.510 --> 00:17:11.425 |
|
about logistic regression where we try |
|
|
|
00:17:11.425 --> 00:17:12.930 |
|
to directly Model the probability of |
|
|
|
00:17:12.930 --> 00:17:14.000 |
|
the label given the data. |
|
|
|
00:17:22.090 --> 00:17:24.760 |
|
All right, so let's just. |
|
|
|
00:17:26.170 --> 00:17:29.400 |
|
Do a simple probability exercise just |
|
|
|
00:17:29.400 --> 00:17:31.430 |
|
to kind of make sure
|
|
|
00:17:33.430 --> 00:17:34.730 |
|
that we get it.
|
|
|
00:17:37.010 --> 00:17:38.230 |
|
So let's say. |
|
|
|
00:17:39.620 --> 00:17:41.060 |
|
Here I have a feature. |
|
|
|
00:17:41.060 --> 00:17:41.970 |
|
Doesn't really matter what the |
|
|
|
00:17:41.970 --> 00:17:43.440 |
|
Features, but let's say that it's |
|
|
|
00:17:43.440 --> 00:17:45.233 |
|
whether something is larger than 10 pounds
|
|
|
00:17:45.233 --> 00:17:48.210 |
|
and I collected a bunch of different |
|
|
|
00:17:48.210 --> 00:17:50.530 |
|
animals, cats and dogs and measured |
|
|
|
00:17:50.530 --> 00:17:50.770 |
|
them. |
|
|
|
00:17:51.450 --> 00:17:53.130 |
|
And I want to train something that will |
|
|
|
00:17:53.130 --> 00:17:54.510 |
|
tell me whether or not something is a |
|
|
|
00:17:54.510 --> 00:17:54.810 |
|
cat. |
|
|
|
00:17:55.730 --> 00:17:57.370 |
|
And so. |
|
|
|
00:17:58.190 --> 00:18:00.985 |
|
Or a dog, and so I have like 40 |
|
|
|
00:18:00.985 --> 00:18:03.280 |
|
different cats and 45 different dogs, |
|
|
|
00:18:03.280 --> 00:18:04.860 |
|
and I measured whether or not they're |
|
|
|
00:18:04.860 --> 00:18:06.693 |
|
bigger than 10 pounds.
|
|
|
00:18:06.693 --> 00:18:10.270 |
|
So first, given this empirical |
|
|
|
00:18:10.270 --> 00:18:12.505 |
|
distribution, given these samples that |
|
|
|
00:18:12.505 --> 00:18:15.120 |
|
I have, what's the probability that Y |
|
|
|
00:18:15.120 --> 00:18:15.810 |
|
is a cat? |
|
|
|
00:18:22.430 --> 00:18:25.970 |
|
So it's actually 40 / 85 because it's |
|
|
|
00:18:25.970 --> 00:18:26.960 |
|
going to be. |
|
|
|
00:18:27.640 --> 00:18:29.030 |
|
Let me see if I can write on this. |
|
|
|
00:18:36.840 --> 00:18:37.330 |
|
OK. |
|
|
|
00:18:39.520 --> 00:18:40.460 |
|
That's not what I wanted. |
|
|
|
00:18:43.970 --> 00:18:45.500 |
|
If I can get the pen to work. |
|
|
|
00:18:48.610 --> 00:18:50.360 |
|
OK, it doesn't work that well. |
|
|
|
00:18:55.010 --> 00:18:56.250 |
|
OK, forget that. |
|
|
|
00:18:56.250 --> 00:18:57.420 |
|
Alright, I'll write it on the board. |
|
|
|
00:18:57.420 --> 00:18:59.639 |
|
So it's 40 / 85. |
|
|
|
00:19:01.780 --> 00:19:05.010 |
|
So it's 40 / (40 + 45).
|
|
|
00:19:05.920 --> 00:19:08.595 |
|
And that's because there's 40 cats and |
|
|
|
00:19:08.595 --> 00:19:09.888 |
|
there's 45 dogs. |
|
|
|
00:19:09.888 --> 00:19:13.040 |
|
So I take the count of all the cats and |
|
|
|
00:19:13.040 --> 00:19:14.970 |
|
divide it by the count of all the data |
|
|
|
00:19:14.970 --> 00:19:16.635 |
|
in total, all the cats and dogs. |
|
|
|
00:19:16.635 --> 00:19:17.860 |
|
So that's 40 / 85. |
|
|
|
00:19:18.580 --> 00:19:20.470 |
|
And what's the probability that Y is a |
|
|
|
00:19:20.470 --> 00:19:22.810 |
|
cat given that X is false? |
|
|
|
00:19:29.380 --> 00:19:31.510 |
|
So it's right? |
|
|
|
00:19:31.510 --> 00:19:34.240 |
|
So it's 15 / 20 or 3 / 4. |
|
|
|
00:19:34.240 --> 00:19:35.890 |
|
And that's because given that X is |
|
|
|
00:19:35.890 --> 00:19:37.620 |
|
false, I'm just in this one column |
|
|
|
00:19:37.620 --> 00:19:40.799 |
|
here, so it's 15 / 20.
|
|
|
00:19:42.090 --> 00:19:45.110 |
|
And what's the probability that X is |
|
|
|
00:19:45.110 --> 00:19:46.650 |
|
false given that Y is a cat? |
|
|
|
00:19:49.320 --> 00:19:51.570 |
|
Right, 15 / 40, because if I know that
|
|
|
00:19:51.570 --> 00:19:53.500 |
|
Y is a Cat, then I'm in the top row, so |
|
|
|
00:19:53.500 --> 00:19:55.590 |
|
it's just 15 divided by all the cats, |
|
|
|
00:19:55.590 --> 00:19:56.650 |
|
so 15 / 40. |
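
NOTE
The same three quantities can be read off the count table in code; the per-cell counts below
are filled in from the numbers quoted in the lecture (40 cats, 45 dogs, 20 animals under 10
pounds, 15 of which are cats):
  counts = {("cat", True): 25, ("cat", False): 15,    # cats over / under 10 pounds
            ("dog", True): 40, ("dog", False): 5}     # dogs over / under 10 pounds
  total = sum(counts.values())                                               # 85
  p_cat = sum(v for (y, x), v in counts.items() if y == "cat") / total       # 40/85
  p_cat_given_small = counts[("cat", False)] / (counts[("cat", False)]
                                                + counts[("dog", False)])    # 15/20
  p_small_given_cat = counts[("cat", False)] / (counts[("cat", False)]
                                                + counts[("cat", True)])     # 15/40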
|
|
|
00:19:58.320 --> 00:20:00.737 |
|
OK, and it's important to remember that |
|
|
|
00:20:00.737 --> 00:20:03.119 |
|
Y given X is different than X given Y. |
|
|
|
00:20:05.110 --> 00:20:08.276 |
|
Right, so some other simple rules of |
|
|
|
00:20:08.276 --> 00:20:08.572 |
|
probability. |
|
|
|
00:20:08.572 --> 00:20:11.150 |
|
One is the law of total probability. |
|
|
|
00:20:11.150 --> 00:20:13.060 |
|
That is, if you sum over all the values |
|
|
|
00:20:13.060 --> 00:20:16.020 |
|
of a variable, then the sum of those |
|
|
|
00:20:16.020 --> 00:20:17.630 |
|
probabilities is equal to 1. |
|
|
|
00:20:18.240 --> 00:20:20.450 |
|
And if this were a continuous variable, |
|
|
|
00:20:20.450 --> 00:20:21.840 |
|
it would just be an integral over the |
|
|
|
00:20:21.840 --> 00:20:23.716 |
|
domain of X over all the values of X |
|
|
|
00:20:23.716 --> 00:20:26.180 |
|
and then the integral over P of X is |
|
|
|
00:20:26.180 --> 00:20:26.690 |
|
equal to 1. |
|
|
|
00:20:27.980 --> 00:20:29.470 |
|
Then I've got Marginalization. |
|
|
|
00:20:29.470 --> 00:20:31.990 |
|
So if I have a joint probability of two |
|
|
|
00:20:31.990 --> 00:20:34.150 |
|
variables and I want to get rid of one |
|
|
|
00:20:34.150 --> 00:20:34.520 |
|
of them. |
|
|
|
00:20:35.280 --> 00:20:37.630 |
|
Then I take this sum over all the |
|
|
|
00:20:37.630 --> 00:20:39.290 |
|
values of one of the variables.
|
|
|
00:20:39.290 --> 00:20:41.052 |
|
In this case it's the sum over all the |
|
|
|
00:20:41.052 --> 00:20:41.900 |
|
values of X of P
|
|
|
00:20:42.570 --> 00:20:46.268 |
|
of X and Y, and that's going to be equal to
|
|
|
00:20:46.268 --> 00:20:46.910 |
|
P of Y. |
|
|
|
00:20:53.440 --> 00:20:55.380 |
|
And then finally independence. |
|
|
|
00:20:55.380 --> 00:20:59.691 |
|
So A is independent of B if and only if |
|
|
|
00:20:59.691 --> 00:21:02.414 |
|
the probability of A and B is equal to the
|
|
|
00:21:02.414 --> 00:21:04.115 |
|
probability of A times the probability
|
|
|
00:21:04.115 --> 00:21:04.660 |
|
of B. |
|
|
|
00:21:05.430 --> 00:21:07.974 |
|
Or another way to write it, what
|
|
|
00:21:07.974 --> 00:21:10.142 |
|
this implies is
|
|
|
00:21:10.142 --> 00:21:12.500 |
|
that probability of A given B is equal
|
|
|
00:21:12.500 --> 00:21:13.890 |
|
to probability of A.
|
|
|
00:21:13.890 --> 00:21:15.680 |
|
So if I just divide both sides by |
|
|
|
00:21:15.680 --> 00:21:17.250 |
|
probability of B then I get that. |
|
|
|
00:21:18.160 --> 00:21:20.855 |
|
Or probability of B given A equals |
|
|
|
00:21:20.855 --> 00:21:22.010 |
|
probability of B. |
|
|
|
00:21:22.010 --> 00:21:24.150 |
|
So of these, the top one
|
|
|
00:21:24.150 --> 00:21:25.700 |
|
might not be something that pops into
|
|
|
00:21:25.700 --> 00:21:26.420 |
|
your head right away. |
|
|
|
00:21:26.420 --> 00:21:28.450 |
|
It's not necessarily as intuitive, but |
|
|
|
00:21:28.450 --> 00:21:30.001 |
|
these are pretty intuitive that |
|
|
|
00:21:30.001 --> 00:21:32.376 |
|
probability of A given B equals
|
|
|
00:21:32.376 --> 00:21:33.564 |
|
probability of A.
|
|
|
00:21:33.564 --> 00:21:36.050 |
|
So in other words, whether or not A is
|
|
|
00:21:36.050 --> 00:21:37.470 |
|
true doesn't depend on B at all. |
|
|
|
00:21:38.720 --> 00:21:40.430 |
|
And whether or not B is true doesn't |
|
|
|
00:21:40.430 --> 00:21:42.360 |
|
depend on A at all, and then you can |
|
|
|
00:21:42.360 --> 00:21:44.810 |
|
easily get to the one up there just by |
|
|
|
00:21:44.810 --> 00:21:47.410 |
|
multiplying here both sides by |
|
|
|
00:21:47.410 --> 00:21:48.100 |
|
probability of A.
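
NOTE
In symbols, the three rules just reviewed:
  sum over x of P(X = x) = 1                                   (total probability)
  sum over x of P(X = x, Y = y) = P(Y = y)                     (marginalization)
  A independent of B  <=>  P(A, B) = P(A) P(B)  <=>  P(A | B) = P(A)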
|
|
|
00:21:56.140 --> 00:21:59.180 |
|
Alright, so in some of the slides |
|
|
|
00:21:59.180 --> 00:22:00.650 |
|
there's going to be a bunch of like |
|
|
|
00:22:00.650 --> 00:22:02.760 |
|
indices, so I just wanted to try to be |
|
|
|
00:22:02.760 --> 00:22:04.370 |
|
consistent in the way that I use them. |
|
|
|
00:22:05.030 --> 00:22:07.674 |
|
And also like usually verbally say what |
|
|
|
00:22:07.674 --> 00:22:10.543 |
|
the what the variables mean, but when I |
|
|
|
00:22:10.543 --> 00:22:14.300 |
|
say x_i, I mean the ith feature, so i is a
|
|
|
00:22:14.300 --> 00:22:15.085 |
|
feature index. |
|
|
|
00:22:15.085 --> 00:22:18.619 |
|
When I say x_n, I mean the nth sample, so
|
|
|
00:22:18.620 --> 00:22:20.520 |
|
n is the sample index, and y_n
|
|
|
00:22:20.520 --> 00:22:21.590 |
|
would be the nth label. |
|
|
|
00:22:22.370 --> 00:22:24.993 |
|
So if I say x_n,i, then that's the
|
|
|
00:22:24.993 --> 00:22:26.760 |
|
ith feature of the nth sample.
|
|
|
00:22:26.760 --> 00:22:29.763 |
|
So for digits, for example, it would be the
|
|
|
00:22:29.763 --> 00:22:33.720 |
|
ith pixel of the nth digit example.
|
|
|
00:22:35.070 --> 00:22:37.580 |
|
I use this delta here, with
|
|
|
00:22:37.580 --> 00:22:39.900 |
|
some expression inside, to indicate that
|
|
|
00:22:39.900 --> 00:22:42.780 |
|
it returns one if the
|
|
|
00:22:42.780 --> 00:22:44.850 |
|
expression inside it is true and 0 |
|
|
|
00:22:44.850 --> 00:22:45.410 |
|
otherwise. |
|
|
|
00:22:46.200 --> 00:22:48.110 |
|
And I'll use v for a feature value.
|
|
|
00:22:55.320 --> 00:22:57.900 |
|
So if I want to Estimate the |
|
|
|
00:22:57.900 --> 00:22:59.830 |
|
probabilities of some function, I can |
|
|
|
00:22:59.830 --> 00:23:00.578 |
|
just do it by counting. |
|
|
|
00:23:00.578 --> 00:23:02.760 |
|
So if I want to say what is the |
|
|
|
00:23:02.760 --> 00:23:04.950 |
|
probability that X equals some value |
|
|
|
00:23:04.950 --> 00:23:07.600 |
|
and I have capital N samples, then I
|
|
|
00:23:07.600 --> 00:23:09.346 |
|
can just take a sum over all the |
|
|
|
00:23:09.346 --> 00:23:11.350 |
|
samples and count for how many of them |
|
|
|
00:23:11.350 --> 00:23:14.030 |
|
x_n equals v. So that's kind of intuitive
|
|
|
00:23:14.030 --> 00:23:14.480 |
|
if I have. |
|
|
|
00:23:15.870 --> 00:23:17.750 |
|
If I have a month full of days and I |
|
|
|
00:23:17.750 --> 00:23:19.280 |
|
want to say what's the probability that |
|
|
|
00:23:19.280 --> 00:23:21.610 |
|
one of those days is sunny, then I can |
|
|
|
00:23:21.610 --> 00:23:23.809 |
|
just take a sum over all the days: I can
|
|
|
00:23:23.810 --> 00:23:25.370 |
|
count how many sunny days there were |
|
|
|
00:23:25.370 --> 00:23:26.908 |
|
divided by the total number of days and |
|
|
|
00:23:26.908 --> 00:23:27.930 |
|
that gives me an estimate.
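
NOTE
That counting estimate is one line of code; a small sketch with made-up weather data (not
from the lecture):
  days = ["sunny", "rainy", "sunny", "cloudy", "sunny"]         # hypothetical observations
  p_sunny = sum(1 for d in days if d == "sunny") / len(days)    # count / N = 3/5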
|
|
|
00:23:31.930 --> 00:23:35.340 |
|
But what if I have 100 variables? |
|
|
|
00:23:35.340 --> 00:23:36.380 |
|
So if I have. |
|
|
|
00:23:37.310 --> 00:23:39.220 |
|
For example, in the digits case I have |
|
|
|
00:23:39.220 --> 00:23:42.840 |
|
784 different pixel intensities.
|
|
|
00:23:43.710 --> 00:23:46.350 |
|
And there's no way I can count over all |
|
|
|
00:23:46.350 --> 00:23:48.222 |
|
possible combinations of pixel |
|
|
|
00:23:48.222 --> 00:23:49.000 |
|
intensities, right? |
|
|
|
00:23:49.000 --> 00:23:51.470 |
|
Even if I were to turn them into binary |
|
|
|
00:23:51.470 --> 00:23:56.070 |
|
values, there would be 2 to the 784 |
|
|
|
00:23:56.070 --> 00:23:58.107 |
|
different combinations of pixel |
|
|
|
00:23:58.107 --> 00:23:58.670 |
|
intensities. |
|
|
|
00:23:58.670 --> 00:24:01.635 |
|
So you would need like data samples |
|
|
|
00:24:01.635 --> 00:24:03.520 |
|
that are equal to like the number of atoms
|
|
|
00:24:03.520 --> 00:24:05.300 |
|
in the universe or something like that |
|
|
|
00:24:05.300 --> 00:24:07.415 |
|
in order to even begin to Estimate it. |
|
|
|
00:24:07.415 --> 00:24:08.900 |
|
And that would only be
|
|
|
00:24:08.900 --> 00:24:10.460 |
|
giving you very few samples per |
|
|
|
00:24:10.460 --> 00:24:11.050 |
|
combination. |
|
|
|
00:24:12.860 --> 00:24:15.407 |
|
So obviously, like jointly modeling a |
|
|
|
00:24:15.407 --> 00:24:17.799 |
|
whole bunch of different, the |
|
|
|
00:24:17.800 --> 00:24:19.431 |
|
probability of a whole bunch of |
|
|
|
00:24:19.431 --> 00:24:20.740 |
|
different variables is usually |
|
|
|
00:24:20.740 --> 00:24:23.490 |
|
impossible, and even approximating it
|
|
|
00:24:23.490 --> 00:24:24.880 |
|
is very challenging.
|
|
|
00:24:24.880 --> 00:24:26.260 |
|
You have to try to solve for the |
|
|
|
00:24:26.260 --> 00:24:28.036 |
|
dependency structures and then solve |
|
|
|
00:24:28.036 --> 00:24:30.236 |
|
for different combinations of variables |
|
|
|
00:24:30.236 --> 00:24:30.699 |
|
and. |
|
|
|
00:24:31.550 --> 00:24:33.740 |
|
And then worry about the dependencies |
|
|
|
00:24:33.740 --> 00:24:35.040 |
|
that aren't fully accounted for. |
|
|
|
00:24:35.880 --> 00:24:37.670 |
|
And so it's just really difficult to |
|
|
|
00:24:37.670 --> 00:24:40.160 |
|
estimate the probability of all your |
|
|
|
00:24:40.160 --> 00:24:41.810 |
|
Features given the label. |
|
|
|
00:24:42.900 --> 00:24:43.610 |
|
Jointly. |
|
|
|
00:24:44.440 --> 00:24:47.540 |
|
And so that's where the Naive Bayes model
|
|
|
00:24:47.540 --> 00:24:48.240 |
|
comes in. |
|
|
|
00:24:48.240 --> 00:24:50.430 |
|
It makes a greatly simplifying
|
|
|
00:24:50.430 --> 00:24:51.060 |
|
assumption. |
|
|
|
00:24:51.730 --> 00:24:54.132 |
|
Which is that all of the features are |
|
|
|
00:24:54.132 --> 00:24:56.010 |
|
independent given the label, so it |
|
|
|
00:24:56.010 --> 00:24:57.480 |
|
doesn't mean the features are
|
|
|
00:24:57.480 --> 00:24:57.840 |
|
independent
|
|
|
00:24:57.940 --> 00:25:00.200 |
|
unconditionally, but they're
|
|
|
00:25:00.200 --> 00:25:02.370 |
|
independent given the label, so. |
|
|
|
00:25:03.550 --> 00:25:05.716 |
|
So because they're
|
|
|
00:25:05.716 --> 00:25:06.149 |
|
independent. |
|
|
|
00:25:06.150 --> 00:25:08.400 |
|
Remember that probability of A and B equals
|
|
|
00:25:08.400 --> 00:25:11.173 |
|
probability of A times probability of
|
|
|
00:25:11.173 --> 00:25:12.603 |
|
B if they're independent. |
|
|
|
00:25:12.603 --> 00:25:15.160 |
|
So probability of X, that's like a joint
|
|
|
00:25:15.160 --> 00:25:17.920 |
|
X, all the features, given Y, is equal to
|
|
|
00:25:17.920 --> 00:25:20.501 |
|
the product over all the features of |
|
|
|
00:25:20.501 --> 00:25:22.919 |
|
probability of each feature given Y. |
|
|
|
00:25:24.880 --> 00:25:28.866 |
|
And so then I can make my Classifier as |
|
|
|
00:25:28.866 --> 00:25:30.450 |
|
the Y star. |
|
|
|
00:25:30.450 --> 00:25:32.880 |
|
The most likely label is the one that |
|
|
|
00:25:32.880 --> 00:25:35.415 |
|
maximizes this joint probability:
|
|
|
00:25:35.415 --> 00:25:37.930 |
|
probability of X given Y times |
|
|
|
00:25:37.930 --> 00:25:38.779 |
|
probability of Y. |
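
NOTE
In symbols, the Naive Bayes assumption and the resulting classifier:
  P(x_1, ..., x_D | Y) = product over i of P(x_i | Y)
  y* = argmax_y P(Y) * product over i of P(x_i | Y)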
|
|
|
00:25:39.810 --> 00:25:42.715 |
|
And this thing, the joint probability |
|
|
|
00:25:42.715 --> 00:25:44.985 |
|
of X given Y would be really hard to |
|
|
|
00:25:44.985 --> 00:25:45.240 |
|
Estimate. |
|
|
|
00:25:45.240 --> 00:25:47.490 |
|
You need tons of data, but this is not |
|
|
|
00:25:47.490 --> 00:25:49.120 |
|
so hard to Estimate because you're just |
|
|
|
00:25:49.120 --> 00:25:50.590 |
|
estimating the probability of 1 |
|
|
|
00:25:50.590 --> 00:25:51.590 |
|
variable at a time. |
|
|
|
00:25:57.200 --> 00:25:59.190 |
|
So for example if I. |
|
|
|
00:25:59.810 --> 00:26:01.900 |
|
In the digit example, this would be
|
|
|
00:26:01.900 --> 00:26:03.860 |
|
saying that I'm going to choose the
|
|
|
00:26:03.860 --> 00:26:07.310 |
|
label that maximizes the product of |
|
|
|
00:26:07.310 --> 00:26:09.220 |
|
likelihoods of each of the pixel |
|
|
|
00:26:09.220 --> 00:26:09.980 |
|
intensities. |
|
|
|
00:26:10.690 --> 00:26:12.555 |
|
So I'm just going to consider each |
|
|
|
00:26:12.555 --> 00:26:13.170 |
|
pixel. |
|
|
|
00:26:13.170 --> 00:26:15.170 |
|
How likely is each pixel to have its |
|
|
|
00:26:15.170 --> 00:26:16.959 |
|
intensity given the label? |
|
|
|
00:26:16.960 --> 00:26:18.230 |
|
And then I choose the label that |
|
|
|
00:26:18.230 --> 00:26:20.132 |
|
maximizes that, taking the product of |
|
|
|
00:26:20.132 --> 00:26:21.760 |
|
all those likelihoods over the
|
|
|
00:26:21.760 --> 00:26:22.140 |
|
pixels. |
|
|
|
00:26:23.210 --> 00:26:23.690 |
|
So. |
|
|
|
00:26:24.650 --> 00:26:26.880 |
|
Obviously it's not a perfect Model, |
|
|
|
00:26:26.880 --> 00:26:28.210 |
|
even if I know that. |
|
|
|
00:26:28.210 --> 00:26:30.610 |
|
If I'm given that it's a three, knowing |
|
|
|
00:26:30.610 --> 00:26:32.759 |
|
that one pixel has an intensity of 1 |
|
|
|
00:26:32.760 --> 00:26:33.920 |
|
makes it more likely that the |
|
|
|
00:26:33.920 --> 00:26:35.815 |
|
neighboring pixel has an intensity of
|
|
|
00:26:35.815 --> 00:26:36.240 |
|
1. |
|
|
|
00:26:36.240 --> 00:26:37.630 |
|
On the other hand, it's not a terrible |
|
|
|
00:26:37.630 --> 00:26:38.710 |
|
Model either. |
|
|
|
00:26:38.710 --> 00:26:41.028 |
|
If I know that it's a 3, then I have a |
|
|
|
00:26:41.028 --> 00:26:43.210 |
|
pretty good idea of the expected |
|
|
|
00:26:43.210 --> 00:26:45.177 |
|
intensity of each pixel, so I have a |
|
|
|
00:26:45.177 --> 00:26:46.503 |
|
pretty good idea of how likely each |
|
|
|
00:26:46.503 --> 00:26:47.920 |
|
pixel is to be a one or a zero. |
|
|
|
00:26:50.490 --> 00:26:51.780 |
|
In the case of the temperature |
|
|
|
00:26:51.780 --> 00:26:53.760 |
|
regression, we'll make a slightly
|
|
|
00:26:53.760 --> 00:26:55.040 |
|
different assumption. |
|
|
|
00:26:55.040 --> 00:26:57.736 |
|
So here we have continuous Features and |
|
|
|
00:26:57.736 --> 00:26:59.320 |
|
a continuous Prediction. |
|
|
|
00:27:00.030 --> 00:27:02.840 |
|
So we're going to assume that each |
|
|
|
00:27:02.840 --> 00:27:05.490 |
|
feature predicts the temperature
|
|
|
00:27:05.490 --> 00:27:07.690 |
|
we're trying to predict, tomorrow's
|
|
|
00:27:07.690 --> 00:27:10.160 |
|
Cleveland temperature with some offset |
|
|
|
00:27:10.160 --> 00:27:10.673 |
|
and variance. |
|
|
|
00:27:10.673 --> 00:27:13.100 |
|
So for example, if I know yesterday's |
|
|
|
00:27:13.100 --> 00:27:14.670 |
|
Cleveland temperature, then tomorrow's |
|
|
|
00:27:14.670 --> 00:27:16.633 |
|
Cleveland temperature is probably about |
|
|
|
00:27:16.633 --> 00:27:19.300 |
|
the same, but with some variance around |
|
|
|
00:27:19.300 --> 00:27:19.577 |
|
it. |
|
|
|
00:27:19.577 --> 00:27:21.239 |
|
If I know the Cleveland temperature |
|
|
|
00:27:21.240 --> 00:27:23.520 |
|
from three days ago, then tomorrow's is |
|
|
|
00:27:23.520 --> 00:27:25.732 |
|
also expected to be about the same but |
|
|
|
00:27:25.732 --> 00:27:26.525 |
|
with higher variance. |
|
|
|
00:27:26.525 --> 00:27:28.596 |
|
If I know the temperature of Austin, |
|
|
|
00:27:28.596 --> 00:27:30.590 |
|
TX, then probably Cleveland is a bit |
|
|
|
00:27:30.590 --> 00:27:31.819 |
|
colder with some variance. |
|
|
|
00:27:33.550 --> 00:27:34.940 |
|
And so I'm going to use just that |
|
|
|
00:27:34.940 --> 00:27:37.100 |
|
combination of individual predictions |
|
|
|
00:27:37.100 --> 00:27:38.480 |
|
to make my final prediction. |
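
NOTE
One way to write the per-feature assumption being described (my notation, not the slide's):
  P(y | x_i) = Normal(y; mean = x_i + b_i, variance = sigma_i^2)
where b_i is an offset learned for feature i (e.g., how much colder Cleveland runs than
Austin) and sigma_i^2 is larger for less informative features (e.g., the temperature from
three days ago).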
|
|
|
00:27:44.170 --> 00:27:48.680 |
|
So here is the Naive Bayes algorithm.
|
|
|
00:27:49.540 --> 00:27:53.250 |
|
For training, I Estimate the parameters |
|
|
|
00:27:53.250 --> 00:27:55.370 |
|
for each of my likelihood functions, |
|
|
|
00:27:55.370 --> 00:27:57.290 |
|
the probability of each feature given |
|
|
|
00:27:57.290 --> 00:27:57.910 |
|
the label. |
|
|
|
00:27:58.940 --> 00:28:01.878 |
|
And I Estimate the parameters for my |
|
|
|
00:28:01.878 --> 00:28:02.232 |
|
prior. |
|
|
|
00:28:02.232 --> 00:28:06.640 |
|
The prior is like my estimate, my
|
|
|
00:28:06.640 --> 00:28:08.370 |
|
likelihood of the label when I don't |
|
|
|
00:28:08.370 --> 00:28:10.180 |
|
know anything else, just before I look |
|
|
|
00:28:10.180 --> 00:28:11.200 |
|
at anything. |
|
|
|
00:28:11.200 --> 00:28:13.475 |
|
So the probability of the label. |
|
|
|
00:28:13.475 --> 00:28:14.770 |
|
And that's usually really easy to |
|
|
|
00:28:14.770 --> 00:28:15.140 |
|
Estimate. |
|
|
|
00:28:17.020 --> 00:28:19.280 |
|
And then at Prediction time, I'm going |
|
|
|
00:28:19.280 --> 00:28:22.970 |
|
to solve for the label that maximizes |
|
|
|
00:28:22.970 --> 00:28:26.330 |
|
the probability of X and Y, which with
|
|
|
00:28:26.330 --> 00:28:28.620 |
|
the Naive Bayes assumption is the |
|
|
|
00:28:28.620 --> 00:28:31.110 |
|
product over i of probability of x_i
|
|
|
00:28:31.110 --> 00:28:32.649 |
|
given Y times probability of Y. |
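
NOTE
A minimal sketch of the prediction step for binary features, assuming the prior and the
per-feature likelihoods have already been estimated (the names are mine, not the slide's);
logs are used so the long product of likelihoods stays numerically stable:
  import numpy as np
  def nb_predict(x, log_prior, log_theta, log_one_minus_theta):
      # x: binary feature vector (D,); log_prior: (K,)
      # log_theta[k, i] = log P(x_i = 1 | y = k); log_one_minus_theta[k, i] = log P(x_i = 0 | y = k)
      scores = log_prior + log_theta @ x + log_one_minus_theta @ (1 - x)
      return int(np.argmax(scores))   # y* = argmax_y P(y) * prod_i P(x_i | y)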
|
|
|
00:28:36.470 --> 00:28:40.455 |
|
The "naive" in Naive Bayes is just
|
|
|
00:28:40.455 --> 00:28:42.050 |
|
the independence assumption. |
|
|
|
00:28:42.050 --> 00:28:45.150 |
|
It's not an insult to Thomas Bayes that |
|
|
|
00:28:45.150 --> 00:28:46.890 |
|
he's an idiot or something. |
|
|
|
00:28:46.890 --> 00:28:49.970 |
|
It's just that we're going to make this |
|
|
|
00:28:49.970 --> 00:28:52.140 |
|
very simplifying assumption. |
|
|
|
00:28:58.170 --> 00:29:00.550 |
|
So all right, so the first thing we |
|
|
|
00:29:00.550 --> 00:29:02.710 |
|
have to deal with is how do we Estimate |
|
|
|
00:29:02.710 --> 00:29:03.590 |
|
this probability? |
|
|
|
00:29:03.590 --> 00:29:06.500 |
|
We want to get some probability of each |
|
|
|
00:29:06.500 --> 00:29:08.050 |
|
feature given the data. |
|
|
|
00:29:08.960 --> 00:29:10.990 |
|
And the basic principles are that you |
|
|
|
00:29:10.990 --> 00:29:12.909 |
|
want to choose parameters. |
|
|
|
00:29:12.910 --> 00:29:14.550 |
|
First you have to have a model for your |
|
|
|
00:29:14.550 --> 00:29:16.610 |
|
likelihood, and then you have to |
|
|
|
00:29:16.610 --> 00:29:19.394 |
|
maximize the parameters of that model |
|
|
|
00:29:19.394 --> 00:29:21.908 |
|
that you have to, sorry, choose the
|
|
|
00:29:21.908 --> 00:29:22.885 |
|
parameters of that model
|
|
|
00:29:22.885 --> 00:29:25.180 |
|
that make your training data most
|
|
|
00:29:25.180 --> 00:29:25.600 |
|
likely. |
|
|
|
00:29:25.600 --> 00:29:27.210 |
|
That's the main principle. |
|
|
|
00:29:27.210 --> 00:29:29.780 |
|
So if I say somebody says maximum |
|
|
|
00:29:29.780 --> 00:29:32.390 |
|
likelihood estimation, or MLE, that's
|
|
|
00:29:32.390 --> 00:29:34.190 |
|
like straight up maximizing the
|
|
|
00:29:34.190 --> 00:29:37.865 |
|
probability of the data given your |
|
|
|
00:29:37.865 --> 00:29:38.800 |
|
parameters in your model. |
|
|
|
00:29:40.320 --> 00:29:42.480 |
|
Sometimes that can result in |
|
|
|
00:29:42.480 --> 00:29:44.120 |
|
overconfident estimates. |
|
|
|
00:29:44.120 --> 00:29:46.210 |
|
So for example if I just have like. |
|
|
|
00:29:46.970 --> 00:29:47.800 |
|
If I. |
|
|
|
00:29:48.430 --> 00:29:51.810 |
|
If I have like 2 measurements, let's |
|
|
|
00:29:51.810 --> 00:29:53.470 |
|
say I want to know what's the average |
|
|
|
00:29:53.470 --> 00:29:56.044 |
|
weight of a bird and I just have two |
|
|
|
00:29:56.044 --> 00:29:58.480 |
|
birds, and I say it's probably like a |
|
|
|
00:29:58.480 --> 00:29:59.585 |
|
Gaussian distribution. |
|
|
|
00:29:59.585 --> 00:30:02.012 |
|
I can Estimate a mean and a variance |
|
|
|
00:30:02.012 --> 00:30:05.970 |
|
from those two birds, but that Estimate |
|
|
|
00:30:05.970 --> 00:30:07.105 |
|
could be like way off. |
|
|
|
00:30:07.105 --> 00:30:09.100 |
|
So often it's a good idea to have some |
|
|
|
00:30:09.100 --> 00:30:11.530 |
|
kind of Prior or to prevent the |
|
|
|
00:30:11.530 --> 00:30:12.780 |
|
variance from going too low. |
|
|
|
00:30:12.780 --> 00:30:14.740 |
|
So if I looked at two birds and I said |
|
|
|
00:30:14.740 --> 00:30:16.860 |
|
and they both happen to be like 47 |
|
|
|
00:30:16.860 --> 00:30:17.510 |
|
grams. |
|
|
|
00:30:17.870 --> 00:30:19.965 |
|
I probably wouldn't want to say that |
|
|
|
00:30:19.965 --> 00:30:22.966 |
|
the mean is 47 and the variance is 0, |
|
|
|
00:30:22.966 --> 00:30:25.170 |
|
because then I would be saying like if |
|
|
|
00:30:25.170 --> 00:30:27.090 |
|
there's another bird that has 48 grams, |
|
|
|
00:30:27.090 --> 00:30:28.550 |
|
that's like infinitely unlikely. |
|
|
|
00:30:28.550 --> 00:30:29.880 |
|
It's a 0 probability. |
|
|
|
00:30:29.880 --> 00:30:31.600 |
|
So often you want to have some kind of |
|
|
|
00:30:31.600 --> 00:30:34.270 |
|
Prior over your variables as well in |
|
|
|
00:30:34.270 --> 00:30:37.025 |
|
order to prevent likelihoods going to 0 |
|
|
|
00:30:37.025 --> 00:30:38.430 |
|
because you just didn't have enough |
|
|
|
00:30:38.430 --> 00:30:40.120 |
|
data to correctly Estimate them. |
|
|
|
00:30:40.930 --> 00:30:42.650 |
|
So it's like Warren Buffett says with |
|
|
|
00:30:42.650 --> 00:30:43.230 |
|
investing. |
|
|
|
00:30:43.850 --> 00:30:45.550 |
|
It's not just about maximizing the |
|
|
|
00:30:45.550 --> 00:30:47.690 |
|
expectation, it's also about making |
|
|
|
00:30:47.690 --> 00:30:48.890 |
|
sure there are no zeros. |
|
|
|
00:30:48.890 --> 00:30:50.190 |
|
Because if you have a zero and your |
|
|
|
00:30:50.190 --> 00:30:51.670 |
|
product of likelihoods, the whole thing |
|
|
|
00:30:51.670 --> 00:30:52.090 |
|
is 0. |
|
|
|
00:30:53.690 --> 00:30:55.995 |
|
And if you have a zero return on your
|
|
|
00:30:55.995 --> 00:30:57.900 |
|
whole investment at any point, your |
|
|
|
00:30:57.900 --> 00:30:59.330 |
|
whole bank account is 0. |
|
|
|
00:31:03.120 --> 00:31:06.550 |
|
All right, so we have so. |
|
|
|
00:31:06.920 --> 00:31:08.840 |
|
How do we Estimate P of X given Y given |
|
|
|
00:31:08.840 --> 00:31:09.340 |
|
the data? |
|
|
|
00:31:09.340 --> 00:31:10.980 |
|
It's always based on maximizing the |
|
|
|
00:31:10.980 --> 00:31:11.930 |
|
likelihood of the data. |
|
|
|
00:31:12.690 --> 00:31:14.360 |
|
Over your parameters, but you have |
|
|
|
00:31:14.360 --> 00:31:15.940 |
|
different solutions depending on your |
|
|
|
00:31:15.940 --> 00:31:18.200 |
|
Model and. |
|
|
|
00:31:18.370 --> 00:31:19.860 |
|
I guess it just depends on your Model. |
|
|
|
00:31:20.520 --> 00:31:24.180 |
|
So for binomial, a binomial is just if |
|
|
|
00:31:24.180 --> 00:31:25.790 |
|
you have a binary variable, then |
|
|
|
00:31:25.790 --> 00:31:27.314 |
|
there's some probability that the |
|
|
|
00:31:27.314 --> 00:31:29.450 |
|
variable is 1 and 1 minus that |
|
|
|
00:31:29.450 --> 00:31:31.790 |
|
probability that the variable is 0. |
|
|
|
00:31:31.790 --> 00:31:36.126 |
|
So Theta Ki is the probability that X I |
|
|
|
00:31:36.126 --> 00:31:38.510 |
|
= 1 given y = K. |
|
|
|
00:31:39.510 --> 00:31:40.590 |
|
And you can write it. |
|
|
|
00:31:40.590 --> 00:31:42.349 |
|
It's kind of a weird way. |
|
|
|
00:31:42.350 --> 00:31:43.700 |
|
I mean it looks like a weird way to |
|
|
|
00:31:43.700 --> 00:31:44.390 |
|
write it. |
|
|
|
00:31:44.390 --> 00:31:46.190 |
|
But if you think about it, if XI equals |
|
|
|
00:31:46.190 --> 00:31:48.760 |
|
one, then the probability is Theta Ki. |
|
|
|
00:31:49.390 --> 00:31:51.630 |
|
And if XI equals zero, then the |
|
|
|
00:31:51.630 --> 00:31:54.160 |
|
probability is 1 minus Theta Ki so. |
|
|
|
00:31:54.800 --> 00:31:55.440 |
|
Makes sense? |
|
|
|
00:31:56.390 --> 00:31:58.390 |
|
And if I want to Estimate this, all I |
|
|
|
00:31:58.390 --> 00:32:00.530 |
|
have to do is count over all my data |
|
|
|
00:32:00.530 --> 00:32:01.180 |
|
Samples. |
|
|
|
00:32:01.180 --> 00:32:06.410 |
|
How many times does xni equal 1 and y = |
|
|
|
00:32:06.410 --> 00:32:06.880 |
|
K? |
|
|
|
00:32:07.530 --> 00:32:09.310 |
|
Divided by the total number of times |
|
|
|
00:32:09.310 --> 00:32:10.490 |
|
that y n = K.
|
|
|
00:32:11.610 --> 00:32:13.290 |
|
And then here it is in Python. |
|
|
|
00:32:13.290 --> 00:32:15.620 |
|
So it's just a sum over all my data. |
|
|
|
00:32:15.620 --> 00:32:18.170 |
|
I'm looking at the ith feature here, |
|
|
|
00:32:18.170 --> 00:32:20.377 |
|
checking how many times these equal 1 |
|
|
|
00:32:20.377 --> 00:32:23.585 |
|
and the label is equal to K divided by |
|
|
|
00:32:23.585 --> 00:32:25.170 |
|
the number of times the label is equal |
|
|
|
00:32:25.170 --> 00:32:25.580 |
|
to K. |
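|
NOTE
A minimal Python sketch of that count-based estimate; it assumes X is an N-by-D binary NumPy array and y a length-N label vector (illustrative names, not the exact code on the slide):
    import numpy as np
    def bernoulli_mle(X, y, i, k):
        # theta_ki = count(x_ni == 1 and y_n == k) / count(y_n == k)
        mask = (y == k)
        return np.sum(X[mask, i] == 1) / np.sum(mask)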
|
|
|
00:32:27.240 --> 00:32:28.780 |
|
And if I have a multinomial, it's |
|
|
|
00:32:28.780 --> 00:32:31.100 |
|
basically the same thing except that I |
|
|
|
00:32:31.100 --> 00:32:35.342 |
|
sum over the number of times that x n
|
|
|
00:32:35.342 --> 00:32:37.990 |
|
I = V, where V could be say, zero to 10 |
|
|
|
00:32:37.990 --> 00:32:38.840 |
|
or something like that. |
|
|
|
00:32:39.740 --> 00:32:42.490 |
|
And otherwise it's the same. |
|
|
|
00:32:42.490 --> 00:32:46.040 |
|
So I can Estimate if I have 10 |
|
|
|
00:32:46.040 --> 00:32:49.576 |
|
different values and I Estimate
|
|
|
00:32:49.576 --> 00:32:52.590 |
|
Theta KIV for all 10 values, then
|
|
|
00:32:52.590 --> 00:32:54.410 |
|
the sum of those Theta KIVs should be
|
|
|
00:32:54.410 --> 00:32:54.624 |
|
one. |
|
|
|
00:32:54.624 --> 00:32:56.540 |
|
So one of those is a constrained |
|
|
|
00:32:56.540 --> 00:32:56.910 |
|
variable. |
|
|
|
00:32:58.820 --> 00:33:00.420 |
|
And it will work out that way if you
|
|
|
00:33:00.420 --> 00:33:01.270 |
|
Estimate it this way. |
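|
NOTE
The multinomial version is the same counting, just with "equals v" instead of "equals 1"; summing this over all values v for a fixed k and i gives 1. A sketch under the same assumptions as above:
    import numpy as np
    def multinomial_mle(X, y, i, k, v):
        # theta_kiv = count(x_ni == v and y_n == k) / count(y_n == k)
        mask = (y == k)
        return np.sum(X[mask, i] == v) / np.sum(mask)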
|
|
|
00:33:05.970 --> 00:33:08.733 |
|
So if we have a continuous variable by |
|
|
|
00:33:08.733 --> 00:33:11.730 |
|
the way, like, these can be fairly |
|
|
|
00:33:11.730 --> 00:33:15.360 |
|
easily derived just by writing out the |
|
|
|
00:33:15.360 --> 00:33:18.720 |
|
likelihood terms and taking a partial |
|
|
|
00:33:18.720 --> 00:33:21.068 |
|
derivative with respect to the variable |
|
|
|
00:33:21.068 --> 00:33:22.930 |
|
and setting that equal to 0. |
|
|
|
00:33:22.930 --> 00:33:24.810 |
|
But it does take like a page of |
|
|
|
00:33:24.810 --> 00:33:26.940 |
|
equations, so I decided not to subject |
|
|
|
00:33:26.940 --> 00:33:27.379 |
|
you to it. |
|
|
|
00:33:28.260 --> 00:33:30.190 |
|
Since solving for these is not
|
|
|
00:33:30.190 --> 00:33:30.920 |
|
the point right now. |
|
|
|
00:33:32.920 --> 00:33:34.730 |
|
And so. |
|
|
|
00:33:34.800 --> 00:33:36.000 |
|
Are. |
|
|
|
00:33:36.000 --> 00:33:38.620 |
|
Let's say X is a continuous variable. |
|
|
|
00:33:38.620 --> 00:33:40.740 |
|
Maybe I want to assume that XI is a |
|
|
|
00:33:40.740 --> 00:33:44.052 |
|
Gaussian given some label, where the |
|
|
|
00:33:44.052 --> 00:33:45.770 |
|
label is a discrete variable. |
|
|
|
00:33:47.220 --> 00:33:51.023 |
|
So Gaussians. Hopefully you
|
|
|
00:33:51.023 --> 00:33:52.625 |
|
took probability and statistics, and you
|
|
|
00:33:52.625 --> 00:33:53.940 |
|
probably ran into Gaussians all the |
|
|
|
00:33:53.940 --> 00:33:54.230 |
|
time. |
|
|
|
00:33:54.230 --> 00:33:55.820 |
|
Gaussians come up a lot for many |
|
|
|
00:33:55.820 --> 00:33:56.550 |
|
reasons. |
|
|
|
00:33:56.550 --> 00:33:58.749 |
|
One of them is that if you add a lot of |
|
|
|
00:33:58.750 --> 00:34:01.125 |
|
random variables together, then if you |
|
|
|
00:34:01.125 --> 00:34:02.839 |
|
add enough of them, then it will end up |
|
|
|
00:34:02.840 --> 00:34:03.000 |
|
there. |
|
|
|
00:34:03.000 --> 00:34:04.280 |
|
The sum of them will end up being a
|
|
|
00:34:04.280 --> 00:34:05.320 |
|
Gaussian distribution. |
|
|
|
00:34:07.080 --> 00:34:09.415 |
|
So lots of things end up being
|
|
|
00:34:09.415 --> 00:34:09.700 |
|
Gaussians. |
|
|
|
00:34:09.700 --> 00:34:11.500 |
|
The Gaussian is a really common noise
|
|
|
00:34:11.500 --> 00:34:13.536 |
|
model, and it also is like really easy |
|
|
|
00:34:13.536 --> 00:34:14.320 |
|
to work with. |
|
|
|
00:34:14.320 --> 00:34:16.060 |
|
Even though it looks complicated. |
|
|
|
00:34:16.060 --> 00:34:17.820 |
|
When you take the log of it, it ends up
|
|
|
00:34:17.820 --> 00:34:19.342 |
|
just being a quadratic, which is easy |
|
|
|
00:34:19.342 --> 00:34:20.010 |
|
to minimize. |
|
|
|
00:34:22.250 --> 00:34:24.460 |
|
So there's the Gaussian expression on |
|
|
|
00:34:24.460 --> 00:34:24.950 |
|
the top. |
|
|
|
00:34:26.550 --> 00:34:28.420 |
|
And I. |
|
|
|
00:34:29.290 --> 00:34:30.610 |
|
So let me get my. |
|
|
|
00:34:33.940 --> 00:34:34.490 |
|
There it goes. |
|
|
|
00:34:34.490 --> 00:34:37.060 |
|
OK, so here's the Gaussian expression |
|
|
|
00:34:37.060 --> 00:34:39.260 |
|
one over square root of 2π Sigma Ki.
|
|
|
00:34:39.260 --> 00:34:42.075 |
|
So the parameters here are mu Ki, which
|
|
|
00:34:42.075 --> 00:34:43.830 |
|
is the mean.
|
|
|
00:34:44.980 --> 00:34:47.700 |
|
For the KTH label and the ith feature |
|
|
|
00:34:47.700 --> 00:34:49.946 |
|
and Sigma Ki is the standard deviation for
|
|
|
00:34:49.946 --> 00:34:52.080 |
|
the Kth label and the Ith feature.
|
|
|
00:34:52.900 --> 00:34:54.700 |
|
And so the higher the standard |
|
|
|
00:34:54.700 --> 00:34:57.090 |
|
deviation is, the bigger the Gaussian |
|
|
|
00:34:57.090 --> 00:34:57.425 |
|
is. |
|
|
|
00:34:57.425 --> 00:34:59.920 |
|
So if you look at these plots here, the |
|
|
|
00:34:59.920 --> 00:35:02.150 |
|
it's kind of blurry the. |
|
|
|
00:35:02.770 --> 00:35:05.540 |
|
The red curve or the actually the |
|
|
|
00:35:05.540 --> 00:35:07.130 |
|
yellow curve has like the biggest |
|
|
|
00:35:07.130 --> 00:35:08.880 |
|
distribution, the broadest distribution |
|
|
|
00:35:08.880 --> 00:35:10.510 |
|
and it has the highest variance or |
|
|
|
00:35:10.510 --> 00:35:12.010 |
|
highest standard deviation. |
|
|
|
00:35:14.070 --> 00:35:15.780 |
|
So this is the MLE, the maximum |
|
|
|
00:35:15.780 --> 00:35:17.240 |
|
likelihood estimate of the mean. |
|
|
|
00:35:17.240 --> 00:35:19.809 |
|
It's just the sum of all the X's |
|
|
|
00:35:19.810 --> 00:35:21.850 |
|
divided by the number of X's. |
|
|
|
00:35:21.850 --> 00:35:25.109 |
|
Or, sorry, it's a sum over all the X's. |
|
|
|
00:35:26.970 --> 00:35:30.190 |
|
For which Y n = K divided by the total |
|
|
|
00:35:30.190 --> 00:35:31.900 |
|
number of times that Y n = K. |
|
|
|
00:35:32.790 --> 00:35:34.845 |
|
Because I'm estimating the conditional |
|
|
|
00:35:34.845 --> 00:35:36.120 |
|
mean.
|
|
|
00:35:36.760 --> 00:35:41.570 |
|
So it's the sum over all the X's.
|
|
|
00:35:41.570 --> 00:35:44.060 |
|
This will be where y n = K,
|
|
|
00:35:44.060 --> 00:35:45.670 |
|
divided by the count of y = K. |
|
|
|
00:35:46.320 --> 00:35:48.050 |
|
And the standard deviation squared.
|
|
|
00:35:48.050 --> 00:35:50.650 |
|
Or the variance is the sum over all the |
|
|
|
00:35:50.650 --> 00:35:53.340 |
|
differences of the X and the mean |
|
|
|
00:35:53.340 --> 00:35:56.890 |
|
squared where y n = K divided by
|
|
|
00:35:56.890 --> 00:35:58.890 |
|
the number of times that y = K. |
|
|
|
00:35:59.640 --> 00:36:01.180 |
|
And you have to estimate the mean |
|
|
|
00:36:01.180 --> 00:36:02.480 |
|
before you Estimate the standard
|
|
|
00:36:02.480 --> 00:36:02.950 |
|
deviation. |
|
|
|
00:36:02.950 --> 00:36:05.100 |
|
And if you take a statistics class, |
|
|
|
00:36:05.100 --> 00:36:07.980 |
|
you'll probably like prove that this is |
|
|
|
00:36:07.980 --> 00:36:09.945 |
|
an OK thing to do, that you're relying |
|
|
|
00:36:09.945 --> 00:36:11.720 |
|
on one Estimate in order to get the |
|
|
|
00:36:11.720 --> 00:36:12.720 |
|
other Estimate. |
|
|
|
00:36:12.720 --> 00:36:14.420 |
|
But it does turn out it's OK. |
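|
NOTE
A sketch of those two estimates in Python, assuming X is an N-by-D float array and y a length-N label vector (illustrative, not the homework solution):
    import numpy as np
    def gaussian_mle(X, y, i, k):
        # class-conditional mean and standard deviation of feature i for label k
        xs = X[y == k, i]
        mu = np.mean(xs)
        sigma = np.sqrt(np.mean((xs - mu) ** 2))  # MLE divides by N, not N - 1
        return mu, sigma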
|
|
|
00:36:16.670 --> 00:36:20.220 |
|
Alright, so in our homework for the |
|
|
|
00:36:20.220 --> 00:36:22.890 |
|
temperature Regression, we're going to |
|
|
|
00:36:22.890 --> 00:36:26.095 |
|
assume that Y minus XI is a Gaussian, |
|
|
|
00:36:26.095 --> 00:36:27.930 |
|
so we have two continuous variables. |
|
|
|
00:36:28.900 --> 00:36:29.710 |
|
So. |
|
|
|
00:36:30.940 --> 00:36:34.847 |
|
The idea is that the temperature of |
|
|
|
00:36:34.847 --> 00:36:38.565 |
|
some city on someday predicts the |
|
|
|
00:36:38.565 --> 00:36:41.530 |
|
temperature of Cleveland on some other |
|
|
|
00:36:41.530 --> 00:36:41.850 |
|
day. |
|
|
|
00:36:42.600 --> 00:36:44.600 |
|
With some offset and some variance. |
|
|
|
00:36:45.830 --> 00:36:48.190 |
|
And that is pretty easy to Model. |
|
|
|
00:36:48.190 --> 00:36:51.020 |
|
So here, Sigma I is then the standard
|
|
|
00:36:51.020 --> 00:36:53.770 |
|
deviation of that offset Prediction and |
|
|
|
00:36:53.770 --> 00:36:54.910 |
|
MU I is the offset. |
|
|
|
00:36:55.560 --> 00:36:58.230 |
|
And I just have Y minus XI minus MU I |
|
|
|
00:36:58.230 --> 00:37:00.166 |
|
squared here instead of just XI
|
|
|
00:37:00.166 --> 00:37:02.590 |
|
minus MU I squared, which would be if I |
|
|
|
00:37:02.590 --> 00:37:03.960 |
|
just said XI is a Gaussian. |
|
|
|
00:37:05.170 --> 00:37:08.820 |
|
And the mean is just the sum of Y
|
|
|
00:37:08.820 --> 00:37:11.603 |
|
minus XI divided by N, where N
|
|
|
00:37:11.603 --> 00:37:12.870 |
|
is the total number of Samples. |
|
|
|
00:37:13.990 --> 00:37:14.820 |
|
Because Y
|
|
|
00:37:14.820 --> 00:37:16.618 |
|
is not discrete, so I'm not counting
|
|
|
00:37:16.618 --> 00:37:20.100 |
|
over only values X where Y
|
|
|
00:37:20.100 --> 00:37:21.625 |
|
is equal to some value, I'm counting |
|
|
|
00:37:21.625 --> 00:37:22.550 |
|
over all the values. |
|
|
|
00:37:23.410 --> 00:37:25.280 |
|
And the standard deviation, or the
|
|
|
00:37:25.280 --> 00:37:28.590 |
|
variance, is the sum of Y minus XI minus MU I
|
|
|
00:37:28.590 --> 00:37:29.630 |
|
squared divided by N.
|
|
|
00:37:30.480 --> 00:37:32.300 |
|
And here's the Python. |
|
|
|
00:37:33.630 --> 00:37:35.840 |
|
Here I just use the mean and standard
|
|
|
00:37:35.840 --> 00:37:37.630 |
|
deviation functions to get it, but it's |
|
|
|
00:37:37.630 --> 00:37:40.470 |
|
also not a very long formula if I were |
|
|
|
00:37:40.470 --> 00:37:41.340 |
|
to write it all out. |
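|
NOTE
A sketch of that estimate, assuming x holds the other city's temperatures and y the Cleveland temperatures as equal-length NumPy arrays (names are illustrative):
    import numpy as np
    def offset_gaussian_mle(x, y):
        # model y - x as a Gaussian: mu is the average offset, sigma its spread
        diff = y - x
        mu = np.mean(diff)
        sigma = np.std(diff)  # np.std defaults to the 1/N (MLE) estimate
        return mu, sigma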
|
|
|
00:37:44.020 --> 00:37:46.830 |
|
And then, what if X and Y were jointly Gaussian?
|
|
|
00:37:46.830 --> 00:37:49.660 |
|
So if I say that I need to jointly |
|
|
|
00:37:49.660 --> 00:37:52.850 |
|
Model them, then one way to do it is |
|
|
|
00:37:52.850 --> 00:37:53.600 |
|
by. |
|
|
|
00:37:54.460 --> 00:37:56.510 |
|
By saying that probability of XI given |
|
|
|
00:37:56.510 --> 00:38:00.660 |
|
Y is the joint probability of XI and Y. |
|
|
|
00:38:00.660 --> 00:38:03.070 |
|
So now I have a 2 variable Gaussian |
|
|
|
00:38:03.070 --> 00:38:06.780 |
|
with a 2-variable mean and a two by two
|
|
|
00:38:06.780 --> 00:38:07.900 |
|
covariance matrix. |
|
|
|
00:38:08.920 --> 00:38:11.210 |
|
Divided by the probability of Y, which |
|
|
|
00:38:11.210 --> 00:38:12.700 |
|
is a 1D Gaussian. |
|
|
|
00:38:12.700 --> 00:38:14.636 |
|
Just the Gaussian over probability of |
|
|
|
00:38:14.636 --> 00:38:14.999 |
|
Y. |
|
|
|
00:38:15.000 --> 00:38:16.340 |
|
And if you were to write out all the |
|
|
|
00:38:16.340 --> 00:38:18.500 |
|
math for it, it would simplify into some
|
|
|
00:38:18.500 --> 00:38:21.890 |
|
other Gaussian equation, but it's |
|
|
|
00:38:21.890 --> 00:38:23.360 |
|
easier to think about it this way. |
|
|
|
00:38:27.660 --> 00:38:28.140 |
|
Alright. |
|
|
|
00:38:28.140 --> 00:38:31.660 |
|
And then what if XI is continuous but |
|
|
|
00:38:31.660 --> 00:38:32.770 |
|
it's not Gaussian? |
|
|
|
00:38:33.920 --> 00:38:35.750 |
|
And why is discrete? |
|
|
|
00:38:35.750 --> 00:38:37.763 |
|
One simple thing I can do is I
|
|
|
00:38:37.763 --> 00:38:40.770 |
|
can just first turn X into a discrete. |
|
|
|
00:38:40.860 --> 00:38:41.490 |
|
|
|
|
|
00:38:42.280 --> 00:38:45.060 |
|
Into a discrete variable, so.
|
|
|
00:38:46.810 --> 00:38:48.640 |
|
For example if. |
|
|
|
00:38:49.590 --> 00:38:52.260 |
|
Let me venture with my pen again, but. |
|
|
|
00:39:08.410 --> 00:39:08.810 |
|
Can't do it. |
|
|
|
00:39:08.810 --> 00:39:09.170 |
|
I want. |
|
|
|
00:39:15.140 --> 00:39:15.490 |
|
OK. |
|
|
|
00:39:16.820 --> 00:39:20.930 |
|
So for example, X has a range from. |
|
|
|
00:39:21.120 --> 00:39:22.130 |
|
From zero to 1. |
|
|
|
00:39:22.810 --> 00:39:26.332 |
|
That's the case for our pixel
|
|
|
00:39:26.332 --> 00:39:28.340 |
|
intensities, the intensities of MNIST.
|
|
|
00:39:29.180 --> 00:39:31.830 |
|
I can just set a threshold for example |
|
|
|
00:39:31.830 --> 00:39:38.230 |
|
of 0.5, and if X is greater than 0.5 then
|
|
|
00:39:38.230 --> 00:39:40.369 |
|
I'm going to say that it's equal to 1. |
|
|
|
00:39:41.030 --> 00:39:43.860 |
|
And if X is less than 0.5, then I'm going
|
|
|
00:39:43.860 --> 00:39:45.050 |
|
to say it's equal to 0. |
|
|
|
00:39:45.050 --> 00:39:46.440 |
|
So now I turn my continuous |
|
|
|
00:39:46.440 --> 00:39:49.350 |
|
distribution into a binary distribution |
|
|
|
00:39:49.350 --> 00:39:51.040 |
|
and now I can just Estimate it using |
|
|
|
00:39:51.040 --> 00:39:52.440 |
|
the Bernoulli equation. |
|
|
|
00:39:53.100 --> 00:39:54.910 |
|
Or I could turn X into 10 different |
|
|
|
00:39:54.910 --> 00:39:57.280 |
|
values by just multiplying X by 10 and |
|
|
|
00:39:57.280 --> 00:39:58.050 |
|
taking the floor. |
|
|
|
00:39:58.050 --> 00:39:59.560 |
|
So now the values are zero to 9. |
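|
NOTE
Both discretizations are one-liners in NumPy; here x is assumed to be an array of values in [0, 1]:
    import numpy as np
    x = np.array([0.03, 0.48, 0.51, 0.97])
    x_binary = (x > 0.5).astype(int)                       # threshold at 0.5 -> 0 or 1
    x_bins = np.minimum(np.floor(x * 10), 9).astype(int)   # floor of 10x, clipped -> 0..9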
|
|
|
00:40:01.490 --> 00:40:04.150 |
|
So that's actually one
|
|
|
00:40:04.150 --> 00:40:06.110 |
|
of the easiest ways to deal with a
|
|
|
00:40:06.110 --> 00:40:08.190 |
|
continuous variable that's not |
|
|
|
00:40:08.190 --> 00:40:08.850 |
|
Gaussian. |
|
|
|
00:40:12.900 --> 00:40:15.950 |
|
Sometimes X will be like text, so for |
|
|
|
00:40:15.950 --> 00:40:18.800 |
|
example it could be like blue, orange |
|
|
|
00:40:18.800 --> 00:40:19.430 |
|
or green. |
|
|
|
00:40:20.080 --> 00:40:22.070 |
|
And then you just need to Map those |
|
|
|
00:40:22.070 --> 00:40:25.390 |
|
different text tokens into integers. |
|
|
|
00:40:25.390 --> 00:40:26.441 |
|
So I might say blue. |
|
|
|
00:40:26.441 --> 00:40:28.654 |
|
I'm going to say I'm going to Map blue |
|
|
|
00:40:28.654 --> 00:40:30.620 |
|
into zero, orange into one, green into |
|
|
|
00:40:30.620 --> 00:40:32.580 |
|
two, and then I can just Solve by |
|
|
|
00:40:32.580 --> 00:40:33.060 |
|
counting. |
|
|
|
00:40:36.610 --> 00:40:38.830 |
|
And then finally I need to also |
|
|
|
00:40:38.830 --> 00:40:40.380 |
|
Estimate the probability of Y. |
|
|
|
00:40:41.060 --> 00:40:42.990 |
|
One common thing to do is just to say |
|
|
|
00:40:42.990 --> 00:40:45.880 |
|
that Y is equally likely to be all the |
|
|
|
00:40:45.880 --> 00:40:46.860 |
|
possible labels. |
|
|
|
00:40:47.550 --> 00:40:49.440 |
|
And that can be a good thing to do, |
|
|
|
00:40:49.440 --> 00:40:51.169 |
|
because maybe our training distribution |
|
|
|
00:40:51.170 --> 00:40:52.870 |
|
isn't even, but you don't think your
|
|
|
00:40:52.870 --> 00:40:54.310 |
|
training distribution will be the same |
|
|
|
00:40:54.310 --> 00:40:55.790 |
|
as the test distribution. |
|
|
|
00:40:55.790 --> 00:40:58.340 |
|
So then you say that probability of Y |
|
|
|
00:40:58.340 --> 00:41:00.470 |
|
is uniform even though it's not uniform |
|
|
|
00:41:00.470 --> 00:41:00.920 |
|
in training. |
|
|
|
00:41:01.630 --> 00:41:03.530 |
|
If it's uniform, you can just ignore it |
|
|
|
00:41:03.530 --> 00:41:05.910 |
|
because it won't have any effect on |
|
|
|
00:41:05.910 --> 00:41:07.060 |
|
which Y is most likely. |
|
|
|
00:41:07.980 --> 00:41:09.860 |
|
If Y is discrete and non-uniform.
|
|
|
00:41:09.860 --> 00:41:11.810 |
|
You can just solve it by counting how |
|
|
|
00:41:11.810 --> 00:41:14.050 |
|
many times Y equals 1 divided by all
|
|
|
00:41:14.050 --> 00:41:16.850 |
|
my data is the probability of Y equal |
|
|
|
00:41:16.850 --> 00:41:17.070 |
|
1. |
|
|
|
00:41:17.790 --> 00:41:19.450 |
|
If it's continuous, you can Model it as |
|
|
|
00:41:19.450 --> 00:41:21.660 |
|
a Gaussian or chop it up into bins and |
|
|
|
00:41:21.660 --> 00:41:23.000 |
|
then turn it into a classification |
|
|
|
00:41:23.000 --> 00:41:23.360 |
|
problem. |
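|
NOTE
For the discrete, non-uniform case, the prior is again just counting; a sketch assuming y is a length-N integer label array:
    import numpy as np
    def class_prior(y, k):
        # P(Y = k) estimated as the fraction of training labels equal to k
        return np.mean(y == k)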
|
|
|
00:41:25.690 --> 00:41:26.050 |
|
Right. |
|
|
|
00:41:28.290 --> 00:41:31.550 |
|
So I'll give you your minute or two, |
|
|
|
00:41:31.550 --> 00:41:32.230 |
|
Stretch break. |
|
|
|
00:41:32.230 --> 00:41:33.650 |
|
But I want you to think about this |
|
|
|
00:41:33.650 --> 00:41:34.370 |
|
while you do that. |
|
|
|
00:41:35.390 --> 00:41:38.100 |
|
So suppose I want to classify a fruit |
|
|
|
00:41:38.100 --> 00:41:40.230 |
|
based on description and my Features |
|
|
|
00:41:40.230 --> 00:41:42.389 |
|
are weight, color, shape and whether |
|
|
|
00:41:42.390 --> 00:41:44.190 |
|
the outside is
|
|
|
00:41:44.190 --> 00:41:44.470 |
|
hard. |
|
|
|
00:41:45.330 --> 00:41:47.960 |
|
And so first, here's some examples of |
|
|
|
00:41:47.960 --> 00:41:49.100 |
|
those Features. |
|
|
|
00:41:49.100 --> 00:41:50.750 |
|
See if you can figure out which fruit |
|
|
|
00:41:50.750 --> 00:41:51.990 |
|
corresponds to these Features.
|
|
|
00:41:52.630 --> 00:41:56.150 |
|
And second, what might be a good set of |
|
|
|
00:41:56.150 --> 00:41:58.080 |
|
models to use for probability of XI |
|
|
|
00:41:58.080 --> 00:41:59.730 |
|
given fruit for those four Features? |
|
|
|
00:42:01.210 --> 00:42:03.620 |
|
So you have two minutes to think about |
|
|
|
00:42:03.620 --> 00:42:05.630 |
|
it, and go stretch or use the
|
|
|
00:42:05.630 --> 00:42:07.240 |
|
bathroom or check your e-mail or |
|
|
|
00:42:07.240 --> 00:42:07.620 |
|
whatever. |
|
|
|
00:44:24.040 --> 00:44:24.730 |
|
Alright. |
|
|
|
00:44:26.640 --> 00:44:31.100 |
|
So first, what is the top one? 1.5 pounds,
|
|
|
00:44:31.100 --> 00:44:31.640 |
|
red round? |
|
|
|
00:44:31.640 --> 00:44:33.750 |
|
Yes, OK, good. |
|
|
|
00:44:33.750 --> 00:44:34.870 |
|
That's what I was thinking. |
|
|
|
00:44:34.870 --> 00:44:37.930 |
|
What's the second one? 15 pounds?
|
|
|
00:44:39.070 --> 00:44:39.810 |
|
Avocado. |
|
|
|
00:44:39.810 --> 00:44:41.260 |
|
That's a huge avocado. |
|
|
|
00:44:43.770 --> 00:44:44.660 |
|
What is it? |
|
|
|
00:44:46.290 --> 00:44:48.090 |
|
Watermelon watermelons, what I was |
|
|
|
00:44:48.090 --> 00:44:48.450 |
|
thinking. |
|
|
|
00:44:49.170 --> 00:44:52.140 |
|
0.1 pounds, purple, round, and not hard.
|
|
|
00:44:53.330 --> 00:44:54.980 |
|
I was thinking of a Grape. |
|
|
|
00:44:54.980 --> 00:44:55.980 |
|
OK, good. |
|
|
|
00:44:57.480 --> 00:44:58.900 |
|
There wasn't really, there wasn't |
|
|
|
00:44:58.900 --> 00:45:00.160 |
|
necessarily a right answer. |
|
|
|
00:45:00.160 --> 00:45:01.790 |
|
It's just kind of what I was thinking. |
|
|
|
00:45:02.800 --> 00:45:05.642 |
|
Alright, and then how do you Model the |
|
|
|
00:45:05.642 --> 00:45:07.700 |
|
probability of the feature given the |
|
|
|
00:45:07.700 --> 00:45:08.450 |
|
fruit for each of these? |
|
|
|
00:45:08.450 --> 00:45:09.550 |
|
So let's say the weight. |
|
|
|
00:45:09.550 --> 00:45:11.172 |
|
What would be a good model for |
|
|
|
00:45:11.172 --> 00:45:13.270 |
|
probability of XI given the label? |
|
|
|
00:45:15.080 --> 00:45:17.420 |
|
Gaussian would, Gaussian would probably |
|
|
|
00:45:17.420 --> 00:45:18.006 |
|
be a good choice. |
|
|
|
00:45:18.006 --> 00:45:19.820 |
|
Each of these probably has some
|
|
|
00:45:19.820 --> 00:45:21.250 |
|
expectation, maybe a Gaussian |
|
|
|
00:45:21.250 --> 00:45:22.130 |
|
distribution around it. |
|
|
|
00:45:24.000 --> 00:45:26.490 |
|
Alright, what about the color red, |
|
|
|
00:45:26.490 --> 00:45:27.315 |
|
green, purple? |
|
|
|
00:45:27.315 --> 00:45:28.440 |
|
What could I do for that? |
|
|
|
00:45:31.440 --> 00:45:35.610 |
|
So I could use a multinomial so I can |
|
|
|
00:45:35.610 --> 00:45:37.210 |
|
just turn it into
|
|
|
00:45:37.210 --> 00:45:39.410 |
|
discrete numbers, integer numbers and |
|
|
|
00:45:39.410 --> 00:45:41.480 |
|
then count. And the shape?
|
|
|
00:45:50.470 --> 00:45:52.470 |
|
So if there's assuming that there's |
|
|
|
00:45:52.470 --> 00:45:54.470 |
|
other shapes, I don't know if there are |
|
|
|
00:45:54.470 --> 00:45:55.880 |
|
star fruit for example. |
|
|
|
00:45:56.790 --> 00:45:58.940 |
|
And then multinomial. |
|
|
|
00:45:58.940 --> 00:46:00.640 |
|
But either way I'll turn it into discrete
|
|
|
00:46:00.640 --> 00:46:04.090 |
|
variables and count. And the yes/nos?
|
|
|
00:46:05.540 --> 00:46:07.010 |
|
So that will be Binomial. |
|
|
|
00:46:08.240 --> 00:46:08.540 |
|
OK. |
|
|
|
00:46:14.840 --> 00:46:18.500 |
|
All right, so now we know how to |
|
|
|
00:46:18.500 --> 00:46:20.770 |
|
Estimate probability of X given Y. |
|
|
|
00:46:20.770 --> 00:46:23.065 |
|
Now after I go through all that work on |
|
|
|
00:46:23.065 --> 00:46:25.178 |
|
the training data and I get a new test
|
|
|
00:46:25.178 --> 00:46:25.512 |
|
sample. |
|
|
|
00:46:25.512 --> 00:46:27.900 |
|
Now I want to know what's the most |
|
|
|
00:46:27.900 --> 00:46:29.620 |
|
likely label of that test sample. |
|
|
|
00:46:31.200 --> 00:46:31.660 |
|
So. |
|
|
|
00:46:32.370 --> 00:46:33.860 |
|
I can write this in two ways. |
|
|
|
00:46:33.860 --> 00:46:36.615 |
|
One is I can write Y is the argmax over |
|
|
|
00:46:36.615 --> 00:46:38.735 |
|
the product of probability of XI given |
|
|
|
00:46:38.735 --> 00:46:39.959 |
|
Y times probability of Y. |
|
|
|
00:46:40.990 --> 00:46:44.334 |
|
Or I can write it as the argmax of the |
|
|
|
00:46:44.334 --> 00:46:46.718 |
|
log of that, which is just the argmax |
|
|
|
00:46:46.718 --> 00:46:48.970 |
|
over Y of the sum over I of log of
|
|
|
00:46:48.970 --> 00:46:50.904 |
|
probability of XI given Y plus log of
|
|
|
00:46:50.904 --> 00:46:51.599 |
|
probability of Y. |
|
|
|
00:46:52.570 --> 00:46:55.130 |
|
And I can do that because the thing |
|
|
|
00:46:55.130 --> 00:46:57.798 |
|
that maximizes X also maximizes log of |
|
|
|
00:46:57.798 --> 00:46:59.280 |
|
X and vice versa. |
|
|
|
00:46:59.280 --> 00:47:01.910 |
|
And that's actually a really useful |
|
|
|
00:47:01.910 --> 00:47:04.270 |
|
property because often the log
|
|
|
00:47:04.270 --> 00:47:05.745 |
|
probabilities are a lot simpler. |
|
|
|
00:47:05.745 --> 00:47:08.790 |
|
For example, if I look
|
|
|
00:47:08.790 --> 00:47:10.434 |
|
at the Gaussian, if I take the log of |
|
|
|
00:47:10.434 --> 00:47:11.950 |
|
the Gaussian, then it just becomes a |
|
|
|
00:47:11.950 --> 00:47:12.760 |
|
squared term. |
|
|
|
00:47:13.640 --> 00:47:16.400 |
|
And the other thing is that these |
|
|
|
00:47:16.400 --> 00:47:18.350 |
|
probability of Xis might be. |
|
|
|
00:47:18.470 --> 00:47:21.553 |
|
If I have a lot of them, if I have like |
|
|
|
00:47:21.553 --> 00:47:23.723 |
|
500 of them and they're on average like |
|
|
|
00:47:23.723 --> 00:47:26.320 |
|
.1, that would be like .1 to the 500, |
|
|
|
00:47:26.320 --> 00:47:27.530 |
|
which is going to go outside in |
|
|
|
00:47:27.530 --> 00:47:28.690 |
|
numerical precision. |
|
|
|
00:47:28.690 --> 00:47:30.740 |
|
So if you try to Compute this product |
|
|
|
00:47:30.740 --> 00:47:32.290 |
|
directly, you're probably going to get |
|
|
|
00:47:32.290 --> 00:47:34.470 |
|
0 or some kind of wonky value. |
|
|
|
00:47:35.190 --> 00:47:37.320 |
|
And so it's much better to take the sum |
|
|
|
00:47:37.320 --> 00:47:39.265 |
|
of the logs than to take the product of |
|
|
|
00:47:39.265 --> 00:47:40.060 |
|
the probabilities. |
|
|
|
00:47:42.650 --> 00:47:44.290 |
|
Right, so, but I can compute the |
|
|
|
00:47:44.290 --> 00:47:45.830 |
|
probability of X and Y or the log
|
|
|
00:47:45.830 --> 00:47:48.004 |
|
probability of X and Y for each value of Y
|
|
|
00:47:48.004 --> 00:47:49.630 |
|
and then choose the value with maximum |
|
|
|
00:47:49.630 --> 00:47:50.240 |
|
likelihood. |
|
|
|
00:47:50.240 --> 00:47:51.686 |
|
That will work in the case of the |
|
|
|
00:47:51.686 --> 00:47:53.409 |
|
digits because I only have 10 digits. |
|
|
|
00:47:54.420 --> 00:47:56.940 |
|
And so I can check for each possible |
|
|
|
00:47:56.940 --> 00:48:00.365 |
|
Digit, how likely is the sum of log |
|
|
|
00:48:00.365 --> 00:48:01.958 |
|
probability of XI given Y plus
|
|
|
00:48:01.958 --> 00:48:03.770 |
|
log probability of Y.
|
|
|
00:48:03.770 --> 00:48:06.980 |
|
And then I choose the Digit label
|
|
|
00:48:06.980 --> 00:48:08.570 |
|
that makes this most likely. |
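|
NOTE
A sketch of that prediction for the binarized-digit case; theta is assumed to be a (10, D) array of estimates of P(x_i = 1 | y = k) and prior a length-10 array of P(y = k) (hypothetical names):
    import numpy as np
    def predict_digit(x, theta, prior, eps=1e-12):
        # x: length-D binary feature vector for one test image
        # sum over features of log P(x_i | y = k), plus log P(y = k), for every k
        log_lik = (x * np.log(theta + eps) + (1 - x) * np.log(1 - theta + eps)).sum(axis=1)
        return np.argmax(log_lik + np.log(prior + eps))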
|
|
|
00:48:11.240 --> 00:48:12.580 |
|
That's pretty simple. |
|
|
|
00:48:12.580 --> 00:48:14.110 |
|
In the case where Y is discrete.
|
|
|
00:48:14.900 --> 00:48:16.415 |
|
And again, I just want to emphasize |
|
|
|
00:48:16.415 --> 00:48:18.983 |
|
that this thing of turning product of |
|
|
|
00:48:18.983 --> 00:48:21.070 |
|
probabilities into a sum of log |
|
|
|
00:48:21.070 --> 00:48:23.250 |
|
probabilities is really, really widely |
|
|
|
00:48:23.250 --> 00:48:23.760 |
|
used. |
|
|
|
00:48:23.760 --> 00:48:27.610 |
|
Almost anytime you Solve for anything |
|
|
|
00:48:27.610 --> 00:48:29.140 |
|
with probabilities, it involves that |
|
|
|
00:48:29.140 --> 00:48:29.380 |
|
step. |
|
|
|
00:48:31.840 --> 00:48:34.420 |
|
Now if Y is continuous, it's a bit more |
|
|
|
00:48:34.420 --> 00:48:36.610 |
|
complicated and I. |
|
|
|
00:48:37.440 --> 00:48:39.890 |
|
So I have the derivation here for you. |
|
|
|
00:48:39.890 --> 00:48:42.166 |
|
So this is for the case. |
|
|
|
00:48:42.166 --> 00:48:44.859 |
|
I'm going to use as an example the case |
|
|
|
00:48:44.860 --> 00:48:47.470 |
|
where I'm modeling probability of Y |
|
|
|
00:48:47.470 --> 00:48:51.400 |
|
minus XI as a 1-dimensional Gaussian.
|
|
|
00:48:53.280 --> 00:48:56.260 |
|
And anytime you solve this kind of |
|
|
|
00:48:56.260 --> 00:48:58.320 |
|
thing you're going to go through, you |
|
|
|
00:48:58.320 --> 00:48:59.580 |
|
would go through the same derivation. |
|
|
|
00:48:59.580 --> 00:49:00.280 |
|
If it's not. |
|
|
|
00:49:00.280 --> 00:49:03.180 |
|
Just like a simple matter of if you |
|
|
|
00:49:03.180 --> 00:49:05.000 |
|
don't have discrete Y's, if you have
|
|
|
00:49:05.000 --> 00:49:06.360 |
|
continuous Y's, then you have to find
|
|
|
00:49:06.360 --> 00:49:08.320 |
|
the Y that actually maximizes this |
|
|
|
00:49:08.320 --> 00:49:10.760 |
|
because you can't check all possible |
|
|
|
00:49:10.760 --> 00:49:12.310 |
|
values of a continuous variable. |
|
|
|
00:49:14.180 --> 00:49:15.390 |
|
So it's not. |
|
|
|
00:49:16.540 --> 00:49:17.451 |
|
It's a lot. |
|
|
|
00:49:17.451 --> 00:49:18.362 |
|
It's a lot. |
|
|
|
00:49:18.362 --> 00:49:20.350 |
|
It's a fair number of equations, but |
|
|
|
00:49:20.350 --> 00:49:23.420 |
|
it's not anything super complicated. |
|
|
|
00:49:23.420 --> 00:49:24.940 |
|
Let me see if I can get my cursor up |
|
|
|
00:49:24.940 --> 00:49:25.960 |
|
there again, OK? |
|
|
|
00:49:26.710 --> 00:49:29.560 |
|
Alright, so first I take the partial |
|
|
|
00:49:29.560 --> 00:49:32.526 |
|
derivative of the log probability of |
|
|
|
00:49:32.526 --> 00:49:34.780 |
|
X and Y with respect to Y and set it equal
|
|
|
00:49:34.780 --> 00:49:35.190 |
|
to 0. |
|
|
|
00:49:35.190 --> 00:49:36.890 |
|
So you might remember from calculus |
|
|
|
00:49:36.890 --> 00:49:38.720 |
|
like if you want to find the min or Max |
|
|
|
00:49:38.720 --> 00:49:39.580 |
|
of some value. |
|
|
|
00:49:40.290 --> 00:49:43.109 |
|
Then take the partial with respect to |
|
|
|
00:49:43.110 --> 00:49:44.750 |
|
some variable. |
|
|
|
00:49:44.750 --> 00:49:47.340 |
|
You take the partial derivative with |
|
|
|
00:49:47.340 --> 00:49:48.800 |
|
respect to that variable and set it |
|
|
|
00:49:48.800 --> 00:49:49.539 |
|
equal to 0. |
|
|
|
00:49:50.680 --> 00:49:51.360 |
|
And. |
|
|
|
00:49:53.080 --> 00:49:55.020 |
|
So here I did that. |
|
|
|
00:49:55.020 --> 00:49:58.100 |
|
Now I've plugged in this Gaussian |
|
|
|
00:49:58.100 --> 00:50:00.200 |
|
distribution and taken the log. |
|
|
|
00:50:01.050 --> 00:50:02.510 |
|
And I kind of like there's some |
|
|
|
00:50:02.510 --> 00:50:04.020 |
|
invisible steps here, because there's |
|
|
|
00:50:04.020 --> 00:50:06.410 |
|
some terms like the log of one over |
|
|
|
00:50:06.410 --> 00:50:07.940 |
|
square root of 2π Sigma.
|
|
|
00:50:08.580 --> 00:50:10.069 |
|
That just don't. |
|
|
|
00:50:10.069 --> 00:50:12.290 |
|
Those terms don't matter because they |
|
|
|
00:50:12.290 --> 00:50:13.080 |
|
don't involve Y. |
|
|
|
00:50:13.080 --> 00:50:14.743 |
|
So the partial derivative of those |
|
|
|
00:50:14.743 --> 00:50:16.215 |
|
terms with respect to Y is 0. |
|
|
|
00:50:16.215 --> 00:50:19.090 |
|
So I just didn't include them. |
|
|
|
00:50:19.750 --> 00:50:21.815 |
|
So these are the terms that include Y |
|
|
|
00:50:21.815 --> 00:50:23.590 |
|
and I've already taken the log. |
|
|
|
00:50:23.590 --> 00:50:25.550 |
|
This was originally e to the minus one
|
|
|
00:50:25.550 --> 00:50:27.839 |
|
half of whatever is shown here, and the
|
|
|
00:50:27.839 --> 00:50:30.360 |
|
log of exp of X is equal to X.
|
|
|
00:50:31.840 --> 00:50:33.490 |
|
And so I get this guy. |
|
|
|
00:50:34.450 --> 00:50:36.530 |
|
Now I broke it out into different |
|
|
|
00:50:36.530 --> 00:50:39.320 |
|
terms, so I expanded the quadratic of Y
|
|
|
00:50:39.320 --> 00:50:41.190 |
|
minus XI minus MU I ^2. |
|
|
|
00:50:42.420 --> 00:50:44.100 |
|
Mainly so that I don't have to use the |
|
|
|
00:50:44.100 --> 00:50:45.620 |
|
chain rule and I can keep my |
|
|
|
00:50:45.620 --> 00:50:46.740 |
|
derivatives really Simple. |
|
|
|
00:50:47.830 --> 00:50:51.959 |
|
So here I just broke that out into y ^2, y
|
|
|
00:50:51.960 --> 00:50:54.130 |
|
times XI, and y times MU I.
|
|
|
00:50:54.130 --> 00:50:55.530 |
|
And again, I don't need to worry about |
|
|
|
00:50:55.530 --> 00:50:57.779 |
|
the MU I squared over Sigma I squared |
|
|
|
00:50:57.780 --> 00:50:59.750 |
|
because it doesn't involve Y so I just |
|
|
|
00:50:59.750 --> 00:51:00.230 |
|
left it out. |
|
|
|
00:51:02.140 --> 00:51:03.990 |
|
I. |
|
|
|
00:51:04.100 --> 00:51:07.021 |
|
Take the derivative with respect to Y. |
|
|
|
00:51:07.021 --> 00:51:09.468 |
|
So the derivative of y ^2 is 2 Y. |
|
|
|
00:51:09.468 --> 00:51:10.976 |
|
So this half goes away. |
|
|
|
00:51:10.976 --> 00:51:14.080 |
|
Derivative of YX is just X. |
|
|
|
00:51:15.070 --> 00:51:18.000 |
|
So this should be a subscript I. |
|
|
|
00:51:18.730 --> 00:51:21.120 |
|
And then I did the same for these guys |
|
|
|
00:51:21.120 --> 00:51:21.330 |
|
here. |
|
|
|
00:51:22.500 --> 00:51:25.740 |
|
It's just basic algebra, so I just try |
|
|
|
00:51:25.740 --> 00:51:27.610 |
|
to group the terms that involve Y and |
|
|
|
00:51:27.610 --> 00:51:29.480 |
|
the terms that don't involve Y. I put
|
|
|
00:51:29.480 --> 00:51:30.840 |
|
the terms that don't involve Y on the
|
|
|
00:51:30.840 --> 00:51:33.370 |
|
right side, and then finally I divide by
|
|
|
00:51:33.370 --> 00:51:36.830 |
|
the coefficient of Y and I get this guy |
|
|
|
00:51:36.830 --> 00:51:37.150 |
|
here. |
|
|
|
00:51:38.030 --> 00:51:41.269 |
|
So at the end Y is equal to 1 over the |
|
|
|
00:51:41.270 --> 00:51:44.408 |
|
sum over all the features of 1 / sqrt. |
|
|
|
00:51:44.408 --> 00:51:46.690 |
|
I mean one over Sigma I ^2. |
|
|
|
00:51:47.420 --> 00:51:50.580 |
|
Plus one over Sigma y ^2 which is the |
|
|
|
00:51:50.580 --> 00:51:52.160 |
|
standard deviation of the Prior of Y. |
|
|
|
00:51:52.160 --> 00:51:53.906 |
|
Or if I just assumed uniform likelihood |
|
|
|
00:51:53.906 --> 00:51:55.520 |
|
of Y, I wouldn't need that term.
|
|
|
00:51:56.610 --> 00:51:59.400 |
|
And then that's times the sum over all |
|
|
|
00:51:59.400 --> 00:52:02.700 |
|
the features of that feature value. |
|
|
|
00:52:02.700 --> 00:52:03.930 |
|
This should be subscript I. |
|
|
|
00:52:04.940 --> 00:52:10.430 |
|
Plus MU I divided by Sigma I ^2 plus mu |
|
|
|
00:52:10.430 --> 00:52:13.811 |
|
Y, the Prior mean of Y divided by Sigma |
|
|
|
00:52:13.811 --> 00:52:14.539 |
|
y ^2. |
|
|
|
00:52:16.150 --> 00:52:18.940 |
|
And so this is just a, it's actually |
|
|
|
00:52:18.940 --> 00:52:19.849 |
|
just a weighted. |
|
|
|
00:52:19.850 --> 00:52:22.823 |
|
If you say that one over Sigma I |
|
|
|
00:52:22.823 --> 00:52:26.035 |
|
squared is WI, it's like a weight for
|
|
|
00:52:26.035 --> 00:52:27.565 |
|
that prediction of the ith feature. |
|
|
|
00:52:27.565 --> 00:52:29.830 |
|
This is just a weighted average of the |
|
|
|
00:52:29.830 --> 00:52:31.720 |
|
predictions from all the Features |
|
|
|
00:52:31.720 --> 00:52:33.250 |
|
that's weighted by one over the |
|
|
|
00:52:33.250 --> 00:52:35.573 |
|
standard deviation squared or one over
|
|
|
00:52:35.573 --> 00:52:36.190 |
|
the variance. |
|
|
|
00:52:37.590 --> 00:52:40.421 |
|
And so I have one over the sum over I |
|
|
|
00:52:40.421 --> 00:52:45.683 |
|
of WI plus WY, times the sum of XI plus MU
|
|
|
00:52:45.683 --> 00:52:49.722 |
|
I times WI, plus MU Y
|
|
|
00:52:49.722 --> 00:52:50.100 |
|
times
|
|
|
00:52:50.100 --> 00:52:50.670 |
|
WY.
|
|
|
00:52:51.630 --> 00:52:53.240 |
|
MU I and WI sound similar, unfortunately.
|
|
|
00:52:54.780 --> 00:52:56.430 |
|
So it's just the weighted average of |
|
|
|
00:52:56.430 --> 00:52:57.910 |
|
all the predictions of the individual |
|
|
|
00:52:57.910 --> 00:52:58.174 |
|
features. |
|
|
|
00:52:58.174 --> 00:53:00.093 |
|
And it kind of
|
|
|
00:53:00.093 --> 00:53:01.624 |
|
makes sense intuitively that the weight |
|
|
|
00:53:01.624 --> 00:53:02.650 |
|
is 1 over the variance. |
|
|
|
00:53:02.650 --> 00:53:04.490 |
|
So if you have really high variance, |
|
|
|
00:53:04.490 --> 00:53:05.790 |
|
then the weight is small. |
|
|
|
00:53:05.790 --> 00:53:08.155 |
|
So if, for example, maybe the |
|
|
|
00:53:08.155 --> 00:53:09.839 |
|
temperature in Sacramento is a really |
|
|
|
00:53:09.840 --> 00:53:11.513 |
|
bad predictor for the temperature in |
|
|
|
00:53:11.513 --> 00:53:12.984 |
|
Cleveland, so it will have high |
|
|
|
00:53:12.984 --> 00:53:14.840 |
|
variance and it gets little weight,
|
|
|
00:53:14.840 --> 00:53:16.460 |
|
while the temperature in Cleveland the |
|
|
|
00:53:16.460 --> 00:53:19.130 |
|
previous day is much more highly |
|
|
|
00:53:19.130 --> 00:53:20.849 |
|
predictive, has lower variance, so |
|
|
|
00:53:20.850 --> 00:53:21.639 |
|
it'll get more weight. |
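|
NOTE
That closed form is short to write out; a sketch assuming x, mu, and sigma are length-D arrays of test features, per-feature offsets, and per-feature standard deviations, with mu_y and sigma_y the prior mean and standard deviation of Y:
    import numpy as np
    def predict_y(x, mu, sigma, mu_y, sigma_y):
        # weighted average of the per-feature predictions (x_i + mu_i),
        # each weighted by w_i = 1 / sigma_i^2, plus the prior term
        w = 1.0 / sigma ** 2
        w_y = 1.0 / sigma_y ** 2
        return (np.sum(w * (x + mu)) + w_y * mu_y) / (np.sum(w) + w_y)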
|
|
|
00:53:32.280 --> 00:53:35.380 |
|
So let me pause here. |
|
|
|
00:53:35.380 --> 00:53:38.690 |
|
So any questions about? |
|
|
|
00:53:39.670 --> 00:53:43.255 |
|
Estimating the likelihoods P of X given |
|
|
|
00:53:43.255 --> 00:53:47.970 |
|
Y, or solving for the Y that makes. |
|
|
|
00:53:47.970 --> 00:53:49.880 |
|
That's most likely given your |
|
|
|
00:53:49.880 --> 00:53:50.500 |
|
likelihoods. |
|
|
|
00:53:52.460 --> 00:53:54.470 |
|
And obviously I'm happy to work
|
|
|
00:53:54.470 --> 00:53:56.610 |
|
through this in office hours as well, and
|
|
|
00:53:56.610 --> 00:53:59.940 |
|
the TAs can also, if you want to
|
|
|
00:53:59.940 --> 00:54:01.100 |
|
spend more time working through the |
|
|
|
00:54:01.100 --> 00:54:01.530 |
|
equations. |
|
|
|
00:54:03.920 --> 00:54:04.930 |
|
I just want to pause. |
|
|
|
00:54:04.930 --> 00:54:07.830 |
|
I know it's a lot of math to soak up. |
|
|
|
00:54:09.870 --> 00:54:13.260 |
|
And really, it's not that memorizing |
|
|
|
00:54:13.260 --> 00:54:14.370 |
|
these things is important.
|
|
|
00:54:14.370 --> 00:54:15.860 |
|
It's really the process that you just |
|
|
|
00:54:15.860 --> 00:54:17.385 |
|
take the partial derivative with respect
|
|
|
00:54:17.385 --> 00:54:20.140 |
|
to Y, set it to zero, and then you do |
|
|
|
00:54:20.140 --> 00:54:20.540 |
|
the. |
|
|
|
00:54:21.250 --> 00:54:23.120 |
|
Do the partial derivative and solve the |
|
|
|
00:54:23.120 --> 00:54:23.510 |
|
algebra. |
|
|
|
00:54:26.700 --> 00:54:28.050 |
|
All right, I'll go on then. |
|
|
|
00:54:28.050 --> 00:54:31.990 |
|
So far, this is pure maximum likelihood |
|
|
|
00:54:31.990 --> 00:54:32.530 |
|
estimation. |
|
|
|
00:54:32.530 --> 00:54:34.920 |
|
I'm not, I'm not imposing any kinds of |
|
|
|
00:54:34.920 --> 00:54:36.470 |
|
Priors over my parameters. |
|
|
|
00:54:37.570 --> 00:54:39.600 |
|
In practice, you do want to impose a |
|
|
|
00:54:39.600 --> 00:54:41.010 |
|
Prior on your parameters to make sure
|
|
|
00:54:41.010 --> 00:54:42.220 |
|
you don't have any zeros. |
|
|
|
00:54:43.750 --> 00:54:46.380 |
|
Otherwise, like if some in the digits |
|
|
|
00:54:46.380 --> 00:54:48.809 |
|
case for example the test sample had a |
|
|
|
00:54:48.810 --> 00:54:50.470 |
|
dot in an unlikely place. |
|
|
|
00:54:50.470 --> 00:54:52.662 |
|
If it just had like a one in some
|
|
|
00:54:52.662 --> 00:54:54.030 |
|
unlikely pixel, all the probabilities |
|
|
|
00:54:54.030 --> 00:54:55.630 |
|
would be 0 and you wouldn't know what |
|
|
|
00:54:55.630 --> 00:54:57.620 |
|
the label is because of that one stupid |
|
|
|
00:54:57.620 --> 00:54:57.970 |
|
pixel. |
|
|
|
00:54:58.730 --> 00:55:01.040 |
|
So you want to have some kind of Prior? |
|
|
|
00:55:01.730 --> 00:55:03.425 |
|
To avoid these zero probabilities. |
|
|
|
00:55:03.425 --> 00:55:06.260 |
|
So the most common case if you're |
|
|
|
00:55:06.260 --> 00:55:08.760 |
|
estimating a distribution of discrete |
|
|
|
00:55:08.760 --> 00:55:10.430 |
|
variables like a multinomial or |
|
|
|
00:55:10.430 --> 00:55:13.010 |
|
Binomial, is to just initialize with |
|
|
|
00:55:13.010 --> 00:55:13.645 |
|
some count. |
|
|
|
00:55:13.645 --> 00:55:16.180 |
|
So you just say for example alpha |
|
|
|
00:55:16.180 --> 00:55:16.880 |
|
equals one. |
|
|
|
00:55:17.610 --> 00:55:20.110 |
|
And now I say the probability of X I = |
|
|
|
00:55:20.110 --> 00:55:21.620 |
|
V given y = K. |
|
|
|
00:55:22.400 --> 00:55:24.950 |
|
Is Alpha plus the count of how many |
|
|
|
00:55:24.950 --> 00:55:27.740 |
|
times XI equals V and y = K. |
|
|
|
00:55:28.690 --> 00:55:31.865 |
|
Divided by the sum over all the different values
|
|
|
00:55:31.865 --> 00:55:35.300 |
|
of alpha plus a count of XI equals that
|
|
|
00:55:35.300 --> 00:55:37.610 |
|
value and y = K. Probably for clarity I
|
|
|
00:55:37.610 --> 00:55:39.700 |
|
should have used something other than V
|
|
|
00:55:39.700 --> 00:55:41.630 |
|
in the denominator, but hopefully |
|
|
|
00:55:41.630 --> 00:55:42.230 |
|
that's clear enough. |
|
|
|
00:55:43.060 --> 00:55:46.170 |
|
And then here's the Python
|
|
|
00:55:46.170 --> 00:55:47.070 |
|
for that, so it's just. |
|
|
|
00:55:47.880 --> 00:55:50.350 |
|
Sum of all the values where XI equals V |
|
|
|
00:55:50.350 --> 00:55:52.470 |
|
and y = K Plus some alpha. |
|
|
|
00:55:53.300 --> 00:55:54.980 |
|
So if alpha equals zero, then I don't |
|
|
|
00:55:54.980 --> 00:55:55.710 |
|
have any Prior. |
|
|
|
00:55:56.840 --> 00:56:00.450 |
|
And then I'm just dividing by the sum |
|
|
|
00:56:00.450 --> 00:56:04.270 |
|
of times that y = K. And there will be.
|
|
|
00:56:04.850 --> 00:56:06.540 |
|
The number of alphas will be equal to |
|
|
|
00:56:06.540 --> 00:56:08.150 |
|
the number of different values, so this |
|
|
|
00:56:08.150 --> 00:56:10.510 |
|
is like a little bit of a shortcut, but |
|
|
|
00:56:10.510 --> 00:56:11.330 |
|
it's the same thing. |
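|
NOTE
A sketch of the smoothed count, assuming X holds discrete feature values, y the labels, and n_values is how many distinct values feature i can take (illustrative names):
    import numpy as np
    def smoothed_estimate(X, y, i, k, v, n_values, alpha=1.0):
        # (alpha + count) / (alpha * n_values + class count), so nothing is ever zero
        mask = (y == k)
        return (alpha + np.sum(X[mask, i] == v)) / (alpha * n_values + np.sum(mask))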
|
|
|
00:56:12.860 --> 00:56:14.760 |
|
If I have a continuous variable and |
|
|
|
00:56:14.760 --> 00:56:15.060 |
|
I've. |
|
|
|
00:56:15.730 --> 00:56:17.010 |
|
Modeled it with the Gaussian. |
|
|
|
00:56:17.010 --> 00:56:18.470 |
|
Then the usual thing to do is just to |
|
|
|
00:56:18.470 --> 00:56:20.180 |
|
add a small value to your standard
|
|
|
00:56:20.180 --> 00:56:21.420 |
|
deviation or your variance. |
|
|
|
00:56:22.110 --> 00:56:24.320 |
|
And you might want to make that value |
|
|
|
00:56:24.320 --> 00:56:27.650 |
|
dependent
|
|
|
00:56:27.650 --> 00:56:29.300 |
|
on N so that if you have a huge
|
|
|
00:56:29.300 --> 00:56:31.395 |
|
number of samples then the effect of |
|
|
|
00:56:31.395 --> 00:56:33.880 |
|
the Prior will go down, which is what |
|
|
|
00:56:33.880 --> 00:56:34.170 |
|
you want. |
|
|
|
00:56:36.140 --> 00:56:39.513 |
|
So for example, you can say that the |
|
|
|
00:56:39.513 --> 00:56:41.990 |
|
standard deviation is
|
|
|
00:56:41.990 --> 00:56:44.770 |
|
whatever the MLE estimate of the standard
|
|
|
00:56:44.770 --> 00:56:47.340 |
|
deviation is, plus some small value |
|
|
|
00:56:47.340 --> 00:56:49.730 |
|
the square root of 1 over the length of N.
|
|
|
00:56:50.420 --> 00:56:51.350 |
|
Of X, sorry. |
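|
NOTE
One way to write that, as a sketch; xs is assumed to be the class-conditional values of feature i:
    import numpy as np
    def smoothed_std(xs):
        # MLE standard deviation plus a small floor that shrinks as the sample count grows
        return np.std(xs) + np.sqrt(1.0 / len(xs))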
|
|
|
00:57:00.440 --> 00:57:02.670 |
|
So what the Prior does is it. |
|
|
|
00:57:02.810 --> 00:57:05.995 |
|
In the case of the discrete variables, |
|
|
|
00:57:05.995 --> 00:57:09.110 |
|
the Prior is trying to push your |
|
|
|
00:57:09.110 --> 00:57:11.152 |
|
Estimate towards a uniform likelihood. |
|
|
|
00:57:11.152 --> 00:57:13.000 |
|
In fact, in both cases it's pushing it |
|
|
|
00:57:13.000 --> 00:57:14.280 |
|
towards a uniform likelihood. |
|
|
|
00:57:15.400 --> 00:57:18.670 |
|
So if you had a really large alpha, |
|
|
|
00:57:18.670 --> 00:57:20.550 |
|
then let's say. |
|
|
|
00:57:22.090 --> 00:57:23.440 |
|
Let's say that. |
|
|
|
00:57:24.620 --> 00:57:25.850 |
|
Or I don't know if I can think of |
|
|
|
00:57:25.850 --> 00:57:26.170 |
|
something. |
|
|
|
00:57:28.140 --> 00:57:29.550 |
|
Let's say you have a population of |
|
|
|
00:57:29.550 --> 00:57:30.900 |
|
students and you're trying to estimate |
|
|
|
00:57:30.900 --> 00:57:32.510 |
|
the probability that a student is male. |
|
|
|
00:57:33.520 --> 00:57:36.570 |
|
If I say alpha equals 1000, then I'm |
|
|
|
00:57:36.570 --> 00:57:37.860 |
|
going to need like an awful lot of |
|
|
|
00:57:37.860 --> 00:57:40.156 |
|
students before I budge very far from a |
|
|
|
00:57:40.156 --> 00:57:42.070 |
|
5050 chance that a student is male or |
|
|
|
00:57:42.070 --> 00:57:42.620 |
|
female. |
|
|
|
00:57:42.620 --> 00:57:44.057 |
|
Because I'll start with saying there's |
|
|
|
00:57:44.057 --> 00:57:46.213 |
|
1000 males and 1000 females, and then |
|
|
|
00:57:46.213 --> 00:57:48.676 |
|
I'll count all the males and add them |
|
|
|
00:57:48.676 --> 00:57:50.832 |
|
to 1000, count all the females, add |
|
|
|
00:57:50.832 --> 00:57:53.370 |
|
them to 1000, and then I would take the |
|
|
|
00:57:53.370 --> 00:57:55.210 |
|
male plus 1000 count and divide it by |
|
|
|
00:57:55.210 --> 00:57:57.660 |
|
2000 plus the total population. |
|
|
|
00:57:59.130 --> 00:58:00.860 |
|
If Alpha is 0, then I'm going to get |
|
|
|
00:58:00.860 --> 00:58:03.410 |
|
just my raw empirical Estimate. |
|
|
|
00:58:03.410 --> 00:58:06.810 |
|
So if I had like 3 students and I say |
|
|
|
00:58:06.810 --> 00:58:09.090 |
|
alpha equals zero, and I have two males |
|
|
|
00:58:09.090 --> 00:58:11.140 |
|
and a female, then I'll say 2/3 of them |
|
|
|
00:58:11.140 --> 00:58:11.550 |
|
are male. |
|
|
|
00:58:12.410 --> 00:58:14.670 |
|
If I say alpha is 1 and I have two |
|
|
|
00:58:14.670 --> 00:58:17.110 |
|
males and a female, then I would say |
|
|
|
00:58:17.110 --> 00:58:20.490 |
|
that my probability of male is 3 / 5 |
|
|
|
00:58:20.490 --> 00:58:24.100 |
|
because it's (2 + 1) / (3 + 2).
|
|
|
00:58:27.060 --> 00:58:28.330 |
|
With the standard deviation it's the same.
|
|
|
00:58:28.330 --> 00:58:30.240 |
|
It's like trying to just broaden your |
|
|
|
00:58:30.240 --> 00:58:32.600 |
|
variance from what you would Estimate |
|
|
|
00:58:32.600 --> 00:58:33.580 |
|
directly from the data. |
|
|
|
00:58:36.500 --> 00:58:39.260 |
|
So I think I will not ask you all these |
|
|
|
00:58:39.260 --> 00:58:41.210 |
|
probabilities because they're kind of |
|
|
|
00:58:41.210 --> 00:58:43.220 |
|
you've shown the ability to count |
|
|
|
00:58:43.220 --> 00:58:44.810 |
|
before mostly. |
|
|
|
00:58:46.550 --> 00:58:47.640 |
|
And. |
|
|
|
00:58:47.850 --> 00:58:50.060 |
|
So here's for example, the probability |
|
|
|
00:58:50.060 --> 00:58:54.509 |
|
of X 1 = 0 given y = 0 is 2 out of four.
|
|
|
00:58:54.510 --> 00:58:56.050 |
|
I can get that just by looking down |
|
|
|
00:58:56.050 --> 00:58:56.670 |
|
these rows. |
|
|
|
00:58:56.670 --> 00:58:58.870 |
|
It takes a little bit of time, but |
|
|
|
00:58:58.870 --> 00:59:02.786 |
|
there's four times that y = 0 and out |
|
|
|
00:59:02.786 --> 00:59:06.660 |
|
of those two times X 1 = 0 and so this |
|
|
|
00:59:06.660 --> 00:59:07.440 |
|
is 2 out of four. |
|
|
|
00:59:08.090 --> 00:59:08.930 |
|
And the same. |
|
|
|
00:59:08.930 --> 00:59:11.260 |
|
I can use the same counting method to |
|
|
|
00:59:11.260 --> 00:59:13.120 |
|
get all of these other probabilities |
|
|
|
00:59:13.120 --> 00:59:13.410 |
|
here. |
|
|
|
00:59:15.770 --> 00:59:19.450 |
|
So just to check that everyone's awake, |
|
|
|
00:59:19.450 --> 00:59:22.970 |
|
if I, what is the probability of Y? |
|
|
|
00:59:23.840 --> 00:59:27.370 |
|
And X 1 = 1 and X 2 = 1. |
|
|
|
00:59:28.500 --> 00:59:30.019 |
|
So can you get it from? |
|
|
|
00:59:30.019 --> 00:59:32.560 |
|
Can you get it from this guy under an |
|
|
|
00:59:32.560 --> 00:59:33.450 |
|
independence? |
|
|
|
00:59:33.450 --> 00:59:35.670 |
|
So get it from this under a Naive
|
|
|
00:59:35.670 --> 00:59:36.540 |
|
Bayes assumption. |
|
|
|
00:59:41.350 --> 00:59:43.240 |
|
Let's say I should say probability of Y |
|
|
|
00:59:43.240 --> 00:59:43.860 |
|
equal 1. |
|
|
|
00:59:45.380 --> 00:59:47.910 |
|
Probability of y = 1 given X 1 = 1 and |
|
|
|
00:59:47.910 --> 00:59:48.930 |
|
X 2 = 1. |
|
|
|
00:59:57.500 --> 01:00:00.560 |
|
And you don't worry about simplifying |
|
|
|
01:00:00.560 --> 01:00:02.610 |
|
your numerator and denominator. |
|
|
|
01:00:03.530 --> 01:00:05.110 |
|
What are the things that get multiplied |
|
|
|
01:00:05.110 --> 01:00:05.610 |
|
together? |
|
|
|
01:00:10.460 --> 01:00:14.350 |
|
Not sort of, partly that's in there. |
|
|
|
01:00:15.220 --> 01:00:17.880 |
|
Raise your hand if you think the |
|
|
|
01:00:17.880 --> 01:00:18.560 |
|
answer. |
|
|
|
01:00:19.550 --> 01:00:21.130 |
|
I just want to give everyone time. |
|
|
|
01:00:24.650 --> 01:00:27.962 |
|
But I mean probability of y = 1 given X |
|
|
|
01:00:27.962 --> 01:00:29.960 |
|
1 = 1 and X 2 = 1. |
|
|
|
01:00:39.830 --> 01:00:41.220 |
|
A Naive Bayes assumption. |
|
|
|
01:01:24.310 --> 01:01:25.800 |
|
Raise your hand if you
|
|
|
01:01:26.490 --> 01:01:27.030 |
|
finished.
|
|
|
01:01:56.450 --> 01:01:57.740 |
|
But don't tell me the answer yet. |
|
|
|
01:02:18.470 --> 01:02:19.260 |
|
Equals one. |
|
|
|
01:02:23.210 --> 01:02:23.420 |
|
Alright. |
|
|
|
01:02:23.420 --> 01:02:24.830 |
|
Did anybody get it yet? |
|
|
|
01:02:24.830 --> 01:02:25.950 |
|
Raise your hand if you did. |
|
|
|
01:02:25.950 --> 01:02:26.910 |
|
I just don't want to. |
|
|
|
01:02:28.170 --> 01:02:29.110 |
|
Give it too early. |
|
|
|
01:03:46.370 --> 01:03:46.960 |
|
Alright. |
|
|
|
01:03:48.170 --> 01:03:52.029 |
|
Example, some people have gotten it, so |
|
|
|
01:03:52.030 --> 01:03:53.950 |
|
let me I'll start going through it. |
|
|
|
01:03:53.950 --> 01:03:55.480 |
|
All right, so the Naive Bayes |
|
|
|
01:03:55.480 --> 01:03:56.005 |
|
assumption. |
|
|
|
01:03:56.005 --> 01:03:57.760 |
|
So this would be. |
|
|
|
01:03:58.060 --> 01:03:58.250 |
|
OK. |
|
|
|
01:04:00.690 --> 01:04:02.960 |
|
OK, probability it's actually my touch |
|
|
|
01:04:02.960 --> 01:04:03.230 |
|
screen. |
|
|
|
01:04:03.230 --> 01:04:04.400 |
|
I think is kind of broken. |
|
|
|
01:04:05.250 --> 01:04:09.560 |
|
Probability of X1 given Y times |
|
|
|
01:04:09.560 --> 01:04:14.815 |
|
probability of X2 given Y, sorry, equals
|
|
|
01:04:14.815 --> 01:04:15.200 |
|
one. |
|
|
|
01:04:16.630 --> 01:04:19.050 |
|
Times probability of Y equal 1. |
|
|
|
01:04:19.910 --> 01:04:21.950 |
|
Right, so it's the product of the |
|
|
|
01:04:21.950 --> 01:04:23.180 |
|
probabilities of the Features. |
|
|
|
01:04:23.180 --> 01:04:24.730 |
|
given the label, times the probability
|
|
|
01:04:24.730 --> 01:04:25.240 |
|
of the label. |
|
|
|
01:04:26.500 --> 01:04:29.990 |
|
And so that will be probability of X
|
|
|
01:04:31.030 --> 01:04:32.819 |
|
1 = 1
|
|
|
01:04:33.850 --> 01:04:37.317 |
|
given y =
|
|
|
01:04:37.317 --> 01:04:38.260 |
|
1 is 3/4. |
|
|
|
01:04:42.110 --> 01:04:46.010 |
|
And probably the X 2 = 1 given y = 1 is |
|
|
|
01:04:46.010 --> 01:04:46.750 |
|
3/4. |
|
|
|
01:04:49.250 --> 01:04:52.550 |
|
And the probability that y = 1 is two |
|
|
|
01:04:52.550 --> 01:04:53.940 |
|
quarters or 1/2. |
|
|
|
01:04:58.570 --> 01:05:00.180 |
|
So it's 9/32.
|
|
|
01:05:01.120 --> 01:05:01.390 |
|
Right. |
|
|
|
01:05:02.580 --> 01:05:05.846 |
|
And the probability that y = 0 given X |
|
|
|
01:05:05.846 --> 01:05:08.059 |
|
1 = 1 and Y 1 = 1. |
|
|
|
01:05:09.800 --> 01:05:11.840 |
|
I mean sorry, the probability of y = 0 |
|
|
|
01:05:11.840 --> 01:05:14.480 |
|
given the Xs equal 1.
|
|
|
01:05:15.620 --> 01:05:16.770 |
|
Is. |
|
|
|
01:05:18.600 --> 01:05:19.190 |
|
Let's see. |
|
|
|
01:05:20.250 --> 01:05:23.780 |
|
So that would be 2 fourths times 2 |
|
|
|
01:05:23.780 --> 01:05:24.300 |
|
fourths. |
|
|
|
01:05:25.180 --> 01:05:26.320 |
|
Times 2 fourths. |
|
|
|
01:05:27.260 --> 01:05:31.300 |
|
So if X 1 = 1 and X2 equal 1, then it's |
|
|
|
01:05:31.300 --> 01:05:33.540 |
|
more likely that Y is equal to 1 than |
|
|
|
01:05:33.540 --> 01:05:35.070 |
|
that Y is equal to 0. |
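|
NOTE
The same arithmetic as a quick Python check, using the likelihoods read off the table on the slide:
    # scores proportional to P(y | x1 = 1, x2 = 1) under the Naive Bayes assumption
    score_y1 = (3 / 4) * (3 / 4) * (2 / 4)   # = 9/32
    score_y0 = (2 / 4) * (2 / 4) * (2 / 4)   # = 1/8
    print(score_y1, score_y0, score_y1 > score_y0)  # y = 1 is the more likely label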
|
|
|
01:05:41.720 --> 01:05:46.750 |
|
If I use my Prior, this is how
|
|
|
01:05:46.750 --> 01:05:48.055 |
|
the probabilities would change. |
|
|
|
01:05:48.055 --> 01:05:51.060 |
|
So if I say alpha equals one, you can |
|
|
|
01:05:51.060 --> 01:05:52.900 |
|
see that the probabilities get less |
|
|
|
01:05:52.900 --> 01:05:53.510 |
|
Peaky. |
|
|
|
01:05:53.510 --> 01:05:56.422 |
|
So I went from 1/4 to 2/6 and
|
|
|
01:05:56.422 --> 01:05:58.951 |
|
3/4 to 4/6, for example.
|
|
|
01:05:58.951 --> 01:06:02.316 |
|
So 1/3 and 2/3 is more uniform than 1/4 |
|
|
|
01:06:02.316 --> 01:06:03.129 |
|
and 3/4. |
|
|
|
01:06:05.050 --> 01:06:07.040 |
|
And then if the initial estimate was |
|
|
|
01:06:07.040 --> 01:06:09.020 |
|
1/2, the final Estimate will still be |
|
|
|
01:06:09.020 --> 01:06:11.620 |
|
1/2, because this Prior is
|
|
|
01:06:11.620 --> 01:06:13.650 |
|
just trying to push things towards 1/2. |
|
|
|
01:06:20.780 --> 01:06:24.220 |
|
So I want to give one example of a use |
|
|
|
01:06:24.220 --> 01:06:24.550 |
|
case. |
|
|
|
01:06:24.550 --> 01:06:25.685 |
|
So I've actually. |
|
|
|
01:06:25.685 --> 01:06:28.360 |
|
I mean I want to say like I used Naive |
|
|
|
01:06:28.360 --> 01:06:30.630 |
|
Bayes, but I use that assumption pretty |
|
|
|
01:06:30.630 --> 01:06:31.440 |
|
often. |
|
|
|
01:06:31.440 --> 01:06:33.480 |
|
For example if I wanted to Estimate a |
|
|
|
01:06:33.480 --> 01:06:35.210 |
|
distribution of RGB colors. |
|
|
|
01:06:36.740 --> 01:06:38.410 |
|
I would first convert it to a different |
|
|
|
01:06:38.410 --> 01:06:39.860 |
|
color space, but let's just say I want |
|
|
|
01:06:39.860 --> 01:06:41.780 |
|
to Estimate a distribution of RGB
|
|
|
01:06:41.780 --> 01:06:42.390 |
|
colors. |
|
|
|
01:06:42.390 --> 01:06:45.055 |
|
Then even though it's 3 dimensions, is |
|
|
|
01:06:45.055 --> 01:06:45.690 |
|
a pretty. |
|
|
|
01:06:45.690 --> 01:06:47.920 |
|
You need like a lot of data to estimate |
|
|
|
01:06:47.920 --> 01:06:48.610 |
|
that distribution. |
|
|
|
01:06:48.610 --> 01:06:50.700 |
|
And So what I might do is I'll say, |
|
|
|
01:06:50.700 --> 01:06:52.820 |
|
well, I'm going to assume that RG and B |
|
|
|
01:06:52.820 --> 01:06:54.645 |
|
are independent and so the probability |
|
|
|
01:06:54.645 --> 01:06:57.350 |
|
of RGB is just the probability of R |
|
|
|
01:06:57.350 --> 01:06:58.808 |
|
times probability of G times |
|
|
|
01:06:58.808 --> 01:06:59.524 |
|
probability B. |
|
|
|
01:06:59.524 --> 01:07:01.600 |
|
And I compute a histogram for each of |
|
|
|
01:07:01.600 --> 01:07:04.940 |
|
those, and I use that to get my as my |
|
|
|
01:07:04.940 --> 01:07:06.230 |
|
likelihood Estimate. |
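|
NOTE
A sketch of that per-channel histogram idea, assuming pixels is an (N, 3) array of RGB values in [0, 255]; everything here is illustrative rather than code from any particular project:
    import numpy as np
    def fit_channel_histograms(pixels, n_bins=32):
        # one normalized histogram per channel, treated as independent (Naive Bayes)
        edges = np.linspace(0, 256, n_bins + 1)
        hists = [np.histogram(pixels[:, c], bins=edges)[0] / len(pixels) for c in range(3)]
        return hists, edges
    def rgb_likelihood(rgb, hists, edges, eps=1e-9):
        # P(r, g, b) approximated as P(r) * P(g) * P(b)
        idx = np.clip(np.searchsorted(edges, rgb, side='right') - 1, 0, len(hists[0]) - 1)
        return np.prod([hists[c][idx[c]] + eps for c in range(3)])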
|
|
|
01:07:06.560 --> 01:07:08.520 |
|
So it's like really commonly used in |
|
|
|
01:07:08.520 --> 01:07:10.120 |
|
that kind of setting where you want to |
|
|
|
01:07:10.120 --> 01:07:11.770 |
|
Estimate the distribution of multiple |
|
|
|
01:07:11.770 --> 01:07:13.380 |
|
variables and there's just no way to |
|
|
|
01:07:13.380 --> 01:07:13.810 |
|
get a Joint. |
|
|
|
01:07:13.810 --> 01:07:17.100 |
|
The only options you really have are to |
|
|
|
01:07:17.100 --> 01:07:18.410 |
|
make something the Naive Bayes |
|
|
|
01:07:18.410 --> 01:07:21.330 |
|
assumption or to do a mixture of |
|
|
|
01:07:21.330 --> 01:07:23.416 |
|
Gaussians, which we'll talk about later |
|
|
|
01:07:23.416 --> 01:07:24.320 |
|
in the semester. |
|
|
|
01:07:26.380 --> 01:07:27.940 |
|
Right, But here's the case where it's |
|
|
|
01:07:27.940 --> 01:07:29.450 |
|
used for object detection. |
|
|
|
01:07:29.450 --> 01:07:32.280 |
|
So this was by Schneiderman and Kanade, and |
|
|
|
01:07:32.280 --> 01:07:35.500 |
|
it was the most accurate face and car |
|
|
|
01:07:35.500 --> 01:07:36.520 |
|
detector for a while. |
|
|
|
01:07:37.450 --> 01:07:39.720 |
|
The detector is based on wavelet |
|
|
|
01:07:39.720 --> 01:07:41.420 |
|
coefficients which are just like local |
|
|
|
01:07:41.420 --> 01:07:42.610 |
|
intensity differences. |
|
|
|
01:07:43.320 --> 01:07:46.010 |
|
And the. |
|
|
|
01:07:46.090 --> 01:07:48.880 |
|
It's a Probabilistic framework, so |
|
|
|
01:07:48.880 --> 01:07:51.070 |
|
they're trying to say whether if you |
|
|
|
01:07:51.070 --> 01:07:54.107 |
|
Extract a window of Features from the |
|
|
|
01:07:54.107 --> 01:07:56.386 |
|
image, some Features over some part of |
|
|
|
01:07:56.386 --> 01:07:56.839 |
|
the image. |
|
|
|
01:07:57.450 --> 01:07:59.020 |
|
And Extract all the wavelet |
|
|
|
01:07:59.020 --> 01:08:00.330 |
|
coefficients. |
|
|
|
01:08:00.330 --> 01:08:02.390 |
|
Then you want to say that it's a face |
|
|
|
01:08:02.390 --> 01:08:03.950 |
|
if the probability of those |
|
|
|
01:08:03.950 --> 01:08:05.853 |
|
coefficients is greater given that it's |
|
|
|
01:08:05.853 --> 01:08:08.390 |
|
a face, than given that's not a face |
|
|
|
01:08:08.390 --> 01:08:10.330 |
|
times the probability that it's not a face |
|
|
|
01:08:10.330 --> 01:08:11.730 |
|
over the probability that it's a face. |
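
NOTE
A small sketch of that decision rule (editor's code, not the paper's): compare the likelihood ratio against the ratio of priors, in log space. The prior p_face = 0.01 is just an illustrative number.

    import numpy as np
    def is_face(log_lik_face, log_lik_nonface, p_face=0.01):
        # Declare "face" when p(x|face) / p(x|not face) > p(not face) / p(face),
        # i.e. when p(face|x) > p(not face|x).
        return (log_lik_face - log_lik_nonface) > np.log((1.0 - p_face) / p_face)
    print(is_face(-120.0, -135.0))   # example log-likelihoods -> True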
|
|
|
01:08:12.430 --> 01:08:14.680 |
|
So it's this basic Probabilistic Model. |
|
|
|
01:08:14.680 --> 01:08:16.740 |
|
And again, modeling the Joint |
|
|
|
01:08:16.740 --> 01:08:17.920 |
|
probability of all those |
|
|
|
01:08:17.920 --> 01:08:19.370 |
|
coefficients is way too hard. |
|
|
|
01:08:20.330 --> 01:08:23.290 |
|
On the other hand, modeling all the |
|
|
|
01:08:23.290 --> 01:08:25.560 |
|
Features as independent given the label |
|
|
|
01:08:25.560 --> 01:08:26.950 |
|
is a little bit too much of a |
|
|
|
01:08:26.950 --> 01:08:28.410 |
|
simplifying assumption. |
|
|
|
01:08:28.410 --> 01:08:30.270 |
|
So they use this algorithm that they |
|
|
|
01:08:30.270 --> 01:08:33.220 |
|
call semi-Naive Bayes, which was proposed |
|
|
|
01:08:33.220 --> 01:08:34.040 |
|
earlier. |
|
|
|
01:08:35.220 --> 01:08:37.946 |
|
Where you just Model the |
|
|
|
01:08:37.946 --> 01:08:39.803 |
|
probabilities of little groups of |
|
|
|
01:08:39.803 --> 01:08:41.380 |
|
features and then you say that the |
|
|
|
01:08:41.380 --> 01:08:43.166 |
|
total probability is |
|
|
|
01:08:43.166 --> 01:08:44.830 |
|
the product of the probabilities of |
|
|
|
01:08:44.830 --> 01:08:45.849 |
|
these groups of Features. |
|
|
|
01:08:46.710 --> 01:08:47.845 |
|
So they call these patterns. |
|
|
|
01:08:47.845 --> 01:08:50.160 |
|
So first you look at the mutual |
|
|
|
01:08:50.160 --> 01:08:51.870 |
|
information, you have ways of measuring |
|
|
|
01:08:51.870 --> 01:08:54.050 |
|
the dependence of different variables, |
|
|
|
01:08:54.050 --> 01:08:56.470 |
|
and you cluster the Features together |
|
|
|
01:08:56.470 --> 01:08:58.280 |
|
based on their dependencies. |
|
|
|
01:08:58.920 --> 01:09:00.430 |
|
And then for little clusters of |
|
|
|
01:09:00.430 --> 01:09:02.149 |
|
Features, like 3 Features. |
|
|
|
01:09:03.060 --> 01:09:05.800 |
|
You Estimate the probability of the |
|
|
|
01:09:05.800 --> 01:09:08.500 |
|
Joint combination of these features and |
|
|
|
01:09:08.500 --> 01:09:11.230 |
|
then the total probability of all the |
|
|
|
01:09:11.230 --> 01:09:11.620 |
|
Features. |
|
|
|
01:09:11.620 --> 01:09:12.920 |
|
I'm glad this isn't worker. |
|
|
|
01:09:12.920 --> 01:09:14.788 |
|
The total probability of all the |
|
|
|
01:09:14.788 --> 01:09:16.660 |
|
features is the product of the |
|
|
|
01:09:16.660 --> 01:09:18.270 |
|
probabilities of each of these groups |
|
|
|
01:09:18.270 --> 01:09:18.840 |
|
of Features. |
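
NOTE
A rough sketch of that factorization (editor's illustration; the groups, tables, and names here are invented, not from Schneiderman and Kanade): the log-likelihood is a sum over small groups of features, with each group modeled jointly.

    import numpy as np
    def semi_naive_log_likelihood(features, groups, group_tables):
        # groups: index tuples, e.g. [(0, 1), (2,)]; one joint table per group.
        total = 0.0
        for idxs, table in zip(groups, group_tables):
            total += np.log(table[tuple(int(features[i]) for i in idxs)])
        return total
    groups = [(0, 1), (2,)]
    tables = [{(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4},
              {(0,): 0.6, (1,): 0.4}]
    features = np.array([1, 0, 1])
    print(semi_naive_log_likelihood(features, groups, tables))  # log(0.3) + log(0.4)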
|
|
|
01:09:19.890 --> 01:09:21.140 |
|
And so you Model how |
|
|
|
01:09:21.140 --> 01:09:23.616 |
|
likely a set of features is given that |
|
|
|
01:09:23.616 --> 01:09:25.270 |
|
it's a face, and how likely they are |
|
|
|
01:09:25.270 --> 01:09:27.790 |
|
given that it's not a face or given a |
|
|
|
01:09:27.790 --> 01:09:29.280 |
|
random patch from an image. |
|
|
|
01:09:29.930 --> 01:09:32.260 |
|
And then that can be used to classify |
|
|
|
01:09:32.260 --> 01:09:33.060 |
|
images as face |
|
|
|
01:09:33.060 --> 01:09:33.896 |
|
or not face. |
|
|
|
01:09:33.896 --> 01:09:35.560 |
|
And you would Estimate this separately |
|
|
|
01:09:35.560 --> 01:09:37.120 |
|
for cars and for each orientation of |
|
|
|
01:09:37.120 --> 01:09:38.110 |
|
car. Question? |
|
|
|
01:09:43.310 --> 01:09:45.399 |
|
So the question was what beat the 2005 |
|
|
|
01:09:45.400 --> 01:09:45.840 |
|
model? |
|
|
|
01:09:45.840 --> 01:09:47.750 |
|
I'm not really sure that there was |
|
|
|
01:09:47.750 --> 01:09:50.180 |
|
something that beat it in 2006, but |
|
|
|
01:09:50.180 --> 01:09:53.820 |
|
that's when the Dalal-Triggs SVM-based |
|
|
|
01:09:53.820 --> 01:09:55.570 |
|
detector came out. |
|
|
|
01:09:56.200 --> 01:09:57.680 |
|
And I think it might have been, I |
|
|
|
01:09:57.680 --> 01:10:00.617 |
|
didn't look it up so I'm not sure, but |
|
|
|
01:10:00.617 --> 01:10:02.930 |
|
I'm pretty confident it was the |
|
|
|
01:10:02.930 --> 01:10:04.947 |
|
most accurate up to 2005, but not |
|
|
|
01:10:04.947 --> 01:10:06.070 |
|
confident after that. |
|
|
|
01:10:07.250 --> 01:10:10.430 |
|
And now it took a while for face |
|
|
|
01:10:10.430 --> 01:10:12.650 |
|
detection to get more accurate than that. |
|
|
|
01:10:12.650 --> 01:10:15.630 |
|
The most famous face detector was actually |
|
|
|
01:10:15.630 --> 01:10:18.330 |
|
the Viola-Jones detector, which was |
|
|
|
01:10:18.330 --> 01:10:20.515 |
|
popular because it was really fast. |
|
|
|
01:10:20.515 --> 01:10:24.046 |
|
This thing ran at a couple frames per |
|
|
|
01:10:24.046 --> 01:10:26.414 |
|
second, but Viola Jones ran at 15 |
|
|
|
01:10:26.414 --> 01:10:28.560 |
|
frames per second in 2001. |
|
|
|
01:10:30.310 --> 01:10:31.960 |
|
But Viola Jones wasn't quite as |
|
|
|
01:10:31.960 --> 01:10:32.460 |
|
accurate. |
|
|
|
01:10:35.210 --> 01:10:37.840 |
|
Alright, so Summary of Naive Bayes. |
|
|
|
01:10:38.180 --> 01:10:38.790 |
|
And. |
|
|
|
01:10:39.940 --> 01:10:41.740 |
|
So the key assumption is that the |
|
|
|
01:10:41.740 --> 01:10:43.460 |
|
Features are independent given the |
|
|
|
01:10:43.460 --> 01:10:43.870 |
|
labels. |
|
|
|
01:10:46.730 --> 01:10:48.110 |
|
The parameters are just the |
|
|
|
01:10:48.110 --> 01:10:50.173 |
|
probabilities, or the parameters of |
|
|
|
01:10:50.173 --> 01:10:51.990 |
|
each of these probability functions, |
|
|
|
01:10:51.990 --> 01:10:53.908 |
|
the probability of each feature given Y |
|
|
|
01:10:53.908 --> 01:10:55.750 |
|
and the probability of Y. And just |
|
|
|
01:10:55.750 --> 01:10:57.250 |
|
like in the Simple fruit example I |
|
|
|
01:10:57.250 --> 01:10:59.405 |
|
gave, you can use different models for |
|
|
|
01:10:59.405 --> 01:10:59.976 |
|
different features. |
|
|
|
01:10:59.976 --> 01:11:02.340 |
|
Some of the features could be discrete |
|
|
|
01:11:02.340 --> 01:11:04.120 |
|
values and some could be continuous |
|
|
|
01:11:04.120 --> 01:11:04.560 |
|
values. |
|
|
|
01:11:04.560 --> 01:11:05.520 |
|
That's not a problem. |
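
NOTE
To make the point about mixing models concrete, here is a hedged sketch (editor's toy code, not from the lecture): one Gaussian feature and one discrete feature, each with its own likelihood, combined under the Naive Bayes assumption.

    import numpy as np
    def fit(x_cont, x_disc, y, n_disc_values, alpha=1.0):
        params = {}
        for c in np.unique(y):
            m = (y == c)
            counts = np.bincount(x_disc[m], minlength=n_disc_values)
            params[c] = {"prior": m.mean(),
                         "mean": x_cont[m].mean(),
                         "std": x_cont[m].std() + 1e-6,   # avoid zero variance
                         "disc": (counts + alpha) / (counts.sum() + alpha * n_disc_values)}
        return params
    def predict(xc, xd, params):
        def score(p):   # log prior + Gaussian log-likelihood + discrete log-likelihood
            return (np.log(p["prior"]) - np.log(p["std"])
                    - 0.5 * ((xc - p["mean"]) / p["std"]) ** 2 + np.log(p["disc"][xd]))
        return max(params, key=lambda c: score(params[c]))
    y = np.array([0, 0, 1, 1])
    x_cont = np.array([1.0, 1.2, 3.0, 3.3])
    x_disc = np.array([0, 0, 1, 1])
    print(predict(3.1, 1, fit(x_cont, x_disc, y, n_disc_values=2)))   # -> 1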
|
|
|
01:11:08.520 --> 01:11:10.150 |
|
You have to choose which probability |
|
|
|
01:11:10.150 --> 01:11:11.510 |
|
function you're going to use for each |
|
|
|
01:11:11.510 --> 01:11:11.940 |
|
feature. |
|
|
|
01:11:14.450 --> 01:11:16.250 |
|
Naive Bayes can be useful if you have |
|
|
|
01:11:16.250 --> 01:11:18.080 |
|
limited training data, because you only |
|
|
|
01:11:18.080 --> 01:11:19.560 |
|
have to Estimate these one-dimensional |
|
|
|
01:11:19.560 --> 01:11:21.150 |
|
distributions, which you can do from |
|
|
|
01:11:21.150 --> 01:11:22.370 |
|
relatively few Samples. |
|
|
|
01:11:23.000 --> 01:11:24.420 |
|
And if the features are not highly |
|
|
|
01:11:24.420 --> 01:11:26.540 |
|
interdependent. It can also be |
|
|
|
01:11:26.540 --> 01:11:27.970 |
|
useful as a baseline if you want |
|
|
|
01:11:27.970 --> 01:11:29.766 |
|
something that's fast to code, train |
|
|
|
01:11:29.766 --> 01:11:30.579 |
|
and test. |
|
|
|
01:11:30.580 --> 01:11:32.900 |
|
So as you do your homework, I think out |
|
|
|
01:11:32.900 --> 01:11:34.860 |
|
of the methods, Naive Bayes has the |
|
|
|
01:11:34.860 --> 01:11:37.140 |
|
lowest training plus test time. |
|
|
|
01:11:37.140 --> 01:11:40.139 |
|
Logistic regression is going to be |
|
|
|
01:11:40.140 --> 01:11:42.618 |
|
roughly tied for test time, but it |
|
|
|
01:11:42.618 --> 01:11:43.680 |
|
takes an awful lot. |
|
|
|
01:11:43.680 --> 01:11:45.980 |
|
Well, it takes longer to train. |
|
|
|
01:11:45.980 --> 01:11:48.379 |
|
KNN takes no time to train, but takes a |
|
|
|
01:11:48.380 --> 01:11:49.570 |
|
whole lot longer to test. |
|
|
|
01:11:54.630 --> 01:11:56.830 |
|
So when not to use it? |
|
|
|
01:11:56.830 --> 01:11:58.760 |
|
Usually Logistic or linear regression |
|
|
|
01:11:58.760 --> 01:12:01.070 |
|
will work better if you have enough |
|
|
|
01:12:01.070 --> 01:12:01.440 |
|
data. |
|
|
|
01:12:02.230 --> 01:12:05.510 |
|
And the reason is that under most |
|
|
|
01:12:05.510 --> 01:12:07.860 |
|
probability models in the exponential |
|
|
|
01:12:07.860 --> 01:12:09.790 |
|
family of distributions, |
|
|
|
01:12:09.790 --> 01:12:11.940 |
|
which include Binomial, multinomial and |
|
|
|
01:12:11.940 --> 01:12:12.530 |
|
Gaussian, |
|
|
|
01:12:13.640 --> 01:12:15.657 |
|
You can rewrite Naive Bayes as a linear |
|
|
|
01:12:15.657 --> 01:12:18.993 |
|
function of the input features, but the |
|
|
|
01:12:18.993 --> 01:12:21.740 |
|
linear function is highly constrained |
|
|
|
01:12:21.740 --> 01:12:23.750 |
|
based on this, estimating likelihoods |
|
|
|
01:12:23.750 --> 01:12:25.650 |
|
for each feature separately. |
|
|
|
01:12:25.650 --> 01:12:27.500 |
|
Whereas linear and logistic regression, |
|
|
|
01:12:27.500 --> 01:12:28.970 |
|
which we'll talk about next Thursday, |
|
|
|
01:12:28.970 --> 01:12:30.815 |
|
are not constrained, you can solve for |
|
|
|
01:12:30.815 --> 01:12:32.300 |
|
the full range of coefficients. |
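
NOTE
To spell out that claim for the Bernoulli case (a standard derivation written out by the editor, not something shown on the slide), the Naive Bayes log-odds is linear in the features, but the weights are tied to the per-feature likelihoods rather than fit freely as in logistic regression:

    \log\frac{P(y=1\mid x)}{P(y=0\mid x)} = b + \sum_j w_j x_j,
    \quad w_j = \log\frac{p_{j1}(1-p_{j0})}{p_{j0}(1-p_{j1})},
    \quad p_{jc} = P(x_j = 1 \mid y = c)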
|
|
|
01:12:33.440 --> 01:12:35.050 |
|
The other issue is that it doesn't |
|
|
|
01:12:35.050 --> 01:12:37.890 |
|
provide a very good confidence Estimate |
|
|
|
01:12:37.890 --> 01:12:39.720 |
|
because it over counts the influence of |
|
|
|
01:12:39.720 --> 01:12:40.880 |
|
dependent variables. |
|
|
|
01:12:40.880 --> 01:12:42.860 |
|
If you repeat a feature many times, |
|
|
|
01:12:42.860 --> 01:12:44.680 |
|
it's going to count it every time, and |
|
|
|
01:12:44.680 --> 01:12:47.215 |
|
so it will tend to have too much weight |
|
|
|
01:12:47.215 --> 01:12:48.930 |
|
and give you bad confidence estimates. |
|
|
|
01:12:51.010 --> 01:12:55.100 |
|
Naive Bayes is easy and fast to train, Fast |
|
|
|
01:12:55.100 --> 01:12:56.130 |
|
for inference. |
|
|
|
01:12:56.130 --> 01:12:57.400 |
|
You can use it with different kinds of |
|
|
|
01:12:57.400 --> 01:12:58.040 |
|
variables. |
|
|
|
01:12:58.040 --> 01:12:59.220 |
|
It doesn't account for feature |
|
|
|
01:12:59.220 --> 01:13:00.730 |
|
interaction, doesn't provide good |
|
|
|
01:13:00.730 --> 01:13:01.670 |
|
confidence estimates. |
|
|
|
01:13:02.390 --> 01:13:04.210 |
|
And it's best when used with discrete |
|
|
|
01:13:04.210 --> 01:13:06.270 |
|
variables, or those that can be fit well |
|
|
|
01:13:06.270 --> 01:13:08.830 |
|
by a Gaussian, or if you use kernel |
|
|
|
01:13:08.830 --> 01:13:10.690 |
|
density estimation, which is something |
|
|
|
01:13:10.690 --> 01:13:11.840 |
|
that we'll talk about later in this |
|
|
|
01:13:11.840 --> 01:13:13.580 |
|
semester, a more general |
|
|
|
01:13:13.580 --> 01:13:15.080 |
|
continuous distribution function. |
|
|
|
01:13:17.210 --> 01:13:19.560 |
|
And just as a reminder, don't pack |
|
|
|
01:13:19.560 --> 01:13:21.730 |
|
up until I'm done, but this will be the |
|
|
|
01:13:21.730 --> 01:13:22.570 |
|
second to last slide. |
|
|
|
01:13:24.220 --> 01:13:25.890 |
|
So things to remember. |
|
|
|
01:13:27.140 --> 01:13:28.950 |
|
So Probabilistic models are a really |
|
|
|
01:13:28.950 --> 01:13:30.837 |
|
large class of machine learning |
|
|
|
01:13:30.837 --> 01:13:31.160 |
|
methods. |
|
|
|
01:13:31.160 --> 01:13:32.590 |
|
There are many different kinds of |
|
|
|
01:13:32.590 --> 01:13:34.690 |
|
machine learning methods that are based |
|
|
|
01:13:34.690 --> 01:13:36.480 |
|
on estimating the likelihoods of the |
|
|
|
01:13:36.480 --> 01:13:38.170 |
|
label given the data or the data given |
|
|
|
01:13:38.170 --> 01:13:38.730 |
|
the label. |
|
|
|
01:13:39.580 --> 01:13:41.630 |
|
Naive Bayes assumes that Features are |
|
|
|
01:13:41.630 --> 01:13:45.430 |
|
independent given the label, and it's |
|
|
|
01:13:45.430 --> 01:13:46.860 |
|
easy and fast to estimate the |
|
|
|
01:13:46.860 --> 01:13:48.920 |
|
parameters and reduces the risk of |
|
|
|
01:13:48.920 --> 01:13:50.480 |
|
overfitting when you have limited data. |
|
|
|
01:13:52.270 --> 01:13:52.590 |
|
It's. |
|
|
|
01:13:52.590 --> 01:13:55.190 |
|
You don't usually have to derive how to |
|
|
|
01:13:55.190 --> 01:13:57.910 |
|
solve for the likelihood parameters, |
|
|
|
01:13:57.910 --> 01:13:59.660 |
|
but you can do it if you want to by |
|
|
|
01:13:59.660 --> 01:14:00.954 |
|
taking the partial derivative. |
|
|
|
01:14:00.954 --> 01:14:03.540 |
|
Usually you would be using |
|
|
|
01:14:03.540 --> 01:14:06.140 |
|
a common kind of Model and you |
|
|
|
01:14:06.140 --> 01:14:07.290 |
|
can just look up the MLE. |
|
|
|
01:14:09.490 --> 01:14:11.160 |
|
The Prediction involves finding the Y |
|
|
|
01:14:11.160 --> 01:14:13.190 |
|
that maximizes the probability of the |
|
|
|
01:14:13.190 --> 01:14:15.150 |
|
data and the label, either by trying |
|
|
|
01:14:15.150 --> 01:14:17.250 |
|
all the possible values of Y or solving |
|
|
|
01:14:17.250 --> 01:14:18.230 |
|
the partial derivative. |
|
|
|
01:14:19.270 --> 01:14:21.535 |
|
And finally, Maximizing log probability |
|
|
|
01:14:21.535 --> 01:14:24.060 |
|
of I is equivalent to Maximizing |
|
|
|
01:14:24.060 --> 01:14:25.360 |
|
probability of. |
|
|
|
01:14:25.520 --> 01:14:27.310 |
|
Sorry, Maximizing log probability of |
|
|
|
01:14:27.310 --> 01:14:30.270 |
|
X&Y is equivalent to maximizing the |
|
|
|
01:14:30.270 --> 01:14:32.250 |
|
probability of X&Y, and it's usually |
|
|
|
01:14:32.250 --> 01:14:34.000 |
|
much easier, so it's important to |
|
|
|
01:14:34.000 --> 01:14:34.390 |
|
remember that. |
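
NOTE
A quick numeric illustration of that last point (editor's addition, not from the lecture): the argmax over log-probabilities picks the same label as the argmax over probabilities, and working in logs avoids underflow when many per-feature probabilities are multiplied.

    import numpy as np
    log_probs = np.array([-1050.0, -1040.0, -1060.0])   # log p(x, y) for 3 labels
    print(np.argmax(log_probs))                         # 1, same argmax as p(x, y)
    print(np.exp(log_probs))                            # underflows to [0., 0., 0.]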
|
|
|
01:14:35.970 --> 01:14:36.180 |
|
Right. |
|
|
|
01:14:36.180 --> 01:14:37.840 |
|
And then next class I'm going to talk |
|
|
|
01:14:37.840 --> 01:14:40.030 |
|
about logistic regression and linear |
|
|
|
01:14:40.030 --> 01:14:40.700 |
|
regression. |
|
|
|
01:14:41.530 --> 01:14:44.870 |
|
And one more thing is I posted review |
|
|
|
01:14:44.870 --> 01:14:49.310 |
|
questions and answers for the first two, |
|
|
|
01:14:49.310 --> 01:14:51.440 |
|
KNN and this lecture, on the web |
|
|
|
01:14:51.440 --> 01:14:52.050 |
|
page. |
|
|
|
01:14:52.050 --> 01:14:53.690 |
|
You don't have to do them but they're |
|
|
|
01:14:53.690 --> 01:14:55.410 |
|
good review for the exam or just to |
|
|
|
01:14:55.410 --> 01:14:56.820 |
|
check your knowledge after each |
|
|
|
01:14:56.820 --> 01:14:57.200 |
|
lecture. |
|
|
|
01:14:57.890 --> 01:14:58.750 |
|
Thank you. |
|
|
|
01:15:11.030 --> 01:15:11.320 |
|
I. |
|
|
|
|